Search Results (1,432)

Search Parameters:
Keywords = concept representation

18 pages, 1102 KB  
Review
The Impact of Organizational Dysfunction on Employees’ Fertility and Economic Outcomes: A Scoping Review
by Daniele Virgillito and Caterina Ledda
Adm. Sci. 2025, 15(11), 416; https://doi.org/10.3390/admsci15110416 - 27 Oct 2025
Viewed by 65
Abstract
Background/Purpose: Reproductive health and fertility outcomes are essential but often overlooked aspects of occupational well-being. Organizational dysfunction, demanding workloads, and limited workplace accommodations may negatively affect fertility, while supportive policies and inclusive cultures can mitigate risks. This review aimed to map current evidence on these relationships and their economic consequences. Methodology/Approach: A scoping review was conducted using the PCC (Population–Concept–Context) framework. Systematic searches across multiple databases identified 30 eligible studies, including quantitative, qualitative, and mixed-method designs, spanning different sectors and international contexts. Findings: Four main domains emerged: shift work and circadian disruption, organizational stress and burnout, workplace flexibility and accommodations, and fertility-related policies and organizational support. Hazardous working conditions, long hours, and psychosocial stressors were consistently associated with impaired fertility, reduced fecundability, and pregnancy complications. Conversely, flexible scheduling, fertility benefits, and supportive organizational cultures were linked to improved well-being, retention, and productivity. Originality/Value: This review integrates evidence across occupational health, organizational psychology, and labor economics, offering a comprehensive overview of workplace influences on reproductive health. It highlights gaps in equity and representation—particularly for men, LGBTQ+ employees, and workers in precarious jobs—and calls for longitudinal, interdisciplinary, and intervention-based studies to inform effective workplace policies. Full article
(This article belongs to the Special Issue Human Capital Development—New Perspectives for Diverse Domains)

33 pages, 1433 KB  
Article
Hybrid Time Series Transformer–Deep Belief Network for Robust Anomaly Detection in Mobile Communication Networks
by Anita Ershadi Oskouei, Mehrdad Kaveh, Francisco Hernando-Gallego and Diego Martín
Symmetry 2025, 17(11), 1800; https://doi.org/10.3390/sym17111800 - 25 Oct 2025
Viewed by 235
Abstract
The rapid evolution of 5G and emerging 6G networks has increased system complexity, data volume, and security risks, making anomaly detection vital for ensuring reliability and resilience. However, existing machine learning (ML)-based approaches still face challenges related to poor generalization, weak temporal modeling, and degraded accuracy under heterogeneous and imbalanced real-world conditions. To overcome these limitations, a hybrid time series transformer–deep belief network (HTST-DBN) is introduced, integrating the sequential modeling strength of TST with the hierarchical feature representation of DBN, while an improved orchard algorithm (IOA) performs adaptive hyper-parameter optimization. The framework also embodies the concept of symmetry and asymmetry. The IOA introduces controlled symmetry-breaking between exploration and exploitation, while the TST captures symmetric temporal patterns in network traffic whose asymmetric deviations often indicate anomalies. The proposed method is evaluated across four benchmark datasets (ToN-IoT, 5G-NIDD, CICDDoS2019, and Edge-IoTset) that capture diverse network environments, including 5G core traffic, IoT telemetry, mobile edge computing, and DDoS attacks. Experimental evaluation is conducted by benchmarking HTST-DBN against several state-of-the-art models, including TST, bidirectional encoder representations from transformers (BERT), DBN, deep reinforcement learning (DRL), convolutional neural network (CNN), and random forest (RF) classifiers. The proposed HTST-DBN achieves outstanding performance, with the highest accuracy reaching 99.61%, alongside strong recall and area under the curve (AUC) scores. The HTST-DBN framework presents a scalable and reliable solution for anomaly detection in next-generation mobile networks. Its hybrid architecture, reinforced by hyper-parameter optimization, enables effective learning in complex, dynamic, and heterogeneous environments, making it suitable for real-world deployment in future 5G/6G infrastructures. Full article
(This article belongs to the Special Issue AI-Driven Optimization for EDA: Balancing Symmetry and Asymmetry)
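The abstract describes the hybrid architecture only at a high level. The PyTorch sketch below is a rough structural stand-in, not the authors' HTST-DBN: a small transformer encoder summarizes a window of traffic features and a plain stacked feed-forward head replaces the DBN stage; the IOA hyper-parameter search and the benchmark datasets are omitted, and all layer sizes are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class TinyHybridDetector(nn.Module):
    """Toy transformer-plus-stacked-head anomaly scorer (illustrative only)."""

    def __init__(self, n_features: int, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # a plain feed-forward stack stands in for the DBN feature hierarchy
        self.head = nn.Sequential(nn.Linear(d_model, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features) window of traffic statistics
        h = self.encoder(self.proj(x))
        return torch.sigmoid(self.head(h.mean(dim=1)))   # anomaly probability

model = TinyHybridDetector(n_features=8)
scores = model(torch.randn(4, 50, 8))    # 4 windows of 50 time steps each
print(scores.shape)                      # torch.Size([4, 1])
```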

19 pages, 1073 KB  
Article
Material Degradation and NaTech Risk: A Case Study to Discuss Bidirectional Vulnerability in Industrial Systems
by Morena Vitale, Micaela Demichela and Antonello A. Barresi
Appl. Sci. 2025, 15(21), 11361; https://doi.org/10.3390/app152111361 - 23 Oct 2025
Viewed by 168
Abstract
Material degradation is a critical factor in assessing the vulnerability of industrial infrastructure, particularly in the presence of extreme natural events. This study shows that the relation between material degradation and NaTech risk is both bidirectional and systemic, with important implications for industrial safety. Through the analysis of an emblematic case study, it was demonstrated that latent defects, originating during the construction phase, can remain silent for decades and manifest critically under the action of extreme natural events. The objective is to provide a useful methodological tool for the early diagnosis of systemic risk conditions and for planning preventive and resilient strategies. The proposed approach overcomes the traditional separation between degradation analysis and environmental risk assessment, promoting a holistic and adaptive view of vulnerability. Specifically, integrating the concept of structural obsolescence into NaTech risk models allows for a more realistic representation of systemic exposure and supports the planning of more effective prevention strategies. The case study analysis highlights the interaction between latent structural defects and environmental stresses, offering insights for interpreting vulnerability in complex and multifactorial scenarios. The outcome provides perspectives for the integration of quantitative indicators into NaTech risk models. Full article
(This article belongs to the Special Issue Safety and Risk Assessment in Industrial Systems)

23 pages, 16607 KB  
Article
Few-Shot Class-Incremental SAR Target Recognition with a Forward-Compatible Prototype Classifier
by Dongdong Guan, Rui Feng, Yuzhen Xie, Xiaolong Zheng, Bangjie Li and Deliang Xiang
Remote Sens. 2025, 17(21), 3518; https://doi.org/10.3390/rs17213518 - 23 Oct 2025
Viewed by 265
Abstract
In practical Synthetic Aperture Radar (SAR) applications, new-class objects can appear at any time with the rapid accumulation of large-scale, high-quantity SAR imagery, and are usually supported by only limited instances in most cooperative scenarios. Hence, it is important to equip advanced deep-learning (DL)-based SAR Automatic Target Recognition (SAR ATR) systems with the ability to continuously learn new concepts from few-shot samples without forgetting the old ones. In this paper, we tackle the Few-Shot Class-Incremental Learning (FSCIL) problem in the SAR ATR field and propose a Forward-Compatible Prototype Classifier (FCPC) by emphasizing the model’s forward compatibility with incoming targets before and after deployment. Specifically, the classifier’s sensitivity to diversified cues of emerging targets is improved in advance by a Virtual-class Semantic Synthesizer (VSS), considering the class-agnostic scattering parts of targets in SAR imagery and semantic patterns of the DL paradigm. After deploying the classifier in dynamic worlds, since novel target patterns from few-shot samples are highly biased and unstable, the model’s representability of general patterns and its adaptability to class-discriminative ones are balanced by a Decoupled Margin Adaptation (DMA) strategy, in which only the model’s high-level semantic parameters are tuned in a timely manner by improving the similarity of few-shot boundary samples to class prototypes and the dissimilarity to interclass ones. For inference, a Nearest-Class-Mean (NCM) classifier is adopted for prediction by comparing the semantics of unknown targets with prototypes of all classes based on the cosine criterion. In experiments, contributions of the proposed modules are verified by ablation studies, and our method achieves considerable performance on three SAR ATR FSCIL datasets, i.e., SAR-AIRcraft-FSCIL, MSTAR-FSCIL, and FUSAR-FSCIL, compared with numerous benchmarks, demonstrating its superiority and effectiveness in dealing with FSCIL for SAR ATR. Full article
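For readers unfamiliar with the inference step named above, here is a minimal NumPy sketch of a Nearest-Class-Mean classifier with the cosine criterion; the embedding dimension, prototype values, and function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ncm_predict(features: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Assign each embedding to the class prototype with the highest cosine similarity.

    features:   (n_samples, d) embeddings of unknown targets
    prototypes: (n_classes, d) per-class mean embeddings (base and incremental classes)
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    cosine = f @ p.T                         # (n_samples, n_classes)
    return cosine.argmax(axis=1)

# toy usage: 3 class prototypes in a 4-d embedding space, 2 query targets
protos = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 1.]])
queries = np.array([[0.9, 0.1, 0., 0.], [0., 0.2, 0.8, 0.7]])
print(ncm_predict(queries, protos))          # -> [0 2]
```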

28 pages, 1211 KB  
Article
Information-Theoretic Reliability Analysis of Consecutive r-out-of-n:G Systems via Residual Extropy
by Anfal A. Alqefari, Ghadah Alomani, Faten Alrewely and Mohamed Kayid
Entropy 2025, 27(11), 1090; https://doi.org/10.3390/e27111090 - 22 Oct 2025
Viewed by 194
Abstract
This paper develops an information-theoretic reliability inference framework for consecutive r-out-of-n:G systems by employing the concept of residual extropy, a dual measure to entropy. Explicit analytical representations are established in tractable cases, while novel bounds are derived for more complex lifetime models, providing effective tools when closed-form expressions are unavailable. Preservation properties under classical stochastic orders and aging notions are examined, together with monotonicity and characterization results that offer deeper insights into system uncertainty. A conditional formulation, in which all components are assumed operational at a given time, is also investigated, yielding new theoretical findings. From an inferential perspective, we propose a maximum likelihood estimator of residual extropy under exponential lifetimes, supported by simulation studies and real-world reliability data. These contributions highlight residual extropy as a powerful information-theoretic tool for modeling, estimation, and decision-making in multicomponent reliability systems, thereby aligning with the objectives of statistical inference through entropy-like measures. Full article
(This article belongs to the Special Issue Recent Progress in Uncertainty Measures)
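As background for the measure named above, a standard form of residual extropy (the extropy of the remaining lifetime beyond time t) and its exponential special case are shown below; this is the generic definition, not the paper's expression for consecutive r-out-of-n:G systems.

```latex
% Extropy of a nonnegative lifetime X with density f and survival function \bar F,
% and its residual version at time t:
\[
J(X) = -\frac{1}{2}\int_{0}^{\infty} f^{2}(x)\,dx,
\qquad
J(X;t) = -\frac{1}{2\,\bar F(t)^{2}}\int_{t}^{\infty} f^{2}(x)\,dx .
\]
% For an exponential lifetime with rate \lambda (the estimation setting mentioned
% in the abstract), the residual extropy is constant in t:
\[
J(X;t) = -\frac{1}{2\,e^{-2\lambda t}}\int_{t}^{\infty}\lambda^{2} e^{-2\lambda x}\,dx
       = -\frac{\lambda}{4}.
\]
```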

41 pages, 1736 KB  
Review
A Review of an Ontology-Based Digital Twin to Enable Condition-Based Maintenance for Aircraft Operations
by Darren B. Macer, Ian K. Jennions and Nicolas P. Avdelidis
Appl. Sci. 2025, 15(20), 11136; https://doi.org/10.3390/app152011136 - 17 Oct 2025
Viewed by 298
Abstract
The concept of digital twins has been studied for over two decades and the core tenet lies in it being a “digital representation of a connected physical object”. Utilization of digital twins promises to enable superior decision-making, enhanced operational understanding and future predictions to enable levels of Condition Based Maintenance (CBM) through Integrated Vehicle Health Management (IVHM) which exceeds existing capabilities. Digital twins are being embraced by many industries, including aviation, and are often depicted as electronic images of an asset of interest. However, in a less visually appealing manner, they can also be described simply as a collection of data in an organized and easily accessible format from across the lifecycle which describes a feature that addresses a specific use case. This review demonstrates how the creation and maintenance of digital twins will play a critical role in enhancing IVHM to enable CBM within the aerospace industry. Through a literature review, this paper demonstrates the need for digital twins, of a sufficient level of fidelity, to facilitate the transition to being condition based through deeper levels of operational and component understanding. It emphasizes how detailed knowledge, represented through ontologies, regarding component design, manufacturing, and operational history aid in achieving the desired fidelity levels. By synthesizing insights from various industries with a focus on aerospace applications, this paper aims to provide a comprehensive understanding, focused on the aviation industry, of digital twin definitions, their creation processes, fidelity measurement, and their implications for CBM, while acknowledging the limitations of the current research landscape. Full article

17 pages, 1416 KB  
Article
Visual Multiplication Through Stick Intersections: Enhancing South African Elementary Learners’ Mathematical Understanding
by Terungwa James Age and Masilo France Machaba
Educ. Sci. 2025, 15(10), 1383; https://doi.org/10.3390/educsci15101383 - 16 Oct 2025
Viewed by 292
Abstract
This paper presents a novel visual approach to teaching multiplication to elementary school pupils using stick intersections. Within the South African context, where students consistently demonstrate low mathematics achievement, particularly in foundational arithmetic operations, this research explores an alternative pedagogical strategy that transforms abstract multiplication concepts into visual, concrete, countable representations. Building on theories of embodied cognition and visual mathematics, this study implemented and evaluated the stick intersection method with 45 Grade 4 students in Polokwane, Limpopo Province. Using a mixed-methods approach combining quantitative assessments with qualitative observations, the results revealed statistically significant improvements in multiplication performance across all complexity levels, with particularly substantial gains among previously low-performing students (61.3% improvement, d = 1.87). Qualitative findings demonstrated enhanced student engagement, deeper conceptual understanding of place value, and overwhelmingly positive learner perceptions of the method. The visual approach proved especially valuable in the multilingual South African classroom context, where it transcended language barriers by providing direct visual access to mathematical concepts. High retention rates (94.9%) one-month post-intervention suggest the method facilitated lasting conceptual understanding rather than temporary procedural knowledge. This research contributes to mathematics education by demonstrating how visually oriented, culturally responsive pedagogical approaches can address persistent challenges in developing mathematics proficiency, particularly in resource-constrained educational environments. Full article
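To make the intersection-counting idea concrete, here is a small Python sketch of the general method (an illustration, not the classroom materials used in the study): each digit becomes a group of parallel sticks, intersections on the same diagonal share a place value, and carrying recovers the product.

```python
def stick_multiply(a: int, b: int) -> int:
    """Multiply two positive integers by counting stick intersections.

    Each digit of `a` is drawn as that many parallel sticks in one direction and
    each digit of `b` as sticks crossing them; intersections lying on the same
    diagonal share a place value, just like column-wise partial products.
    """
    digits_a = [int(d) for d in str(a)]        # most significant digit first
    digits_b = [int(d) for d in str(b)]
    diagonals = [0] * (len(digits_a) + len(digits_b) - 1)
    for i, x in enumerate(digits_a):
        for j, y in enumerate(digits_b):
            diagonals[i + j] += x * y          # intersections in this diagonal
    total, place = 0, 1
    for count in reversed(diagonals):          # least significant diagonal first
        total += count * place
        place *= 10
    return total

# 12 x 23: the diagonals hold 2, 7 and 6 intersections -> 2*100 + 7*10 + 6 = 276
assert stick_multiply(12, 23) == 276
```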

27 pages, 5279 KB  
Article
Concept-Guided Exploration: Building Persistent, Actionable Scene Graphs
by Noé José Zapata Cornejo, Gerardo Pérez, Alejandro Torrejón, Pedro Núñez and Pablo Bustos
Appl. Sci. 2025, 15(20), 11084; https://doi.org/10.3390/app152011084 - 16 Oct 2025
Viewed by 272
Abstract
The perception of 3D space by mobile robots is rapidly moving from flat metric grid representations to hybrid metric-semantic graphs built from human-interpretable concepts. While most approaches first build metric maps and then add semantic layers, we explore an alternative, concept-first architecture in which spatial understanding emerges from asynchronous concept agents that directly instantiate and manage semantic entities. Our robot employs two spatial concepts—room and door—implemented as autonomous processes within a cognitive distributed architecture. These concept agents cooperatively build a shared scene graph representation of indoor layouts through active exploration and incremental validation. The key architectural principle is hierarchical constraint propagation: Room instantiation provides geometric and semantic priors to guide and support door detection within wall boundaries. The resulting structure is maintained by a complementary functional principle based on prediction-matching loops. This approach is designed to yield an actionable, human-interpretable spatial representation without relying on any pre-existing global metric map, supporting scalable operation and persistent, task-relevant understanding in structured indoor environments. Full article
(This article belongs to the Special Issue Advances in Cognitive Robotics and Control)
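As a toy illustration of the kind of scene graph the abstract describes, the sketch below stores rooms and doors as typed nodes and only lets a door attach to an already-instantiated room, mirroring the room-before-door constraint propagation; all identifiers and attribute names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    node_type: str                               # e.g., "room" or "door"
    attributes: dict = field(default_factory=dict)

@dataclass
class SceneGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)    # (parent_id, child_id, relation)

    def add_room(self, room: Node) -> None:
        self.nodes[room.node_id] = room

    def add_door(self, door: Node, room_id: str) -> None:
        # a door is only instantiated inside an already-validated room, which is
        # the hierarchical prior the abstract describes
        if room_id not in self.nodes:
            raise ValueError(f"room {room_id!r} must exist before adding a door")
        self.nodes[door.node_id] = door
        self.edges.append((room_id, door.node_id, "has_door"))

graph = SceneGraph()
graph.add_room(Node("room_1", "room", {"corners": [(0, 0), (5, 0), (5, 4), (0, 4)]}))
graph.add_door(Node("door_1", "door", {"wall": "south", "width_m": 0.9}), "room_1")
print(graph.edges)                               # [('room_1', 'door_1', 'has_door')]
```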

28 pages, 869 KB  
Article
Local Fractional Perspective on Weddle’s Inequality in Fractal Space
by Yuanheng Wang, Usama Asif, Muhammad Uzair Awan, Muhammad Zakria Javed, Awais Gul Khan, Mona Bin-Asfour and Kholoud Saad Albalawi
Fractal Fract. 2025, 9(10), 662; https://doi.org/10.3390/fractalfract9100662 - 14 Oct 2025
Viewed by 234
Abstract
The Yang local fractional setting provides the generalized framework to explore the non-differentiable mappings considering the local properties. Due to the dominance of these concepts, mathematicians have investigated multiple problems, including mathematical modelling, optimization, and inequalities. Incorporating these useful concepts, this study aims to derive Weddle-type integral inequalities within the context of fractal space. To achieve the intended results, we establish a new local fractional identity. By using this identity, the convexity property, the bounded property of mappings, the L-Lipschitzian property of mappings, and other famous inequalities, we develop numerous upper bounds. Additionally, we provide 2D and 3D graphical representations and numerous applications, which show the significance of our main findings. To the best of our knowledge, this is the first study concerning error inequalities of Weddle’s quadrature formulation within the fractal space. Full article
(This article belongs to the Special Issue Advances in Fractional Integral Inequalities: Theory and Applications)
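For context, the classical Weddle quadrature rule that these error inequalities generalize is shown below for an interval [a, b] with step h = (b - a)/6; the paper's results replace the ordinary integral with Yang's local fractional integral on fractal sets, which is not reproduced here.

```latex
% Classical Weddle rule with nodes x_k = a + k h and h = (b - a)/6:
\[
\int_{a}^{b} f(x)\,dx \;\approx\; \frac{3h}{10}\Bigl[f(x_0) + 5f(x_1) + f(x_2)
    + 6f(x_3) + f(x_4) + 5f(x_5) + f(x_6)\Bigr].
\]
```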

11 pages, 645 KB  
Perspective
Applying Race and Ethnicity in Health Disparities Research
by Keith C. Norris, Matthew F. Hudson, M. Roy Wilson, Genevieve L. Wojcik, Elizabeth O. Ofili and Jerris R. Hedges
Int. J. Environ. Res. Public Health 2025, 22(10), 1561; https://doi.org/10.3390/ijerph22101561 - 14 Oct 2025
Viewed by 321
Abstract
Health professionals commonly reference race and ethnicity to inform health care and administrative decisions. However, health researchers (and, arguably, society at large) misapply race and ethnicity when assuming an inherent relationship of these concepts with biological and health outcomes of interest. Misapplication of race potentially results from socially embedded identification predicated upon race essentialism, the belief that people within a racial group share “inherent” biological traits that define them as distinct from other racial groups. This false belief is often associated with implied racial hierarchies obscuring authentic causal disease relationships. Similarly, ethnicity is a socially and politically constructed group descriptor for people from a similar national or regional background who may share common cultural, historical, and social experiences. Thus, as for race, no inherent biological information is contained within such group definitions. This article summarizes the Research Centers for Minority Institutions (RCMI) 2025 Annual Grantee Meeting keynote session on Race and Ethnicity in Medicine. The session described how society originated and subsequently applied/misapplied race and ethnicity in various domains of policy and public health. The keynote session also considered the use of race and ethnicity in describing and envisioning biomedical research, clinical trials, clinical practice, and health services research. The authors summarize a more tenable use of race and ethnicity to advance biomedical research and health by focusing upon social and environmental drivers of health, population representation in clinical trials, and other factors. Associated recommendations from the keynote session are provided. Full article

26 pages, 4555 KB  
Article
Modeling the Mutual Dynamic Correlations of Words in Written Texts Using Multivariate Hawkes Processes
by Hiroshi Ogura, Yasutaka Hanada, Keitaro Osakabe and Masato Kondo
J 2025, 8(4), 40; https://doi.org/10.3390/j8040040 - 14 Oct 2025
Viewed by 269
Abstract
The occurrence patterns of important words found in six texts (one historical pamphlet and five renowned academic books) are analyzed using both univariate and multivariate Hawkes processes. By treating the occurrence patterns as binary time-series data along the texts, we investigate how effectively univariate and multivariate Hawkes processes capture the characteristics of these word occurrence signals. Through maximum likelihood estimation and subsequent simulations, we found that the multivariate Hawkes process clearly outperforms the univariate Hawkes process in modeling word occurrence signals. Moreover, we found that the multivariate Hawkes process can provide a Hawkes graph, which serves as an intuitive representation of the relationships between concepts appearing in the analyzed text. Furthermore, our study demonstrates that the importance of concepts within a given text can be quantitatively estimated based on the optimized parameter values of the multivariate Hawkes process. Full article
(This article belongs to the Special Issue Feature Papers of J—Multidisciplinary Scientific Journal in 2025)
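As a reminder of the model class used above, the conditional intensity of a multivariate Hawkes process with exponential excitation kernels can be written as follows, where past occurrences of word j raise the occurrence rate of word i; the exact kernel and parameterization chosen by the authors may differ.

```latex
% Conditional intensity of component i in a D-variate Hawkes process;
% t_k^{(j)} denotes the k-th past event time of type j:
\[
\lambda_i(t) \;=\; \mu_i \;+\; \sum_{j=1}^{D} \sum_{t_k^{(j)} < t}
    \alpha_{ij}\, e^{-\beta_{ij}\bigl(t - t_k^{(j)}\bigr)},
\qquad \mu_i,\ \alpha_{ij},\ \beta_{ij} \ge 0 .
\]
```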

18 pages, 686 KB  
Article
Towards Evolving Actor–Network Ontologies: Enabling Reflexive Digital Twins for Cultural Heritage
by George Pavlidis, Vasileios Arampatzakis, Vasileios Sevetlidis, Anestis Koutsoudis, Fotis Arnaoutoglou, George Alexis Ioannakis and Chairi Kiourt
Information 2025, 16(10), 892; https://doi.org/10.3390/info16100892 - 13 Oct 2025
Viewed by 696
Abstract
This paper introduces the concept of evolving actor–network ontologies (EANO) as a new paradigm for cultural digital twins. Building on actor–network theory, EANO reframes ontologies from static representations into reflexive, dynamic structures in which semantic interpretations are continuously negotiated among heterogeneous actors. We propose a five-layer architecture that operationalizes this principle, embedding reflexivity, actor salience, and systemic parameters such as resistance and volatility directly into the ontological model. To illustrate this approach, we present minimal simulations that demonstrate how different actor constellations and systemic conditions lead to distinct patterns of semantic evolution, ranging from expert erosion to contested equilibria and balanced coexistence. Rather than serving as predictive models, these simulations exemplify how EANO captures semantic plurality and contestation within a transparent and interpretable framework. The contribution of this work is thus twofold: it provides a conceptual foundation for evolving ontologies in digital heritage and a lightweight demonstration of how such models can be instantiated and explored computationally. Full article
(This article belongs to the Special Issue Intelligent Interaction in Cultural Heritage)

20 pages, 304 KB  
Article
Investigating Popular Representations of Postmodernism as Beliefs—A Psychological Analysis and Empirical Verification
by Ryszard Klamut and Andrzej Sołtys
Religions 2025, 16(10), 1288; https://doi.org/10.3390/rel16101288 - 10 Oct 2025
Viewed by 302
Abstract
This article is an attempt to empirically establish a new category of social beliefs defined as postmodern beliefs. They are cognitive categorizations of social and media messages regarding ways of understanding the world that are based on the basic assumptions of postmodernism, which are quite widely recognised as fundamental. The theoretical model adopted in the article assumes the existence of three beliefs: antifundamentalism, absolutization of freedom, and relativization of truth. The hypothesised concept was operationalized as the Postmodern Beliefs Questionnaire (PMBQ). Verification studies were carried out on three groups of over 600 people. The tool was verified by using exploratory factor analysis (EFA) to select the appropriate pool of statements; data in the two subsequent datasets were then analysed using Confirmatory Factor Analysis (CFA) to empirically verify the selected set of statements and estimate the relevant parameters. The constructed tool allows for investigating the distinguished beliefs at a satisfactory level of reliability and validity. It can be used to measure the extent to which the representations that make up the popular understanding of postmodernism have been recognised and built into the respondents’ overall belief system about the world. The distinguished postmodern beliefs differ in terms of their relations with other social beliefs of the respondents, such as anthropocentrism, traditionalism, faith in a just world, as well as the attitude of individuals to material values or their individualistic orientation. Full article
30 pages, 1778 KB  
Article
AI, Ethics, and Cognitive Bias: An LLM-Based Synthetic Simulation for Education and Research
by Ana Luize Bertoncini, Raul Matsushita and Sergio Da Silva
AI Educ. 2026, 1(1), 3; https://doi.org/10.3390/aieduc1010003 - 4 Oct 2025
Viewed by 1427
Abstract
This study examines how cognitive biases may shape ethical decision-making in AI-mediated environments, particularly within education and research. As AI tools increasingly influence human judgment, biases such as normalization, complacency, rationalization, and authority bias can lead to ethical lapses, including academic misconduct, uncritical reliance on AI-generated content, and acceptance of misinformation. To explore these dynamics, we developed an LLM-generated synthetic behavior estimation framework that modeled six decision-making scenarios with probabilistic representations of key cognitive biases. The scenarios addressed issues ranging from loss of human agency to biased evaluations and homogenization of thought. Statistical summaries of the synthetic dataset indicated that 71% of agents engaged in unethical behavior influenced by biases like normalization and complacency, 78% relied on AI outputs without scrutiny due to automation and authority biases, and misinformation was accepted in 65% of cases, largely driven by projection and authority biases. These statistics are descriptive of this synthetic dataset only and are not intended as inferential claims about real-world populations. The findings nevertheless suggest the potential value of targeted interventions—such as AI literacy programs, systematic bias audits, and equitable access to AI tools—to promote responsible AI use. As a proof-of-concept, the framework offers controlled exploratory insights, but all reported outcomes reflect text-based pattern generation by an LLM rather than observed human behavior. Future research should validate and extend these findings with longitudinal and field data. Full article
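The Python toy below shows one way a probabilistic bias representation of this kind can be set up: each active bias adds to the chance that a synthetic agent acts unethically in a given round. The bias names echo the abstract, but the weights, base rate, and function are invented for illustration and do not reproduce the paper's six scenarios or its reported percentages.

```python
import random

random.seed(7)

# Hypothetical additive weights per cognitive bias (not the paper's values).
BIAS_WEIGHTS = {
    "normalization": 0.25,
    "complacency": 0.20,
    "rationalization": 0.15,
    "authority": 0.30,
}

def acts_unethically(active_biases, base_rate=0.10):
    """Return True if a synthetic agent crosses an ethical line this round."""
    p = base_rate + sum(BIAS_WEIGHTS[b] for b in active_biases)
    return random.random() < min(p, 0.95)

trials = [acts_unethically({"normalization", "complacency"}) for _ in range(10_000)]
print(f"unethical share: {sum(trials) / len(trials):.2%}")   # roughly 55% here
```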

16 pages, 319 KB  
Article
Fuzzy Graphic Binary Matroid Approach to Power Grid Communication Network Analysis
by Jing Li, Buvaneswari Rangasamy, Saranya Shanmugavel and Aysha Khan
Symmetry 2025, 17(10), 1628; https://doi.org/10.3390/sym17101628 - 2 Oct 2025
Viewed by 278
Abstract
A matroid is a mathematical structure that extends the concept of independence. The fuzzy graphic binary matroid serves as a generalization of linear dependence, and its properties are examined. Power grid networks, which manage the generation, transmission, and distribution of electrical energy from power plants to consumers, are inherently complex systems. A key objective in analyzing these networks is to ensure a reliable and uninterrupted supply of electricity. However, several critical issues must be addressed, including uncertainty in communication links, detection of redundant or sensitive circuits, evaluation of network resilience under partial failures, and optimization of reliability in interconnected network systems. To support this goal, the concept of a fuzzy graphic binary matroid is applied in the analysis of power grid communication networks, offering a framework that not only incorporates fuzziness and binary conditions but also enables systematic identification of weak circuits, redundancy planning, and reliability enhancement. This approach provides a more realistic representation of operational conditions, ensuring better fault tolerance and improved efficiency of the grid. Full article
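For readers new to the terminology, the independence axioms of a classical matroid (ground set E, family of independent sets I), which fuzzy matroids generalize by attaching membership grades, read in standard notation as follows.

```latex
% Independence axioms of a classical matroid (E, \mathcal{I}):
\[
\text{(I1)}\ \ \emptyset \in \mathcal{I};\qquad
\text{(I2)}\ \ B \subseteq A \in \mathcal{I} \ \Rightarrow\ B \in \mathcal{I};\qquad
\text{(I3)}\ \ A, B \in \mathcal{I},\ |A| > |B| \ \Rightarrow\
    \exists\, x \in A \setminus B:\ B \cup \{x\} \in \mathcal{I}.
\]
```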
