Search Results (369)

Search Parameters:
Keywords = digital interpretation systems

20 pages, 8763 KiB  
Article
An Integrated Approach to Real-Time 3D Sensor Data Visualization for Digital Twin Applications
by Hyungki Kim and Hyowon Suh
Electronics 2025, 14(15), 2938; https://doi.org/10.3390/electronics14152938 - 23 Jul 2025
Abstract
Digital twin technology is emerging as a core technology that models physical objects or systems in a digital space and links real-time data to accurately reflect the state and behavior of the real world. Effective operation of such digital twins requires high-performance visualization methods that support an intuitive understanding of the vast amounts of data collected from sensors and enable rapid decision-making. The proposed system is designed as a balanced 3D monitoring solution that prioritizes intuitive, real-time state observation. Conventional 3D-simulation-based systems, while offering high physical fidelity, are often unsuitable for real-time monitoring due to their significant computational cost. Conversely, 2D-based systems are useful for detailed analysis but struggle to provide an intuitive, holistic understanding of multiple assets within a spatial context. This study introduces a visualization approach that bridges this gap. By leveraging sensor data, our method generates a physically plausible representation on 3D CAD models, enabling at-a-glance comprehension in a visual format reminiscent of simulation analysis, without claiming equivalent physical accuracy. The proposed method includes GPU-accelerated interpolation, user-selectable geodesic or Euclidean distance calculations, automatic resolution of CAD model connectivity issues, integration of Physically Based Rendering (PBR), and enhanced data interpretability through ramp shading. The system was implemented in the Unity3D environment. Experiments confirmed that it maintains high real-time performance, achieving tens to hundreds of frames per second (FPS), even with complex 3D models and numerous sensor data streams. Moreover, applying geodesic distance yielded a more intuitive representation of surface-based phenomena, while PBR integration significantly enhanced visual realism, enabling more effective analysis and utilization of sensor data in digital twin environments.
(This article belongs to the Section Computer Science & Engineering)
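To make the abstract's interpolation step concrete, here is a minimal NumPy sketch of inverse-distance weighting of sparse sensor readings across mesh vertices, with the distance matrix supplied as either Euclidean or geodesic distances. This illustrates the general technique only; it is not the authors' Unity3D/GPU implementation, and all names and values are hypothetical.

```python
# Hedged sketch (not the authors' Unity3D/GPU code): inverse-distance
# weighting of sparse sensor readings over mesh vertices, with a
# user-selectable distance metric as the abstract describes. Geodesic
# distances would be precomputed, e.g. by Dijkstra over the mesh graph.
import numpy as np

def idw_field(vertex_dists, sensor_values, power=2.0, eps=1e-9):
    """vertex_dists: (n_vertices, n_sensors) distances, Euclidean or
    geodesic. Returns one interpolated value per vertex."""
    w = 1.0 / (vertex_dists ** power + eps)          # closer sensors dominate
    return (w * sensor_values).sum(axis=1) / w.sum(axis=1)

# Toy example: 4 vertices, 2 sensors reading 20.0 and 80.0 (e.g. degrees C).
dists = np.array([[0.1, 2.0],
                  [0.5, 1.5],
                  [1.5, 0.5],
                  [2.0, 0.1]])
print(idw_field(dists, np.array([20.0, 80.0])))  # smooth 20 -> 80 gradient
```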

19 pages, 313 KiB  
Article
Survey on the Role of Mechanistic Interpretability in Generative AI
by Leonardo Ranaldi
Big Data Cogn. Comput. 2025, 9(8), 193; https://doi.org/10.3390/bdcc9080193 - 23 Jul 2025
Abstract
The rapid advancement of artificial intelligence (AI) and machine learning has revolutionised how systems process information, make decisions, and adapt to dynamic environments. AI-driven approaches have significantly enhanced efficiency and problem-solving capabilities across various domains, from automated decision-making to knowledge representation and predictive modelling. These developments have led to the emergence of increasingly sophisticated models capable of learning patterns, reasoning over complex data structures, and generalising across tasks. As AI systems become more deeply integrated into networked infrastructures and the Internet of Things (IoT), their ability to process and interpret data in real time is essential for optimising intelligent communication networks, distributed decision making, and autonomous IoT systems. However, despite these achievements, the internal mechanisms that drive large language models’ (LLMs’) reasoning and generalisation capabilities remain largely unexplored. This lack of transparency, compounded by challenges such as hallucinations, adversarial perturbations, and misaligned human expectations, raises concerns about their safe and beneficial deployment. Understanding the underlying principles governing AI models is crucial for their integration into intelligent network systems, automated decision-making processes, and secure digital infrastructures. This paper provides a comprehensive analysis of explainability approaches aimed at uncovering the fundamental mechanisms of LLMs. We investigate the strategic components contributing to their generalisation abilities, focusing on methods to quantify acquired knowledge and assess its representation within model parameters. Specifically, we examine mechanistic interpretability, probing techniques, and representation engineering as tools to decipher how knowledge is structured, encoded, and retrieved in AI systems. Furthermore, by adopting a mechanistic perspective, we analyse emergent phenomena within training dynamics, particularly memorisation and generalisation, which also play a crucial role in broader AI-driven systems, including adaptive network intelligence, edge computing, and real-time decision-making architectures. Understanding these principles is crucial for bridging the gap between black-box AI models and practical, explainable AI applications, thereby ensuring trust, robustness, and efficiency in language-based and general AI systems.
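As a concrete illustration of one tool the survey covers, probing, here is a hedged sketch of a linear probe over stand-in hidden states; the arrays are synthetic placeholders, not activations from any model the paper analyses.

```python
# Minimal linear-probe sketch: test whether a property (here a binary
# label) is linearly decodable from a model's hidden states. The hidden
# states below are stand-in random arrays; in practice they would be
# extracted from an LLM layer. High held-out accuracy suggests the layer
# encodes the property; it does not prove the model uses it causally.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
H = rng.normal(size=(1000, 768))          # (examples, hidden_dim), stand-in
y = (H[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)  # toy property

H_tr, H_te, y_tr, y_te = train_test_split(H, y, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(H_tr, y_tr)
print("probe accuracy:", probe.score(H_te, y_te))
```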

33 pages, 9781 KiB  
Article
Spatial Narrative Optimization in Digitally Gamified Architectural Scenarios
by Deshao Wang, Jieqing Xu and Luwang Chen
Buildings 2025, 15(15), 2597; https://doi.org/10.3390/buildings15152597 - 23 Jul 2025
Abstract
Exploring digital immersive experiences is a new trend in the innovation and development of cultural tourism. This study addresses the growing demand for digital immersion in cultural tourism by examining the integration of spatial narrative and digitally gamified architectural scenarios, and it synthesizes an optimized framework for narrative design in such scenarios, integrating spatial narrative theory and feedback-informed design. The proposed model comprises four key components: (1) developing spatial narrative design methods for such scenarios; (2) constructing a spatial language system for spatial narratives using linguistic principles to organize narrative expression; (3) building a preliminary digitally gamified scenario based on the “Wuhu Jiaoji Temple Renovation Project” after architectural and environmental enhancements; and (4) optimization through thermal feedback experiments—collecting visitor trajectory heatmaps, eye-tracking heatmaps, and oculometric data. The results show that the optimized design, validated in the original game Dreams of Jiaoji, effectively enhanced spatial narrative execution by refining both on-site and in-game architectural scenarios. Post-optimization visitor feedback confirmed the validity of the proposed optimization strategies and principles, providing theoretical and practical references for innovative digital cultural tourism models and architectural design advancements. In the context of site-specific architectural conservation, this approach achieves two key objectives: the generalized interpretation of architectural cultural resources and their visual representation through gamified interactions. This paradigm not only enhances public engagement by enabling a multidimensional understanding of historical building cultures but also accelerates the protective reuse of heritage sites, allowing heritage value to be maximized through contemporary reinterpretation. The interdisciplinary methodology promotes sustainable development in the digital transformation of cultural tourism, fostering user-centered experiences and contributing to rural revitalization. Ultimately, this study highlights the potential of digitally gamified architectural scenarios as transformative tools for heritage preservation, cultural dissemination, and rural community revitalization.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

18 pages, 246 KiB  
Article
Adaptive Epistemology: Embracing Generative AI as a Paradigm Shift in Social Science
by Gabriella Punziano
Societies 2025, 15(7), 205; https://doi.org/10.3390/soc15070205 - 21 Jul 2025
Abstract
This paper examines the epistemological transformation prompted by the integration of generative artificial intelligence technologies into social science research, proposing the “adaptive epistemology” paradigm. In today’s post-digital era—characterized by pervasive infrastructures and non-human agents endowed with generative capabilities—traditional research approaches have become inadequate. Through a critical review of historical and discursive paradigms (positivism, interpretivism, critical realism, pragmatism, transformative paradigms, mixed and digital methods), I show how the advent of digital platforms and large language models reconfigures the boundaries between data collection, analysis, and interpretation. Employing a theoretical–conceptual framework that draws on sociotechnical systems theory, platform studies, and the philosophy of action, the core features of adaptive epistemology are identified: dynamism, co-construction of meaning between researcher and system, and the capacity to generate methodological solutions in response to rapidly evolving contexts. The findings demonstrate the need for an adaptive epistemology that can offer a robust theoretical and methodological framework for guiding social science research in the post-digital society, emphasizing flexibility, reflexivity, and ethical sensitivity in the deployment of generative tools.
26 pages, 23038 KiB  
Article
Geometry and Kinematics of the North Karlik Tagh Fault: Implications for the Transpressional Tectonics of Easternmost Tian Shan
by Guangxue Ren, Chuanyou Li, Chuanyong Wu, Kai Sun, Quanxing Luo, Xuanyu Zhang and Bowen Zou
Remote Sens. 2025, 17(14), 2498; https://doi.org/10.3390/rs17142498 - 18 Jul 2025
Abstract
Quantifying the slip rate along geometrically complex strike-slip faults is essential for understanding kinematics and strain partitioning in orogenic systems. The Karlik Tagh forms the easternmost terminus of Tian Shan and represents a critical restraining bend along the sinistral strike-slip Gobi-Tian Shan Fault System. The North Karlik Tagh Fault (NKTF) is an important fault demarcating the northern boundary of the Karlik Tagh. While structurally significant, its late Quaternary tectonic activity is poorly understood. In this study, we analyze the offset geomorphology based on interpretations of satellite imagery, field surveys, and digital elevation models derived from structure-from-motion (SfM), and we provide the first quantitative constraints on the late Quaternary slip rate using the abandonment ages of deformed fan surfaces and river terraces constrained by the ¹⁰Be cosmogenic dating method. Our results reveal that the NKTF can be divided into the Yanchi and Xiamaya segments based on along-strike variations. The NW-striking Yanchi segment exhibits thrust faulting with a vertical slip rate of 0.07–0.09 mm/yr, while the NE-NEE-striking Xiamaya segment displays left-lateral slip at 1.1–1.4 mm/yr since 180 ka. In easternmost Tian Shan, the interaction between thrust and sinistral strike-slip faults forms a transpressional regime. These left-lateral faults, together with those in the Gobi Altai, collectively facilitate eastward crustal escape in response to ongoing Indian indentation.
(This article belongs to the Section Environmental Remote Sensing)
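The slip-rate figures follow from the basic relation rate = offset / surface age. A small back-of-envelope check under assumed offsets (the offsets below are illustrative inputs, not the paper's measurements):

```python
# Back-of-envelope check of the slip-rate arithmetic (rate = offset / age).
# The offset values below are illustrative, not the paper's measurements;
# only the 180 ka surface age and the reported 1.1-1.4 mm/yr range come
# from the abstract.
age_yr = 180e3                       # abandonment age from 10Be dating
for offset_m in (200.0, 250.0):      # hypothetical cumulative offsets
    rate_mm_yr = offset_m * 1000.0 / age_yr
    print(f"{offset_m:.0f} m / 180 ka = {rate_mm_yr:.2f} mm/yr")
# 200 m -> 1.11 mm/yr and 250 m -> 1.39 mm/yr, bracketing 1.1-1.4 mm/yr.
```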

22 pages, 11043 KiB  
Article
Digital Twin-Enabled Adaptive Robotics: Leveraging Large Language Models in Isaac Sim for Unstructured Environments
by Sanjay Nambiar, Rahul Chiramel Paul, Oscar Chigozie Ikechukwu, Marie Jonsson and Mehdi Tarkian
Machines 2025, 13(7), 620; https://doi.org/10.3390/machines13070620 - 17 Jul 2025
Abstract
As industrial automation evolves towards human-centric, adaptable solutions, collaborative robots must overcome challenges in unstructured, dynamic environments. This paper extends our previous work on developing a digital shadow for industrial robots by introducing a comprehensive framework that bridges the gap between physical systems and their virtual counterparts. The proposed framework advances toward a fully functional digital twin by integrating real-time perception and intuitive human–robot interaction capabilities. The framework is applied to a hospital test lab scenario, where a YuMi robot automates the sorting of microscope slides. The system incorporates a RealSense D435i depth camera for environment perception, Isaac Sim for virtual environment synchronization, and a locally hosted large language model (Mistral 7B) for interpreting user voice commands. These components work together to achieve bi-directional synchronization between the physical and digital environments. The framework was evaluated through 20 test runs under varying conditions. A validation study measured the performance of the perception module, simulation, and language interface, with a 60% overall success rate. Additionally, synchronization accuracy between the simulated and physical robot joint movements reached 98.11%, demonstrating strong alignment between the digital and physical systems. By combining local LLM processing, real-time vision, and robot simulation, the approach enables untrained users to interact with collaborative robots in dynamic settings. The results highlight its potential for improving flexibility and usability in industrial automation.
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)
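As a hedged sketch of the voice-command step, the snippet below shows one generic way a locally hosted LLM's output could be constrained to a validated robot command. Every name, prompt, and action here is hypothetical; the paper's actual prompt and interface are not given in the abstract.

```python
# Hedged sketch of voice-command interpretation: constrain a local LLM
# (e.g. Mistral 7B behind any text-generation callable) to emit JSON,
# then validate before dispatching to the robot. All identifiers here
# are hypothetical illustrations, not the paper's implementation.
import json

ALLOWED_ACTIONS = {"pick_slide", "place_slide", "home", "stop"}

PROMPT = ('Translate the command into JSON like '
          '{{"action": "pick_slide", "slot": 3}}. Command: "{cmd}"')

def interpret(llm_generate, transcript: str) -> dict:
    raw = llm_generate(PROMPT.format(cmd=transcript))
    try:
        cmd = json.loads(raw)
    except json.JSONDecodeError:
        return {"action": "stop"}            # fail safe on unparseable output
    if cmd.get("action") not in ALLOWED_ACTIONS:
        return {"action": "stop"}            # never forward unknown actions
    return cmd

# Usage with a stubbed generator standing in for the local model:
print(interpret(lambda p: '{"action": "pick_slide", "slot": 3}',
                "pick up the slide in slot three"))
```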

24 pages, 2173 KiB  
Article
A Novel Ensemble of Deep Learning Approach for Cybersecurity Intrusion Detection with Explainable Artificial Intelligence
by Abdullah Alabdulatif
Appl. Sci. 2025, 15(14), 7984; https://doi.org/10.3390/app15147984 - 17 Jul 2025
Abstract
In today’s increasingly interconnected digital world, cyber threats have grown in frequency and sophistication, making intrusion detection systems (IDS) a critical component of modern cybersecurity frameworks. Traditional IDS methods, often based on static signatures and rule-based systems, are no longer sufficient to detect and respond to complex and evolving attacks. To address these challenges, Artificial Intelligence and machine learning have emerged as powerful tools for enhancing the accuracy, adaptability, and automation of IDS solutions. This study presents a novel, hybrid ensemble learning-based intrusion detection framework that integrates deep learning and traditional ML algorithms with explainable artificial intelligence for real-time cybersecurity applications. The proposed model combines an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) as base classifiers and employs a Random Forest as a meta-classifier to fuse predictions, improving detection performance. Recursive Feature Elimination (RFE) is utilized for optimal feature selection, while SHapley Additive exPlanations (SHAP) provide both global and local interpretability of the model’s decisions. The framework is deployed using a Flask-based web interface in the Amazon Elastic Compute Cloud environment, capturing live network traffic and offering sub-second inference with visual alerts. Experimental evaluations using the NSL-KDD dataset demonstrate that the ensemble model outperforms the individual classifiers, achieving a high accuracy of 99.40%, along with excellent precision, recall, and F1-score metrics. This research not only enhances detection capabilities but also bridges the trust gap in AI-powered security systems through transparency. The solution shows strong potential for application in critical domains such as finance, healthcare, industrial IoT, and government networks, where real-time and interpretable threat detection is vital.
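A minimal scikit-learn sketch of the stacking architecture the abstract describes (RFE feature selection, ANN and SVM base classifiers, Random Forest meta-classifier), with placeholder data and hyperparameters rather than the paper's tuned NSL-KDD setup:

```python
# Hedged scikit-learn sketch of the described architecture: RFE feature
# selection, ANN + SVM base classifiers, Random Forest meta-classifier.
# Data and hyperparameters are placeholders, not the paper's values.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=40, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),
    ("rfe", RFE(RandomForestClassifier(n_estimators=50, random_state=0),
                n_features_to_select=20)),       # recursive feature elimination
    ("stack", StackingClassifier(
        estimators=[("ann", MLPClassifier(max_iter=500, random_state=0)),
                    ("svm", SVC(probability=True, random_state=0))],
        final_estimator=RandomForestClassifier(random_state=0))),
])
model.fit(X[:1500], y[:1500])
print("held-out accuracy:", model.score(X[1500:], y[1500:]))
```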

22 pages, 538 KiB  
Article
Meaning in the Algorithmic Museum: Towards a Dialectical Modelling Nexus of Virtual Curation
by Huining Guan and Pengbo Chen
Heritage 2025, 8(7), 284; https://doi.org/10.3390/heritage8070284 - 17 Jul 2025
Abstract
The rise of algorithm-driven virtual museums presents a philosophical challenge for how cultural meaning is constructed and critiqued in digital curation. Prevailing approaches highlight important but partial aspects: the loss of aura and authenticity in digital reproductions, efforts to maintain semiotic continuity with physical exhibits, optimistic narratives of technological democratisation, and critical techno-pessimist warnings about commodification and bias. Yet none provides a unified theoretical model of meaning-making under algorithmic curation. This paper proposes a dialectical-semiotic framework to synthesise and transcend these positions. The Dialectical Modelling Nexus (DMN) is a new conceptual structure that views meaning in virtual museums as emerging from the dynamic interplay of original and reproduced contexts, human and algorithmic sign systems, personal interpretation, and ideological framing. Through a critique of prior theories and a synthesis of their insights, the DMN offers a comprehensive model to diagnose how algorithms mediate museum content and to guide critical curatorial practice. The framework illuminates the dialectical tensions at the heart of algorithmic cultural mediation and suggests principles for preserving authentic, multi-layered meaning in the digital museum milieu.
(This article belongs to the Special Issue Digital Museology and Emerging Technologies in Cultural Heritage)

19 pages, 2785 KiB  
Article
Implementing an AI-Based Digital Twin Analysis System for Real-Time Decision Support in a Custom-Made Sportswear SME
by Tõnis Raamets, Kristo Karjust, Jüri Majak and Aigar Hermaste
Appl. Sci. 2025, 15(14), 7952; https://doi.org/10.3390/app15147952 - 17 Jul 2025
Abstract
Small and medium-sized enterprises (SMEs) in the manufacturing sector often struggle to make effective use of production data due to fragmented systems and limited digital infrastructure. This paper presents a case study of implementing an AI-enhanced digital twin in a custom sportswear manufacturing SME, developed under the AI and Robotics Estonia (AIRE) initiative. The solution integrates real-time production data collection using the Digital Manufacturing Support Application (DIMUSA); data processing and control; clustering-based data analysis; and virtual simulation for evaluating improvement scenarios. The framework was applied in a live production environment to analyze workstation-level performance, identify recurring bottlenecks, and provide interpretable visual insights for decision-makers. K-means clustering and DBSCAN were used to group operational states and detect process anomalies, while simulation was employed to model production flow and assess potential interventions. The results demonstrate how even a lightweight AI-driven system can support human-centered decision-making, improve process transparency, and serve as a scalable foundation for Industry 5.0-aligned digital transformation in SMEs.
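A hedged sketch of the clustering step named in the abstract: K-means to group operational states and DBSCAN to flag outliers (label -1) as process anomalies. The features are stand-ins, not DIMUSA's actual fields.

```python
# Hedged sketch of the analysis step described in the abstract: K-means
# groups workstation operating states, DBSCAN marks outliers (label -1)
# as process anomalies. The two features are illustrative stand-ins.
import numpy as np
from sklearn.cluster import DBSCAN, KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Stand-in per-interval features: [cycle_time_s, queue_length]
X = np.vstack([rng.normal([60, 2], [5, 1], (200, 2)),    # normal running
               rng.normal([90, 8], [5, 1], (200, 2)),    # bottlenecked
               [[300, 0], [5, 30]]])                     # two anomalies
Xs = StandardScaler().fit_transform(X)

states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xs)
anomaly = DBSCAN(eps=0.5, min_samples=5).fit_predict(Xs) == -1
print("state counts:", np.bincount(states), "| anomalies:", anomaly.sum())
```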

27 pages, 10631 KiB  
Article
Sensor-Based Yield Prediction in Durum Wheat Under Semi-Arid Conditions Using Machine Learning Across Zadoks Growth Stages
by Süreyya Betül Rufaioğlu, Ali Volkan Bilgili, Erdinç Savaşlı, İrfan Özberk, Salih Aydemir, Amjad Mohamed Ismael, Yunus Kaya and João P. Matos-Carvalho
Remote Sens. 2025, 17(14), 2416; https://doi.org/10.3390/rs17142416 - 12 Jul 2025
Abstract
Yield prediction in wheat cultivated under semi-arid climatic conditions is gaining increasing importance for sustainable production strategies and decision support systems. In this study, a time-series-based modeling approach was implemented using sensor-based data (SPAD, NSPAD, NDVI, INSEY, and plant height) collected at four different Zadoks growth stages (ZD24, ZD30, ZD31, and ZD32). Five machine learning algorithms (Random Forest, Gradient Boosting, AdaBoost, LightGBM, and XGBoost) were tested individually for each stage, and model performance was evaluated using statistical metrics such as R² (%), RMSE (t/ha), and MAE (t/ha). Modeling results revealed that the ZD31 stage (first node detectable) was the most successful phase for prediction accuracy, with the XGBoost model achieving the highest R² score (81.0%). In the same model, RMSE and MAE values were calculated as 0.49 and 0.37, respectively. The LightGBM model also showed remarkable performance during the ZD30 stage, achieving an R² of 78.0%, an RMSE of 0.52, and an MAE of 0.40. The SHAP (SHapley Additive exPlanations) method used to interpret feature importance revealed that the NDVI and INSEY indices contributed the most to yield prediction accuracy. This study demonstrates that phenology-sensitive yield prediction approaches offer high potential for sensor-based digital applications. Furthermore, the integration of timing, model selection, and explainability provides valuable insights for the development of advanced decision support systems.
(This article belongs to the Special Issue Cropland and Yield Mapping with Multi-source Remote Sensing)
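To illustrate the per-stage evaluation loop, here is a hedged sketch that fits a separate XGBoost regressor for each Zadoks stage and reports R², RMSE, and MAE; the data are synthetic and the hyperparameters are placeholders, not the study's.

```python
# Hedged sketch of the per-growth-stage evaluation loop: fit a separate
# XGBoost regressor on each stage's sensor features (SPAD, NSPAD, NDVI,
# INSEY, plant height) and report R2 / RMSE / MAE. Data are synthetic
# stand-ins; hyperparameters are not the paper's tuned values.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
for stage in ("ZD24", "ZD30", "ZD31", "ZD32"):
    X = rng.normal(size=(300, 5))                  # 5 sensor-based features
    yield_t_ha = 3.0 + X @ [0.5, 0.2, 0.8, 0.6, 0.1] + rng.normal(0, 0.4, 300)
    X_tr, X_te, y_tr, y_te = train_test_split(X, yield_t_ha, random_state=0)
    pred = XGBRegressor(n_estimators=200, max_depth=3).fit(X_tr, y_tr).predict(X_te)
    print(stage, f"R2={r2_score(y_te, pred):.2f}",
          f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.2f} t/ha",
          f"MAE={mean_absolute_error(y_te, pred):.2f} t/ha")
```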

14 pages, 2707 KiB  
Article
Implantation of an Artificial Intelligence Denoising Algorithm Using SubtlePET™ with Various Radiotracers: ¹⁸F-FDG, ⁶⁸Ga-PSMA-11 and ¹⁸F-FDOPA, Impact on the Technologist Radiation Doses
by Jules Zhang-Yin, Octavian Dragusin, Paul Jonard, Christian Picard, Justine Grangeret, Christopher Bonnier, Philippe P. Leveque, Joel Aerts and Olivier Schaeffer
J. Imaging 2025, 11(7), 234; https://doi.org/10.3390/jimaging11070234 - 11 Jul 2025
Abstract
This study assesses the clinical deployment of SubtlePET™, a commercial AI-based denoising algorithm, across three radiotracers—¹⁸F-FDG, ⁶⁸Ga-PSMA-11, and ¹⁸F-FDOPA—with the goal of improving image quality while reducing injected activity, technologist radiation exposure, and scan time. A retrospective analysis on a digital PET/CT system showed that SubtlePET™ enabled dose reductions exceeding 33% and time savings of over 25%. AI-enhanced images were rated interpretable in 100% of cases versus 65% for standard low-dose reconstructions. Notably, 85% of AI-enhanced scans received the maximum Likert quality score (5/5), indicating excellent diagnostic confidence and noise suppression, compared to only 50% with conventional reconstruction. Quantitative image quality improved significantly across all tracers, with SNR and CNR gains of 50–70%. Radiotracer dose reductions were particularly substantial in low-BMI patients (up to 41% for FDG), and technologist exposure decreased for high-exposure roles. Daily patient throughput increased by an average of 4.84 cases. These findings support the robust integration of SubtlePET™ into routine clinical PET practice, offering improved efficiency, safety, and image quality without compromising lesion detectability.
(This article belongs to the Section Medical Imaging)

25 pages, 9813 KiB  
Article
Digital Twin Approach for Fault Diagnosis in Photovoltaic Plant DC–DC Converters
by Pablo José Hueros-Barrios, Francisco Javier Rodríguez Sánchez, Pedro Martín Sánchez, Carlos Santos-Pérez, Ariya Sangwongwanich, Mateja Novak and Frede Blaabjerg
Sensors 2025, 25(14), 4323; https://doi.org/10.3390/s25144323 - 10 Jul 2025
Abstract
This article presents a hybrid fault diagnosis framework for DC–DC converters in photovoltaic (PV) systems, combining digital twin (DT) modelling and detection with machine learning anomaly classification. The proposed method addresses both hardware faults, such as open and short circuits in insulated-gate bipolar transistors (IGBTs) and diodes, and sensor-level false data injection attacks (FDIAs). A five-dimensional DT architecture is employed, where a virtual entity implemented using FMI-compliant FMUs interacts with a real-time emulated physical plant. Fault detection is performed by comparing the real-time system behaviour with DT predictions, using dynamic thresholds based on power, voltage, and current sensor errors. Once a discrepancy is flagged, a second-step classifier processes normalized time-series windows to identify the specific fault type. Synthetic training data are generated using emulation models under normal and faulty conditions, and feature vectors are constructed using a compact, interpretable set of statistical and spectral descriptors. The model was validated using OPAL-RT hardware-in-the-loop emulations. The results show high classification accuracy, robustness to environmental fluctuations, and transferability across system configurations. The framework also demonstrates compatibility with low-cost deployment hardware, confirming its practical applicability for fault diagnosis in real-world PV systems.
(This article belongs to the Section Fault Diagnosis & Sensors)
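A minimal sketch of the two-step idea in the abstract, under assumed thresholds and window lengths: flag a fault when the measured-vs-DT residual exceeds a dynamic threshold, then normalize a window for the second-step classifier. The rolling MAD-based rule is a stand-in for the paper's unspecified threshold design.

```python
# Hedged sketch of the two-step pipeline: (1) flag a fault when the
# residual between plant measurements and the digital twin's prediction
# exceeds a dynamic threshold; (2) hand a normalized window to a
# classifier. Thresholds and window length are illustrative assumptions.
import numpy as np

def dynamic_threshold(residuals, k=4.0, window=50):
    """Rolling robust threshold: k * median absolute deviation of the
    last `window` residuals, so tolerance adapts to irradiance swings."""
    recent = residuals[-window:]
    mad = np.median(np.abs(recent - np.median(recent)))
    return k * max(mad, 1e-6)

def check(measured, predicted, history):
    r = measured - predicted
    history.append(r)
    if abs(r) > dynamic_threshold(np.array(history)):
        w = np.array(history[-50:])
        w = (w - w.mean()) / (w.std() + 1e-9)     # normalized window for step 2
        return True, w                            # would go to the classifier
    return False, None

hist = list(np.random.default_rng(0).normal(0, 0.5, 200))  # nominal residuals
print(check(measured=230.0, predicted=200.0, history=hist)[0])  # True: fault
```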

34 pages, 338 KiB  
Article
Systemic Gaps in Circular Plastics: A Role-Specific Assessment of Quality and Traceability Barriers in Australia
by Benjamin Gazeau, Atiq Zaman, Roberto Minunno and Faiz Shaikh
Sustainability 2025, 17(14), 6323; https://doi.org/10.3390/su17146323 - 10 Jul 2025
Abstract
The effective adoption of quality assurance and traceability systems is increasingly recognised as a critical enabler of circular economy (CE) outcomes in the plastics sector. This study examines the factors that influence the implementation of such systems within Australia’s recycled plastics industry, with a focus on how these factors vary by company size, supply chain role, and adoption of CE strategy. Recycled plastics are defined here as post-consumer or post-industrial polymers that have been reprocessed for reintegration into manufacturing applications. A mixed-methods survey was conducted with 65 stakeholders across the Australian plastics value chain, comprising recyclers, compounders, converters, and end-users. Respondents assessed a structured set of regulatory, technical, economic, and systemic factors, identifying whether each currently operates as an enabler or barrier in their organisational context. The analysis employed a comparative framework adapted from a 2022 European study, enabling a cross-regional interpretation of patterns and a comparison between CE-aligned and non-CE firms. The results show that firms with CE strategies report greater alignment with innovation-oriented enablers such as digital traceability, standardisation, and closed-loop models. However, these firms also express heightened sensitivity to systemic weaknesses, particularly in areas such as infrastructure limitations, inconsistent material quality, and data fragmentation. Small- and medium-sized enterprises (SMEs) highlighted compliance costs and operational uncertainty as primary barriers, while larger firms frequently cited frustration with regulatory inconsistency and infrastructure underperformance. These findings underscore the need for differentiated policy mechanisms that account for sectoral and organisational disparities in capacity, scale, and readiness for traceability. The study also cautions against the direct transfer of European circular economy models into the Australian context without consideration of local structural, regulatory, and geographic complexities.
23 pages, 481 KiB  
Article
Reframing Technostress for Organizational Resilience: The Mediating Role of Techno-Eustress in the Performance of Accounting and Financial Reporting Professionals
by Sibel Fettahoglu and Ibrahim Yikilmaz
Systems 2025, 13(7), 550; https://doi.org/10.3390/systems13070550 - 7 Jul 2025
Abstract
This study examines how employees perceive technology-based demands during the digital transformation process and how these perceptions affect job performance. The research utilized data obtained from 388 experts in the accounting and financial reporting profession, a knowledge-intensive field that heavily employs new technologies (e.g., ERP systems, digital audit tools). The data, collected through a convenience sampling method, were analyzed using SPSS 27 and SmartPLS 4 software. The findings reveal that the direct effect of technostress on job performance is not significant; however, this stress indirectly contributes to performance through techno-eustress. In this study, techno-eustress refers to the cognitive appraisal of technology-related demands as development-enhancing challenges rather than threats. This concept is theoretically grounded in the broader eustress framework, which views stressors as potentially motivating and growth-promoting when positively interpreted. The model is based on Cognitive Evaluation Theory, the Job Demands–Resources Model, and Self-Determination Theory. This study demonstrates that digital transformation can promote not only operational improvements but also organizational resilience by enhancing employees’ psychological resources and adaptive capacities. By highlighting the mediating role of techno-eustress, this research offers a nuanced perspective on how human-centered cognitive mechanisms can strategically support performance and sustainability in the face of technological disruption—an increasingly relevant area for organizations striving to thrive amid uncertainty.
(This article belongs to the Special Issue Strategic Management Towards Organisational Resilience)

22 pages, 814 KiB  
Article
When Institutions Cannot Keep up with Artificial Intelligence: Expiration Theory and the Risk of Institutional Invalidation
by Victor Frimpong
Adm. Sci. 2025, 15(7), 263; https://doi.org/10.3390/admsci15070263 - 7 Jul 2025
Abstract
As Artificial Intelligence systems increasingly surpass or replace traditional human roles, institutions founded on beliefs in human cognitive superiority, moral authority, and procedural oversight encounter a more profound challenge than mere disruption: expiration. This paper posits that, instead of being outperformed, many legacy institutions are becoming epistemically misaligned with the realities of AI-driven environments. To clarify this change, the paper presents the Expiration Theory. This conceptual model interprets institutional collapse not as a market failure but as the erosion of fundamental assumptions amid technological shifts. In addition, the paper introduces the AI Pressure Clock, a diagnostic tool that categorizes institutions based on their vulnerability to AI disruption and their capacity to adapt to it. Through an analysis across various sectors, including law, healthcare, education, finance, and the creative industries, the paper illustrates how specific systems are nearing functional obsolescence while others are actively restructuring their foundational norms. As a conceptual study, the paper concludes by highlighting the theoretical, policy, and leadership ramifications, asserting that institutional survival in the age of AI relies not solely on digital capabilities but also on the capacity to redefine the core principles of legitimacy, authority, and decision-making.
