Search Results (295)

Search Parameters:
Keywords = software interoperability

27 pages, 27375 KB  
Article
Computational Analysis of a Towed Jumper During Static Line Airborne Operations: A Parametric Study Using Various Airdrop Configurations
by Usbaldo Fraire, Mehdi Ghoreyshi, Adam Jirasek, Keith Bergeron and Jürgen Seidel
Aerospace 2025, 12(10), 897; https://doi.org/10.3390/aerospace12100897 - 3 Oct 2025
Abstract
This study uses the CREATE™-AV/Kestrel simulation software to model a towed jumper scenario using standard aircraft settings to quantify paratrooper stability and risk of contact during static line airborne operations. The focus areas of this study include a review of the technical build-up, which includes aircraft, paratrooper and static line modeling, plus preliminary functional checkouts executed to verify simulation performance. This research and simulation development effort is driven by the need to meet the analysis demands required to support the US Army Personnel Airdrop with static line length studies and the North Atlantic Treaty Organization (NATO) Joint Airdrop Capability Syndicate (JACS) with airdrop interoperability assessments. Each project requires the use of various aircraft types, static line lengths and exit procedures. To help meet this need and establish a baseline proof of concept (POC) simulation, simulation setups were developed for a towed jumper from both the C-130J and C-17 using a 20-ft static line to support US Army Personnel Airdrop efforts. Concurrently, the JACS is requesting analysis to support interoperability testing to help qualify the T-11 parachute from an Airbus A400M Atlas aircraft, operated by NATO nations. Due to the lack of an available A400M geometry, the C-17 was used to demonstrate the POC, with plans to substitute the A400M geometry when it becomes available. The results of a nominal Computational Fluid Dynamics (CFD) simulation run using a C-17 and C-130J will be reviewed with a sample of the output to help characterize performance differences for the aircraft settings selected. The US Army Combat Capabilities Development Command Soldier Center (DEVCOM-SC) Aerial Delivery Division (ADD) has partnered with the US Air Force Academy (USAFA) High Performance Computing Research Center (HPCRC) to enable Modeling and Simulation (M&S) capabilities that support the Warfighter and NATO airdrop interoperability efforts. Full article
(This article belongs to the Special Issue Advancing Fluid Dynamics in Aerospace Applications)
34 pages, 3263 KB  
Systematic Review
From Network Sensors to Intelligent Systems: A Decade-Long Review of Swarm Robotics Technologies
by Fouad Chaouki Refis, Nassim Ahmed Mahammedi, Chaker Abdelaziz Kerrache and Sahraoui Dhelim
Sensors 2025, 25(19), 6115; https://doi.org/10.3390/s25196115 - 3 Oct 2025
Abstract
Swarm Robotics (SR) is a relatively new field, inspired by the collective intelligence of social insects. It involves using local rules to control and coordinate large groups (swarms) of relatively simple physical robots. Important tasks that robot swarms can handle include demining, search, rescue, and cleaning up toxic spills. Over the past decade, the research effort in the field of Swarm Robotics has intensified significantly in terms of hardware, software, and integrated systems development, yet significant challenges remain, particularly regarding standardization, scalability, and cost-effective deployment. To contextualize the state of Swarm Robotics technologies, this paper provides a systematic literature review (SLR) of Swarm Robotic technologies published from 2014 to 2024, with an emphasis on how hardware and software subsystems have co-evolved. This work provides an overview of 40 studies in peer-reviewed journals along with a well-defined and replicable systematic review protocol. The protocol describes criteria for including and excluding studies and outlines a data extraction approach. We explored trends in sensor hardware, actuation methods, communication devices, and energy systems, and examined the software platforms used to produce swarm behavior, covering meta-heuristic algorithms and generic middleware platforms such as ROS. Our results demonstrate how interdependent hardware and software are in achieving Swarm Intelligence, the lack of uniform standards for their design, and the practical limits that hinder scalability and deployment. We conclude by noting ongoing challenges and proposing future directions for developing interoperable, energy-efficient Swarm Robotics (SR) systems incorporating machine learning (ML). Full article
(This article belongs to the Special Issue Cooperative Perception and Planning for Swarm Robot Systems)

21 pages, 812 KB  
Systematic Review
The Potential of Low-Cost IoT-Enabled Agrometeorological Stations: A Systematic Review
by Christa M. Al Kalaany, Hilda N. Kimaita, Ahmed A. Abdelmoneim, Roula Khadra, Bilal Derardja and Giovana Dragonetti
Sensors 2025, 25(19), 6020; https://doi.org/10.3390/s25196020 - 1 Oct 2025
Abstract
The integration of Internet of Things (IoT) technologies in agriculture has facilitated real-time environmental monitoring, with low-cost IoT-enabled agrometeorological stations emerging as a valuable tool for climate-smart farming. This systematic review examines low-cost IoT-based weather stations by analyzing their hardware and software components and assessing their potential in comparison to conventional weather stations. It emphasizes their contribution to improving climate resilience, facilitating data-driven decision-making, and expanding access to weather data in resource-constrained environments. The analysis revealed widespread adoption of ESP32 microcontrollers, favored for their affordability and modularity, as well as increasing use of communication protocols like LoRa and Wi-Fi due to their balance of range, power efficiency, and scalability. Sensor integration largely focused on core parameters such as air temperature, relative humidity, soil moisture, and rainfall, supporting climate-smart irrigation, disease risk modeling, and microclimate management. Studies highlighted the importance of usability and adaptability through modular hardware and open-source platforms. Additionally, scalability was demonstrated through community-level and multi-station deployments. Despite their promise, challenges persist regarding sensor calibration, data interoperability, and long-term field validation. Future research should explore the integration of edge computing, adaptive analytics, and standardization protocols to further enhance the reliability and functionality of IoT-enabled agrometeorological systems. Full article
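As an illustration of the hardware pattern this review highlights (an ESP32 reading common agrometeorological sensors and pushing readings over Wi-Fi), the following MicroPython sketch shows one possible node. The pin numbers, sensor choices, credentials, and ingest URL are hypothetical assumptions, not taken from any of the reviewed stations.

```python
# Illustrative MicroPython sketch for an ESP32-based agrometeorological node.
# Pin numbers, the DHT22/soil-moisture sensor choice, Wi-Fi credentials, and the
# ingest URL are hypothetical; adapt them to the actual hardware and backend.
import time
import network
import urequests
import dht
from machine import Pin, ADC

WIFI_SSID, WIFI_PASS = "farm-net", "change-me"    # assumed credentials
INGEST_URL = "http://192.168.1.10:8000/readings"  # assumed ingest endpoint

air_sensor = dht.DHT22(Pin(4))                    # air temperature / relative humidity
soil_adc = ADC(Pin(34))                           # analog soil-moisture probe
soil_adc.atten(ADC.ATTN_11DB)                     # full 0-3.3 V input range on ESP32

def connect_wifi():
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    if not wlan.isconnected():
        wlan.connect(WIFI_SSID, WIFI_PASS)
        while not wlan.isconnected():
            time.sleep(1)

def read_sample():
    air_sensor.measure()
    return {
        "air_temp_c": air_sensor.temperature(),
        "rel_humidity_pct": air_sensor.humidity(),
        "soil_moisture_raw": soil_adc.read(),     # uncalibrated ADC counts
    }

connect_wifi()
while True:
    try:
        urequests.post(INGEST_URL, json=read_sample()).close()
    except OSError:
        pass                                      # drop the sample, retry next cycle
    time.sleep(600)                               # one reading every 10 minutes
```

A rain gauge would typically be added as a pulse-counting input on another GPIO; it is omitted here to keep the sketch minimal.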

25 pages, 6041 KB  
Article
A Dynamic Bridge Architecture for Efficient Interoperability Between AUTOSAR Adaptive and ROS2
by Suhong Kim, Hyeongju Choi, Suhaeng Lee, Minseo Kim, Hyunseo Shin and Changjoo Moon
Electronics 2025, 14(18), 3635; https://doi.org/10.3390/electronics14183635 - 14 Sep 2025
Viewed by 429
Abstract
The automotive industry is undergoing a transition toward Software-Defined Vehicles (SDVs), necessitating the integration of AUTOSAR Adaptive, a standard for vehicle control, with ROS2, a platform for autonomous driving research. However, current static bridge approaches present notable limitations, chiefly regarding unnecessary resource consumption and compatibility issues with Quality of Service (QoS). To tackle these challenges, in this paper, we put forward a dynamic bridge architecture consisting of three components: a Discovery Manager, a Bridge Manager, and a Message Router. The proposed dynamic SOME/IP-DDS bridge dynamically detects service discovery events from the SOME/IP and DDS domains in real time, allowing for the creation and destruction of communication entities as needed. Additionally, it automatically manages QoS settings to ensure that they remain compatible. The experimental results indicate that this architecture maintains a stable latency even with a growing number of connections, demonstrating high scalability while also reducing memory usage during idle periods compared to static methods. Moreover, real-world assessments using an autonomous driving robot confirm its real-time applicability by reliably relaying sensor data to Autoware with minimal end-to-end latency. This research contributes to expediting the integration of autonomous driving exploration and production vehicle platforms by offering a more efficient and robust interoperability solution. Full article
(This article belongs to the Special Issue Advances in Autonomous Vehicular Networks)
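A structural sketch of the dynamic-bridge idea described in this abstract, independent of any real SOME/IP or DDS stack: bridge endpoints are created only when a matching service is discovered on both sides and torn down when either side disappears, with a simple reliability check standing in for full QoS negotiation. Class and event names are illustrative assumptions, not the paper's API.

```python
# Minimal structural sketch of a dynamic SOME/IP<->DDS bridge manager.
# The discovery events, QoS model, and routing are simplified stand-ins;
# they are not the interfaces of AUTOSAR Adaptive, vsomeip, or ROS2.
from dataclasses import dataclass

@dataclass(frozen=True)
class Offer:
    domain: str        # "someip" or "dds"
    topic: str
    reliable: bool     # stand-in for a full QoS profile

class DynamicBridge:
    def __init__(self):
        self.offers = {}   # (domain, topic) -> Offer
        self.routes = {}   # topic -> (offer, peer offer)

    def on_offer(self, offer: Offer):
        """Discovery Manager callback: a service/topic appeared."""
        self.offers[(offer.domain, offer.topic)] = offer
        other = "dds" if offer.domain == "someip" else "someip"
        peer = self.offers.get((other, offer.topic))
        if peer and self._qos_compatible(offer, peer):
            # Bridge Manager: create the route only when both sides exist.
            self.routes[offer.topic] = (offer, peer)

    def on_withdraw(self, domain: str, topic: str):
        """Discovery Manager callback: a service/topic disappeared."""
        self.offers.pop((domain, topic), None)
        self.routes.pop(topic, None)   # destroy the bridge entity immediately

    @staticmethod
    def _qos_compatible(a: Offer, b: Offer) -> bool:
        # Simplified rule: a reliable endpoint is not fed best-effort data.
        return a.reliable == b.reliable

    def route(self, topic: str, payload: bytes):
        """Message Router: forward only if an active route exists."""
        if topic in self.routes:
            return payload   # a real bridge would re-serialize and publish here
        return None

# Usage: a route exists only while both domains offer the topic.
bridge = DynamicBridge()
bridge.on_offer(Offer("someip", "/sensors/lidar", reliable=True))
bridge.on_offer(Offer("dds", "/sensors/lidar", reliable=True))
assert bridge.route("/sensors/lidar", b"scan") == b"scan"
bridge.on_withdraw("dds", "/sensors/lidar")
assert bridge.route("/sensors/lidar", b"scan") is None
```

Creating and destroying routes on discovery events, rather than statically instantiating every possible mapping, is what gives the approach its idle-memory advantage over static bridges.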

22 pages, 746 KB  
Article
Schema-Agnostic Data Type Inference and Validation for Exchanging JSON-Encoded Construction Engineering Information
by Seokjoon You, Hyon Wook Ji, Hyunseok Kwak, Taewon Chung and Moongyo Bae
Buildings 2025, 15(17), 3159; https://doi.org/10.3390/buildings15173159 - 2 Sep 2025
Viewed by 469
Abstract
Modern construction and infrastructure projects produce large volumes of heterogeneous data, including building information models, JSON sensor streams, and maintenance logs. Ensuring interoperability and data integrity across diverse software platforms requires standardized data exchange methods. However, traditional neutral object models, often constrained by rigid and incompatible schemas, are ill-suited to accommodate the heterogeneity and long-term nature of such data. Addressing this challenge, the study proposes a schema-less data exchange approach that improves flexibility in representing and interpreting infrastructure information. The method uses dynamic JSON-based objects, with infrastructure model definitions serving as semantic guidelines rather than rigid templates. Rule-based reasoning and dictionary-guided term mapping are employed to infer entity types from semi-structured data without enforcing prior schema conformance. Experimental evaluation across four datasets demonstrated exact entity-type match rates ranging from 61.4% to 76.5%, with overall success rates—including supertypes and ties—reaching up to 95.0% when weighted accuracy metrics were applied. Compared to a previous baseline, the method showed a notable improvement in exact matches while maintaining overall performance. These results confirm the feasibility of schema-less inference using domain dictionaries and indicate that incorporating schema-derived constraints could further improve accuracy and applicability in real-world infrastructure data environments. Full article
(This article belongs to the Special Issue BIM Methodology and Tools Development/Implementation)
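The dictionary-guided inference step described above can be illustrated with a toy scorer: candidate entity types are ranked by how many of their characteristic terms appear among an object's keys, with weak or tied matches falling back to a supertype. The dictionary contents, synonyms, and threshold below are invented for illustration; they are not the paper's rules or vocabulary.

```python
# Toy schema-agnostic type inference over JSON-like dicts.
# The domain dictionary, synonyms, and fallback rule are illustrative only.
DOMAIN_DICTIONARY = {
    "Bridge":  {"span_length", "deck_width", "girder_type", "pier_count"},
    "Sensor":  {"sensor_id", "sample_rate", "unit", "reading"},
    "WorkLog": {"crew", "activity", "started_at", "finished_at"},
}
SYNONYMS = {"samplingrate": "sample_rate", "value": "reading"}  # term mapping
SUPERTYPE = "Asset"                                             # fallback on ties/weak matches

def normalize(key: str) -> str:
    key = key.lower().replace("-", "_").replace(" ", "_")
    return SYNONYMS.get(key.replace("_", ""), key)

def infer_type(obj: dict, min_overlap: int = 2) -> str:
    keys = {normalize(k) for k in obj}
    scores = {t: len(keys & terms) for t, terms in DOMAIN_DICTIONARY.items()}
    best = max(scores.values())
    winners = [t for t, s in scores.items() if s == best]
    if best < min_overlap or len(winners) > 1:
        return SUPERTYPE            # too weak or ambiguous: report the supertype
    return winners[0]

record = {"sensor_id": "S-17", "samplingRate": 50, "unit": "mm/s", "value": 0.3}
print(infer_type(record))           # -> "Sensor"
```

Reporting the supertype instead of guessing mirrors the abstract's distinction between exact matches and the broader success rate that includes supertypes and ties.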

19 pages, 6184 KB  
Article
Research on Hardware-in-the-Loop Test Platform Based on Simulated IED and Man-in-the-Middle Attack
by Ke Liu, Rui Song, Wenqian Zhang, Han Guo, Jun Han and Hongbo Zou
Processes 2025, 13(9), 2735; https://doi.org/10.3390/pr13092735 - 27 Aug 2025
Viewed by 462
Abstract
With the widespread adoption of intelligent electronic devices (IEDs) in smart substations, the real-time data transmission and interoperability features of the IEC 61850 communication standard play a crucial role in ensuring seamless automation system integration. This paper presents a hardware-in-the-loop (HIL) platform experiment analysis based on a simulated IED and man-in-the-middle (MITM) attack, leveraging built-in IEC 61850 protocol software to replicate an existing substation communication architecture in cyber-physical systems. This study investigates the framework performance and protocol robustness of this approach. First, the physical network infrastructure of smart grids is analyzed in detail, followed by the development of an HIL testing platform tailored for discrete communication network scenarios. Next, virtual models of intelligent electrical equipment and MITM attacks are created, along with their corresponding communication layer architectures, enabling comprehensive simulation analysis. Finally, in a 24-h stability operation test and in three typical fault scenarios, the simulated IED achieved a 100% protocol conformance pass rate, its protection action decisions were fully consistent with those of the physical IED, end-to-end delay remained below 4 ms, and its measurement accuracy matched the accuracy class of the physical IED, verifying that the proposed test platform can effectively guide the commissioning of smart substations. Full article
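The pass criteria reported above (identical protection decisions and sub-4 ms end-to-end delay) can be expressed as a small comparison harness over paired event logs from the simulated and physical IEDs. The log structure and the sample values below are hypothetical placeholders for illustration, not the platform's actual tooling or results.

```python
# Illustrative conformance check between a simulated and a physical IED.
# Only the pass criteria follow the abstract (matching decisions, <4 ms delay);
# the event-log format and sample values are hypothetical.
DELAY_LIMIT_MS = 4.0

simulated = [                 # (scenario, trip decision, end-to-end delay in ms)
    ("phase_A_ground_fault", "trip", 3.1),
    ("overcurrent", "trip", 2.8),
    ("normal_load_step", "no_trip", 2.2),
]
physical = [
    ("phase_A_ground_fault", "trip", 3.4),
    ("overcurrent", "trip", 3.0),
    ("normal_load_step", "no_trip", 2.5),
]

def conformance(sim, phy):
    matched = delays_ok = 0
    for (_scn, s_dec, s_dly), (_, p_dec, p_dly) in zip(sim, phy):
        matched += s_dec == p_dec
        delays_ok += max(s_dly, p_dly) < DELAY_LIMIT_MS
    n = len(sim)
    return matched / n * 100, delays_ok / n * 100

decision_rate, delay_rate = conformance(simulated, physical)
print(f"decision consistency: {decision_rate:.0f}%  delay within limit: {delay_rate:.0f}%")
```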

25 pages, 7884 KB  
Article
Watershed-BIM Integration for Urban Flood Resilience: A Framework for Simulation, Assessment, and Planning
by Panagiotis Tsikas, Athanasios Chassiakos and Vasileios Papadimitropoulos
Sustainability 2025, 17(17), 7687; https://doi.org/10.3390/su17177687 - 26 Aug 2025
Viewed by 862
Abstract
Urban flooding represents a growing global concern, especially in areas with rapid urbanization, unregulated urban sprawl and climate change conditions. Conventional flood modeling approaches do not effectively capture the complex dynamics between natural watershed behavior and urban infrastructure; they typically simulate these domains in isolation. This study introduces the Watershed-BIM methodology, a three-dimensional simulation framework that integrates Building and City Information Modeling (BIM/CIM), Geographic Information Systems (GIS), Flood Risk Assessment (FRA), and Flood Risk Management (FRM) into a single framework. Autodesk InfraWorks 2024, Civil 3D 2024, and RiverFlow2D v8.14 software are incorporated in the development. The methodology enhances interoperability and prediction accuracy by bridging hydrological processes with detailed urban-scale data. The framework was tested on a real-world flash flood event in Mandra, Greece, an area frequently exposed to extreme rainfall and runoff events. A specific comparison with observed flood characteristics indicates improved accuracy in comparison to other hydrological analyses (e.g., by HEC-RAS simulation). Beyond flood depth, the model offers additional insights into flow direction, duration, and localized water accumulation around buildings and infrastructure. In this context, integrated tools such as Watershed-BIM stand out as essential instruments for translating complex flood dynamics into actionable, city-scale resilience planning. Full article
(This article belongs to the Special Issue Sustainable Project, Production and Service Operations Management)

44 pages, 900 KB  
Article
MetaFFI: Multilingual Indirect Interoperability System
by Tsvi Cherny-Shahar and Amiram Yehudai
Software 2025, 4(3), 21; https://doi.org/10.3390/software4030021 - 26 Aug 2025
Viewed by 587
Abstract
The development of software applications using multiple programming languages has increased in recent years, as it allows the selection of the most suitable language and runtime for each component of the system and the integration of third-party libraries. However, this practice involves complexity and error proneness, due to the absence of an adequate system for the interoperability of multiple programming languages. Developers are compelled to resort to workarounds, such as library reimplementation or language-specific wrappers, which are often dependent on C as the common denominator for interoperability. These challenges render the use of multiple programming languages a burdensome and demanding task that necessitates highly skilled developers for implementation, debugging, and maintenance, and raise doubts about the benefits of interoperability. To overcome these challenges, we propose MetaFFI, introducing a fully in-process, plugin-oriented, runtime-independent architecture based on a minimal C abstraction layer. It provides deep binding without relying on a shared object model, virtual machine bytecode, or manual glue code. This architecture is scalable (O(n) integration for n languages) and supports true polymorphic function and object invocation across languages. MetaFFI is based on leveraging FFI and embedding mechanisms, which minimize restrictions on language selection while still enabling full-duplex binding and deep integration. This is achieved by exploiting the less restrictive shallow binding mechanisms (e.g., Foreign Function Interface) to offer deep binding features (e.g., object creation, methods, fields). MetaFFI provides a runtime-independent framework to load and xcall (Cross-Call) foreign entities (e.g., getters, functions, objects). MetaFFI uses Common Data Types (CDTs) to pass parameters and return values, including objects and complex types, and even cross-language callbacks and dynamic calling conventions for optimization. The indirect interoperability approach of MetaFFI has the significant advantage of requiring only 2n mechanisms to support n languages, compared to direct interoperability approaches that need n² mechanisms. We developed and tested a proof of concept tool interoperating three languages (Go, Python, and Java), on Windows and Ubuntu. To evaluate the approach and the tool, we conducted a user study, with promising results. The MetaFFI framework is available as open source software, including its full source code and installers, to facilitate adoption and collaboration across academic and industrial communities. Full article
(This article belongs to the Topic Software Engineering and Applications)
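The scaling claim in the abstract (roughly 2n mechanisms for indirect interoperability through a common layer versus on the order of n² direct pairwise bridges) is easy to make concrete. The sketch below simply tabulates both counts using the abstract's own figures; it does not attempt any finer-grained accounting of MetaFFI's plugins.

```python
# Tabulate the abstract's scaling claim: indirect interoperability needs ~2n
# mechanisms (to/from a common layer per language), direct pairwise needs ~n^2.
def mechanism_counts(n_languages: int):
    indirect = 2 * n_languages       # one "to" and one "from" mechanism per language
    direct = n_languages ** 2        # order-of-magnitude figure used in the abstract
    return indirect, direct

for n in (3, 5, 10):                 # n = 3 matches the Go/Python/Java proof of concept
    indirect, direct = mechanism_counts(n)
    print(f"{n:>2} languages: indirect ~{indirect:<3} direct ~{direct}")
```

The gap widens quickly: at ten languages the indirect approach needs about 20 mechanisms against roughly 100 for pairwise bridges.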

26 pages, 1772 KB  
Article
Manufacturing Management Processes Integration Framework
by Miguel Ângelo Pereira, Gaspar Vieira, Leonilde Varela, Goran Putnik, Manuela Cruz-Cunha, André Santos, Teresa Dieguez, Filipe Pereira, Nuno Leal and José Machado
Appl. Sci. 2025, 15(16), 9165; https://doi.org/10.3390/app15169165 - 20 Aug 2025
Viewed by 563
Abstract
This paper proposes a novel and comprehensive framework for the integration of manufacturing management processes, spanning strategic and operational levels, within and across organizational boundaries. The framework combines a robust set of technologies—such as cyber-physical systems, digital twins, AI, and blockchain—designed to support real-time decision-making, interoperability, and collaboration in Industry 4.0 and 5.0 contexts. Implemented and validated in a Portuguese manufacturing group comprising three interoperating factories, the framework demonstrated its ability to improve agility, coordination, and stakeholder integration through a multi-layered architecture and modular software platform. Quantitative and qualitative feedback from 32 participants confirmed enhanced decision support, operational responsiveness, and external collaboration. While tailored to a specific industrial setting, the results highlight the framework’s scalability and adaptability, positioning it as a meaningful contribution toward sustainable, human-centric digital transformation in manufacturing environments. Full article

31 pages, 3398 KB  
Article
The Role of Virtual and Augmented Reality in Industrial Design: A Case Study of Usability Assessment
by Amanda Martín-Mariscal, Carmen Torres-Leal, Teresa Aguilar-Planet and Estela Peralta
Appl. Sci. 2025, 15(15), 8725; https://doi.org/10.3390/app15158725 - 7 Aug 2025
Cited by 1 | Viewed by 1136
Abstract
The integration of virtual and augmented reality is transforming processes in the field of product design. This study evaluates the usability of immersive digital tools applied to industrial design through a combined market research and empirical case study, using the software ‘Gravity Sketch’ and the immersive headset ‘Meta Quest 3’. An embedded single case study was conducted based on the international standard ISO 9241-11, considering the dimensions of effectiveness, efficiency, and satisfaction, analysed through nine indicators: tasks completed, time to complete tasks, dimensional accuracy, interoperability, interactivity, fatigue, human error, learning curve, and perceived creativity. The results show a progressive improvement in user–system interaction across the seven Design Units, as users become more familiar with immersive technologies. Effectiveness improves as users gain experience, though it remains sensitive to design complexity. Efficiency shows favourable values even in early stages, reflecting operational fluency despite learning demands. Satisfaction records the greatest improvement, driven by smoother interaction and greater creative freedom. These findings highlight the potential of immersive tools to support design processes while also underlining the need for future research on sustained usability, interface ergonomics, and collaborative workflows in extended reality environments. Full article
(This article belongs to the Special Issue Recent Advances and Application of Virtual Reality)

24 pages, 1821 KB  
Review
An Overview on LCA Integration in BIM: Tools, Applications, and Future Trends
by Cecilia Bolognesi, Deida Bassorizzi, Simone Balin and Vasili Manfredi
Digital 2025, 5(3), 31; https://doi.org/10.3390/digital5030031 - 31 Jul 2025
Viewed by 1752
Abstract
The integration of Life Cycle Assessment (LCA) into Building Information Modeling (BIM) processes is becoming increasingly important for enhancing the environmental performance of construction projects. This scoping review examines how LCA methods and environmental data are currently integrated into BIM workflows, focusing on automation, data standardization, and visualization strategies. We selected 43 peer-reviewed studies (January 2010–May 2025) via structured searches in five major academic databases. The review identifies five main types of BIM–LCA integration workflows; the most common approach involves exporting quantity data from BIM models to external LCA tools. More recent studies explore the use of artificial intelligence for improving automation and accuracy in data mapping between BIM objects and LCA databases. Key challenges include inconsistent levels of data granularity, a lack of harmonized EPD formats, and limited interoperability between BIM and LCA software environments. Visualization methods such as color-coded 3D models are used to support early-stage decision-making, although uncertainty representation remains limited. To address these issues, future research should focus on standardizing EPD data structures, enriching BIM objects with validated environmental information, and developing explainable AI solutions for automated classification and matching. These advancements would improve the reliability and usability of LCA in BIM-based design, contributing to more informed decisions in sustainable construction. Full article
(This article belongs to the Special Issue Advances in Data Management)
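The most common workflow identified in this review, exporting quantity takeoffs from the BIM model and evaluating them against environmental factors in an external LCA tool, reduces to a simple mapping step. The element names, volumes, and GWP factors below are placeholders for illustration, not values from any EPD database.

```python
# Minimal sketch of the "export quantities from BIM, evaluate in an external LCA tool"
# workflow. Quantities and GWP factors are placeholder values, not real EPD data.
takeoff = [                          # (BIM object, material, quantity in m^3)
    ("wall_ext_01", "concrete_c30", 12.5),
    ("slab_l1",     "concrete_c30", 30.0),
    ("frame_roof",  "glulam",        4.2),
]
GWP_FACTORS = {                      # kg CO2-eq per m^3 (hypothetical placeholders)
    "concrete_c30": 300.0,
    "glulam":       120.0,
}

def embodied_carbon(rows, factors):
    totals = {}
    for _obj, material, volume in rows:
        totals[material] = totals.get(material, 0.0) + volume * factors[material]
    return totals

for material, kg_co2e in embodied_carbon(takeoff, GWP_FACTORS).items():
    print(f"{material}: {kg_co2e:,.0f} kg CO2-eq")
```

The data-granularity and EPD-harmonization challenges the review raises show up here as the need to keep material names and units consistent between the BIM model and the factor database.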

40 pages, 3045 KB  
Review
HBIM and Information Management for Knowledge and Conservation of Architectural Heritage: A Review
by Maria Parente, Nazarena Bruno and Federica Ottoni
Heritage 2025, 8(8), 306; https://doi.org/10.3390/heritage8080306 - 30 Jul 2025
Viewed by 1435
Abstract
This paper presents a comprehensive review of research on Historic Building Information Modeling (HBIM), focusing on its role as a tool for managing knowledge and supporting conservation practices of Architectural Heritage. While previous review articles and most research works have predominantly addressed geometric modeling—given its significant challenges in the context of historic buildings—this study places greater emphasis on the integration of non-geometric data within the BIM environment. A systematic search was conducted in the Scopus database to extract the 451 relevant publications analyzed in this review, covering the period from 2008 to mid-2024. A bibliometric analysis was first performed to identify trends in publication types, geographic distribution, research focuses, and software usage. The main body of the review then explores three core themes in the development of the information system: the definition of model entities, both semantic and geometric; the data enrichment phase, incorporating historical, diagnostic, monitoring and conservation-related information; and finally, data use and sharing, including on-site applications and interoperability. For each topic, the review highlights and discusses the principal approaches documented in the literature, critically evaluating the advantages and limitations of different information management methods with respect to the distinctive features of the building under analysis and the specific objectives of the information model. Full article

28 pages, 1334 KB  
Review
Evaluating Data Quality: Comparative Insights on Standards, Methodologies, and Modern Software Tools
by Theodoros Alexakis, Evgenia Adamopoulou, Nikolaos Peppes, Emmanouil Daskalakis and Georgios Ntouskas
Electronics 2025, 14(15), 3038; https://doi.org/10.3390/electronics14153038 - 30 Jul 2025
Cited by 1 | Viewed by 1246
Abstract
In an era of exponential data growth, ensuring high data quality has become essential for effective, evidence-based decision making. This study presents a structured and comparative review of the field by integrating data classifications, quality dimensions, assessment methodologies, and modern software tools. Unlike earlier reviews that focus narrowly on individual aspects, this work synthesizes foundational concepts with formal frameworks, including the Findable, Accessible, Interoperable, and Reusable (FAIR) principles and the ISO/IEC 25000 series on software and data quality. It further examines well-established assessment models, such as Total Data Quality Management (TDQM), Data Warehouse Quality (DWQ), and High-Quality Data Management (HDQM), and critically evaluates commercial platforms in terms of functionality, AI integration, and adaptability. A key contribution lies in the development of conceptual mappings that link data quality dimensions with FAIR indicators and maturity levels, offering a practical reference model. The findings also identify gaps in current tools and approaches, particularly around cost-awareness, explainability, and process adaptability. By bridging theory and practice, the study contributes to the academic literature while offering actionable insights for building scalable, standards-aligned, and context-sensitive data quality management strategies. Full article
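Two of the quality dimensions such frameworks assess, completeness and validity, can be checked with a few lines of code over a record set. The records and validity rules below are invented for illustration, and the dimension definitions are generic rather than tied to TDQM, DWQ, or HDQM.

```python
# Generic completeness/validity checks over a small record set.
# Records and rules are illustrative; dimensions follow common textbook definitions.
import re

records = [
    {"id": 1, "email": "ana@example.org", "age": 34},
    {"id": 2, "email": None,              "age": 29},
    {"id": 3, "email": "not-an-email",    "age": -5},
]
RULES = {
    "email": lambda v: v is not None and re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v),
    "age":   lambda v: v is not None and 0 <= v <= 120,
}

def completeness(rows, field):
    return sum(r[field] is not None for r in rows) / len(rows)

def validity(rows, field):
    return sum(bool(RULES[field](r[field])) for r in rows) / len(rows)

for field in RULES:
    print(f"{field}: completeness={completeness(records, field):.2f} "
          f"validity={validity(records, field):.2f}")
```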

23 pages, 650 KB  
Article
Exercise-Specific YANG Profile for AI-Assisted Network Security Labs: Bidirectional Configuration Exchange with Large Language Models
by Yuichiro Tateiwa
Information 2025, 16(8), 631; https://doi.org/10.3390/info16080631 - 24 Jul 2025
Viewed by 375
Abstract
Network security courses rely on hands-on labs where students configure virtual Linux networks to practice attack and defense. Automated feedback is scarce because no standard exists for exchanging detailed configurations—interfaces, bridging, routing tables, iptables policies—between exercise software and large language models (LLMs) that could serve as tutors. We address this interoperability gap with an exercise-oriented YANG profile that augments the Internet Engineering Task Force (IETF) ietf-network module with a new network-devices module. The profile expresses Linux interface settings, routing, and firewall rules, and tags each node with roles such as linux-server or linux-firewall. Integrated into our LiNeS Cloud platform, it enables LLMs to both parse and generate machine-readable network states. We evaluated the profile on four topologies—from a simple client–server pair to multi-subnet scenarios with dedicated security devices—using ChatGPT-4o, Claude 3.7 Sonnet, and Gemini 2.0 Flash. Across 1050 evaluation tasks covering profile understanding (n = 180), instance analysis (n = 750), and instance generation (n = 120), the three LLMs answered correctly in 1028 cases, yielding an overall accuracy of 97.9%. Even with only minimal follow-up cues (≤3 turns), rather than handcrafted prompt chains, analysis tasks reached 98.1% accuracy and generation tasks 93.3%. To our knowledge, this is the first exercise-focused YANG profile that simultaneously captures Linux/iptables semantics and is empirically validated across three proprietary LLMs, attaining 97.9% overall task accuracy. These results lay a practical foundation for artificial intelligence (AI)-assisted security labs where real-time feedback and scenario generation must scale beyond human instructor capacity. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
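Because the profile augments the standard ietf-network topology module (RFC 8345) with exercise-specific device details, an instance document for a small two-node lab might look roughly like the JSON below. The ietf-network containers are standard, but every field under the network-devices augmentation shown here (roles, interfaces, firewall rules) is an illustrative guess at the profile's shape, not its actual schema.

```python
# Hypothetical JSON instance in the spirit of the exercise-oriented YANG profile:
# the ietf-network skeleton follows RFC 8345, while the "network-devices" fields
# (role, interfaces, firewall) are guessed for illustration only.
import json

instance = {
    "ietf-network:networks": {
        "network": [{
            "network-id": "lab-scenario-1",
            "node": [
                {
                    "node-id": "srv1",
                    "network-devices:role": "linux-server",       # hypothetical augmentation
                    "network-devices:interfaces": [
                        {"name": "eth0", "ipv4": "10.0.0.10/24"}
                    ],
                },
                {
                    "node-id": "fw1",
                    "network-devices:role": "linux-firewall",
                    "network-devices:interfaces": [
                        {"name": "eth0", "ipv4": "10.0.0.1/24"},
                        {"name": "eth1", "ipv4": "192.168.1.1/24"},
                    ],
                    "network-devices:firewall": [
                        {"chain": "FORWARD", "rule": "-p tcp --dport 22 -j DROP"}
                    ],
                },
            ],
        }]
    }
}

print(json.dumps(instance, indent=2))  # machine-readable state an LLM could parse or emit
```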

20 pages, 1798 KB  
Article
An Approach to Enable Human–3D Object Interaction Through Voice Commands in an Immersive Virtual Environment
by Alessio Catalfamo, Antonio Celesti, Maria Fazio, A. F. M. Saifuddin Saif, Yu-Sheng Lin, Edelberto Franco Silva and Massimo Villari
Big Data Cogn. Comput. 2025, 9(7), 188; https://doi.org/10.3390/bdcc9070188 - 17 Jul 2025
Viewed by 765
Abstract
Nowadays, the Metaverse is facing many challenges. In this context, Virtual Reality (VR) applications allowing voice-based human–3D object interactions are limited due to the current hardware/software limitations. In fact, adopting Automated Speech Recognition (ASR) systems to interact with 3D objects in VR applications through users’ voice commands presents significant challenges due to the hardware and software limitations of headset devices. This paper aims to bridge this gap by proposing a methodology to address these issues. In particular, we extract Mel-Frequency Cepstral Coefficients (MFCCs) that capture the unique characteristics of the user’s voice and pass them as input to a Convolutional Neural Network (CNN) model. After that, in order to integrate the CNN model with a VR application running on a standalone headset, such as Oculus Quest, we converted it into the Open Neural Network Exchange (ONNX) format, i.e., a Machine Learning (ML) interoperability open standard format. The proposed system demonstrates good performance and represents a foundation for the development of user-centric, effective computing systems, enhancing accessibility to VR environments through voice-based commands. Experiments demonstrate that a native CNN model developed through TensorFlow achieves performance comparable to that of the corresponding CNN model converted into the ONNX format, paving the way towards the development of VR applications running in headsets controlled through the user’s voice. Full article
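The pipeline described above (MFCC features feeding a CNN classifier that is then exported to ONNX for use inside the headset runtime) can be outlined as follows. The network architecture, MFCC parameters, and command-vocabulary size are assumptions for illustration; the conversion call uses the public tf2onnx API rather than anything specific to this paper.

```python
# Illustrative MFCC -> CNN -> ONNX pipeline (requires librosa, tensorflow, tf2onnx).
# The architecture, MFCC settings, and 8-command vocabulary are assumptions.
import numpy as np
import librosa
import tensorflow as tf
import tf2onnx

SR, N_MFCC, N_FRAMES, N_COMMANDS = 16000, 13, 44, 8

def mfcc_features(signal: np.ndarray) -> np.ndarray:
    """Fixed-size MFCC matrix: pad or trim to N_FRAMES frames."""
    m = librosa.feature.mfcc(y=signal, sr=SR, n_mfcc=N_MFCC)
    m = m[:, :N_FRAMES] if m.shape[1] >= N_FRAMES else np.pad(m, ((0, 0), (0, N_FRAMES - m.shape[1])))
    return m[..., np.newaxis].astype(np.float32)           # shape (13, 44, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_MFCC, N_FRAMES, 1), name="mfcc"),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(N_COMMANDS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Smoke test on a synthetic waveform (real training data would be recorded commands).
x = mfcc_features(np.random.randn(SR).astype(np.float32))[np.newaxis, ...]
print(model.predict(x, verbose=0).shape)                    # (1, 8)

# Export to ONNX so an ONNX runtime inside the VR application can run the model.
spec = (tf.TensorSpec((None, N_MFCC, N_FRAMES, 1), tf.float32, name="mfcc"),)
tf2onnx.convert.from_keras(model, input_signature=spec, opset=13, output_path="voice_cnn.onnx")
```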
