Search Results (385)

Search Parameters:
Keywords = design repositories

31 pages, 334 KiB  
Article
Enhancing Discoverability: A Metadata Framework for Empirical Research in Theses
by Giannis Vassiliou, George Tsamis, Stavroula Chatzinikolaou, Thomas Nipurakis and Nikos Papadakis
Algorithms 2025, 18(8), 490; https://doi.org/10.3390/a18080490 - 6 Aug 2025
Abstract
Despite the significant volume of empirical research found in student-authored academic theses—particularly in the social sciences—these works are often poorly documented and difficult to discover within institutional repositories. A key reason for this is the lack of appropriate metadata frameworks that balance descriptive richness with usability. General standards such as Dublin Core are too simplistic to capture critical research details, while more robust models like the Data Documentation Initiative (DDI) are too complex for non-specialist users and not designed for use with student theses. This paper presents the design and validation of a lightweight, web-based metadata framework specifically tailored to document empirical research in academic theses. We are the first to adapt existing hybrid Dublin Core–DDI approaches specifically for thesis documentation, with a novel focus on cross-methodological research and non-expert usability. The model was developed through a structured analysis of actual student theses and refined to support intuitive, structured metadata entry without requiring technical expertise. The resulting system enhances the discoverability, classification, and reuse of empirical theses within institutional repositories, offering a scalable solution to elevate the visibility of the gray literature in higher education. Full article
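A record under such a hybrid framework can be pictured as a flat set of Dublin Core descriptive fields plus a few DDI-style methodological fields. A minimal sketch in Python — the field names below are illustrative assumptions, not the schema proposed in the paper:

```python
# Hypothetical hybrid Dublin Core / DDI-style thesis record.
# Field names are illustrative, not the paper's actual metadata model.
DUBLIN_CORE_FIELDS = {"title", "creator", "date", "subject"}
DDI_STYLE_FIELDS = {"research_method", "sample_size", "data_collection"}

def validate_record(record: dict) -> list:
    """Return the sorted list of required fields missing from a record."""
    required = DUBLIN_CORE_FIELDS | DDI_STYLE_FIELDS
    return sorted(required - record.keys())

thesis = {
    "title": "Survey of Student Attitudes",
    "creator": "A. Student",
    "date": "2024",
    "subject": "social sciences",
    "research_method": "questionnaire",
    "sample_size": 120,
    "data_collection": "online survey",
}
print(validate_record(thesis))  # [] -- all required fields present
```

The point of such a flat schema is that a non-specialist can fill it in without knowing DDI, while repository software can still index the methodological fields.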
19 pages, 3468 KiB  
Article
Fine-Tuning Models for Histopathological Classification of Colorectal Cancer
by Houda Saif ALGhafri and Chia S. Lim
Diagnostics 2025, 15(15), 1947; https://doi.org/10.3390/diagnostics15151947 - 3 Aug 2025
Viewed by 117
Abstract
Background/Objectives: This study aims to design and evaluate transfer learning strategies that fine-tune multiple pre-trained convolutional neural network architectures based on their characteristics to improve the accuracy and generalizability of colorectal cancer histopathological image classification. Methods: The application of transfer learning with pre-trained models on specialized and multiple datasets is proposed, where the proposed models, CRCHistoDense, CRCHistoIncep, and CRCHistoXcep, are algorithmically fine-tuned at varying depths to improve the performance of colorectal cancer classification. These models were applied to datasets of 10,613 images from public and private repositories, external sources, and unseen data. To validate the models’ decision-making and improve transparency, we integrated Grad-CAM to provide visual explanations of the features that influence classification decisions. Results and Conclusions: On average across all datasets, CRCHistoDense, CRCHistoIncep, and CRCHistoXcep achieved test accuracies of 99.34%, 99.48%, and 99.45%, respectively, highlighting the effectiveness of fine-tuning in improving classification performance and generalization. Statistical methods, including paired t-tests, ANOVA, and the Kruskal–Wallis test, confirmed significant improvements in the proposed methods’ performance, with p-values below 0.05. These findings demonstrate that fine-tuning based on the characteristics of the CNN architecture enhances colorectal cancer classification in histopathology, thereby improving the diagnostic potential of deep learning models. Full article

23 pages, 6315 KiB  
Article
A Kansei-Oriented Morphological Design Method for Industrial Cleaning Robots Integrating Extenics-Based Semantic Quantification and Eye-Tracking Analysis
by Qingchen Li, Yiqian Zhao, Yajun Li and Tianyu Wu
Appl. Sci. 2025, 15(15), 8459; https://doi.org/10.3390/app15158459 - 30 Jul 2025
Viewed by 150
Abstract
In the context of Industry 4.0, user demands for industrial robots have shifted toward diversification and experience-orientation. Effectively integrating users’ affective imagery requirements into industrial-robot form design remains a critical challenge. Traditional methods rely heavily on designers’ subjective judgments and lack objective data on user cognition. To address these limitations, this study develops a comprehensive methodology grounded in Kansei engineering that combines Extenics-based semantic analysis, eye-tracking experiments, and user imagery evaluation. First, we used web crawlers to harvest user-generated descriptors for industrial floor-cleaning robots and applied Extenics theory to quantify and filter key perceptual imagery features. Second, eye-tracking experiments captured users’ visual-attention patterns during robot observation, allowing us to identify pivotal design elements and assemble a sample repository. Finally, the semantic differential method collected users’ evaluations of these design elements, and correlation analysis mapped emotional needs onto stylistic features. Our findings reveal strong positive correlations between four core imagery preferences—“dignified,” “technological,” “agile,” and “minimalist”—and their corresponding styling elements. By integrating qualitative semantic data with quantitative eye-tracking metrics, this research provides a scientific foundation and novel insights for emotion-driven design in industrial floor-cleaning robots. Full article
(This article belongs to the Special Issue Intelligent Robotics in the Era of Industry 5.0)
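The correlation step that maps emotional needs onto stylistic features can be illustrated with a plain Pearson coefficient. A minimal sketch, with hypothetical semantic-differential ratings (1–7 scale) standing in for the study's data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical ratings from five participants for the "minimalist" imagery
# word and one candidate styling element -- illustrative values only.
imagery = [6, 5, 7, 4, 6]
styling = [5, 5, 6, 3, 6]
print(round(pearson_r(imagery, styling), 3))  # 0.895
```

A strongly positive coefficient, as here, is the kind of evidence the study uses to link an imagery preference to a styling element.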

16 pages, 782 KiB  
Article
Knowledge-Based Engineering in Strategic Logistics Planning
by Roman Gumzej, Tomaž Kramberger, Kristijan Brglez and Rebeka Kovačič Lukman
Sustainability 2025, 17(15), 6820; https://doi.org/10.3390/su17156820 - 27 Jul 2025
Viewed by 157
Abstract
Strategic logistics planning is used by management to define action plans that enable organizations to consistently make decisions in their best interests. These plans are based on a knowledge repository of business experiences, usually a centralized, organized, and searchable digital system in which organizations store and manage critical institutional knowledge. Such an institutional knowledge base thus provides sustainability, keeping experiences readily available and well organized. In this research, the experiences of logistics experts from selected scholarly design-for-six-sigma business improvement projects were collected, classified, and organized into a logistics knowledge management system. Although originally intended to support current and future decisions in the strategic logistics planning of the cooperating companies, it is also used in logistics education to introduce knowledge-based engineering principles into enterprise strategic planning, based on continuous improvement of quality-related product or process performance indicators. The main goal of this article is to highlight the benefits of knowledge-based engineering over an established ontological logistics knowledge base in smart production, based on the premise that ontological institutional knowledge base management is more efficient, adaptable, and sustainable. Full article

34 pages, 1954 KiB  
Article
A FAIR Resource Recommender System for Smart Open Scientific Inquiries
by Syed N. Sakib, Sajratul Y. Rubaiat, Kallol Naha, Hasan H. Rahman and Hasan M. Jamil
Appl. Sci. 2025, 15(15), 8334; https://doi.org/10.3390/app15158334 - 26 Jul 2025
Viewed by 240
Abstract
A vast proportion of scientific data remains locked behind dynamic web interfaces, often called the deep web—inaccessible to conventional search engines and standard crawlers. This gap between data availability and machine usability hampers the goals of open science and automation. While registries like FAIRsharing offer structured metadata describing data standards, repositories, and policies aligned with the FAIR (Findable, Accessible, Interoperable, and Reusable) principles, they do not enable seamless, programmatic access to the underlying datasets. We present FAIRFind, a system designed to bridge this accessibility gap. FAIRFind autonomously discovers, interprets, and operationalizes access paths to biological databases on the deep web, regardless of their FAIR compliance. Central to our approach is the Deep Web Communication Protocol (DWCP), a resource description language that represents web forms, HyperText Markup Language (HTML) tables, and file-based data interfaces in a machine-actionable format. Leveraging large language models (LLMs), FAIRFind combines a specialized deep web crawler and web-form comprehension engine to transform passive web metadata into executable workflows. By indexing and embedding these workflows, FAIRFind enables natural language querying over diverse biological data sources and returns structured, source-resolved results. Evaluation across multiple open-source LLMs and database types demonstrates over 90% success in structured data extraction and high semantic retrieval accuracy. FAIRFind advances existing registries by turning linked resources from static references into actionable endpoints, laying a foundation for intelligent, autonomous data discovery across scientific domains. Full article

17 pages, 2001 KiB  
Article
A Methodological Route for Teaching Vocabulary in Spanish as a Foreign Language Using Oral Tradition Stories: The Witches of La Jagua and Colombia’s Linguistic and Cultural Diversity
by Daniel Guarín
Educ. Sci. 2025, 15(8), 949; https://doi.org/10.3390/educsci15080949 - 23 Jul 2025
Viewed by 362
Abstract
Oral tradition stories hold a vital place in language education, offering rich repositories of linguistic, cultural, and historical knowledge. In the Spanish as a Foreign Language (SFL) context, their inclusion provides dynamic opportunities to explore diversity, foster critical and creative thinking, and challenge dominant epistemologies. Despite their pedagogical potential, these narratives remain largely absent from formal curricula, with most SFL textbooks still privileging canonical works, particularly those from the Latin American Boom or European literary texts. This article aims to provide practical guidance for SFL instructors on designing effective, culturally responsive materials for the teaching of vocabulary. Drawing on a methodological framework for material design and a cognitive approach to vocabulary learning, I present original pedagogical material based on a Colombian oral tradition story about the witches of La Jagua (Huila, Colombia) to inspire educators to integrate oral tradition stories into their classrooms. As argued throughout, oral narratives not only support vocabulary acquisition and intercultural competence but also offer students meaningful engagement with the values, worldviews, and linguistic diversity that shape Colombian culture. This approach redefines language teaching through a more descriptive, contextualized, and culturally grounded lens, equipping learners with pragmatic, communicative, and intercultural skills essential for the 21st century. My goal with this article is to advocate for teacher agency in material creation, emphasizing that educators are uniquely positioned to design pedagogical resources that reflect their own cultural realities and local knowledge and to adapt them meaningfully to their students’ needs. Full article

34 pages, 3660 KiB  
Review
A Guide in Synthetic Biology: Designing Genetic Circuits and Their Applications in Stem Cells
by Karim S. Elnaggar, Ola Gamal, Nouran Hesham, Sama Ayman, Nouran Mohamed, Ali Moataz, Emad M. Elzayat and Nourhan Hassan
SynBio 2025, 3(3), 11; https://doi.org/10.3390/synbio3030011 - 22 Jul 2025
Viewed by 711
Abstract
Stem cells, unspecialized cells with regenerative and differentiation capabilities, hold immense potential in regenerative medicine, exemplified by hematopoietic stem cell transplantation. However, their clinical application faces significant limitations, including their tumorigenic risk due to uncontrolled proliferation and cellular heterogeneity. This review explores how synthetic biology, an interdisciplinary approach combining engineering and biology, offers promising solutions to these challenges. It discusses the concepts, toolkit, and advantages of synthetic biology, focusing on the design and integration of genetic circuits to program stem cell differentiation and engineer safety mechanisms like inducible suicide switches. This review comprehensively examines recent advancements in synthetic biology applications for stem cell engineering, including programmable differentiation circuits, cell reprogramming strategies, and therapeutic cell engineering approaches. We highlight specific examples of genetic circuits that have been successfully implemented in various stem cell types, from embryonic stem cells to induced pluripotent stem cells, demonstrating their potential for clinical translation. Despite these advancements, the integration of synthetic biology with mammalian cells remains complex, necessitating further research, standardized datasets, open access repositories, and interdisciplinary collaborations to build a robust framework for predicting and managing this complexity. Full article
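As a concrete picture of the kind of genetic circuit such a review covers, the classic Gardner two-repressor toggle switch can be simulated in a few lines. This is a textbook illustration with assumed parameter values, not a model taken from the review:

```python
def toggle_switch(u0, v0, alpha=10.0, beta=2.0, steps=20000, dt=0.01):
    """Euler-integrate the classic two-repressor toggle switch:
    each repressor inhibits the other's production and decays linearly.
    Parameters are illustrative, chosen inside the bistable regime."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v ** beta) - u
        dv = alpha / (1.0 + u ** beta) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

u, v = toggle_switch(5.0, 1.0)   # start with repressor u dominant
print(u > v)  # True: the circuit latches into the u-high state
```

Bistability is exactly the property exploited by the safety mechanisms the review discusses, such as switches that lock a cell into a chosen state.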

22 pages, 4581 KiB  
Article
Strategies to Mitigate Risks in Building Information Modelling Implementation: A Techno-Organizational Perspective
by Ibrahim Dogonyaro and Amira Elnokaly
Intell. Infrastruct. Constr. 2025, 1(2), 5; https://doi.org/10.3390/iic1020005 - 17 Jul 2025
Viewed by 210
Abstract
The construction industry is moving toward the era of Industry 4.0 and 5.0, with Building Information Modelling (BIM) gaining significant traction owing to inherent advantages such as enhanced construction design, process, and data management. However, the integration of BIM presents risks that are often overlooked in project implementation. This study develops a novel amalgamated dimensional factor (the techno-organizational aspect) intended to identify these risks and align appropriate management strategies with them. First, an in-depth analysis of BIM and risk management was conducted through an integrative review. The study used an exploratory review centred on journal articles and conference papers sourced from Scopus and Google Scholar, which were then processed with NVivo 12 Pro to categorise risks through thematic analysis, resulting in a comprehensive Risk Breakdown Structure (RBS). Qualitative content analysis was then employed to identify and develop management strategies, and further data collection via an online survey was crucial for closing the identified research gap. A mixed-methods analysis determined risk severity quantitatively using SPSS (version 29), while the qualitative strand linked management strategies to the risk factors. The findings accentuate the crucial linkages of key strategies, such as a version control system governing BIM data repository transactions to mitigate the challenge of controlling transactions in a multi-model collaborative environment. The study extends into an underexplored amalgamated domain (the techno-organisational spectrum) and thereby makes a significant contribution to bridging the existing research gap in understanding the intricate relationship between BIM implementation risks and effective management strategies. Full article

14 pages, 1289 KiB  
Article
Method for Extracting Arterial Pulse Waveforms from Interferometric Signals
by Marian Janek, Ivan Martincek and Gabriela Tarjanyiova
Sensors 2025, 25(14), 4389; https://doi.org/10.3390/s25144389 - 14 Jul 2025
Viewed by 332
Abstract
This paper presents a methodology for extracting and simulating arterial pulse waveform signals from Fabry–Perot interferometric measurements, emphasizing a practical approach for noninvasive cardiovascular assessment. A key novelty of this work is the presentation of a complete Python-based processing pipeline, which is made publicly available as open-source code on GitHub (git version 2.39.5). To the authors’ knowledge, no such repository for demodulating these specific interferometric signals to obtain a raw arterial pulse waveform previously existed. The proposed system utilizes accessible Python-based preprocessing steps, including outlier removal, Butterworth high-pass filtering, and min–max normalization, designed for robust signal quality even in settings with common physiological artifacts. Key features such as the rate of change, the Hilbert transform of the rate of change (envelope), and detected extrema guide the signal reconstruction, offering a computationally efficient pathway to reveal its periodic and phase-dependent dynamics. Visual analyses highlight amplitude variations and residual noise sources, primarily attributed to sensor bandwidth limitations and interpolation methods, considerations critical for real-world deployment. Despite these practical challenges, the reconstructed arterial pulse waveform signals provide valuable insights into arterial motion, with the methodology’s performance validated on measurements from three subjects against synchronized ECG recordings. This demonstrates the viability of Fabry–Perot sensors as a potentially cost-effective and readily implementable tool for noninvasive cardiovascular diagnostics. The results underscore the importance of precise yet practical signal processing techniques and pave the way for further improvements in interferometric sensing, bio-signal analysis, and their translation into clinical practice. Full article
(This article belongs to the Special Issue Advanced Sensors for Human Health Management)
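Two of the preprocessing steps named above — outlier removal and min–max normalization — can be sketched with the standard library alone (the Butterworth filtering and Hilbert transform in the actual pipeline would use something like scipy.signal). The outlier rule here is an illustrative assumption, not the paper's exact policy:

```python
from statistics import mean, stdev

def remove_outliers(samples, k=2.0):
    """Replace samples more than k standard deviations from the mean
    with the mean (a simple placeholder policy; the paper's exact
    outlier rule is not specified in the abstract)."""
    m, s = mean(samples), stdev(samples)
    return [m if abs(x - m) > k * s else x for x in samples]

def min_max_normalize(samples):
    """Scale samples linearly into [0, 1]."""
    lo, hi = min(samples), max(samples)
    return [(x - lo) / (hi - lo) for x in samples]

raw = [0.1, 0.2, 9.9, 0.3, 0.2, 0.1]   # 9.9 is an artificial spike
clean = remove_outliers(raw)
norm = min_max_normalize(clean)
print(all(0.0 <= x <= 1.0 for x in norm))  # True
```

Normalizing after despiking matters: a single artifact would otherwise compress the whole waveform into a narrow band of the [0, 1] range.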

21 pages, 614 KiB  
Article
The Decarbonisation of Heating and Cooling Following EU Directives
by Joana Fernandes, Silvia Remédios, Frank Gérard, Andro Bačan, Martin Stroleny, Vassiliki Drosou and Rosie Christodoulaki
Energies 2025, 18(13), 3432; https://doi.org/10.3390/en18133432 - 30 Jun 2025
Cited by 1 | Viewed by 319
Abstract
Heating and cooling (H&C) accounts for approximately 50% of the European Union’s (EU) total energy demand and remains significantly reliant on imported fossil fuels. Hence, addressing the decarbonization of the H&C sector is key to achieving a successful energy transition. At the EU level, several legislative instruments within the Fit for 55 package directly target the decarbonization of H&C, including the core directives on renewable energy, energy efficiency, and the energy performance of buildings. At the national level, EU Member States (MSs) have developed National Energy and Climate Plans (NECPs), which are the main framework for defining national energy transition strategies, including measures to address H&C. Within the EU-funded REDI4HEAT project, a guideline was developed to support the assessment of policy documents—particularly NECPs—regarding the robustness of their policies and measures for decarbonizing H&C. This assessment framework supports the identification of gaps and opportunities through six key Strategic Policy Priority (SPP) areas, offering a set of policy options that can be further elaborated into effective measures. The design of these policy measures is informed by the Knowledge Sharing Centre—an online repository of replicable and adaptable initiatives that can be tailored to the specific geographical, social, and economic contexts of each MS. Full article
(This article belongs to the Collection Energy Transition Towards Carbon Neutrality)

21 pages, 4080 KiB  
Article
M-Learning: Heuristic Approach for Delayed Rewards in Reinforcement Learning
by Cesar Andrey Perdomo Charry, Marlon Sneider Mora Cortes and Oscar J. Perdomo
Mathematics 2025, 13(13), 2108; https://doi.org/10.3390/math13132108 - 27 Jun 2025
Viewed by 354
Abstract
The current design of reinforcement learning methods requires extensive computational resources. Algorithms such as Deep Q-Network (DQN) have obtained outstanding results in advancing the field. However, the need to tune thousands of parameters and run millions of training episodes remains a significant challenge. This document proposes a comparative analysis between the Q-Learning algorithm, which laid the foundations for Deep Q-Learning, and our proposed method, termed M-Learning. The comparison is conducted using Markov Decision Processes with the delayed reward as a general test bench framework. Firstly, this document provides a full description of the main challenges related to implementing Q-Learning, particularly concerning its multiple parameters. Then, the foundations of our proposed heuristic are presented, including its formulation, and the algorithm is described in detail. The methodology used to compare both algorithms involved training them in the Frozen Lake environment. The experimental results, along with an analysis of the best solutions, demonstrate that our proposal requires fewer episodes and exhibits reduced variability in the outcomes. Specifically, M-Learning trains agents 30.7% faster in the deterministic environment and 61.66% faster in the stochastic environment. Additionally, it achieves greater consistency, reducing the standard deviation of scores by 58.37% and 49.75% in the deterministic and stochastic settings, respectively. The code will be made available in a GitHub repository upon this paper’s publication. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms, 2nd Edition)
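The Q-Learning baseline that M-Learning is compared against can be sketched on a toy chain MDP with a single delayed reward at the far end. This is an illustrative stand-in for the Frozen Lake environment, not the paper's code:

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-Learning on a chain MDP where only entering the last
    state pays +1 -- a minimal delayed-reward setting."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]      # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if rng.random() < eps:                 # epsilon-greedy exploration
                a = rng.randrange(2)
            else:                                  # greedy with random tie-break
                best = max(q[s])
                a = rng.choice([i for i, v in enumerate(q[s]) if v == best])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
print(all(q[s][1] > q[s][0] for s in range(4)))  # expected True after training
```

The slow backward propagation of the goal reward through `gamma * max(q[s2])` is precisely the delayed-reward difficulty the paper's heuristic targets.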

20 pages, 4951 KiB  
Article
LNT-YOLO: A Lightweight Nighttime Traffic Light Detection Model
by Syahrul Munir and Huei-Yung Lin
Smart Cities 2025, 8(3), 95; https://doi.org/10.3390/smartcities8030095 - 6 Jun 2025
Viewed by 1136
Abstract
Autonomous vehicles are one of the key components of smart mobility, leveraging innovative technology to navigate and operate safely in urban environments. Traffic light detection (TLD) systems play a central role in navigation during challenging traffic scenarios. Nighttime driving poses significant challenges for autonomous vehicle navigation, particularly with regard to the accuracy of TLD systems. Existing TLD methodologies frequently encounter difficulties under low-light conditions due to factors such as variable illumination, occlusion, and the presence of distracting light sources. Moreover, most recent works have focused only on daytime scenarios, overlooking the significantly increased risk and complexity of nighttime driving. To address these critical issues, this paper introduces a novel approach for nighttime traffic light detection using the LNT-YOLO model, which is based on the YOLOv7-tiny framework. LNT-YOLO incorporates enhancements specifically designed to improve the detection of small and poorly illuminated traffic signals. Low-level feature information is utilized to recover small-object features that are otherwise lost in the pyramid structure of the YOLOv7-tiny neck. A novel SEAM attention module is proposed to refine features representing both spatial and channel information by leveraging the Simple Attention Module (SimAM) and the Efficient Channel Attention (ECA) mechanism. The HSM-EIoU loss function is also proposed to accurately detect small traffic lights by amplifying the loss for hard-sample objects. In response to the limited availability of datasets for nighttime traffic light detection, this paper also presents the TN-TLD dataset, a newly curated collection of carefully annotated images from real-world nighttime driving scenarios, featuring both circular and arrow traffic signals.
Experimental results demonstrate that the proposed model achieves high accuracy in recognizing traffic lights in the TN-TLD dataset and in the publicly available LISA dataset. The LNT-YOLO model outperforms the original YOLOv7-tiny model and other state-of-the-art object detection models in mAP performance by 13.7% to 26.2% on the TN-TLD dataset and by 9.5% to 24.5% on the LISA dataset. These results underscore the model’s feasibility and robustness compared to other state-of-the-art object detection models. The source code and dataset will be available through the GitHub repository. Full article
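The base quantity that EIoU-family losses such as the proposed HSM-EIoU extend is plain intersection-over-union; the hard-sample amplification itself is not reproduced here. A minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ~= 0.1429
```

For tiny objects like distant traffic lights, a few pixels of misalignment slashes IoU sharply, which is why losses that amplify hard (low-IoU) samples help here.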

27 pages, 5632 KiB  
Article
Semantic Fusion of Health Data: Implementing a Federated Virtualized Knowledge Graph Framework Leveraging Ontop System
by Abid Ali Fareedi, Stephane Gagnon, Ahmad Ghazawneh and Raul Valverde
Future Internet 2025, 17(6), 245; https://doi.org/10.3390/fi17060245 - 30 May 2025
Viewed by 527
Abstract
Data integration (DI) and semantic interoperability (SI) are critical in healthcare, enabling seamless, patient-centric data sharing across systems to meet the demand for instant, unambiguous access to health information. Federated information systems (FIS) pose pressing challenges for seamless DI and SI, stemming from diverse data sources and models. We present a hybrid ontology-based design science research engineering (ODSRE) methodology that combines design science activities with ontology engineering principles to address these issues. ODSRE provides a systematic mechanism that leverages the Ontop virtual paradigm to establish a federated virtual knowledge graph framework (FVKG) embedding a virtualized knowledge graph approach, mitigating the aforementioned challenges effectively. The proposed FVKG constructs a virtualized data federation on top of the Ontop semantic query engine, which effectively resolves data bottlenecks. Through virtualization, the FVKG reduces data migration, ensures low latency and dynamic freshness, and facilitates real-time access while upholding integrity and coherence throughout the federation. As a result, we propose a customized framework for constructing monolithic ontological semantic artifacts, especially in FIS. The proposed FVKG incorporates ontology-based data access (OBDA) to build a monolithic virtualized repository that integrates various ontology-driven artifacts and ensures semantic alignment using schema-mapping techniques. Full article

21 pages, 2082 KiB  
Article
Characterizing Agile Software Development: Insights from a Data-Driven Approach Using Large-Scale Public Repositories
by Carlos Moreno Martínez, Jesús Gallego Carracedo and Jaime Sánchez Gallego
Software 2025, 4(2), 13; https://doi.org/10.3390/software4020013 - 24 May 2025
Viewed by 1084
Abstract
This study investigates the prevalence and impact of Agile practices by leveraging metadata from thousands of public GitHub repositories through a novel data-driven methodology. To facilitate this analysis, we developed the AgileScore index, a metric designed to identify and evaluate patterns, characteristics, performance and community engagement in Agile-oriented projects. This approach enables comprehensive, large-scale comparisons between Agile methodologies and traditional development practices within digital environments. Our findings reveal a significant annual growth of 16% in the adoption of Agile practices and validate the AgileScore index as a systematic tool for assessing Agile methodologies across diverse development contexts. Furthermore, this study introduces innovative analytical tools for researchers in software project management, software engineering and related fields, providing a foundation for future work in areas such as cost estimation and hybrid project management. These insights contribute to a deeper understanding of Agile’s role in fostering collaboration and adaptability in dynamic digital ecosystems. Full article
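An AgileScore-style index can be pictured as a weighted sum of boolean signals mined from repository metadata. The signal names and weights below are illustrative assumptions; the paper's actual formula is not given in the abstract:

```python
# Hypothetical AgileScore-style index over GitHub repository metadata.
# Signals and weights are illustrative assumptions only.
SIGNALS = {
    "has_issues": 0.2,          # issue tracking enabled
    "frequent_releases": 0.3,   # short, regular release cadence
    "pr_reviews": 0.3,          # pull requests reviewed before merge
    "board_labels": 0.2,        # sprint/kanban-style labels in use
}

def agile_score(repo: dict) -> float:
    """Weighted sum of boolean Agile signals, in [0, 1]."""
    return sum(w for k, w in SIGNALS.items() if repo.get(k))

repo = {"has_issues": True, "frequent_releases": True,
        "pr_reviews": False, "board_labels": True}
print(round(agile_score(repo), 2))  # 0.7
```

Scoring repositories this way is what makes large-scale comparisons possible: the index turns qualitative practice descriptions into a number that can be tracked year over year.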

21 pages, 2929 KiB  
Article
Spatiotemporal Analysis of Obesity: The Case of Italian Regions
by Elena Grimaccia and Luciano Rota
Obesities 2025, 5(2), 37; https://doi.org/10.3390/obesities5020037 - 21 May 2025
Viewed by 792
Abstract
This study examines the spatial and temporal evolution of obesity among adults in Italian regions. In Italy, regional administrative areas are responsible for providing health services. Moreover, Italian regions present different socioeconomic conditions and health and nutritional habits. As a result, a regional analysis of the spatiotemporal evolution of obesity allows the identification of key areas for prevention and control, enabling the design of more targeted and effective interventions. In this study, the geographic clustering of obesity in Italy was explored by analyzing the local spatial autocorrelation of regional-level prevalence rates of adulthood obesity between 2010 and 2022, updating and expanding the existing literature. Data from the Health For All repository are analyzed to determine distribution patterns and trends, employing choropleth maps, Moran’s Index and Welch’s t-test. Gender inequalities have been underlined both in the spatial and temporal distribution. Results show that obesity exhibits spatial clustering, with greater severity in the south. During the period under analysis, obesity prevalence rates in Italy show a tendency to grow, with a sharp increase during the COVID-19 lockdown. Full article
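The spatial clustering analysis rests on Moran's I, which compares each region's deviation from the mean prevalence with that of its neighbours. A minimal sketch with a hypothetical four-region chain (values and weights are illustrative, not the study's data):

```python
def morans_i(values, weights):
    """Global Moran's I for values x_i under spatial weight matrix w_ij."""
    n = len(values)
    mean = sum(values) / n
    dev = [x - mean for x in values]
    w_sum = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

# Four hypothetical regions on a north-south line; neighbours get weight 1.
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
north_to_south_rates = [8.0, 9.0, 12.0, 13.0]  # illustrative prevalence (%)
print(round(morans_i(north_to_south_rates, w), 3))  # 0.412
```

A clearly positive value, as here, indicates spatial clustering — neighbouring regions share similar prevalence — which is the pattern the study reports for southern Italy.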