Search Results (453)

Search Parameters:
Keywords = FAIR principles

15 pages, 2907 KB  
Article
GeoCetus: A Multi-Decadal Open Geospatial Infrastructure for the Continuous Monitoring of Marine Strandings in Italy
by Alessio Di Lorenzo, Ludovica Di Renzo, Chiara Profico, Daniela Profico, Vincenzo Olivieri and Sergio Guccione
Animals 2026, 16(9), 1323; https://doi.org/10.3390/ani16091323 (registering DOI) - 26 Apr 2026
Abstract
Marine turtle and cetacean strandings along the Italian coastline represent critical ecological events that require systematic documentation, yet historical data have suffered from fragmentation and poor accessibility across heterogeneous archives. GeoCetus addresses this gap by providing a unified national framework for the centralized collection, management, and open visualization of these data. The platform’s architecture integrates a spatially enabled database with a modern RESTful API, utilizing automated workflows to push data to a public GitHub repository. This system unifies historical and contemporary datasets, comprising over 4700 georeferenced records dating back to 1999, while ensuring data quality through structured validation, qualified contributors, and reverse geocoding. The results demonstrate a significant improvement in data interoperability and democratization, with the dataset expanding by an average of 150–300 new records annually under a CC-BY-SA license. By adhering to FAIR Data Principles, GeoCetus offers the necessary infrastructure to support real-time operational responses and reproducible ecological analyses. We conclude that this standardized, machine-readable approach is essential for evidence-based national conservation strategies and effective environmental monitoring.
(This article belongs to the Section Animal System and Management)
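As described, the platform exposes its records through a RESTful API and mirrors them to a public GitHub repository, which makes programmatic reuse straightforward. A minimal consumer sketch; the endpoint URL and record fields below are hypothetical stand-ins, not GeoCetus's actual schema:

    import requests  # third-party HTTP client: pip install requests

    # Hypothetical endpoint and field names; GeoCetus's real API and
    # record schema may differ.
    URL = "https://example.org/geocetus/api/strandings"
    records = requests.get(URL, params={"since": "1999-01-01"}, timeout=30).json()

    # Records are georeferenced, so a taxonomic/spatial filter is one list pass.
    cetaceans = [r for r in records
                 if r.get("group") == "cetacean" and "lat" in r and "lon" in r]
    print(f"{len(cetaceans)} georeferenced cetacean strandings since 1999")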

27 pages, 2004 KB  
Review
Machine Learning in Personalized Medication Regimen Design for the Geriatric Population: Integrating Pharmacokinetic and Pharmacodynamic Modeling with Clinical Decision-Making
by Ahmad R. Alsayed, Mohanad Al-Darraji, Mohannad Al-Qaiseiah, Anas Samara and Mustafa Al-Bayati
Technologies 2026, 14(4), 241; https://doi.org/10.3390/technologies14040241 - 21 Apr 2026
Viewed by 433
Abstract
Geriatric pharmacotherapy is usually challenged by physiological senescence. For instance, progressive declines in organ function and alterations in body composition can complicate drug disposition. However, conventional pharmacometric models commonly have limited capacity to map these high-dimensional, nonlinear relationships. In this review, we examine the recent shift toward integrating machine learning (ML) with mechanistic pharmacokinetic (PK)/pharmacodynamic (PD) models to improve the accuracy and precision of dosing. Machine learning approaches like Random Forest and XGBoost consistently provided more accurate exposure predictions and significantly more efficient computational workflows than conventional methods. Nevertheless, concerns remain, such as “black-box” opacity and the potential for algorithmic bias toward specific patient demographics. Incorporating explainability tools like SHAP and adopting FAIR data principles are crucial for achieving professional trust and ensuring generalizability across sites.
(This article belongs to the Special Issue Technological Advances in Science, Medicine, and Engineering 2025)
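Where the review recommends pairing gradient-boosted exposure models with explainability tools, the pattern is compact in code. A sketch on synthetic data; the features, model, and target here are toy stand-ins for a real PK exposure model, not anything from the review:

    import numpy as np
    import shap      # pip install shap
    import xgboost   # pip install xgboost

    # Toy stand-in for a dose-exposure dataset (columns might represent
    # age, weight, creatinine clearance, dose); values are synthetic.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = X @ np.array([0.5, -0.3, 0.8, 0.1]) + rng.normal(scale=0.1, size=200)

    model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)

    # SHAP decomposes each prediction into per-feature contributions,
    # addressing the "black-box" concern raised above.
    shap_values = shap.TreeExplainer(model).shap_values(X)
    print(shap_values.shape)  # (200, 4): per-patient, per-feature attributions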

15 pages, 1002 KB  
Review
Enabling Next-Generation Mass Spectrometry-Based Proteomics: Standards, Proteoform Resolution, and FAIR, Reproducible, and Quantitative Analysis
by Rui Vitorino
Proteomes 2026, 14(2), 20; https://doi.org/10.3390/proteomes14020020 - 21 Apr 2026
Viewed by 201
Abstract
Recent advances in mass spectrometry, data-independent acquisition, proteoform-resolving workflows, and multi-omics integration have significantly expanded the scale and scope of proteomics. However, the reuse and translational application of these datasets are limited by inconsistent standards, insufficient metadata, and inadequate computational interoperability. Proteoform-centric approaches provide higher molecular resolution by capturing intact protein variants and patterns of post-translational modification. Computational methods, including selected applications of machine learning and large language models (LLMs), are increasingly used for tasks such as spectral prediction and pattern discovery in clinical proteomics datasets. Despite these advancements, FAIR (Findable, Accessible, Interoperable, and Reusable) data practices, proteoform biology, and AI analytics are often pursued independently. This work presents an integrated framework for next-generation proteomics in which standardization and FAIR principles establish machine-actionable foundations for proteoform-resolved analysis and computational inference. It examines community efforts to promote data sharing and interoperability, as well as strategies for characterizing proteoforms using bottom-up, middle-down, and top-down approaches. It also highlights emerging AI and ML applications within the proteomics workflow. The framework emphasizes the importance of treating proteoforms as primary computational entities and adopting FAIR practices during data collection to enable reproducible and interpretable modeling. Finally, it introduces an architectural model that integrates FAIR infrastructures and proteoform resolution, along with practical recommendations for making proteomics AI-ready, including a minimal community checklist to support reproducibility, benchmarking, and translational scalability.
(This article belongs to the Section Proteomics Technology and Methodology Development)
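The “machine-actionable foundations” the framework calls for reduce, at minimum, to metadata a script can validate. A toy record illustrating the four FAIR facets; every field name is illustrative, not a schema from the article or from any proteomics standard:

    # Illustrative machine-actionable metadata record; field names are
    # hypothetical, not taken from the article.
    dataset_record = {
        "identifier": "doi:10.0000/example",      # Findable: persistent ID
        "access_url": "https://example.org/ds1",  # Accessible: resolvable link
        "format": "mzML",                         # Interoperable: open format
        "license": "CC-BY-4.0",                   # Reusable: explicit terms
        "acquisition": "data-independent",        # method-level metadata
        "proteoform_level": "intact",             # proteoform-resolved entry
    }

    # A minimal checklist pass: every required field must be non-empty.
    missing = [k for k, v in dataset_record.items() if not v]
    print("FAIR-ready" if not missing else f"missing: {missing}")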

20 pages, 45555 KB  
Article
FAIRHiveFrames-1K: A Public FAIR Dataset of 1265 Annotated Hive Frame Images with Preliminary YOLOv8 and YOLOv11 Baselines
by Vladimir Kulyukin, Reagan Hill and Aleksey Kulyukin
Sensors 2026, 26(8), 2518; https://doi.org/10.3390/s26082518 (registering DOI) - 19 Apr 2026
Viewed by 151
Abstract
In precision apiculture, the portable digital camera is a cost-effective sensor for capturing hive images or videos used to quantify different colony variables. Openly accessible, well-annotated, interoperable cell-level image datasets are still the exception rather than the norm. This shortage constitutes a major barrier to AI-driven approaches aimed at automating image-based comb analysis. In this article, we present FAIRHiveFrames-1K, a publicly available dataset of 1265 annotated hive frame images (1920 × 1080 PNG) designed to facilitate research in AI-intensive image-based comb analysis automation. The dataset, derived from a 2013–2022 U.S. Department of Agriculture–Agricultural Research Service multi-sensor research reservoir, includes 124,669 annotated regions of interest for seven biologically meaningful categories consistent with comb analysis literature and standard hive inspection protocols. FAIRHiveFrames-1K is curated according to FAIR principles (Findable, Accessible, Interoperable, Reusable) and distributed under CC-BY 4.0 with standard annotation formats, fixed training and validation splits, and reproducible benchmarking artifacts. To establish preliminary baseline performance, we iteratively tuned four YOLO architectures (YOLOv8n, YOLOv8s, YOLOv11n, YOLOv11s) under a shared tuning protocol over the period of dataset growth.
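With fixed training/validation splits and standard annotation formats, reproducing one of the reported baselines is a routine fine-tuning run. A sketch using the Ultralytics API; the dataset YAML path and hyperparameters below are placeholders, not values from the article's tuning protocol:

    from ultralytics import YOLO  # pip install ultralytics

    # Placeholder config: the real dataset YAML, image size, and epoch
    # budget come from the FAIRHiveFrames-1K benchmarking artifacts.
    model = YOLO("yolov8n.pt")  # one of the four benchmarked architectures
    model.train(
        data="fairhiveframes1k.yaml",  # hypothetical dataset config
        imgsz=1920,                    # frames are 1920 x 1080 PNGs
        epochs=100,
    )
    metrics = model.val()  # evaluate on the fixed validation split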

26 pages, 1532 KB  
Review
Mapping the Evolution and Intellectual Structure of Marine Spatial Data Infrastructure (MSDI): A Systematic Review and Bibliometric Analysis
by Nuha Hamed Al-Subhi, Mohammed Nasser Al-Suqri and Faten Fatehi Hamad
Geographies 2026, 6(2), 39; https://doi.org/10.3390/geographies6020039 - 13 Apr 2026
Viewed by 198
Abstract
The proliferation of marine data presents both an opportunity for ocean governance and a challenge, contributing to fragmentation across disciplines, institutions, and sectors. Marine Spatial Data Infrastructure (MSDI) stands out as a major framework for integrating marine information. However, an integrated synthesis that combines quantitative mapping of publication patterns with qualitative analysis of thematic evolution remains absent. This study employs a two-step approach combining systematic review and bibliometric analysis of Scopus-indexed literature (2000–2024). Based on a focused corpus of 20 publications rigorously screened for explicit MSDI relevance, we examine publication trends, collaboration patterns, thematic structures, and evolutionary trajectories. Results indicate accelerating scholarly interest in MSDI, with European institutions contributing 75% of the analysed publications. Policy frameworks such as the INSPIRE Directive (Infrastructure for Spatial Information in the European Community) and the Marine Strategy Framework Directive (MSFD) emerge as key drivers of research activity. Temporal analysis of this corpus suggests a tentative five-phase evolution in MSDI research: (1) foundational technical standardisation, (2) governance model implementation, (3) semantic interoperability enhancement, (4) policy integration, and (5) advanced applications incorporating FAIR (Findable, Accessible, Interoperable, Reusable) and CARE (Collective Benefit, Authority to Control, Responsibility, Ethics) principles and Artificial Intelligence (AI). These phases, derived from systematic coding of thematic focus across publications, represent observed patterns within the analysed literature rather than definitive stages. This paper concludes that MSDI is evolving toward a socio-technical framing that positions it as more than a purely technical tool in present-day ocean governance. Future work should combine semantic AI, decentralised architectures, polycentric governance models, and impact assessment frameworks to align MSDI development with the objectives of equity, inclusion, and sustainability.

25 pages, 854 KB  
Systematic Review
Hybrid Machine Learning Architectures for Emergency Triage: A Systematic Review of Predictive Performance and the Complexity Gradient
by Junaid Ullah, R Kanesaraj Ramasamy and Venushini Rajendran
BioMedInformatics 2026, 6(2), 21; https://doi.org/10.3390/biomedinformatics6020021 - 10 Apr 2026
Viewed by 421
Abstract
Background: Emergency triage systems using machine learning traditionally rely on structured tabular data (vital signs), creating a “contextual blind spot” that ignores diagnostic information embedded in unstructured clinical narratives. Hybrid AI models that fuse tabular and text data may improve predictive discrimination, but the magnitude and conditions under which fusion adds value remain unclear. Methods: Five databases (PubMed, Scopus, Web of Science, IEEE Xplore, ACM Digital Library) were searched from 1 January 2015 to 15 December 2025. Eligible studies employed Hybrid AI models integrating structured and unstructured emergency department data with quantitative baseline comparisons. Twenty-five studies (N ≈ 4.8 million encounters) met inclusion criteria. We extracted marginal performance gains (ΔAUC), calibration metrics, and demographic reporting. Synthesis followed SWiM principles with subgroup meta-regression testing our novel “Complexity Gradient” hypothesis. Results: Hybrid models demonstrated superior discrimination compared to tabular baselines, with effect magnitude dependent on clinical task complexity. Low-complexity tasks (tachycardia prediction) showed minimal gains (median ΔAUC +0.036, IQR: 0.02–0.05), while high-complexity tasks (hypoxia, sepsis) demonstrated substantial improvement (median ΔAUC +0.111, IQR: 0.09–0.13). Meta-regression confirmed complexity significantly moderated effect size (R² = 0.42, p = 0.003). Only 12% (3/25) of studies reported calibration metrics (Brier scores: 0.089–0.142). Zero studies stratified performance by race/ethnicity; 88% (22/25) failed to report training data demographics. Discussion: The complexity gradient framework explains when multimodal fusion adds predictive value: tasks where diagnostic signal resides in narrative features (temporality, negation) rather than physiological measurements. However, systematic absence of calibration reporting and fairness auditing prevents clinical deployment. Seventy-two percent of studies had high risk of bias in the analysis domain due to retrospective designs without temporal validation. Conclusions: Hybrid triage models show promise for complex diagnostic tasks but require mandatory calibration reporting and demographic performance stratification before clinical implementation. We propose minimum reporting standards including Brier scores, race-stratified metrics, and temporal validation protocols.
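The complexity-gradient comparison above comes down to subgroup summaries of per-study ΔAUC values. A sketch of that computation; the numbers below are placeholders, not the review's extracted data:

    from statistics import quantiles

    # Placeholder per-study ΔAUC gains (hybrid minus tabular baseline),
    # grouped by task complexity; not the review's extracted values.
    delta_auc = {
        "low complexity (tachycardia)": [0.02, 0.03, 0.04, 0.05],
        "high complexity (sepsis, hypoxia)": [0.09, 0.10, 0.12, 0.13],
    }

    for task, gains in delta_auc.items():
        q1, med, q3 = quantiles(gains, n=4)  # quartile cut points
        print(f"{task}: median ΔAUC = +{med:.3f}, IQR = {q1:.3f}-{q3:.3f}")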

18 pages, 262 KB  
Entry
Assessment Analytics in Digital Assessments
by Okan Bulut and Seyma N. Yildirim-Erbasli
Encyclopedia 2026, 6(4), 81; https://doi.org/10.3390/encyclopedia6040081 - 2 Apr 2026
Viewed by 503
Definition
The rapid expansion of digital and technology-enhanced assessments has enabled the capture of far more than final responses or total scores. As learners navigate traditional formats, such as multiple-choice, short-answer, and performance tasks, digital delivery platforms routinely capture response times, response revisions, navigation patterns, and item-level metadata. More advanced formats, including interactive simulations, scenario-based tasks, and game-based assessments, further record fine-grained actions such as mouse clicks, keystrokes, hint requests, sequence of operations, and decision pathways. These increasingly rich data streams provide a multidimensional view of test-taker behavior, offering evidence about cognitive processes, strategy use, persistence, and motivation that goes beyond what correctness alone can reveal. Assessment analytics refers to the systematic collection, integration, and analysis of such data generated during the assessment process. In practice, this emerging field combines principles from psychometrics, learning analytics, data science, and human-computer interaction to evaluate the quality, validity, and fairness of assessments in digital environments. The ultimate goal of assessment analytics is to produce actionable evidence about how assessments measure what they intend to measure in contemporary, technology-rich educational contexts.
(This article belongs to the Section Social Sciences)
27 pages, 1344 KB  
Article
Ethical Challenges of Artificial Intelligence in Higher Education: A Four-Pillar Student-Activity Framework for Institutional Governance
by Radovan Madleňák, Lucia Madleňáková, Viktória Cvacho and Daniel Gachulinec
Educ. Sci. 2026, 16(4), 555; https://doi.org/10.3390/educsci16040555 - 2 Apr 2026
Viewed by 855
Abstract
This study introduces a four-pillar student-activity framework (Studying and Learning, Research and Projects, Personal and Career Development, and Campus and Community Life) to analyze AI’s ethical challenges in higher education. Drawing on peer-reviewed sources from 2022 to 2025, we identify recurring risks across pillars: academic integrity, privacy/data protection, bias/fairness/equity, student agency/(de)skilling, and governance gaps. We distill three cross-pillar principles: disclosure plus process evidence (e.g., prompt/version logs); privacy-by-design and proportionality; and equity/fairness scaffolds (institutional access, bias audits, and multilingual support). These translate into actionable strategies for assessment redesign, research supervision, career services, and campus operations. The framework unifies fragmented discourse, supports institutional decision making, and reveals gaps for longitudinal and causal research. It demonstrates that responsible AI use emerges when processes are visible, data practices are proportionate, and access is equitable, amplifying human learning without eroding trust or integrity.

19 pages, 849 KB  
Article
Ethical–Regulatory Guidelines for AI in Palliative Care Rehabilitation
by Daniela Oliveira, Sofia B. Nunes, Francisca Rego and Rui Nunes
Healthcare 2026, 14(7), 895; https://doi.org/10.3390/healthcare14070895 - 31 Mar 2026
Viewed by 335
Abstract
Background/Objectives: The integration of artificial intelligence (AI) into rehabilitation practice has expanded rapidly, including its emerging application in palliative care contexts. Although international organisations have established ethical and governance frameworks for AI in healthcare, these initiatives remain largely high-level and are not specifically tailored to the clinical complexity, vulnerability, and relational dimensions of palliative care rehabilitation. The absence of context-specific ethical–regulatory guidance poses challenges for responsible implementation in ethically sensitive settings. This study aimed to consolidate ethically grounded regulatory guidance for the use of AI in palliative care rehabilitation by translating existing international principles into context-sensitive domains. Methods: A qualitative documentary analysis with a normative ethical–regulatory orientation was conducted using the READ (Ready, Extract, Analyse, Distil) framework. Authoritative international policy, governance, and regulatory documents addressing AI in healthcare were identified and analysed. Data were extracted using a structured analytical table and coded according to predefined ethical–regulatory domains derived from previously published ethical guidelines and verified through documentary analysis. Results: The analysis identified five convergent ethical–regulatory domains recurrent across international governance frameworks: (1) Human oversight and clinical responsibility; (2) Patient autonomy, preferences, and proportionality; (3) Transparency and explainability; (4) Fairness, equity, and non-discrimination; and (5) Professional competence and ethical literacy. These domains were synthesised into practical ethical–regulatory considerations linking ethical principles with governance expectations and clinical implementation requirements. Conclusions: This study provides context-sensitive ethical–regulatory guidance that bridges high-level AI governance principles with the operational realities of palliative care rehabilitation. By systematising and operationalising existing ethical norms, the proposed framework supports responsible clinical decision-making, strengthens institutional accountability, and safeguards patient dignity and autonomy in vulnerable care contexts.

24 pages, 3181 KB  
Article
Neoliberal Phoenix: The Contested Legacy of Solidere’s Post-War Reconstruction of Beirut Central District
by Sarah Al-Thani, Jasim Azhar, Raffaello Furlan, Jalal Hoblos and Abdulla AlNuaimi
Urban Sci. 2026, 10(4), 184; https://doi.org/10.3390/urbansci10040184 - 30 Mar 2026
Viewed by 739
Abstract
Neoliberal privatization models, emphasizing economic advancement over universal fairness, present considerable challenges to the urban regeneration process in post-conflict environments. The Solidere project in Beirut shows how architectural development in the Central District establishes social obstacles through its transformation of 1.8 million m² of war-destroyed territory. This research applies UNESCO’s Historic Urban Landscape (HUL) framework to distinguish regeneration from gentrification systematically and to assess the impact of privatized governance. By employing rigorous case study methodologies to assess master plans, legal statutes, corporate reports, and academic publications, four HUL evaluation criteria (historical layering, social participation, spatial connectivity, and physical integrity) were developed. The results show that while Solidere’s physical reconstruction was successful, it did not fully incorporate HUL principles. This resulted in the forced relocation of between 40,000 and 60,000 individuals and the commercialization of heritage through façadism, with only 24% of the original buildings preserved and 76% destroyed. Sarajevo serves as a point of comparison, revealing the vulnerabilities of profit-driven approaches. The study shows that market-driven reconstruction efforts lacking public engagement will foster exclusionary gentrification, resulting in the erosion of urban identity and ownership, challenging neoliberal urban theories.
(This article belongs to the Special Issue Urban Regeneration: A Rethink)

33 pages, 3591 KB  
Review
Ethics in Artificial Intelligence: A Cross-Sectoral Review of 2019–2025
by Charalampos M. Liapis, Nikos Fazakis, Sotiris Kotsiantis and Yannis Dimakopoulos
Informatics 2026, 13(4), 51; https://doi.org/10.3390/informatics13040051 - 27 Mar 2026
Viewed by 1572
Abstract
Artificial Intelligence (AI) has transitioned from a specialized research area to a ubiquitous socio-technical infrastructure influencing sectors from healthcare and law to manufacturing and defense. In tandem with its transformative promise, AI has created an exponentially expanding ethics literature questioning fairness, transparency, accountability, and justice. This review synthesizes publications and key policy developments between 2019 and 2025, bringing sectoral discourses together with cross-cutting frameworks. Grounded in a systematic scoping review methodology, we frame the field along four meta-dimensions: trust and transparency, bias and fairness, governance and regulation, and justice, while investigating their expression across diverse sectors. Special attention is dedicated to healthcare (patient trust and algorithmic bias), education (integrity and authorship), media (misinformation), law (accountability), and the industrial sector (data integrity, intellectual property protection, and environmental safety). We ground abstract principles in concrete case studies to illustrate real-world harms and mitigation strategies. Furthermore, we incorporate pluralistic ethics (e.g., Ubuntu, Islamic perspectives), environmental ethics, and emerging challenges posed by Generative AI and neuro-AI interfaces. To bridge theory and practice, we propose an operational governance framework for organizations. We contend that success involves transitioning from principles toward ethics-by-design, pluralistic governance, sustainability, and adaptive oversight. This review is intended for scholars, practitioners, and policymakers who need a comprehensive and actionable framework for navigating the complex landscape of AI ethics.

33 pages, 1502 KB  
Review
Ethics Without Teeth? Challenges and Opportunities in AI Declarations for Platform Governance
by Ahmad Haidar
J. Theor. Appl. Electron. Commer. Res. 2026, 21(4), 103; https://doi.org/10.3390/jtaer21040103 - 26 Mar 2026
Viewed by 1184
Abstract
The rapid integration of artificial intelligence (AI) into digital platforms has raised critical questions about how AI ethics declarations influence this sector. This study adopts a mixed-methods approach. First, a descriptive content analysis examined 54 declarations, including 45 national declarations across Africa, Asia, Europe, and the Americas, and 9 from major global actors (MGAs) such as the OECD, G7, and the EU. Ethical principle frequency was examined, and a benchmarking index was developed to compare “dominant principles” cited in over 50% of regional declarations with those cited in over 50% of MGA declarations. The analysis reveals universal adoption of societal well-being, fairness, accountability, and privacy (100%), while transparency and security show regional variation (75%). Second, a semi-systematic literature review following PRISMA guidelines identified four opportunities (e.g., global participation) and seven limitations (e.g., lack of standard frameworks, definitional ambiguities, implementation challenges, and legal enforcement difficulties). The implications of these limitations for digital platforms are then examined, leading to the identification of two dimensions for responsible platform governance: assessment mechanisms (e.g., UNESCO’s Ethical Impact Assessment) and governance implementation structures. The study further distinguishes three tiers of enforceability: declarative, procedural, and institutionalized ethics, bridging normative declarations and operational practice in platform governance.
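The benchmarking step described above is a 50% frequency cut within each group followed by a set comparison. A sketch with placeholder citation counts (only the group sizes, 45 national and 9 MGA declarations, come from the study):

    # Placeholder citation counts per principle; only the group sizes
    # (45 national, 9 MGA declarations) come from the study.
    regional = {"fairness": 45, "privacy": 45, "transparency": 34, "security": 30}
    mga = {"fairness": 9, "privacy": 9, "transparency": 8, "security": 5}

    def dominant(counts: dict[str, int], total: int) -> set[str]:
        """Principles cited in over 50% of a group's declarations."""
        return {p for p, n in counts.items() if n / total > 0.5}

    shared = dominant(regional, 45) & dominant(mga, 9)
    print("dominant in both groups:", sorted(shared))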

16 pages, 1304 KB  
Article
Determining the Origin of Electricity Consumed from Low-Carbon and Renewable Energy Sources: A Matrix-Based Modelling Approach and Algorithm
by Andrzej Smolarz, Saule Smailova, Ainur Ormanbekova, Iryna Hunko, Petr Lezhniuk, Vladyslav Lysyi and Laura Duisembayeva
Energies 2026, 19(7), 1620; https://doi.org/10.3390/en19071620 - 25 Mar 2026
Viewed by 364
Abstract
This article details a matrix-based mathematical method to calculate power flows and transmission losses in an electric grid specifically attributable to low-carbon and renewable energy sources (LCRES: wind, solar, nuclear). The goal is to improve the transparency and reliability of Guarantees of Origin (GO) certificates. Current GO schemes rely on contractual accounting and neglect physical power losses, undermining consumers’ confidence that they receive “clean” energy. The method uses steady-state power flow analysis to derive a power-loss distribution coefficient matrix. This matrix accurately allocates grid losses back to the LCRES generating nodes, complying strictly with electrical engineering principles. It accommodates both time-varying renewable output and stable nuclear generation. The results offer highly accurate loss-attribution data, supporting more verifiable GOs, ensuring fair compensation for losses, and enhancing energy balance accuracy in hybrid power systems.
(This article belongs to the Section A1: Smart Grids and Microgrids)
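The allocation step at the heart of the method can be written as a single matrix product once the coefficient matrix is known. A schematic sketch; the coefficients below are stand-ins, since deriving them requires the steady-state power-flow solution the article describes:

    import numpy as np

    # Stand-in loss-distribution coefficients: K[i, j] is the share of
    # branch j's losses attributed to LCRES node i. Each column sums to
    # one, so the allocation conserves total losses.
    K = np.array([
        [0.6, 0.1, 0.0],   # node 1: wind
        [0.3, 0.7, 0.2],   # node 2: solar
        [0.1, 0.2, 0.8],   # node 3: nuclear
    ])
    branch_losses_mw = np.array([1.5, 0.8, 2.1])  # per-branch losses, MW

    node_losses_mw = K @ branch_losses_mw  # losses attributed per node
    for src, loss in zip(("wind", "solar", "nuclear"), node_losses_mw):
        print(f"{src}: {loss:.2f} MW")
    assert abs(node_losses_mw.sum() - branch_losses_mw.sum()) < 1e-9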

32 pages, 3942 KB  
Article
Adopting MOD-API in a Modern Dataset Catalog Platform: Opportunities, Challenges and Limitations
by Manuel Fiorelli, Paolo Bocciarelli and Armando Stellato
Technologies 2026, 14(3), 193; https://doi.org/10.3390/technologies14030193 - 23 Mar 2026
Viewed by 479
Abstract
As data exploitation continues to demonstrate its value, ontologies, thesauri, and other semantic datasets are increasingly recognized for enabling semantically meaningful data integration across disparate domains. With the proliferation of dataset catalogs, the MOD ontology (Metadata for Ontology Description and publication) was adopted, and an associated API was developed to support the future European Open Science Cloud (EOSC). Their aim is to harmonize catalogs of semantic datasets with respect to metadata vocabularies and access mechanisms, thereby ensuring compliance with the FAIR principles. Within an implementation action involving developers of prominent dataset catalogs, we were selected to integrate the MOD-API into ShowVoc, our platform for publishing and consuming ontologies, thesauri, lexicons, and other Semantic Web datasets. However, ShowVoc already relied on an expressive metadata model, the MDR (acronym for “Metadata Registry”), named after the component responsible for managing the platform’s internal catalog. Due to precise dissemination requirements, the MDR provides multiple abstraction levels and detailed specifications concerning the distributions and formats in which a dataset may be made available. In this article, we report on the challenges that we faced and the trade-offs that we made while reconciling these metadata models, highlighting limitations in the current MOD standard that may inform future enhancements.
(This article belongs to the Section Information and Communication Technologies)

22 pages, 1384 KB  
Article
Deriving Empirically Grounded NFR Specifications from Practitioner Discourse: A Validated Methodology Applied to Trustworthy APIs in the AI Era
by Apitchaka Singjai
Information 2026, 17(3), 304; https://doi.org/10.3390/info17030304 - 22 Mar 2026
Cited by 1 | Viewed by 342
Abstract
Specifying non-functional requirements (NFRs) for rapidly evolving domains such as trustworthy APIs in the AI era is challenging, as best practices emerge through practitioner discourse faster than traditional requirements engineering can capture them. We present a systematic methodology for deriving prioritized NFR specifications from multimedia practitioner discourse, combining AI-assisted transcript analysis, grounded theory principles, and Theme Coverage Score (TCS) validation. Our five-task approach integrates purposive sampling, automated transcription with speaker diarization, grounded theory coding extracting stakeholder-specific themes with TCS quantification, MoSCoW prioritization using empirically derived thresholds (Must Have ≥85%, Should Have 65–84%, Could Have 45–64%, and Won’t Have <45%), and NFR specification consistent with ISO/IEC 25010:2023 principles of stakeholder perspective, measurable quality criteria, and explicit rationale. Applying this methodology to 22 expert presentations on trustworthy APIs yields a Weighted Coverage Score of 0.71 and 30 prioritized NFR specifications across five trustworthiness dimensions. MoSCoW classification produces 11 Must Have requirements (Robustness and Transparency), 9 Should Have, 6 Could Have, and 4 Won’t Have. The analysis reveals systematic disparities: Fairness contributes zero Must Have or Should Have requirements due to insufficient practitioner consensus. Each NFR emphasizes stakeholder perspective, measurable quality criteria, and explicit rationale, enabling systematic verification. The validated methodology, with a complete replication package, enables empirically grounded, prioritized NFR derivation from practitioner discourse in any rapidly evolving domain.
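The MoSCoW thresholds quoted above translate directly into a classification rule over Theme Coverage Scores. A minimal sketch; the function name and input convention are illustrative, though the band boundaries are the paper's:

    def moscow_band(tcs_percent: float) -> str:
        """Map a Theme Coverage Score (%) to a MoSCoW band using the
        paper's empirically derived thresholds."""
        if tcs_percent >= 85:
            return "Must Have"
        if tcs_percent >= 65:
            return "Should Have"
        if tcs_percent >= 45:
            return "Could Have"
        return "Won't Have"

    # Example: a theme covered by 71% of the 22 expert presentations.
    print(moscow_band(71.0))  # -> Should Have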
