Search Results (1,345)

Search Parameters:
Keywords = metadata

29 pages, 3439 KB  
Article
HCHS-Net: A Multimodal Handcrafted Feature and Metadata Framework for Interpretable Skin Lesion Classification
by Ahmet Solak
Biomimetics 2026, 11(2), 154; https://doi.org/10.3390/biomimetics11020154 - 19 Feb 2026
Abstract
Accurate and timely classification of skin lesions is critical for early cancer detection, yet current deep learning approaches suffer from high computational costs, limited interpretability, and poor transparency for clinical deployment. This study presents HCHS-Net, a lightweight and interpretable multimodal framework for six-class skin lesion classification on the PAD-UFES-20 dataset. The proposed framework extracts a 116-dimensional visual feature vector through three complementary handcrafted modules: a Color Module employing multi-channel histogram analysis to capture chromatic diagnostic patterns, a Haralick Module deriving texture descriptors from the gray-level co-occurrence matrix (GLCM) that quantify surface characteristics correlated with malignancy, and a Shape Module encoding morphological properties via Hu moment invariants aligned with the clinical ABCD rule. The architectural design of HCHS-Net adopts a biomimetic approach by emulating the hierarchical information processing of the human visual system and the cognitive diagnostic workflows of expert dermatologists. Unlike conventional black-box deep learning models, this framework employs parallel processing branches that simulate the selective attention mechanisms of the human eye by focusing on biologically significant visual cues such as chromatic variance, textural entropy, and morphological asymmetry. These visual features are concatenated with a 12-dimensional clinical metadata vector encompassing patient demographics and lesion characteristics, yielding a compact 128-dimensional multimodal representation. Classification is performed through an ensemble of three gradient boosting algorithms (XGBoost, LightGBM, CatBoost) with majority voting. HCHS-Net achieves 97.76% classification accuracy with only 0.25 M parameters, outperforming deep learning baselines, including VGG-16 (94.60%), ResNet-50 (94.80%), and EfficientNet-B2 (95.16%), which require 60–97× more parameters. 
The framework delivers an inference time of 0.11 ms per image, enabling real-time classification on standard CPUs without GPU acceleration. Ablation analysis confirms the complementary contribution of each feature module, with metadata integration providing a 2.53% accuracy gain. The model achieves perfect melanoma and nevus recall (100%) with 99.55% specificity, maintaining reliable discrimination at safety-critical diagnostic boundaries. Comprehensive benchmarking against 13 published methods demonstrates that domain-informed handcrafted features combined with clinical metadata can match or exceed deep learning fusion approaches while offering superior interpretability and computational efficiency for point-of-care deployment. Full article
(This article belongs to the Section Bioinspired Sensorics, Information Processing and Control)
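The fusion-and-voting stage described in the abstract can be sketched in a few lines. The vector sizes (116-dim visual, 12-dim metadata, 128-dim fused) come from the abstract itself; the function names and the NumPy-based majority vote are illustrative, not the authors' implementation.

```python
import numpy as np

def fuse_features(visual, metadata):
    """Concatenate a 116-dim handcrafted visual vector with a
    12-dim clinical metadata vector into one 128-dim representation."""
    visual = np.asarray(visual, dtype=float)
    metadata = np.asarray(metadata, dtype=float)
    assert visual.shape == (116,) and metadata.shape == (12,)
    return np.concatenate([visual, metadata])

def majority_vote(predictions):
    """Combine integer class labels from several classifiers (one row
    per model) by majority vote; ties resolve to the lowest class index."""
    preds = np.asarray(predictions)
    n_classes = preds.max() + 1
    # count votes per class for each sample (column)
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return votes.argmax(axis=0)
```

In the paper the three voters are XGBoost, LightGBM, and CatBoost; any trio of classifiers emitting integer labels would slot into `majority_vote` the same way.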
29 pages, 7593 KB  
Article
UAV-Based Visual Detection and Tracking of Drowning Victims in Maritime Rescue Operations
by Thanh Binh Ngo, Long Ngo, Danh Thanh Nguyen, Anh Vu Phi, Asanka Perera and Andy Nguyen
Drones 2026, 10(2), 146; https://doi.org/10.3390/drones10020146 - 19 Feb 2026
Abstract
Maritime search and rescue (SAR) operations are challenged by vast search areas, poor visibility, and the time-critical nature of victim survival, particularly in dynamic coastal areas. This study presents an intelligent unmanned aerial vehicle (UAV) framework for real-time detection, tracking, and prioritization of people in distress at sea. Unlike existing UAV-based SAR systems that rely on visual sensing or offline human intervention, the proposed framework integrates RGB-thermal multimodal sensing and posture recognition to enhance victim prioritization and survivability estimation. Visual-thermal data support human posture detection, inference of physiological indicators, and autonomous UAV navigation. Metadata are transmitted to a ground control station to enable adaptive altitude control, trajectory rejoining, and multi-target prioritization. Field-inspired experiments in Quang Ninh Province, Vietnam demonstrated robust real-time performance, achieving 23 FPS with detection accuracy up to 84% for swimming subjects and over 50% for drowning postures. These findings demonstrate that Edge-AI-enabled UAVs can serve as a practical and efficient solution for maritime SAR, reducing response times and improving mission outcomes. Full article
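The multi-target prioritization step can be illustrated with a simple priority queue. The scoring rule and field names (`posture_conf`, `dist_m`) are invented for this sketch; the paper does not specify its ranking function.

```python
import heapq

def prioritize(targets, dist_weight=0.001):
    """Order detected targets for rescue: higher drowning-posture
    confidence first, with distance to the UAV as a small penalty.
    The weighting here is purely illustrative."""
    heap = [(-t["posture_conf"] + dist_weight * t["dist_m"], i, t["id"])
            for i, t in enumerate(targets)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(targets))]
```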

20 pages, 313 KB  
Article
Making the Child Legible: Children’s Literature as Archive and Agent in Central Europe, 1860–2025
by Milan Mašát
Histories 2026, 6(1), 18; https://doi.org/10.3390/histories6010018 - 19 Feb 2026
Abstract
Central European children’s literature can be read as both archive—recording shifting norms, institutions, and visual regimes—and agent, a medium through which childhood, citizenship, and cultural memory are made legible. This conceptual article proposes an edition-sensitive framework for analysing texts, images, and paratexts across Central Europe (1860–2025), with particular attention to institutional mediation. Rather than offering a comprehensive dataset or causal claims about reception, it synthesises research in childhood history, book and media history, memory studies, and translation and circulation studies to advance three arguments. First, children’s books are institutionally framed artefacts: paratexts and material features (series branding, curricular endorsements, library markings, pricing cues, regulatory traces) can be read as historically interpretable speech acts of legitimation. Second, shifts in visual and material regimes should be analysed as changing conditions of legibility—expectations of clarity, affect, and authority—rather than as mere stylistic evolution. Third, translation and circulation function as infrastructures that reorganise repertoires and interpretive horizons, complicating nation-centred narratives without exhaustive market mapping. The article concludes by stating methodological limits (catalogue gaps, survival bias, uneven metadata) and outlining a transferable agenda for paratext-centred documentation and edition-sensitive reading. Full article
(This article belongs to the Section Cultural History)
19 pages, 636 KB  
Article
Transferring AI-Based Iconclass Classification Across Image Traditions: A RAG Pipeline for the Wenzelsbibel
by Drew B. Thomas and Julia Hintersteiner
Histories 2026, 6(1), 17; https://doi.org/10.3390/histories6010017 - 18 Feb 2026
Abstract
This study evaluates whether a multimodal retrieval-augmented generation (RAG) pipeline originally developed for early modern woodcuts can be effectively transferred to the domain of medieval manuscript illumination. Using a dataset of Wenzelsbibel miniatures annotated with Iconclass, the pipeline combined page-level image input, LLM description generation, vector retrieval, and hierarchical reasoning. Although overall scores were lower than in the earlier woodcut study, the best-performing configuration still substantially surpassed both image-similarity and keyword-based search, confirming the advantages of structured multimodal retrieval for medieval material. Truncation analysis further revealed that many errors occurred only at the deepest Iconclass levels: removing levels raised precision to 0.64 and 0.73, with average remaining depths of 5.49 and 4.49 levels, respectively. These results indicate that the model’s broader hierarchical placement is often correct even when fine-grained specificity breaks down. Taken together, the findings demonstrate that a woodcut-oriented RAG pipeline can be meaningfully adapted to manuscript illumination and that its strengths lie in contextual reasoning and structured classification. Future improvements should incorporate available textual metadata, explore graph-based retrieval, and refine Iconclass-driven pathways. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Historical Research)
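The truncation analysis reads naturally as prefix matching on hierarchical codes. Treating an Iconclass notation as a list of levels is a simplification for illustration (real notations need a proper parser), and the helper names are not from the paper.

```python
def truncate(levels, drop):
    """Remove the `drop` deepest levels of a hierarchical code,
    represented here as a list of level strings."""
    return levels[: max(len(levels) - drop, 0)]

def precision_after_truncation(pairs, drop):
    """Fraction of (predicted, gold) level-list pairs that agree once
    the deepest `drop` levels are removed from both sides -- mirroring
    the paper's observation that errors concentrate at the deepest levels."""
    hits = sum(truncate(p, drop) == truncate(g, drop) for p, g in pairs)
    return hits / len(pairs)
```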

34 pages, 1583 KB  
Article
Curriculum-Aware Retrieval-Augmented Generation for Bilingual Tutoring in Low-Resource Swahili–English Secondary Schools
by Innocent E. Rugemalila, Wei Cai, Xiaogang Zhang, Chuanwei Liu and Bang Wang
Technologies 2026, 14(2), 129; https://doi.org/10.3390/technologies14020129 - 18 Feb 2026
Abstract
In Tanzanian secondary education, Swahili-language-based question-answering systems currently face systemic disparities and linguistic barriers, which undermine the fairness and justice of the educational system. While Large Language Models (LLMs) offer scalable instructional support, they typically lack curriculum grounding, which causes them to perform unreliably in low-resource languages. This study introduces a Curriculum-Aware Retrieval-Augmented Generation (RAG) framework designed to be a linguistically inclusive AI tutor. The architecture combines hybrid dense–lexical retrieval, cross-encoder reranking, and metadata-based curriculum alignment to ensure factual, grade-appropriate responses. We evaluate five distinct generative models using a stratified 500-question Golden Dataset covering English, Swahili, and code-switched inputs. Findings indicate that there is a significant trade-off between scale and deployability. Although high-capacity LLMs provide useful reference performance, Qwen2.5-0.5B offers the most realistic trade-off between quality and deployability in low-resource settings. Under the proposed curriculum-aware pipeline, Qwen2.5-0.5B attains the best answer quality (F1: 32.7%), achieves strong grounding faithfulness (83.0%, validated by human evaluation), and maintains low end-to-end latency suitable for interactive classroom use (≤1.24 s). Notably, considering the limited size of the code-switched evaluation subset, our framework demonstrates promising capabilities in handling Swahili–English code-switched inputs, narrowing the observed performance gap between Swahili and English through improved semantic accuracy. These results provide initial empirical evidence that curriculum-aligned RAG can enable Small Language Models (SLMs) to serve as quality, safe, and sustainable educational assistants in low-resource Global South contexts. Full article
(This article belongs to the Section Information and Communication Technologies)
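The combination of hybrid dense–lexical retrieval and metadata-based curriculum alignment can be sketched as a weighted score fusion over a filtered candidate set. The `grade` field, the weight `alpha`, and the assumption that both scores lie in [0, 1] are all illustrative choices, not details from the paper.

```python
def hybrid_retrieve(query_scores, docs, grade, alpha=0.5, k=3):
    """Rank documents by a convex combination of dense and lexical
    relevance scores, keeping only documents whose curriculum metadata
    matches the requested grade level.
    query_scores: doc_id -> (dense, lexical), both assumed in [0, 1]."""
    candidates = [d for d in docs if d["grade"] == grade]

    def score(d):
        dense, lexical = query_scores[d["id"]]
        return alpha * dense + (1 - alpha) * lexical

    return [d["id"] for d in sorted(candidates, key=score, reverse=True)[:k]]
```

A cross-encoder reranker, as in the paper's pipeline, would then re-score just these top-k candidates.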
24 pages, 6631 KB  
Article
Application of Computer Vision to the Automated Extraction of Metadata from Natural History Specimen Labels: A Case Study on Herbarium Specimens
by Jacopo Zacchigna, Weiwei Liu, Felice Andrea Pellegrino, Adriano Peron, Francesco Roma-Marzio, Lorenzo Peruzzi and Stefano Martellos
Plants 2026, 15(4), 637; https://doi.org/10.3390/plants15040637 - 17 Feb 2026
Abstract
Metadata extraction from natural history collection labels is a pivotal task for the online publication of digitized specimens. However, given the scale of these collections—which are estimated to host more than 2 billion specimens worldwide, including ca. 400 million herbarium specimens—manual metadata extraction is an extremely time-consuming task. Automated data extraction from digital images of specimens and their labels is therefore a promising application of state-of-the-art computer vision techniques. Extracting information from herbarium specimen labels normally involves three main steps: text segmentation, multilingual and handwriting recognition, and data parsing. The primary bottleneck in this workflow lies in the limitations of Optical Character Recognition (OCR) systems. This study explores how the general knowledge embedded in multimodal Transformer models can be transferred to the specific task of herbarium specimen label digitization. The final goal is to develop an easy-to-use, end-to-end solution to mitigate the limitations of classic OCR approaches while offering greater flexibility to adapt to different label formats. Donut-base, a pre-trained visual document understanding (VDU) transformer, was the base model selected for fine-tuning. A dataset from the University of Pisa served as a test bed. The initial attempt achieved an accuracy of 85%, measured using the Tree Edit Distance (TED), demonstrating the feasibility of fine-tuning for this task. Cases with low accuracies were also investigated to identify limitations of the approach. In particular, specimens with multiple labels, especially if combining handwritten and typewritten text, proved to be the most challenging. Strategies aimed at addressing these weaknesses are discussed. Full article
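The paper scores parsed labels with Tree Edit Distance over nested JSON; as a much simpler stand-in, a flat field-level comparison of the extracted label dictionary conveys the idea of scoring structured output against a gold record. The field names below are hypothetical.

```python
def field_accuracy(predicted, gold):
    """Share of gold label fields reproduced exactly by the model.
    A flat simplification of TED-based scoring, which compares
    nested JSON trees rather than flat dictionaries."""
    if not gold:
        return 1.0
    return sum(predicted.get(k) == v for k, v in gold.items()) / len(gold)
```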
25 pages, 1558 KB  
Article
Towards Scalable Monitoring: An Interpretable Multimodal Framework for Migration Content Detection on TikTok Under Data Scarcity
by Dimitrios Taranis, Gerasimos Razis and Ioannis Anagnostopoulos
Electronics 2026, 15(4), 850; https://doi.org/10.3390/electronics15040850 - 17 Feb 2026
Abstract
Short-form video platforms such as TikTok (TikTok Pte. Ltd., Singapore) host large volumes of user-generated, often ephemeral, content related to irregular migration, where relevant cues are distributed across visual scenes, on-screen text, and multilingual captions. Automatically identifying migration-related videos is challenging due to this multimodal complexity and the scarcity of labeled data in sensitive domains. This paper presents an interpretable multimodal classification framework designed for deployment under data-scarce conditions. We extract features from platform metadata, automated video analysis (Google Cloud Video Intelligence), and Optical Character Recognition (OCR) text, and compare text-only, OCR-only, and vision-only baselines against a multimodal fusion approach using Logistic Regression, Random Forest, and XGBoost. In this pilot study, multimodal fusion consistently improves class separation over single-modality models, achieving an F1-score of 0.92 for the migration-related class under stratified cross-validation. Given the limited sample size, these results are interpreted as evidence of feature separability rather than definitive generalization. Feature importance and SHAP analyses identify OCR-derived keywords, maritime cues, and regional indicators as the most influential predictors. To assess robustness under data scarcity, we apply SMOTE to synthetically expand the training set to 500 samples and evaluate performance on a small held-out set of real videos, observing stable results that further support feature-level robustness. Finally, we demonstrate scalability by constructing a weakly labeled corpus of 600 videos using the identified multimodal cues, highlighting the suitability of the proposed feature set for weakly supervised monitoring at scale. Overall, this work serves as a methodological blueprint for building interpretable multimodal monitoring pipelines in sensitive, low-resource settings. Full article
(This article belongs to the Special Issue Multimodal Learning for Multimedia Content Analysis and Understanding)
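SMOTE's core move — synthesizing minority-class samples by interpolating between real ones — can be sketched directly. The real algorithm interpolates toward one of the k nearest neighbours; random pairing here is a deliberate simplification.

```python
import numpy as np

def smote_like(X, n_new, seed=0):
    """Generate n_new synthetic samples by linear interpolation
    between randomly paired rows of the minority-class matrix X.
    SMOTE proper picks the partner among k nearest neighbours."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    i = rng.integers(0, len(X), n_new)
    j = rng.integers(0, len(X), n_new)
    lam = rng.random((n_new, 1))  # interpolation weight per sample
    return X[i] + lam * (X[j] - X[i])
```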

18 pages, 4591 KB  
Data Descriptor
Individual-Level Behavioral Dataset Linking Trace Eyeblink Conditioning, Contextual Fear Memory, and Home-Cage Activities in rTg4510 and Wild-Type Mice with Doxycycline Treatment
by Ryo Kachi, Takuma Nishijo and Yasushi Kishimoto
Data 2026, 11(2), 42; https://doi.org/10.3390/data11020042 - 16 Feb 2026
Abstract
This dataset provides synchronized multimodal behavioral measurements from 36 mice across four experimental groups: wild-type and rTg4510 tauopathy mice, each tested with or without doxycycline-mediated suppression of mutant tau expression. Of these, 34 mice had complete measurements across all three behavioral paradigms and were used for analyses requiring full cross-task linkage. At six months of age, all animals underwent three standardized behavioral paradigms: home cage monitoring, ten-day trace eyeblink conditioning, and contextual fear conditioning. The individual-level data included locomotor activity, rearing duration, conditioned response metrics, eyelid closure latencies, and contextual freezing percentages. All measurements were linked using unique mouse identifiers, enabling cross-task analysis without preprocessing or imputation. The dataset was accompanied by a complete data dictionary, processing workflow diagram, and validation analyses demonstrating cross-paradigm correlations. The cross-task associations are illustrated in the main figures, together with additional early-phase acquisition and temporal processing correlations. Provided in an open CSV format with detailed metadata, this resource supports behavioral phenotyping, machine learning applications, and the investigation of learning mechanisms in tauopathy models. Full article
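Linking the three paradigms by mouse identifier is a straightforward inner join, which also explains why only the 34 mice with complete measurements enter cross-task analyses. A sketch with plain dictionaries (the column names are invented):

```python
def link_by_id(*tables):
    """Inner-join several {mouse_id: record} tables: keep only mice
    present in every table and merge their records into one dict."""
    common = set.intersection(*(set(t) for t in tables))
    return {mid: {k: v for t in tables for k, v in t[mid].items()}
            for mid in sorted(common)}
```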

18 pages, 4973 KB  
Project Report
Data Management and Data Services in Large Collaborative Projects—DiverSea Experience
by Vassil Vassilev, Georgi Petkov, Boris Kraychev, Stoyan Haydushki, Stoyan Nikolov, Viktor Sowinski-Mydlarz, Ensiye Kiyamousavi, Nikolay Shivarov and Denitsa Stoilova
Algorithms 2026, 19(2), 154; https://doi.org/10.3390/a19020154 - 15 Feb 2026
Abstract
Collaborative projects under the Horizon Europe Framework Program of the European Union typically involve a large number of partners from multiple countries. Data-centric projects, among them, often require integration of disparate data source formats and collection methods, leading to complex data management architectures and policies. This article is an extended version of an article presented at the 1st International Conference on Big Data Analytics and Applications (BDAA’2025). It explores design decisions, organisational principles, and technological solutions to address these challenges by focusing on the integration of disparate data sources and the hybridisation of data services. This experience was gathered while working on DiverSea, a project dedicated to the analysis of biodiversity dynamics along European coastlines—ranging from the Black Sea to the Mediterranean and the North Sea. While grounded in established technologies, the project’s takeaways offer valuable insights for environmental data projects across aquatic, terrestrial, and atmospheric domains. Full article
(This article belongs to the Special Issue Blockchain and Big Data Analytics: AI-Driven Data Science)

28 pages, 786 KB  
Article
How Well Do Current Geoportals Support Geodata Discovery? An Empirical Study
by Susanna Ankama, Auriol Degbelo, Erich Naoseb, Christin Henzen and Lars Bernard
ISPRS Int. J. Geo-Inf. 2026, 15(2), 82; https://doi.org/10.3390/ijgi15020082 - 14 Feb 2026
Abstract
Implementing effective geospatial data discovery mechanisms in geoportals is crucial for facilitating easy access to geospatial data and services. Despite existing efforts to formulate geoportal design requirements, understanding end-user issues beyond a single geoportal in the context of geodata discovery is still lacking. To address this gap, this study reports on a usability study conducted in Germany and Namibia, with the aim of examining issues faced by users during geodata search and discovery. The study employed a mixed-method approach combining Retrospective Think-Aloud (RTA) interviews and structured questionnaires. The results reveal key usability issues, including inefficient search mechanisms, inefficient presentation of search results, lack of user guidance, inefficient map interactions, and inefficient metadata descriptions. Additionally, the study revealed a difference in user perceptions regarding user experience aspects between the two user groups. The findings are of interest to the designers of geoportals in the context of open data reuse and spatial data infrastructure. Full article

24 pages, 4094 KB  
Article
MMY-Net: A BERT-Enhanced Y-Shaped Network for Multimodal Pathological Image Segmentation Using Patient Metadata
by Ahmed Muhammad Rehan, Kun Li and Ping Chen
Electronics 2026, 15(4), 815; https://doi.org/10.3390/electronics15040815 - 13 Feb 2026
Abstract
Medical image segmentation, particularly for pathological diagnosis, faces challenges in leveraging patient clinical metadata that could enhance diagnostic accuracy. This study presents MMY-Net (Multimodal Y-shaped Network), a novel deep learning framework that effectively fuses patient metadata with pathological images for improved tumor segmentation performance. The proposed architecture incorporates a Text Processing Block (TPB) utilizing BERT for metadata feature extraction and a Text Encoding Block (TEB) for multi-scale fusion of textual and visual information. The network employs an Interlaced Sparse Self-Attention (ISSA) mechanism to capture both local and global dependencies while maintaining computational efficiency. Experiments were conducted on two open/public eyelid tumor datasets (Dataset 1: 112 WSIs for training/validation; Dataset 2: 107 WSIs as an independent test set) and the public Dataset 3 gland segmentation benchmark. For Dataset 1, 7989 H&E-stained patches (1024 × 1024, resized to 224 × 224) were extracted and split 7:2:1 (train:val:test); Dataset 2 was used exclusively for external validation. All images underwent Vahadane stain normalization. Training employed SGD (lr = 0.001), 1000 epochs, and a hybrid loss (cross-entropy + MS-SSIM + Lovász). Results show that integrating metadata—such as age and gender—significantly improves segmentation accuracy, even when metadata does not directly describe tumor characteristics. Ablation studies confirm the superiority of the proposed text feature extraction and fusion strategy. MMY-Net achieves state-of-the-art performance across all datasets, establishing a generalizable framework for multimodal medical image analysis. Full article
(This article belongs to the Section Electronic Materials, Devices and Applications)
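One common way to inject a text embedding into a convolutional feature map — tile it across the spatial grid and concatenate along the channel axis — can be sketched with NumPy. This is a generic multimodal fusion pattern, not necessarily the exact mechanism of the paper's Text Encoding Block.

```python
import numpy as np

def fuse_text_visual(feat, text_vec):
    """Tile a (T,)-dim metadata embedding over the H x W grid of a
    (C, H, W) feature map and concatenate along channels, yielding a
    (C + T, H, W) map in which every spatial location sees the metadata."""
    C, H, W = feat.shape
    text_map = np.broadcast_to(text_vec[:, None, None],
                               (text_vec.shape[0], H, W))
    return np.concatenate([feat, text_map], axis=0)
```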

29 pages, 1219 KB  
Article
Tuberculosis Screening from Cough Audio: Baseline Models, Clinical Variables, and Uncertainty Quantification
by George P. Kafentzis and Efstratios Selisios
Sensors 2026, 26(4), 1223; https://doi.org/10.3390/s26041223 - 13 Feb 2026
Abstract
In this paper, we propose a standardized framework for automatic tuberculosis (TB) detection from cough audio and routinely collected clinical data using machine learning. While TB screening from audio has attracted growing interest, progress is difficult to measure because existing studies vary substantially in datasets, cohort definitions, feature representations, model families, validation protocols, and reported metrics. Consequently, reported gains are often not directly comparable, and it remains unclear whether improvements stem from modeling advances or from differences in data and evaluation. We address this gap by establishing a strong, well-documented baseline for TB prediction using cough recordings and accompanying clinical metadata from a recently compiled dataset from several countries. Our pipeline is reproducible end-to-end, covering feature extraction, multimodal fusion, cougher-independent evaluation, and uncertainty quantification, and it reports a consistent suite of clinically relevant metrics to enable fair comparison. We further quantify performance for cough audio-only and fused (audio + clinical metadata) models, and release the full experimental protocol to facilitate benchmarking. This baseline is intended to serve as a common reference point and to reduce methodological variance that currently holds back progress in the field. Full article
(This article belongs to the Section Biosensors)
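Cougher-independent evaluation means splitting by person, not by recording, so that no cougher contributes audio to both sides of the split. A minimal sketch (the grouping field name is an assumption):

```python
def group_split(records, test_groups):
    """Split records so that every cougher (group id) lands wholly in
    train or wholly in test -- preventing speaker leakage across folds."""
    test_groups = set(test_groups)
    train = [r for r in records if r["cougher"] not in test_groups]
    test = [r for r in records if r["cougher"] in test_groups]
    return train, test
```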

26 pages, 1219 KB  
Systematic Review
A Systematic Review of Arts Practice-Based Research Abstracts from Small and/or Specialist Institutions
by Samantha Broadhead, Henry Gonnet and Marianna Tsionki
Publications 2026, 14(1), 13; https://doi.org/10.3390/publications14010013 - 12 Feb 2026
Abstract
Through this qualitative systematic review, the authors ask the following: To what extent is the 300-word abstract fit for purpose in representing art and design practice-based research outputs on small and/or specialist institutional repositories? The abstract is an important part of the metadata when an Arts Practice-Based Output (APBO) is deposited on a repository. APBOs are non-traditional item types resulting from creative/artistic research processes. Examples include exhibitions, artefacts and digital videos. Little is known about how effectively these abstracts communicate research processes and insights across the art and design sector. This study aims to investigate how well the abstract communicates information about the arts practice-based research through a systematic review of APBOs. The eligibility criteria for inclusion in the review were as follows: APBOs must be from the date range January 2019 to January 2024, be an item type where the 300-word abstract is required, the abstract must be part of the publicly available metadata for the item, and outputs must be practice-based and from the art and design field. The date range (2019–2024) was employed because, during this time, APBOs had gained recognition in the wider research environment. APBOs from the reviewers’ institutional repository were not included in this study to avoid bias that could skew the results of the review. The data repositories from small and/or specialist Higher Education Institutions in the United Kingdom were searched for outputs which appeared to meet the eligibility criteria. These types of institution prioritise and produce more of these output types. A quality tool appropriate for creative/artistic research was applied to the identified dataset of APBOs. The resulting 27 APBOs’ 300-word abstracts were analysed using a thematic approach. 
Findings suggest that the 300-word abstracts contained information about the quality indicators such as whether the project got funding, the identities of prestigious collaborators and/or dissemination vehicles, and the international recognition of the research. Other identified themes were methodologies, contribution to knowledge, subject matter and item type. Full article

15 pages, 4686 KB  
Article
Taming the Fine Particulate–Mortality Curve
by Richard Thomas Burnett
Atmosphere 2026, 17(2), 185; https://doi.org/10.3390/atmos17020185 - 10 Feb 2026
Abstract
Estimating the population-level mortality burden attributable to exposure to outdoor fine particulate matter (PM2.5) requires characterizing both the magnitude and the shape of the relative risk function that mathematically models how exposure affects response. The relationship can be derived from cohort studies in which the association between PM2.5 exposure and mortality is directly observed. Because policy questions of interest can involve exposures several-fold higher than those observed in cohort studies, how the association is extrapolated to these high concentrations is a major source of uncertainty. To address this issue, we suggest the extrapolation criterion that the estimated proportion of deaths attributed to exposure above the observed cohort exposure range be no greater than that below the range. This criterion implies that the relative risk function must be bounded from above, with marginal changes in risk declining rapidly as concentration increases. We illustrate the approach using meta-data from 44 cohorts conducted in North America, Western Europe, Asia, and China examining the association between long-term exposure to outdoor PM2.5 and non-accidental mortality.
(This article belongs to the Section Air Quality and Health)
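The extrapolation criterion described in the abstract can be sketched formally. The notation below is not taken from the article: the symbols z_max (the highest concentration observed in the cohorts), f(z) (the population exposure density), RR(z) (the relative risk function), and the parameters theta and beta are illustrative assumptions. Weighting deaths by the standard attributable fraction (RR-1)/RR, the criterion asks that the attributable-death mass above z_max not exceed the mass below it:

```latex
% Attributable-death mass above the observed cohort range
% must not exceed the mass below it:
\[
\int_{z_{\max}}^{\infty} \frac{RR(z)-1}{RR(z)}\, f(z)\, dz
\;\le\;
\int_{0}^{z_{\max}} \frac{RR(z)-1}{RR(z)}\, f(z)\, dz .
\]

% One illustrative relative-risk form satisfying the implied
% requirement (bounded above, rapidly diminishing marginal risk):
\[
RR(z) = 1 + \theta\bigl(1 - e^{-\beta z}\bigr), \qquad
RR(z) < 1 + \theta, \qquad
\frac{dRR}{dz} = \theta \beta\, e^{-\beta z} \xrightarrow[z \to \infty]{} 0 .
\]
```

The exponential form is only one example of a function bounded from above whose marginal risk declines rapidly with concentration; the article's criterion constrains the class of admissible functions rather than prescribing a single one.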
Show Figures

Figure 1

33 pages, 745 KB  
Article
XAI-Driven Malware Detection from Memory Artifacts: An Alert-Driven AI Framework with TabNet and Ensemble Classification
by Aristeidis Mystakidis, Grigorios Kalogiannnis, Nikolaos Vakakis, Nikolaos Altanis, Konstantina Milousi, Iason Somarakis, Gabriela Mihalachi, Mariana S. Mazi, Dimitris Sotos, Antonis Voulgaridis, Christos Tjortjis, Konstantinos Votis and Dimitrios Tzovaras
AI 2026, 7(2), 66; https://doi.org/10.3390/ai7020066 - 10 Feb 2026
Abstract
Modern malware presents significant challenges to traditional detection methods, often leveraging fileless techniques, in-memory execution, and process injection to evade antivirus and signature-based systems. To address these challenges, alert-driven memory forensics has emerged as a critical capability for uncovering stealthy, persistent, and zero-day threats. This study presents a two-stage host-based malware detection framework that integrates memory forensics, explainable machine learning, and ensemble classification, designed as a post-alert asynchronous SOC workflow balancing forensic depth and operational efficiency. Utilizing the MemMal-D2024 dataset—comprising rich memory forensic artifacts from Windows systems infected with malware samples whose creation metadata spans 2006–2021—the system performs malware detection using features extracted from volatile memory. In the first stage, an Attentive Interpretable Tabular Learning (TabNet) model is used for binary classification (benign vs. malware), leveraging its sequential attention mechanism and built-in explainability. In the second stage, a Voting Classifier ensemble, composed of Light Gradient Boosting Machine (LGBM), eXtreme Gradient Boosting (XGB), and Histogram Gradient Boosting (HGB) models, is used to identify the specific malware family (Trojan, Ransomware, Spyware). To reduce memory dump extraction and analysis time without compromising detection performance, only a curated subset of 24 memory features—operationally selected to reduce acquisition/extraction time and validated via redundancy inspection, model explainability (SHAP/TabNet), and training-data correlation analysis—was used during training and at runtime, striking the best trade-off between memory analysis effort and detection accuracy.
The pipeline, which is triggered by host-based Wazuh Security Information and Event Management (SIEM) alerts, achieved 99.97% accuracy in binary detection and 70.17% multiclass accuracy, for an overall performance of 87.02%, while providing both global and local explainability to ensure operational transparency and forensic interpretability. This approach provides an efficient and interpretable detection solution that can be used in combination with conventional security tools as an extra layer of defense against modern threat landscapes.
