Search Results (211)

Search Parameters:
Keywords = metadata standardization

34 pages, 1926 KiB  
Article
A FAIR Resource Recommender System for Smart Open Scientific Inquiries
by Syed N. Sakib, Sajratul Y. Rubaiat, Kallol Naha, Hasan H. Rahman and Hasan M. Jamil
Appl. Sci. 2025, 15(15), 8334; https://doi.org/10.3390/app15158334 - 26 Jul 2025
Viewed by 74
Abstract
A vast proportion of scientific data remains locked behind dynamic web interfaces, often called the deep web—inaccessible to conventional search engines and standard crawlers. This gap between data availability and machine usability hampers the goals of open science and automation. While registries like FAIRsharing offer structured metadata describing data standards, repositories, and policies aligned with the FAIR (Findable, Accessible, Interoperable, and Reusable) principles, they do not enable seamless, programmatic access to the underlying datasets. We present FAIRFind, a system designed to bridge this accessibility gap. FAIRFind autonomously discovers, interprets, and operationalizes access paths to biological databases on the deep web, regardless of their FAIR compliance. Central to our approach is the Deep Web Communication Protocol (DWCP), a resource description language that represents web forms, HyperText Markup Language (HTML) tables, and file-based data interfaces in a machine-actionable format. Leveraging large language models (LLMs), FAIRFind combines a specialized deep web crawler and web-form comprehension engine to transform passive web metadata into executable workflows. By indexing and embedding these workflows, FAIRFind enables natural language querying over diverse biological data sources and returns structured, source-resolved results. Evaluation across multiple open-source LLMs and database types demonstrates over 90% success in structured data extraction and high semantic retrieval accuracy. FAIRFind advances existing registries by turning linked resources from static references into actionable endpoints, laying a foundation for intelligent, autonomous data discovery across scientific domains. Full article
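As a rough illustration of the natural-language querying over embedded resource descriptions that the abstract describes, the sketch below indexes a few placeholder descriptions and retrieves the closest matches by cosine similarity. The embedding model name and the example descriptions are assumptions; this is not the FAIRFind/DWCP implementation.

```python
# Minimal sketch of natural-language retrieval over indexed resource descriptions.
# Not the FAIRFind/DWCP implementation: the model name, example descriptions, and
# scoring are illustrative assumptions only.
import numpy as np
from sentence_transformers import SentenceTransformer

descriptions = [
    "Web form returning protein interaction records for a given gene symbol",
    "HTML table of bacterial genome assemblies filterable by species",
    "Downloadable CSV of metabolite concentrations per tissue sample",
]

model = SentenceTransformer("all-MiniLM-L6-v2")            # placeholder embedding model
index = model.encode(descriptions, normalize_embeddings=True)

def query(text: str, k: int = 2):
    """Return the k resource descriptions most similar to the query."""
    q = model.encode([text], normalize_embeddings=True)[0]
    scores = index @ q                                     # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [(descriptions[i], float(scores[i])) for i in top]

print(query("where can I find genome assemblies for E. coli?"))
```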
37 pages, 1895 KiB  
Review
A Review of Artificial Intelligence and Deep Learning Approaches for Resource Management in Smart Buildings
by Bibars Amangeldy, Timur Imankulov, Nurdaulet Tasmurzayev, Gulmira Dikhanbayeva and Yedil Nurakhov
Buildings 2025, 15(15), 2631; https://doi.org/10.3390/buildings15152631 - 25 Jul 2025
Viewed by 260
Abstract
This comprehensive review maps the fast-evolving landscape in which artificial intelligence (AI) and deep-learning (DL) techniques converge with the Internet of Things (IoT) to manage energy, comfort, and sustainability across smart environments. A PRISMA-guided search of four databases retrieved 1358 records; after applying inclusion criteria, 143 peer-reviewed studies published between January 2019 and April 2025 were analyzed. This review shows that AI-driven controllers—especially deep-reinforcement-learning agents—deliver median energy savings of 18–35% for HVAC and other major loads, consistently outperforming rule-based and model-predictive baselines. The evidence further reveals a rapid diversification of methods: graph-neural-network models now capture spatial interdependencies in dense sensor grids, federated-learning pilots address data-privacy constraints, and early integrations of large language models hint at natural-language analytics and control interfaces for heterogeneous IoT devices. Yet large-scale deployment remains hindered by fragmented and proprietary datasets, unresolved privacy and cybersecurity risks associated with continuous IoT telemetry, the growing carbon and compute footprints of ever-larger models, and poor interoperability among legacy equipment and modern edge nodes. The reviewed literature therefore converges on several priorities: open, high-fidelity benchmarks that marry multivariate IoT sensor data with standardized metadata and occupant feedback; energy-aware, edge-optimized architectures that lower latency and power draw; privacy-centric learning frameworks that satisfy tightening regulations; hybrid physics-informed and explainable models that shorten commissioning time; and digital-twin platforms enriched by language-model reasoning to translate raw telemetry into actionable insights for facility managers and end users. Addressing these gaps will be pivotal to transforming isolated pilots into ubiquitous, trustworthy, and human-centered IoT ecosystems capable of delivering measurable gains in efficiency, resilience, and occupant wellbeing at scale. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)

20 pages, 3952 KiB  
Article
Assessing the Height Gain Trajectory of White Spruce and Hybrid Spruce Provenances in Canadian Boreal and Hemiboreal Forests
by Suborna Ahmed, Valerie LeMay, Alvin Yanchuk, Peter Marshall and Gary Bull
Forests 2025, 16(7), 1123; https://doi.org/10.3390/f16071123 - 7 Jul 2025
Viewed by 307
Abstract
We assessed the impacts of tree improvement programs on the associated gains in yield of white spruce (Picea glauca (Moench) Voss) and hybrid spruce (Picea engelmannii Parry ex Engelmann x Picea glauca (Moench) Voss) over long temporal and large spatial extents. The definition of gain varied in the tree improvement programs. We assessed the definition of gain using a sensitivity analysis, altering the evaluation age with the definitions of the baseline and top performers. We used meta-data from provenance trials extracted from the literature to model the yields of provenances relative to those of standard stocks. Using a previously developed meta-model and a chosen gain definition, a meta-dataset of gain across plantation ages was developed. Using this gain meta-dataset, a gain trajectory model was fitted for white and hybrid spruce provenances across Canadian boreal and hemiboreal forests. The planting site, mean annual daily temperature, mean annual precipitation, and number of degree days > 5 °C had large impacts on gain. This model can be used to predict gain up to harvest age at any planting site in the boreal and hemiboreal forests of Canada. Further, these gain trajectories could be averaged over a region to indicate the yield potential of tree improvement programs. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
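A minimal sketch of what fitting a gain trajectory against plantation age and climate covariates could look like, assuming a simple saturating functional form and invented observations; the actual meta-model and gain definitions are those described in the article, not this toy fit.

```python
# Hypothetical sketch: fitting a gain-by-age trajectory with climate covariates.
# The functional form, starting values, and data are illustrative assumptions,
# not the meta-model described in the article.
import numpy as np
from scipy.optimize import curve_fit

def gain_trajectory(X, a, b, c, d):
    """Gain (%) as a saturating function of plantation age, shifted by climate."""
    age, mat, map_mm = X            # age (yrs), mean annual temp (°C), precipitation (mm)
    return a * (1 - np.exp(-b * age)) + c * mat + d * (map_mm / 1000.0)

# Toy observations (age, temperature, precipitation) -> observed gain
age = np.array([5, 10, 15, 20, 25, 30], dtype=float)
mat = np.array([1.5, 1.5, 2.0, 2.0, 2.5, 2.5])
prc = np.array([450, 450, 500, 500, 550, 550], dtype=float)
gain = np.array([4.0, 7.5, 9.8, 11.0, 12.1, 12.6])

params, _ = curve_fit(gain_trajectory, (age, mat, prc), gain, p0=[12, 0.1, 0.5, 0.5])
print("fitted parameters:", params)
print("predicted gain at age 40:", gain_trajectory((40.0, 2.5, 550.0), *params))
```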

24 pages, 13051 KiB  
Article
DamageScope: An Integrated Pipeline for Building Damage Segmentation, Geospatial Mapping, and Interactive Web-Based Visualization
by Sultan Al Shafian, Chao He and Da Hu
Remote Sens. 2025, 17(13), 2267; https://doi.org/10.3390/rs17132267 - 2 Jul 2025
Viewed by 327
Abstract
Effective post-disaster damage assessment is crucial for guiding emergency response and resource allocation. This study introduces DamageScope, an integrated deep learning framework designed to detect and classify building damage levels from post-disaster satellite imagery. The proposed system leverages a convolutional neural network trained exclusively on post-event data to segment building footprints and assign them to one of four standardized damage categories: no damage, minor damage, major damage, and destroyed. The model achieves an average F1 score of 0.598 across all damage classes on the test dataset. To support geospatial analysis, the framework extracts the coordinates of damaged structures using embedded metadata, enabling rapid and precise mapping. These results are subsequently visualized through an interactive, web-based platform that facilitates spatial exploration of damage severity. By integrating classification, geolocation, and visualization, DamageScope provides a scalable and operationally relevant tool for disaster management agencies seeking to enhance situational awareness and expedite post-disaster decision making. Full article
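The coordinate-extraction step described above can be illustrated with a short sketch that converts pixel centroids of segmented buildings to geographic coordinates using a GeoTIFF's embedded georeferencing; the file name and centroids are placeholders, and this is not the DamageScope pipeline itself.

```python
# Sketch of mapping segmented building centroids (pixel coordinates) to geographic
# coordinates via a GeoTIFF's embedded georeferencing metadata. The file name and
# pixel centroids are hypothetical.
import rasterio
from rasterio.transform import xy

centroids_px = [(512, 1024), (780, 330)]    # (row, col) of detected damaged buildings

with rasterio.open("post_disaster_tile.tif") as src:    # placeholder file name
    transform = src.transform                            # affine pixel -> CRS transform
    crs = src.crs
    coords = [xy(transform, row, col) for row, col in centroids_px]

for (row, col), (x, y_) in zip(centroids_px, coords):
    print(f"pixel ({row}, {col}) -> {crs}: ({x:.6f}, {y_:.6f})")
```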

18 pages, 544 KiB  
Review
Integrating Machine Learning into Asset Administration Shell: A Practical Example Using Industrial Control Valves
by Julliana Gonçalves Marques, Felipe L. Medeiros, Pedro L. F. F. de Medeiros, Gustavo B. Paz Leitão, Danilo C. de Souza, Diego R. Cabral Silva and Luiz Affonso Guedes
Processes 2025, 13(7), 2100; https://doi.org/10.3390/pr13072100 - 2 Jul 2025
Viewed by 358
Abstract
Asset Management (AM) is quickly transforming due to the digital revolution induced by Industry 4.0, in which Cyber–Physical Systems (CPS) and Digital Twins (DT) are taking key positions in monitoring and optimizing physical assets. With more intelligent functionalities arising in industrial contexts, Machine Learning (ML) has transitioned from playing a supporting role to becoming a core constituent of asset operation. However, while the Asset Administration Shell (AAS) has become an industry standard format for digital asset representation, incorporating ML models into this format is a significant challenge. In this research, a control valve, a common asset in industrial equipment, is used to explore the modeling of a machine learning model as an AAS submodel, including its related elements, such as parameters, hyperparameters, and metadata, in accordance with the latest guidelines issued by the Industrial Digital Twin Association (IDTA) in early 2025. The main contribution of this work is to clarify basic machine learning principles while demonstrating their alignment with the AAS framework, hence facilitating the further development of smart and interoperable DTs in modern industrial environments. Full article
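A lightweight sketch of how a trained model's parameters, hyperparameters, and metadata might be organized as a submodel-like record is shown below; the field names are illustrative assumptions and do not reproduce the IDTA submodel template referenced in the abstract.

```python
# Illustrative sketch: describing a trained ML model for a control valve as a
# submodel-like structure (algorithm, hyperparameters, training metadata).
# Field names and values are assumptions, not the IDTA template.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class MLModelSubmodel:
    id_short: str
    algorithm: str
    target: str
    hyperparameters: dict = field(default_factory=dict)
    training_metadata: dict = field(default_factory=dict)

valve_model = MLModelSubmodel(
    id_short="ValveStictionClassifier",
    algorithm="RandomForestClassifier",
    target="stiction_detected",
    hyperparameters={"n_estimators": 200, "max_depth": 8},
    training_metadata={
        "dataset": "valve_operation_logs_2024.csv",    # hypothetical source
        "trained_on": "2025-01-15",
        "f1_score": 0.91,                              # illustrative value
    },
)

print(json.dumps(asdict(valve_model), indent=2))
```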

16 pages, 2289 KiB  
Article
Taxonomic Diversity and Clinical Correlations in Periapical Lesions by Next-Generation Sequencing Analysis
by Juliana D. Bronzato, Brenda P. F. A. Gomes and Tsute Chen
Genes 2025, 16(7), 775; https://doi.org/10.3390/genes16070775 - 30 Jun 2025
Viewed by 253
Abstract
Objectives: The aim of this study was to assess the taxonomic diversity of the microbiota associated with periapical lesions of endodontic origin and to determine whether microbial profiles vary across different populations and clinical characteristics using a unified in silico analysis of next-generation sequencing (NGS) data. Methods: Raw 16S rRNA sequencing data from three published studies were retrieved from the NCBI Sequence Read Archive and reprocessed using a standardized bioinformatics pipeline. Amplicon sequence variants were inferred using DADA2, and taxonomic assignments were performed using BLASTN against a curated 16S rRNA reference database. Alpha and beta diversity analyses were conducted using QIIME 2 and R, and differential abundance was assessed with ANCOM-BC2. Statistical comparisons were made based on population, sex, symptomatology, and other clinical metadata. Results: A total of 38 periapical lesion samples yielded 566,223 high-confidence reads assigned to 347 bacterial species. Significant differences in microbial composition were observed between geographic regions (China vs. Spain), sexes, and symptoms. Core species such as Fretibacterium sp. HMT 360 and Porphyromonas endodontalis were prevalent across datasets. Porphyromonas gingivalis and Fusobacterium nucleatum were found in abundance across all three studies. Beta diversity metrics revealed distinct clustering by study and country. Symptomatic lesions were associated with higher abundance of Alloprevotella tannerae and Prevotella oris. Conclusions: The periapical lesion microbiota is taxonomically diverse and varies significantly by geographic and clinical features. Full article
(This article belongs to the Special Issue Application of Bioinformatics in Microbiome—2nd Edition)
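As a minimal illustration of one alpha-diversity step in such a pipeline, the sketch below computes the Shannon index over a toy ASV count table; the counts are invented, and the published analysis used DADA2, QIIME 2, and ANCOM-BC2 rather than this ad hoc calculation.

```python
# Minimal sketch of one alpha-diversity step (Shannon index) over a toy ASV count
# table. The counts are invented placeholders.
import numpy as np
import pandas as pd

asv_counts = pd.DataFrame(
    {"ASV_1": [120, 3, 40], "ASV_2": [15, 90, 22], "ASV_3": [0, 12, 60]},
    index=["lesion_A", "lesion_B", "lesion_C"],
)

def shannon(counts: np.ndarray) -> float:
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero proportions."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

print(asv_counts.apply(lambda row: shannon(row.to_numpy()), axis=1))
```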

24 pages, 3832 KiB  
Article
Stitching History into Semantics: LLM-Supported Knowledge Graph Engineering for 19th-Century Greek Bookbinding
by Dimitrios Doumanas, Efthalia Ntalouka, Costas Vassilakis, Manolis Wallace and Konstantinos Kotis
Mach. Learn. Knowl. Extr. 2025, 7(3), 59; https://doi.org/10.3390/make7030059 - 24 Jun 2025
Viewed by 722
Abstract
Preserving cultural heritage can be efficiently supported by structured and semantic representation of historical artifacts. Bookbinding, a critical aspect of book history, provides valuable insights into past craftsmanship, material use, and conservation practices. However, existing bibliographic records often lack the depth needed to analyze bookbinding techniques, provenance, and preservation status. This paper presents a proof-of-concept system that explores how Large Language Models (LLMs) can support knowledge graph engineering within the context of 19th-century Greek bookbinding (1830–1900), and as a result, generate a domain-specific ontology and a knowledge graph. Our ontology encapsulates materials, binding techniques, artistic styles, and conservation history, integrating metadata standards like MARC and Dublin Core to ensure interoperability with existing library and archival systems. To validate its effectiveness, we construct a Neo4j knowledge graph, based on the generated ontology and utilize Cypher Queries—including LLM-generated queries—to extract insights about bookbinding practices and trends. This study also explores how semantic reasoning over the knowledge graph can identify historical binding patterns, assess book conservation needs, and infer relationships between bookbinding workshops. Unlike previous bibliographic ontologies, our approach provides a comprehensive, semantically rich representation of bookbinding history, methods and techniques, supporting scholars, conservators, and cultural heritage institutions. By demonstrating how LLMs can assist in ontology/KG creation and query generation, we introduce and evaluate a semi-automated pipeline as a methodological demonstration for studying historical bookbinding, contributing to digital humanities, book conservation, and cultural informatics. Finally, the proposed approach can be used in other domains, thus, being generally applicable in knowledge engineering. Full article
(This article belongs to the Special Issue Knowledge Graphs and Large Language Models)
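A hedged sketch of the kind of Cypher query that could be run against such a Neo4j knowledge graph is shown below; the node labels, relationship types, and properties are hypothetical placeholders rather than the ontology generated in the study.

```python
# Sketch of querying a Neo4j bookbinding knowledge graph from Python.
# Node labels, relationship types, and properties are hypothetical placeholders,
# not the ontology generated in the study.
from neo4j import GraphDatabase

QUERY = """
MATCH (b:Book)-[:HAS_BINDING]->(bind:Binding)-[:USES_MATERIAL]->(m:Material)
WHERE b.year >= 1830 AND b.year <= 1900 AND m.name = $material
RETURN b.title AS title, bind.technique AS technique
ORDER BY b.year
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for record in session.run(QUERY, material="goatskin leather"):
        print(record["title"], "-", record["technique"])
driver.close()
```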

21 pages, 1856 KiB  
Article
Decoding the CD36-Centric Axis in Gastric Cancer: Insights into Lipid Metabolism, Obesity, and Hypercholesterolemia
by Preyangsee Dutta, Dwaipayan Saha, Atanu Giri, Aseem Rai Bhatnagar and Abhijit Chakraborty
Int. J. Transl. Med. 2025, 5(3), 26; https://doi.org/10.3390/ijtm5030026 - 23 Jun 2025
Viewed by 681
Abstract
Background: Gastric cancer is a leading cause of cancer-related mortality worldwide, with approximately one million new cases diagnosed annually. While Helicobacter pylori infection remains a primary etiological factor, mounting evidence implicates obesity and lipid metabolic dysregulation, particularly in hypercholesterolemia, as emerging drivers of gastric tumorigenesis. This study investigates the molecular intersections between gastric cancer, obesity, and hypercholesterolemia through a comprehensive multi-omics and systems biology approach. Methods: We conducted integrative transcriptomic analysis of gastric adenocarcinoma using The Cancer Genome Atlas (TCGA) RNA-sequencing dataset (n = 623, 8863 genes), matched with standardized clinical metadata (n = 413). Differential gene expression between survival groups was assessed using Welch’s t-test with Benjamini–Hochberg correction (FDR < 0.05, |log2FC| ≥ 1). High-confidence gene sets for obesity (n = 128) and hypercholesterolemia (n = 97) were curated from the OMIM, STRING (confidence ≥ 0.7), and KEGG databases using hierarchical evidence-based prioritization. Overlapping gene signatures were identified, followed by pathway enrichment via Enrichr (KEGG 2021 Human) and protein–protein interaction (PPI) analysis using STRING v11.5 and Cytoscape v3.9.0. CD36’s prognostic value was evaluated via Kaplan–Meier and log-rank testing alongside clinicopathological correlations. Results: We identified 36 genes shared between obesity and gastric cancer, and 31 genes shared between hypercholesterolemia and gastric cancer. CD36 emerged as the only gene intersecting all three conditions, marking it as a unique molecular integrator. Enrichment analyses implicated dysregulated fatty acid uptake, adipocytokine signaling, cholesterol metabolism, and NF-κB-mediated inflammation as key pathways. Elevated CD36 expression was significantly correlated with higher tumor stage (p = 0.016), reduced overall survival (p = 0.001), and race-specific expression differences (p = 0.007). No sex-based differences in CD36 expression or survival were observed. Conclusions: CD36 is a central metabolic–oncogenic node linking obesity, hypercholesterolemia, and gastric cancer. It functions as both a mechanistic driver of tumor progression and a clinically actionable biomarker, particularly in metabolically comorbid patients. These findings provide a rationale for targeting CD36-driven pathways as part of a precision oncology strategy and highlight the need to incorporate metabolic profiling into gastric cancer risk assessment and treatment paradigms. Full article
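The differential-expression filter stated in the Methods (Welch's t-test, Benjamini–Hochberg FDR < 0.05, |log2FC| ≥ 1) can be sketched as follows; the expression matrix and survival-group labels are placeholders.

```python
# Sketch of the reported differential-expression filter: Welch's t-test per gene,
# Benjamini-Hochberg correction (FDR < 0.05), and |log2 fold change| >= 1.
# The expression matrix and survival-group labels are placeholders.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.lognormal(mean=2, sigma=0.5, size=(100, 50)),
                    index=[f"gene_{i}" for i in range(100)])
group = np.array([0] * 25 + [1] * 25)                     # hypothetical survival groups

a, b = expr.loc[:, group == 0], expr.loc[:, group == 1]
_, pvals = ttest_ind(a, b, axis=1, equal_var=False)       # Welch's t-test per gene
log2fc = np.log2(b.mean(axis=1) / a.mean(axis=1))
_, qvals, _, _ = multipletests(pvals, method="fdr_bh")    # BH correction

hits = expr.index[(qvals < 0.05) & (np.abs(log2fc) >= 1)]
print(f"{len(hits)} genes pass FDR < 0.05 and |log2FC| >= 1")
```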

31 pages, 18162 KiB  
Article
Recovery and Reconstructions of 18th Century Precipitation Records in Italy: Problems and Analyses
by Antonio della Valle, Francesca Becherini and Dario Camuffo
Climate 2025, 13(6), 131; https://doi.org/10.3390/cli13060131 - 19 Jun 2025
Viewed by 1090
Abstract
Precipitation is one of the main meteorological variables in climate research, and long records provide unique, long-term knowledge of climatic variability and extreme events. Moreover, they are a prerequisite for climate modeling and reanalyses. As with all meteorological observations in the early period, every observer used a personal measuring protocol. Instruments and their locations were not standardized and not always specified in the observer’s metadata. The situation began to change in 1873 with the foundation of the International Meteorological Committee, though the complete standardization of protocols, instruments, and exposure was reached in 1950 with the World Meteorological Organization. The aim of this paper is to present and discuss the methodology needed to recover and reconstruct early precipitation records and to provide a high-quality precipitation dataset usable for climate studies. The main issues that have to be addressed are described and critically analyzed based on the longest Italian precipitation series to which the methodology was successfully applied. Full article
(This article belongs to the Special Issue Climate Variability in the Mediterranean Region (Second Edition))

23 pages, 2071 KiB  
Systematic Review
Creating Value in Metaverse-Driven Global Value Chains: Blockchain Integration and the Evolution of International Business
by Sina Mirzaye Shirkoohi and Muhammad Mohiuddin
J. Theor. Appl. Electron. Commer. Res. 2025, 20(2), 126; https://doi.org/10.3390/jtaer20020126 - 2 Jun 2025
Viewed by 735
Abstract
The convergence of blockchain and metaverse technologies is poised to redefine how Global Value Chains (GVCs) create, capture, and distribute value, yet scholarly insight into their joint impact remains scattered. Addressing this gap, the present study aims to clarify where, how, and under what conditions blockchain-enabled transparency and metaverse-enabled immersion enhance GVC performance. A systematic literature review (SLR), conducted according to PRISMA 2020 guidelines, screened 300 articles from ABI Global, Business Source Premier, and Web of Science records, yielding 65 peer-reviewed articles for in-depth analysis. The corpus was coded thematically and mapped against three theoretical lenses: transaction cost theory, resource-based view, and network/ecosystem perspectives. Key findings reveal the following: 1. digital twins anchored in immersive platforms reduce planning cycles by up to 30% and enable real-time, cross-border supply chain reconfiguration; 2. tokenized assets, micro-transactions, and decentralized finance (DeFi) are spawning new revenue models but simultaneously shift tax triggers and compliance burdens; 3. cross-chain protocols are critical for scalable trust, yet regulatory fragmentation—exemplified by divergent EU, U.S., and APAC rules—creates non-trivial coordination costs; and 4. traditional IB theories require extension to account for digital-capability orchestration, emerging cost centers (licensing, reserve backing, data audits), and metaverse-driven network effects. Based on these insights, this study recommends that managers adopt phased licensing and geo-aware tax engines, embed region-specific compliance flags in smart-contract metadata, and pilot digital-twin initiatives in sandbox-friendly jurisdictions. Policymakers are urged to accelerate work on interoperability and reporting standards to prevent systemic bottlenecks. Finally, researchers should pursue multi-case and longitudinal studies measuring the financial and ESG outcomes of integrated blockchain–metaverse deployments. By synthesizing disparate streams and articulating a forward agenda, this review provides a conceptual bridge for international business scholarship and a practical roadmap for firms navigating the next wave of digital GVC transformation. Full article

25 pages, 1932 KiB  
Article
Enhancing Facility Management with Emerging Technologies: A Study on the Application of Blockchain and NFTs
by Andrea Bongini, Marco Sparacino, Luca Marzi and Carlo Biagini
Buildings 2025, 15(11), 1911; https://doi.org/10.3390/buildings15111911 - 1 Jun 2025
Viewed by 470
Abstract
In recent years, Facility Management has undergone significant technological and methodological advancements, primarily driven by Building Information Modelling (BIM), Computer-Aided Facility Management (CAFM), and Computerized Maintenance Management Systems (CMMS). These innovations have improved process efficiency and risk management. However, challenges remain in asset management, maintenance, traceability, and transparency. This study investigates the potential of blockchain technology and non-fungible tokens (NFTs) to address these challenges. By referencing international (ISO, BOMA) and European (EN) standards, the research develops an asset management process model incorporating blockchain and NFTs. The methodology includes evaluating the technical and practical aspects of this model and strategies for metadata utilization. The model ensures an immutable record of transactions and maintenance activities, reducing errors and fraud. Smart contracts automate sub-phases like progress validation and milestone-based payments, increasing operational efficiency. The study’s practical implications are significant, offering advanced solutions for transparent, efficient, and secure Facility Management. It lays the groundwork for future research, emphasizing practical implementations and real-world case studies. Additionally, integrating blockchain with emerging technologies like artificial intelligence and machine learning could further enhance Facility Management processes. Full article
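As a purely illustrative example of the metadata strategies discussed, the snippet below shows one possible shape for NFT-style metadata recording an asset and its maintenance history; the fields and values are assumptions, not the schema proposed in the study.

```python
# Illustrative shape of NFT-style metadata for a building asset and its maintenance
# history. Field names and values are assumptions for discussion, not the schema
# proposed in the article.
import json

asset_token_metadata = {
    "name": "AHU-03 Air Handling Unit",
    "description": "Facility asset token tracking maintenance and custody",
    "attributes": {
        "asset_id": "BLDG-A/HVAC/AHU-03",        # hypothetical identifier
        "commissioned": "2021-09-01",
        "standard_refs": ["EN 15221", "ISO 41001"],
    },
    "maintenance_history": [
        {"date": "2024-03-12", "action": "filter replacement", "verified_by": "0xContractorA"},
        {"date": "2025-01-20", "action": "fan bearing inspection", "verified_by": "0xContractorB"},
    ],
}

print(json.dumps(asset_token_metadata, indent=2))
```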

23 pages, 1557 KiB  
Article
Dual Partial Reversible Data Hiding Using Enhanced Hamming Code
by Cheonshik Kim, Ching-Nung Yang and Lu Leng
Appl. Sci. 2025, 15(10), 5264; https://doi.org/10.3390/app15105264 - 8 May 2025
Viewed by 358
Abstract
Traditional reversible data hiding (RDH) methods prioritize the exact recovery of the original cover image; however, this rigidity often hinders both capacity and design flexibility. This study introduces a partial reversible data hiding (PRDH) framework that departs from conventional standards by allowing reversibility relative to a generated cover image rather than the original. The proposed system leverages a dual-image structure and an enhanced HC(7,4) Hamming code to synthesize virtual pixels, enabling efficient and low-distortion syndrome-based encoding. Notably, it achieves embedding rates up to 1.5 bpp with PSNR values exceeding 48 dB. While the proposed method avoids auxiliary data, its reliability hinges on paired image availability, which is a consideration for real-world deployment. Demonstrated resilience to RS-based steganalysis suggests viability in sensitive domains such as embedding structured metadata in diagnostic medical imagery. Nonetheless, further evaluation across more diverse image types and attack scenarios is necessary in order to confirm its generalizability. Full article
(This article belongs to the Special Issue Digital Image Processing: Technologies and Applications)
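The syndrome-based building block behind HC(7,4) matrix embedding, in which 3 message bits are hidden in 7 cover bits by flipping at most one bit, can be sketched as below; this is the classic construction only, not the enhanced dual-image PRDH scheme proposed in the article.

```python
# Simplified illustration of syndrome-based matrix embedding with the standard
# Hamming(7,4) parity-check matrix: 3 message bits are hidden in 7 cover bits by
# flipping at most one bit. Classic building block only, not the enhanced
# dual-image PRDH scheme.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],    # parity-check matrix; column i is the
              [0, 1, 1, 0, 0, 1, 1],    # binary representation of i+1 (LSB first)
              [0, 0, 0, 1, 1, 1, 1]])

def embed(cover_bits: np.ndarray, message_bits: np.ndarray) -> np.ndarray:
    """Embed 3 message bits into 7 cover bits, flipping at most one position."""
    syndrome = H @ cover_bits % 2
    diff = syndrome ^ message_bits
    stego = cover_bits.copy()
    pos = diff[0] + 2 * diff[1] + 4 * diff[2]    # 1-based column index; 0 = no change
    if pos:
        stego[pos - 1] ^= 1
    return stego

def extract(stego_bits: np.ndarray) -> np.ndarray:
    """Recover the 3 message bits as the syndrome of the stego block."""
    return H @ stego_bits % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])
msg = np.array([1, 1, 0])
stego = embed(cover, msg)
print("changed bits:", int((cover != stego).sum()), "extracted:", extract(stego))
```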

15 pages, 29428 KiB  
Article
Color as a High-Value Quantitative Tool for PET/CT Imaging
by Michail Marinis, Sofia Chatziioannou and Maria Kallergi
Information 2025, 16(5), 352; https://doi.org/10.3390/info16050352 - 27 Apr 2025
Viewed by 571
Abstract
The successful application of artificial intelligence (AI) techniques for the quantitative analysis of hybrid medical imaging data such as PET/CT is challenged by the differences in the type of information and image quality between the two modalities. The purpose of this work was to develop color-based, pre-processing methodologies for PET/CT data that could yield a better starting point for subsequent diagnosis and image processing and analysis. Two methods are proposed that are based on the encoding of Hounsfield Units (HU) and Standardized Uptake Values (SUVs) in separate transformed .png files as reversible color information in combination with .png basic information metadata based on DICOM attributes. Linux Ubuntu using Python was used for the implementation and pilot testing of the proposed methodologies on brain 18F-FDG PET/CT scans acquired with different PET/CT systems. The range of HUs and SUVs was mapped using novel weighted color distribution functions that allowed for a balanced representation of the data and an improved visualization of anatomic and metabolic differences. The pilot application of the proposed mapping codes yielded CT and PET images where it was easier to pinpoint variations in anatomy and metabolic activity and offered a potentially better starting point for the subsequent fully automated quantitative analysis of specific regions of interest or observer evaluation. It should be noted that the output .png files contained all the raw values and may be treated as raw DICOM input data. Full article
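A simplified sketch of reversibly packing Hounsfield Units into two 8-bit image channels is given below to illustrate the idea of lossless color encoding; it does not reproduce the weighted color distribution functions developed in the article.

```python
# Simplified sketch of reversibly packing CT Hounsfield Units into two 8-bit
# channels of an RGB image, so the stored values remain numerically recoverable.
# Illustrates reversible color encoding only; not the weighted color distribution
# functions developed in the article.
import numpy as np

HU_OFFSET = 1024          # shift typical HU range (-1024..~3071) to non-negative values

def hu_to_rgb(hu: np.ndarray) -> np.ndarray:
    shifted = (hu + HU_OFFSET).astype(np.uint16)
    rgb = np.zeros((*hu.shape, 3), dtype=np.uint8)
    rgb[..., 0] = shifted >> 8           # high byte in the red channel
    rgb[..., 1] = shifted & 0xFF         # low byte in the green channel
    return rgb                           # blue channel left free (e.g., for SUV data)

def rgb_to_hu(rgb: np.ndarray) -> np.ndarray:
    shifted = (rgb[..., 0].astype(np.int32) << 8) | rgb[..., 1]
    return shifted - HU_OFFSET

hu = np.array([[-1000, 0], [40, 3000]])
assert np.array_equal(rgb_to_hu(hu_to_rgb(hu)), hu)    # round trip is lossless
```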

19 pages, 1789 KiB  
Article
Optimization of Temporal Feature Attribution and Sequential Dependency Modeling for High-Precision Multi-Step Resource Forecasting: A Methodological Framework and Empirical Evaluation
by Jiaqi Shen, Peiwen Qin, Rui Zhong and Peiyao Han
Mathematics 2025, 13(8), 1339; https://doi.org/10.3390/math13081339 - 19 Apr 2025
Viewed by 420
Abstract
This paper presents a comprehensive time-series analysis framework leveraging the Temporal Fusion Transformer (TFT) architecture to address the challenge of multi-horizon forecasting in complex ecological systems, specifically focusing on global fishery resources. Using global fishery data spanning 70 years (1950–2020), enhanced with key climate indicators, we develop a methodology for predicting time-dependent patterns across three-year, five-year, and extended seven-year horizons. Our approach integrates static metadata with temporal features, including historical catch and climate data, through a specialized architecture incorporating variable selection networks, multi-head attention mechanisms, and bidirectional encoding layers. A comparative analysis demonstrates the TFT model’s robust performance against traditional methods (ARIMA), standard deep learning models (MLP, LSTM), and contemporary architectures (TCN, XGBoost). While competitive across different horizons, TFT excels in the 7-year forecast, achieving a mean absolute percentage error (MAPE) of 13.7%, outperforming the next best model (LSTM, 15.1%). Through a sensitivity analysis, we identify the optimal temporal granularity and historical context length for maximizing prediction accuracy. The variable selection component reveals differential weighting, with recent market observations (past 1-year catch: 31%) and climate signals (ONI index: 15%, SST anomaly: 10%) playing significant roles. A species-specific analysis uncovers variations in predictability patterns. Ablation experiments quantify the contributions of the architectural components. The proposed methodology offers practical applications for resource management and theoretical insights into modeling temporal dependencies in complex ecological data. Full article
(This article belongs to the Special Issue Deep Neural Network: Theory, Algorithms and Applications)
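The MAPE metric reported for the model comparison can be computed as in the sketch below; the catch values are toy placeholders.

```python
# Sketch of the mean absolute percentage error (MAPE) metric used to compare the
# forecasting models; the actual and predicted catch values are placeholders.
import numpy as np

def mape(actual: np.ndarray, predicted: np.ndarray) -> float:
    """MAPE (%) = mean(|actual - predicted| / |actual|) * 100."""
    actual, predicted = np.asarray(actual, dtype=float), np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(actual - predicted) / np.abs(actual)) * 100)

actual_catch = np.array([82.1, 79.4, 85.0, 90.2])    # toy values
tft_forecast = np.array([75.0, 70.3, 95.8, 99.0])
print(f"MAPE: {mape(actual_catch, tft_forecast):.1f}%")
```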

22 pages, 5224 KiB  
Article
A Common Data Environment Framework Applied to Structural Life Cycle Assessment: Coordinating Multiple Sources of Information
by Lini Xiang, Gang Li and Haijiang Li
Buildings 2025, 15(8), 1315; https://doi.org/10.3390/buildings15081315 - 16 Apr 2025
Viewed by 758
Abstract
In Building Information Modeling (BIM)-driven collaboration, the workflow for information management utilizes a Common Data Environment (CDE). The core idea of a CDE is to serve as a single source of truth, enabling efficient coordination among diverse stakeholders. Nevertheless, investigations into employing CDEs to manage projects reveal that procuring commercial CDE solutions is too expensive and functionally redundant for small and medium-sized enterprises (SMEs) and small research organizations, and there is a lack of experience in using CDE tools. Consequently, this study aimed to provide a cheap and lightweight alternative. It proposes a three-layered CDE framework: decentralized databases enabling work in distinct software environments; resource description framework (RDF)-based metadata facilitating seamless data communication; and microservices enabling data collection and reorganization via standardized APIs and query languages. We also apply the CDE framework to structural life cycle assessment (LCA). The results show that a lightweight CDE solution is achievable using tools like the bcfOWL ontology, RESTful APIs, and ASP.NET 6 Clean architecture. This paper offers a scalable framework that reduces infrastructure complexity while allowing users the freedom to integrate diverse tools and APIs for customized information management workflows. This paper’s CDE architecture surpasses traditional commercial software in terms of its flexibility and scalability, facilitating broader CDE applications in the construction industry. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
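A minimal sketch of RDF-based metadata for one information container in a CDE, using Dublin Core terms via rdflib, is shown below; the URIs and properties are illustrative, whereas the paper's design builds on the bcfOWL ontology.

```python
# Minimal sketch of RDF metadata describing one information container in a CDE,
# using DCAT/Dublin Core terms via rdflib. URIs and properties are illustrative;
# the paper's design builds on the bcfOWL ontology.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF, XSD

EX = Namespace("https://example.org/cde/")      # placeholder namespace

g = Graph()
doc = URIRef(EX["structural-model-rev3"])
g.add((doc, RDF.type, DCAT.Dataset))
g.add((doc, DCTERMS.title, Literal("Structural analysis model, revision 3")))
g.add((doc, DCTERMS.creator, Literal("Structural engineering team")))
g.add((doc, DCTERMS.modified, Literal("2025-02-10", datatype=XSD.date)))
g.add((doc, DCTERMS.format, Literal("application/x-ifc")))

print(g.serialize(format="turtle"))
```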
