Search Results (41)

Search Parameters:
Keywords = open graph benchmark

19 pages, 1948 KB  
Article
Graph-MambaRoadDet: A Symmetry-Aware Dynamic Graph Framework for Road Damage Detection
by Zichun Tian, Xiaokang Shao and Yuqi Bai
Symmetry 2025, 17(10), 1654; https://doi.org/10.3390/sym17101654 - 5 Oct 2025
Abstract
Road-surface distress poses a serious threat to traffic safety and imposes a growing burden on urban maintenance budgets. While modern detectors based on convolutional networks and Vision Transformers achieve strong frame-level performance, they often overlook an essential property of road environments—structural symmetry within road networks and damage patterns. We present Graph-MambaRoadDet (GMRD), a symmetry-aware and lightweight framework that integrates dynamic graph reasoning with state–space modeling for accurate, topology-informed, and real-time road damage detection. Specifically, GMRD employs an EfficientViM-T1 backbone and two DefMamba blocks, whose deformable scanning paths capture sub-pixel crack patterns while preserving geometric symmetry. A superpixel-based graph is constructed by projecting image regions onto OpenStreetMap road segments, encoding both spatial structure and symmetric topological layout. We introduce a Graph-Generating State–Space Model (GG-SSM) that synthesizes sparse sample-specific adjacency in O(M) time, further refined by a fusion module that combines detector self-attention with prior symmetry constraints. A consistency loss promotes smooth predictions across symmetric or adjacent segments. The full INT8 model contains only 1.8 M parameters and 1.5 GFLOPs, sustaining 45 FPS at 7 W on a Jetson Orin Nano—eight times lighter and 1.7× faster than YOLOv8-s. On RDD2022, TD-RD, and RoadBench-100K, GMRD surpasses strong baselines by up to +6.1 mAP50:95 and, on the new RoadGraph-RDD benchmark, achieves +5.3 G-mAP and +0.05 consistency gain. Qualitative results demonstrate robustness under shadows, reflections, back-lighting, and occlusion. By explicitly modeling spatial and topological symmetry, GMRD offers a principled solution for city-scale road infrastructure monitoring under real-time and edge-computing constraints. Full article
(This article belongs to the Section Computer)
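
As an illustration of the consistency loss mentioned in this abstract, the following is a minimal sketch of one way a graph-smoothness penalty over adjacent or symmetric road segments could be written; the tensor shapes, edge list, and the `consistency_loss` helper are illustrative assumptions, not the authors' implementation.

```python
import torch

def consistency_loss(pred: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between predictions on adjacent (or symmetric) road segments.

    pred:       [N, C] per-segment damage scores (e.g., class probabilities).
    edge_index: [2, E] pairs of segment indices that are adjacent or symmetric.
    """
    src, dst = edge_index
    # Squared difference between the predictions of every connected segment pair.
    return ((pred[src] - pred[dst]) ** 2).sum(dim=-1).mean()

# Toy example: 4 segments, 3 adjacency/symmetry pairs.
pred = torch.softmax(torch.randn(4, 3), dim=-1)
edges = torch.tensor([[0, 1, 2], [1, 2, 3]])
loss = consistency_loss(pred, edges)
```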

18 pages, 384 KB  
Article
On Solving the Minimum Spanning Tree Problem with Conflicting Edge Pairs
by Roberto Montemanni and Derek H. Smith
Algorithms 2025, 18(8), 526; https://doi.org/10.3390/a18080526 - 18 Aug 2025
Cited by 2 | Viewed by 494
Abstract
The Minimum Spanning Tree with Conflicting Edge Pairs is a generalization that adds conflict constraints to a classical optimization problem on graphs used to model several real-world applications. In recent years, several heuristic and exact approaches have been proposed to tackle this problem. In this paper, we present a mixed-integer linear program not previously applied to this problem, and we solve it with an open-source solver. Computational results for the benchmark instances commonly adopted in the literature on the problem are reported. The results indicate that the proposed approach is competitive with the much more sophisticated approaches available, despite being much simpler to implement. During the experimental campaign, six instances were closed for the first time, with nine improved best-known lower bounds and sixteen improved best-known upper bounds over a total of two hundred thirty instances considered. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
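
For readers unfamiliar with the problem, a textbook way to state the Minimum Spanning Tree with Conflicting Edge Pairs as a mixed-integer linear program uses binary edge variables, subtour-elimination constraints, and one packing constraint per conflicting pair; this is a generic sketch and not necessarily the formulation adopted in the paper.

```latex
\begin{align*}
\min \;& \sum_{e \in E} c_e x_e \\
\text{s.t. } & \sum_{e \in E} x_e = |V| - 1, \\
& \sum_{e \in E(S)} x_e \le |S| - 1 && \forall\, S \subset V,\ |S| \ge 2, \\
& x_e + x_f \le 1 && \forall\, \{e, f\} \in C, \\
& x_e \in \{0, 1\} && \forall\, e \in E.
\end{align*}
```

In practice, the exponentially many subtour constraints are separated lazily as violated cuts or replaced by a compact single-commodity flow formulation.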

18 pages, 1160 KB  
Review
Machine Learning for the Optimization of the Bioplastics Design
by Neelesh Ashok, Pilar Garcia-Diaz, Marta E. G. Mosquera and Valentina Sessini
Macromol 2025, 5(3), 38; https://doi.org/10.3390/macromol5030038 - 14 Aug 2025
Viewed by 563
Abstract
Biodegradable polyesters have gained attention due to their sustainability benefits, considering the escalating environmental challenges posed by synthetic polymers. Advances in artificial intelligence (AI), including machine learning (ML) and deep learning (DL), are expected to significantly accelerate research in polymer science. This review article explores “bio” polymer informatics by harnessing insights from the AI techniques used to predict structure–property relationships and to optimize the synthesis of bioplastics. This review also discusses PolyID, a machine learning-based tool that employs message-passing graph neural networks to provide a framework capable of accelerating the discovery of bioplastics. An extensive literature review is conducted on explainable AI (XAI) and generative AI techniques, as well as on benchmarking data repositories in polymer science. The current state of the art in ML methods for ring-opening polymerizations and the synthesizability of biodegradable polyesters is also presented. This review offers in-depth insight into and comprehensive knowledge of current AI-based models for polymerizations, molecular descriptors, structure–property relationships, predictive modeling, and open-source benchmarked datasets for sustainable polymers. This study serves as a reference and provides critical insights into the capabilities of AI for the accelerated design and discovery of green polymers aimed at achieving a sustainable future. Full article

20 pages, 2714 KB  
Article
Diagnosing Bias and Instability in LLM Evaluation: A Scalable Pairwise Meta-Evaluator
by Catalin Anghel, Andreea Alexandra Anghel, Emilia Pecheanu, Adina Cocu, Adrian Istrate and Constantin Adrian Andrei
Information 2025, 16(8), 652; https://doi.org/10.3390/info16080652 - 31 Jul 2025
Viewed by 1349
Abstract
The evaluation of large language models (LLMs) increasingly relies on other LLMs acting as automated judges. While this approach offers scalability and efficiency, it raises serious concerns regarding evaluator reliability, positional bias, and ranking stability. This paper presents a scalable framework for diagnosing positional bias and instability in LLM-based evaluation by using controlled pairwise comparisons judged by multiple independent language models. The system supports mirrored comparisons with reversed response order, prompt injection, and surface-level perturbations (e.g., paraphrasing, lexical noise), enabling fine-grained analysis of evaluator consistency and verdict robustness. Over 3600 pairwise comparisons were conducted across five instruction-tuned open-weight models using ten open-ended prompts. The top-performing model (gemma:7b-instruct) achieved a 66.5% win rate. Evaluator agreement was uniformly high, with 100% consistency across judges, yet 48.4% of verdicts reversed under mirrored response order, indicating strong positional bias. Kendall’s Tau analysis further showed that local model rankings varied substantially across prompts, suggesting that semantic context influences evaluator judgment. All evaluation traces were stored in a graph database (Neo4j), enabling structured querying and longitudinal analysis. The proposed framework provides not only a diagnostic lens for benchmarking models but also a blueprint for fairer and more interpretable LLM-based evaluation. These findings underscore the need for structure-aware, perturbation-resilient evaluation pipelines when benchmarking LLMs. The proposed framework offers a reproducible path for diagnosing evaluator bias and ranking instability in open-ended language tasks. Future work will apply this methodology to educational assessment tasks, using rubric-based scoring and graph-based traceability to evaluate student responses in technical domains. Full article
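
A back-of-the-envelope version of the two diagnostics described in this abstract (the verdict flip rate under mirrored response order, and Kendall's Tau between per-prompt model rankings) could look like the following; the data layout and function names are assumptions for illustration, not the paper's code.

```python
from scipy.stats import kendalltau

def positional_flip_rate(winners_original, winners_mirrored):
    """Share of comparisons whose winning *model* changes when response order is swapped.

    Both lists hold the winning model's name for the same comparison, judged once in
    the original A/B order and once with the two responses mirrored.
    """
    assert len(winners_original) == len(winners_mirrored)
    flips = sum(w1 != w2 for w1, w2 in zip(winners_original, winners_mirrored))
    return flips / len(winners_original)

def ranking_stability(ranks_prompt_1, ranks_prompt_2):
    """Kendall's Tau between the model rankings induced by two different prompts."""
    tau, p_value = kendalltau(ranks_prompt_1, ranks_prompt_2)
    return tau, p_value

# Toy example: 5 comparisons; a high flip rate indicates positional bias.
print(positional_flip_rate(["m1", "m2", "m1", "m3", "m2"],
                           ["m1", "m1", "m2", "m3", "m2"]))
print(ranking_stability([1, 2, 3, 4, 5], [2, 1, 3, 5, 4]))
```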

37 pages, 1895 KB  
Review
A Review of Artificial Intelligence and Deep Learning Approaches for Resource Management in Smart Buildings
by Bibars Amangeldy, Timur Imankulov, Nurdaulet Tasmurzayev, Gulmira Dikhanbayeva and Yedil Nurakhov
Buildings 2025, 15(15), 2631; https://doi.org/10.3390/buildings15152631 - 25 Jul 2025
Cited by 1 | Viewed by 2101
Abstract
This comprehensive review maps the fast-evolving landscape in which artificial intelligence (AI) and deep-learning (DL) techniques converge with the Internet of Things (IoT) to manage energy, comfort, and sustainability across smart environments. A PRISMA-guided search of four databases retrieved 1358 records; after applying inclusion criteria, 143 peer-reviewed studies published between January 2019 and April 2025 were analyzed. This review shows that AI-driven controllers—especially deep-reinforcement-learning agents—deliver median energy savings of 18–35% for HVAC and other major loads, consistently outperforming rule-based and model-predictive baselines. The evidence further reveals a rapid diversification of methods: graph-neural-network models now capture spatial interdependencies in dense sensor grids, federated-learning pilots address data-privacy constraints, and early integrations of large language models hint at natural-language analytics and control interfaces for heterogeneous IoT devices. Yet large-scale deployment remains hindered by fragmented and proprietary datasets, unresolved privacy and cybersecurity risks associated with continuous IoT telemetry, the growing carbon and compute footprints of ever-larger models, and poor interoperability among legacy equipment and modern edge nodes. The reviewed literature therefore converges on several priorities: open, high-fidelity benchmarks that marry multivariate IoT sensor data with standardized metadata and occupant feedback; energy-aware, edge-optimized architectures that lower latency and power draw; privacy-centric learning frameworks that satisfy tightening regulations; hybrid physics-informed and explainable models that shorten commissioning time; and digital-twin platforms enriched by language-model reasoning to translate raw telemetry into actionable insights for facility managers and end users. Addressing these gaps will be pivotal to transforming isolated pilots into ubiquitous, trustworthy, and human-centered IoT ecosystems capable of delivering measurable gains in efficiency, resilience, and occupant wellbeing at scale. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)

23 pages, 6756 KB  
Article
Structure-Enhanced Prompt Learning for Graph-Based Code Vulnerability Detection
by Wei Chang, Chunyang Ye and Hui Zhou
Appl. Sci. 2025, 15(11), 6128; https://doi.org/10.3390/app15116128 - 29 May 2025
Viewed by 1203
Abstract
Recent advances in prompt learning have opened new avenues for enhancing natural language understanding in domain-specific tasks, including code vulnerability detection. Motivated by the limitations of conventional binary classification methods in capturing complex code semantics, we propose a novel framework that integrates a two-stage prompt optimization mechanism with hierarchical representation learning. Our approach leverages graphon theory to generate task-adaptive, structurally enriched prompts by encoding both contextual and graphical information into trainable vector representations. To further enhance representational capacity, we incorporate the pretrained model CodeBERTScore, a syntax-aware encoder, and Graph Neural Networks, enabling comprehensive modeling of both local syntactic features and global structural dependencies. Experimental results on three public datasets—FFmpeg+Qemu, SVulD and Reveal—demonstrate that our method performs competitively across all benchmarks, achieving accuracy rates of 64.40%, 83.44% and 90.69%, respectively. These results underscore the effectiveness of combining prompt-based learning with graph-based structural modeling, offering a more accurate and robust solution for automated vulnerability detection. Full article

21 pages, 1572 KB  
Article
OWNC: Open-World Node Classification on Graphs with a Dual-Embedding Interaction Framework
by Yuli Chen and Chun Wang
Mathematics 2025, 13(9), 1475; https://doi.org/10.3390/math13091475 - 30 Apr 2025
Viewed by 495
Abstract
Traditional node classification is typically conducted in a closed-world setting, where all labels are known during training, enabling graph neural network methods to achieve high performance. However, in real-world scenarios, the constant emergence of new categories and updates to existing labels can result in some nodes no longer fitting into any known category, rendering closed-world classification methods inadequate. Thus, open-world classification becomes essential for graph data. Due to the inherent diversity of graph data in the open-world setting, it is common for the number of nodes with different labels to be imbalanced, yet current models are ineffective at handling such imbalance. Additionally, when there are too many or too few nodes from unseen classes, classification performance typically declines. Motivated by these observations, we propose a solution to address the challenges of open-world node classification and introduce a model named OWNC. This model incorporates a dual-embedding interaction training framework and a generator–discriminator architecture. The dual-embedding interaction training framework reduces label loss and enhances the distinction between known and unseen samples, while the generator–discriminator structure enhances the model’s ability to identify nodes from unseen classes. Experimental results on three benchmark datasets demonstrate the superior performance of our model compared to various baseline algorithms, while ablation studies validate the underlying mechanisms and robustness of our approach. Full article
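
For context, the simplest open-world baseline rejects nodes whose classifier confidence over the known classes is low; the sketch below shows only that generic rule and is not the dual-embedding interaction framework proposed in the paper (the threshold and shapes are illustrative assumptions).

```python
import torch
import torch.nn.functional as F

def open_world_predict(logits: torch.Tensor, threshold: float = 0.7) -> torch.Tensor:
    """Assign each node a known class, or -1 ("unseen") when the classifier is unsure.

    logits: [N, K] scores over the K known classes.
    """
    probs = F.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)
    pred[conf < threshold] = -1  # route low-confidence nodes to the unseen class
    return pred

# Toy example: 3 nodes, 4 known classes; the ambiguous middle node is flagged as unseen.
print(open_world_predict(torch.tensor([[4.0, 0.1, 0.0, 0.2],
                                       [0.5, 0.6, 0.4, 0.5],
                                       [0.0, 3.0, 0.1, 0.0]])))
```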

12 pages, 1069 KB  
Article
A GNN-Based Placement Optimization Guidance Framework by Physical and Timing Prediction
by Peng Cao, Zhi Li and Wenjie Ding
Electronics 2025, 14(2), 329; https://doi.org/10.3390/electronics14020329 - 15 Jan 2025
Viewed by 1792
Abstract
Placement is a crucial step in the physical design flow, with a significant impact on later routability and ultimate manufacturability in terms of performance, power, and area (PPA); interleaved optimization steps and inaccurate PPA estimation may steer placement away from the optimal solution and/or lead to unnecessary iterations. To solve this issue, we propose a physical- and timing-related placement optimization guidance framework based on graph neural networks (GNNs), which provides candidate gate sizing and buffer insertion solutions as well as a path group for potentially violated paths, improving placement quality significantly and efficiently. Experimental results on the OpenCores benchmarks with 22 nm technology demonstrate that the proposed placement optimization guidance framework achieves up to 35.66% worst negative slack (WNS) and 43.51% total negative slack (TNS) improvement and a 52.17% reduction in the number of violating paths (NVP), which benefits later routing stages with a 2.33% wirelength decrease. Full article

17 pages, 3228 KB  
Article
A Method for Fault Localization in Distribution Networks with High Proportions of Distributed Generation Based on Graph Convolutional Networks
by Xiping Ma, Wenxi Zhen, Haodong Ren, Guangru Zhang, Kai Zhang and Haiying Dong
Energies 2024, 17(22), 5758; https://doi.org/10.3390/en17225758 - 18 Nov 2024
Cited by 5 | Viewed by 1372
Abstract
The integration of a high proportion of distributed generation (DG) into the distribution network has driven the transition from traditional single-source to multi-source distribution systems, increasing the complexity of the network topology and making fault localization difficult. To address these issues, this paper proposes a fault localization method based on graph convolutional networks (GCNs) for distribution networks with a high proportion of distributed generation. By abstracting busbars and lines into graph nodes and edges, the GCN captures spatial coupling relationships between nodes, using key electrical quantities such as node voltage magnitude, current magnitude, power, and phase angle as input features to construct a fault localization model. A multi-type fault dataset is generated using the Matpower toolbox, and model training is evaluated using K-fold cross-validation. The training process is optimized through early stopping mechanisms and learning rate scheduling. Simulations are conducted based on the IEEE 33-node distribution network benchmark, with photovoltaic generation, wind generation, and energy storage systems connected at specific nodes, validating the model’s fault localization capability under various fault types (single-phase ground fault, phase-to-phase short circuit, and line open circuit). Experimental results demonstrate that the proposed model can effectively locate fault nodes in complex distribution networks with high DG integration, achieving an accuracy of 98.5% and an AUC value of 0.9997. The model also remains robust in noisy environments and significantly outperforms convolutional neural networks and other methods in terms of localization accuracy, training time, F1 score, AUC value, and single-fault detection inference time, showing good potential for practical application. Full article
(This article belongs to the Special Issue Clean and Efficient Use of Energy: 2nd Edition)
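
A minimal node-classification GCN of the kind described in this abstract (buses as nodes, lines as edges, four electrical quantities per bus) might look like the following PyTorch Geometric sketch; the layer sizes and toy radial edge list are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class FaultLocatorGCN(torch.nn.Module):
    """Two-layer GCN that scores each bus (node) for the presence of a fault.

    Node features follow the abstract: voltage magnitude, current magnitude,
    power, and phase angle (4 features per bus); edges are the feeder lines.
    """
    def __init__(self, in_dim: int = 4, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, n_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)  # per-node fault logits

# Toy IEEE-33-style usage: 33 buses, a radial edge list, random measurements.
x = torch.randn(33, 4)
edge_index = torch.tensor([[i for i in range(32)], [i + 1 for i in range(32)]])
logits = FaultLocatorGCN()(x, edge_index)
```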

16 pages, 634 KB  
Article
LGTCN: A Spatial–Temporal Traffic Flow Prediction Model Based on Local–Global Feature Fusion Temporal Convolutional Network
by Wei Ye, Haoxuan Kuang, Kunxiang Deng, Dongran Zhang and Jun Li
Appl. Sci. 2024, 14(19), 8847; https://doi.org/10.3390/app14198847 - 1 Oct 2024
Cited by 2 | Viewed by 1933
Abstract
High-precision traffic flow prediction facilitates intelligent traffic control and refined management decisions. Previous research has built a variety of exquisite models with good prediction results. However, they ignore the reality that traffic flows can propagate backwards on road networks when modeling spatial relationships, as well as associations between distant nodes. In addition, more effective model components for modeling temporal relationships remain to be developed. To address the above challenges, we propose a local–global features fusion temporal convolutional network (LGTCN) for spatio-temporal traffic flow prediction, which incorporates a bidirectional graph convolutional network, probabilistic sparse self-attention, and a multichannel temporal convolutional network. To extract the bidirectional propagation relationship of traffic flow on the road network, we improve the traditional graph convolutional network so that information can be propagated in multiple directions. In addition, in spatial global dimensions, we propose probabilistic sparse self-attention to effectively perceive global data correlations and reduce the computational complexity caused by the finite perspective graph. Furthermore, we develop a multichannel temporal convolutional network. It not only retains the temporal learning capability of temporal convolutional networks, but also corresponds each channel to a node, and it realizes the interaction of node features through output interoperation. Extensive experiments on four open access benchmark traffic flow datasets demonstrate the effectiveness of our model. Full article
(This article belongs to the Special Issue Advances in Intelligent Transportation Systems)
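
One plausible form of the bidirectional graph convolution described in this abstract aggregates along both the adjacency matrix and its transpose with separate weights; the sketch below is written under that assumption and is not the authors' exact layer.

```python
import torch
import torch.nn as nn

class BiDirectionalGraphConv(nn.Module):
    """Graph convolution that aggregates along both edge directions of a road network.

    a_fwd is a (normalized) downstream adjacency matrix; its transpose covers upstream
    propagation, so traffic states can influence neighbors in both directions.
    """
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.w_fwd = nn.Linear(in_dim, out_dim, bias=False)
        self.w_bwd = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor, a_fwd: torch.Tensor) -> torch.Tensor:
        # x: [N, F] node features, a_fwd: [N, N] normalized adjacency.
        return torch.relu(a_fwd @ self.w_fwd(x) + a_fwd.t() @ self.w_bwd(x))

# Toy example: 5 road sensors with 8 features each, row-normalized adjacency.
x, a = torch.randn(5, 8), torch.rand(5, 5)
out = BiDirectionalGraphConv(8, 16)(x, a / a.sum(dim=1, keepdim=True))
```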

12 pages, 394 KB  
Article
Composite Graph Neural Networks for Molecular Property Prediction
by Pietro Bongini, Niccolò Pancino, Asma Bendjeddou, Franco Scarselli, Marco Maggini and Monica Bianchini
Int. J. Mol. Sci. 2024, 25(12), 6583; https://doi.org/10.3390/ijms25126583 - 14 Jun 2024
Cited by 4 | Viewed by 2952
Abstract
Graph Neural Networks have proven to be very valuable models for the solution of a wide variety of problems on molecular graphs, as well as in many other research fields involving graph-structured data. Molecules are heterogeneous graphs composed of atoms of different species. Composite graph neural networks process heterogeneous graphs with multiple state-updating networks, each one dedicated to a particular node type. This approach allows for the extraction of information from a graph more efficiently than standard graph neural networks, which distinguish node types through a one-hot encoded type vector. We carried out extensive experimentation on eight molecular graph datasets and on a large number of both classification and regression tasks. The results we obtained clearly show that composite graph neural networks are far more efficient in this setting than standard graph neural networks. Full article
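
The core idea, a separate state-updating network per node (atom) type instead of a one-hot type vector, can be sketched as follows; the single-layer MLP updaters and dense adjacency are simplifications for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CompositeMessagePassing(nn.Module):
    """One message-passing step with a separate state-updating MLP per atom type.

    Instead of appending a one-hot type vector, each node type (e.g., C, O, N, ...)
    gets its own update network, as in composite graph neural networks.
    """
    def __init__(self, n_types: int, dim: int):
        super().__init__()
        self.updaters = nn.ModuleList(
            [nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU()) for _ in range(n_types)]
        )

    def forward(self, h, adj, node_type):
        # h: [N, D] states, adj: [N, N] adjacency, node_type: [N] integer type ids.
        msg = adj @ h                      # sum of neighbor states
        inp = torch.cat([h, msg], dim=-1)  # concatenate own state and aggregated message
        out = torch.zeros_like(h)
        for t, mlp in enumerate(self.updaters):
            mask = node_type == t
            out[mask] = mlp(inp[mask])     # type-specific state update
        return out

# Toy molecule: 4 atoms of 2 species, 16-dimensional states.
h, adj = torch.randn(4, 16), torch.eye(4)
out = CompositeMessagePassing(n_types=2, dim=16)(h, adj, torch.tensor([0, 1, 0, 1]))
```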

13 pages, 2270 KB  
Article
GRAAL: Graph-Based Retrieval for Collecting Related Passages across Multiple Documents
by Misael Mongiovì and Aldo Gangemi
Information 2024, 15(6), 318; https://doi.org/10.3390/info15060318 - 29 May 2024
Cited by 2 | Viewed by 1464
Abstract
Finding passages related to a sentence over a large collection of text documents is a fundamental task for claim verification and open-domain question answering. For instance, a common approach for verifying a claim is to extract short snippets of relevant text from a collection of reference documents and provide them as input to a natural language inference machine that determines whether the claim can be deduced or refuted. Available approaches struggle when several pieces of evidence from different documents need to be combined to make an inference, as individual documents often have a low relevance with the input and are therefore excluded. We propose GRAAL (GRAph-based retrievAL), a novel graph-based approach that outlines the relevant evidence as a subgraph of a large graph that summarizes the whole corpus. We assess the validity of this approach by building a large graph that represents co-occurring entity mentions on a corpus of Wikipedia pages and using this graph to identify candidate text relevant to a claim across multiple pages. Our experiments on a subset of FEVER, a popular benchmark, show that the proposed approach is effective in identifying short passages related to a claim from multiple documents. Full article
(This article belongs to the Special Issue 2nd Edition of Information Retrieval and Social Media Mining)
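
A toy version of the idea behind this abstract (a corpus-level co-occurrence graph over entity mentions, with candidate passages collected from the subgraph induced by a claim's entities) could be written with networkx as below; entity extraction is assumed to happen upstream, and the helper names are illustrative rather than part of GRAAL.

```python
import itertools
import networkx as nx

def build_cooccurrence_graph(passages):
    """Corpus graph: entity mentions are nodes; co-occurrence in a passage adds an edge.

    `passages` maps a passage id to its set of entity mentions (extracted upstream,
    e.g. with an off-the-shelf entity linker).
    """
    g = nx.Graph()
    for pid, entities in passages.items():
        for a, b in itertools.combinations(sorted(entities), 2):
            g.add_edge(a, b)
            g[a][b].setdefault("passages", set()).add(pid)
    return g

def candidate_passages(g, claim_entities):
    """Collect passages attached to edges of the subgraph induced by the claim's entities."""
    sub = g.subgraph([e for e in claim_entities if e in g])
    return set().union(*(d["passages"] for _, _, d in sub.edges(data=True)), set())

corpus = {"p1": {"Rome", "Italy"}, "p2": {"Italy", "Euro"}, "p3": {"Rome", "Euro"}}
g = build_cooccurrence_graph(corpus)
print(candidate_passages(g, {"Rome", "Euro"}))
```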

15 pages, 1943 KB  
Article
Improving Radiology Report Generation Quality and Diversity through Reinforcement Learning and Text Augmentation
by Daniel Parres, Alberto Albiol and Roberto Paredes
Bioengineering 2024, 11(4), 351; https://doi.org/10.3390/bioengineering11040351 - 3 Apr 2024
Cited by 8 | Viewed by 3613
Abstract
Deep learning is revolutionizing radiology report generation (RRG) with the adoption of vision encoder–decoder (VED) frameworks, which transform radiographs into detailed medical reports. Traditional methods, however, often generate reports of limited diversity and struggle with generalization. Our research introduces reinforcement learning and text augmentation to tackle these issues, significantly improving report quality and variability. By employing RadGraph as a reward metric and innovating in text augmentation, we surpass existing benchmarks like BLEU4, ROUGE-L, F1CheXbert, and RadGraph, setting new standards for report accuracy and diversity on MIMIC-CXR and Open-i datasets. Our VED model achieves F1-scores of 66.2 for CheXbert and 37.8 for RadGraph on the MIMIC-CXR dataset, and 54.7 and 45.6, respectively, on Open-i. These outcomes represent a significant breakthrough in the RRG field. The findings and implementation of the proposed approach, aimed at enhancing diagnostic precision and radiological interpretations in clinical settings, are publicly available on GitHub to encourage further advancements in the field. Full article
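
Reinforcement learning for report generation is often implemented as a self-critical policy-gradient update with a report-level reward; the sketch below assumes such a setup, with `radgraph_f1` as a hypothetical placeholder for a RadGraph-based reward rather than the authors' actual scorer or training loop.

```python
import torch

def self_critical_loss(log_probs, sampled_reward, greedy_reward):
    """Self-critical policy-gradient loss for a sampled report.

    log_probs:      [T] token log-probabilities of the *sampled* report.
    sampled_reward: scalar reward of the sampled report (e.g., a RadGraph-based F1
                    against the reference; `radgraph_f1` below is a placeholder).
    greedy_reward:  reward of the greedy decode, used as the baseline.
    """
    advantage = sampled_reward - greedy_reward
    return -advantage * log_probs.sum()

def radgraph_f1(generated: str, reference: str) -> float:
    """Placeholder: score overlap between the entity/relation graphs of the two reports."""
    raise NotImplementedError

# Toy usage with made-up rewards: a positive advantage reinforces the sampled tokens.
loss = self_critical_loss(torch.log(torch.tensor([0.4, 0.6, 0.9])), 0.62, 0.55)
```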

26 pages, 930 KB  
Article
Agriculture Named Entity Recognition—Towards FAIR, Reusable Scholarly Contributions in Agriculture
by Jennifer D’Souza
Knowledge 2024, 4(1), 1-26; https://doi.org/10.3390/knowledge4010001 - 19 Jan 2024
Viewed by 2871
Abstract
We introduce the Open Research Knowledge Graph Agriculture Named Entity Recognition (the ORKG Agri-NER) corpus and service for contribution-centric scientific entity extraction and classification. The ORKG Agri-NER corpus is a seminal benchmark for the evaluation of contribution-centric scientific entity extraction and classification in the agricultural domain. It comprises titles of scholarly papers that are available as Open Access articles on a major publishing platform. We describe the creation of this corpus and highlight the obtained findings in terms of the following features: (1) a generic conceptual formalism focused on capturing scientific entities in agriculture that reflect the direct contribution of a work; (2) a performance benchmark for named entity recognition of scientific entities in the agricultural domain by empirically evaluating various state-of-the-art sequence labeling neural architectures and transformer models; and (3) a delineated 3-step automatic entity resolution procedure that resolves the scientific entities to an authoritative ontology, specifically AGROVOC, which is released in the Linked Open Vocabularies cloud. With this work we aim to provide a strong foundation for future work on the automatic discovery of scientific entities in the scholarly literature of the agricultural domain. Full article
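
Applying a trained scientific-entity tagger of this kind to paper titles is straightforward with the Hugging Face token-classification pipeline; the checkpoint name below is hypothetical and stands in for any model fine-tuned on the corpus's scientific-entity labels.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Hypothetical checkpoint name: substitute any token-classification model
# fine-tuned on the Agri-NER scientific-entity labels.
MODEL = "some-org/agri-ner-scibert"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForTokenClassification.from_pretrained(MODEL)
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")

title = "Genomic prediction of drought tolerance in maize using deep learning"
for entity in ner(title):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```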

35 pages, 1713 KB  
Review
Deep Learning for Time Series Forecasting: Advances and Open Problems
by Angelo Casolaro, Vincenzo Capone, Gennaro Iannuzzo and Francesco Camastra
Information 2023, 14(11), 598; https://doi.org/10.3390/info14110598 - 4 Nov 2023
Cited by 86 | Viewed by 53051
Abstract
A time series is a sequence of time-ordered data, and it is generally used to describe how a phenomenon evolves over time. Time series forecasting, estimating future values of time series, allows the implementation of decision-making strategies. Deep learning, the currently leading field of machine learning, applied to time series forecasting can cope with complex and high-dimensional time series that cannot be usually handled by other machine learning techniques. The aim of the work is to provide a review of state-of-the-art deep learning architectures for time series forecasting, underline recent advances and open problems, and also pay attention to benchmark data sets. Moreover, the work presents a clear distinction between deep learning architectures that are suitable for short-term and long-term forecasting. With respect to existing literature, the major advantage of the work consists in describing the most recent architectures for time series forecasting, such as Graph Neural Networks, Deep Gaussian Processes, Generative Adversarial Networks, Diffusion Models, and Transformers. Full article
(This article belongs to the Special Issue New Deep Learning Approach for Time Series Forecasting)
