Search Results (5)

Search Parameters:
Keywords = EGNN

29 pages, 3173 KB  
Article
Graph Neural Networks for Sustainable Energy: Predicting Adsorption in Aromatic Molecules
by Hasan Imani Parashkooh and Cuiying Jian
ChemEngineering 2025, 9(4), 85; https://doi.org/10.3390/chemengineering9040085 - 6 Aug 2025
Viewed by 3316
Abstract
The growing need for rapid screening of adsorption energies in organic materials has driven substantial progress in developing various architectures of equivariant graph neural networks (eGNNs). This advancement has largely been enabled by the availability of extensive Density Functional Theory (DFT)-generated datasets, sufficiently large to train complex eGNN models effectively. However, certain material groups with significant industrial relevance, such as aromatic compounds, remain underrepresented in these large datasets. In this work, we aim to bridge the gap between limited, domain-specific DFT datasets and large-scale pretrained eGNNs. Our methodology involves creating a specialized dataset by segregating aromatic compounds after a targeted ensemble extraction process, then fine-tuning a pretrained model via approaches that include full retraining and systematically freezing specific network sections. We demonstrate that these approaches can yield accurate energy and force predictions with minimal domain-specific training data and computation. Additionally, we investigate the effects of augmenting training datasets with chemically related but out-of-domain groups. Our findings indicate that incorporating supplementary data that closely resembles the target domain, even if approximate, can enhance model performance on domain-specific tasks. Furthermore, we systematically freeze different sections of the pretrained models to elucidate the role each component plays during adaptation to new domains, revealing that relearning low-level representations is critical for effective domain transfer. Overall, this study contributes valuable insights and practical guidelines for efficiently adapting deep learning models for accurate adsorption energy predictions, significantly reducing reliance on extensive training datasets.
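
The section-freezing strategy this abstract describes can be sketched as a gradient step that skips selected parameter groups. The group names and plain-list parameters below are illustrative stand-ins, not the authors' actual eGNN implementation:

```python
def finetune_step(params, grads, frozen, lr=0.01):
    """One fine-tuning gradient step that leaves frozen sections untouched.

    params: {section_name: [weights]}, grads: matching gradients,
    frozen: set of section names to keep fixed during adaptation.
    """
    updated = {}
    for name, weights in params.items():
        if name in frozen:
            updated[name] = list(weights)  # frozen section: copied unchanged
        else:
            updated[name] = [w - lr * g for w, g in zip(weights, grads[name])]
    return updated
```

Freezing, say, a low-level embedding section while retraining the rest (or vice versa) is the kind of ablation that reveals which components must be relearned for domain transfer.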

15 pages, 1476 KB  
Article
A Comparison between Invariant and Equivariant Classical and Quantum Graph Neural Networks
by Roy T. Forestano, Marçal Comajoan Cara, Gopal Ramesh Dahale, Zhongtian Dong, Sergei Gleyzer, Daniel Justice, Kyoungchul Kong, Tom Magorsch, Konstantin T. Matchev, Katia Matcheva and Eyup B. Unlu
Axioms 2024, 13(3), 160; https://doi.org/10.3390/axioms13030160 - 29 Feb 2024
Cited by 5 | Viewed by 4376
Abstract
Machine learning algorithms are heavily relied on to understand the vast amounts of data from high-energy particle collisions at the CERN Large Hadron Collider (LHC). The data from such collision events can naturally be represented with graph structures. Therefore, deep geometric methods, such as graph neural networks (GNNs), have been leveraged for various data analysis tasks in high-energy physics. One typical task is jet tagging, where jets are viewed as point clouds with distinct features and edge connections between their constituent particles. The increasing size and complexity of the LHC particle datasets, as well as the computational models used for their analysis, have greatly motivated the development of alternative fast and efficient computational paradigms such as quantum computation. In addition, to enhance the validity and robustness of deep networks, we can leverage the fundamental symmetries present in the data through the use of invariant inputs and equivariant layers. In this paper, we provide a fair and comprehensive comparison of classical graph neural networks (GNNs) and equivariant graph neural networks (EGNNs) and their quantum counterparts: quantum graph neural networks (QGNNs) and equivariant quantum graph neural networks (EQGNNs). The four architectures were benchmarked on a binary classification task to classify the parton-level particle initiating the jet. Based on their area under the curve (AUC) scores, the quantum networks were found to outperform the classical networks. However, seeing the computational advantage of quantum networks in practice may have to wait for the further development of quantum technology and its associated application programming interfaces (APIs).
(This article belongs to the Special Issue Computational Aspects of Machine Learning and Quantum Computing)
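
The equivariance property that separates the EGNN/EQGNN variants from their plain counterparts can be illustrated with a minimal pure-Python sketch of one E(n)-equivariant message-passing step (in the style of Satorras et al.'s EGNN). The fixed scalar weights `w_e`, `w_x`, `w_h` stand in for the learned networks phi_e, phi_x, phi_h; all names and values here are illustrative, not the paper's implementation:

```python
import math

def egnn_layer(h, x, w_e=0.5, w_x=0.1, w_h=0.5):
    """One E(n)-equivariant message-passing step on scalar node features h
    and coordinate lists x: messages depend only on the invariant squared
    distance, and coordinate updates scale the relative position vectors."""
    n = len(h)
    new_h, new_x = [], []
    for i in range(n):
        m_sum = 0.0
        dx = [0.0] * len(x[i])
        for j in range(n):
            if i == j:
                continue
            d2 = sum((a - b) ** 2 for a, b in zip(x[i], x[j]))  # invariant
            m_ij = math.tanh(w_e * (h[i] + h[j] + d2))          # phi_e
            m_sum += m_ij
            for k in range(len(dx)):                            # phi_x step
                dx[k] += (x[i][k] - x[j][k]) * w_x * m_ij
        new_x.append([x[i][k] + dx[k] / (n - 1) for k in range(len(dx))])
        new_h.append(math.tanh(w_h * (h[i] + m_sum)))           # phi_h
    return new_h, new_x
```

Rotating the input coordinates rotates the output coordinates identically and leaves the feature outputs unchanged, which is exactly the symmetry the invariant/equivariant comparison in this paper is about.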

17 pages, 2693 KB  
Article
Fused Node-Level Residual Structure Edge Graph Neural Network for Few-Shot Image Classification
by Yaoqun Xu and Yuemao Wang
Appl. Sci. 2023, 13(19), 10996; https://doi.org/10.3390/app131910996 - 5 Oct 2023
Cited by 1 | Viewed by 2538
Abstract
In spite of recent rapid developments across various computer vision domains, numerous cutting-edge deep learning algorithms often demand a substantial volume of data to operate effectively. Within this research, a novel few-shot learning approach is presented with the objective of enhancing the accuracy of few-shot image classification. This task entails the classification of unlabeled query samples based on a limited set of labeled support examples. Specifically, the integration of the edge-conditioned graph neural network (EGNN) framework with hierarchical node residual connections is proposed. The primary aim is to enhance the performance of graph neural networks when applied to few-shot classification; hierarchical node residual structures are rarely applied to few-shot image classification tasks, and this work represents an early attempt to combine the two techniques. Extensive experimental findings on publicly available datasets demonstrate that the methodology surpasses the original EGNN algorithm, achieving a maximum improvement of 2.7%. Particularly significant is the performance gain observed on our custom-built dataset, CBAC (Car Brand Appearance Classification), which consistently outperforms the original method, reaching an impressive peak improvement of 11.14%.
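
The node-level residual connection this abstract proposes amounts to adding each node's incoming feature vector back onto the layer's computed update. A minimal sketch (the layer function here is an arbitrary placeholder, not the paper's EGNN layer):

```python
def residual_node_update(h, layer_fn):
    """Node-level residual connection: each node's new feature vector is its
    old vector plus the layer's computed update, easing gradient flow when
    graph layers are stacked deeply."""
    updates = layer_fn(h)  # layer_fn maps node features -> per-node updates
    return [[hi + ui for hi, ui in zip(node, upd)]
            for node, upd in zip(h, updates)]
```

Stacking such residual updates hierarchically is what lets deeper EGNN stacks train stably on the small support sets typical of few-shot classification.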

20 pages, 3891 KB  
Article
Employing Molecular Conformations for Ligand-Based Virtual Screening with Equivariant Graph Neural Network and Deep Multiple Instance Learning
by Yaowen Gu, Jiao Li, Hongyu Kang, Bowen Zhang and Si Zheng
Molecules 2023, 28(16), 5982; https://doi.org/10.3390/molecules28165982 - 9 Aug 2023
Cited by 16 | Viewed by 4246
Abstract
Ligand-based virtual screening (LBVS) is a promising approach for rapid and low-cost screening of potentially bioactive molecules in the early stage of drug discovery. Compared with traditional similarity-based machine learning methods, deep learning frameworks for LBVS can more effectively extract high-order molecule structure representations from molecular fingerprints or structures. However, the 3D conformation of a molecule largely influences its bioactivity and physical properties, and has rarely been considered in previous deep learning-based LBVS methods. Moreover, a relative bioactivity benchmark dataset is still lacking. To address these issues, we introduce a novel end-to-end deep learning architecture trained from molecular conformers for LBVS. We first extracted molecule conformers from multiple public molecular bioactivity data and consolidated them into a large-scale bioactivity benchmark dataset, which in total includes millions of endpoints and molecules corresponding to 954 targets. Then, we devised a deep learning-based LBVS method called EquiVS to learn molecule representations from conformers for bioactivity prediction. Specifically, a graph convolutional network (GCN) and an equivariant graph neural network (EGNN) are sequentially stacked to learn high-order molecule-level and conformer-level representations, followed by attention-based deep multiple-instance learning (MIL) to aggregate these representations and then predict the potential bioactivity for the query molecule on a given target. We conducted various experiments to validate the data quality of our benchmark dataset, and confirmed that EquiVS achieves better performance compared with 10 traditional machine learning or deep learning-based LBVS methods. Further ablation studies demonstrate the significant contribution of molecular conformation to bioactivity prediction, as well as the soundness and non-redundancy of the deep learning architecture in EquiVS. Finally, a model interpretation case study on CDK2 shows the potential of EquiVS in optimal conformer discovery. The overall study shows that our proposed benchmark dataset and EquiVS method have promising prospects in virtual screening applications.
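
The attention-based MIL aggregation step described above can be sketched as attention pooling over a bag of conformer embeddings. The parameter matrices `v` and `w` are stand-ins for learned weights, and this is a generic attention-MIL sketch, not the authors' EquiVS code:

```python
import math

def attention_mil_pool(instances, v, w):
    """Attention-based multiple-instance pooling: each conformer embedding
    gets a learned attention weight, and the bag (molecule) representation
    is the attention-weighted sum of the instance embeddings."""
    scores = []
    for h in instances:
        # hidden = tanh(V h), score = w . hidden
        hidden = [math.tanh(sum(vk * hk for vk, hk in zip(row, h))) for row in v]
        scores.append(sum(wk * tk for wk, tk in zip(w, hidden)))
    mx = max(scores)                       # stable softmax over instances
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]
    dim = len(instances[0])
    bag = [sum(a * h[k] for a, h in zip(attn, instances)) for k in range(dim)]
    return bag, attn
```

Inspecting the attention weights is also what enables the kind of "optimal conformer" interpretation case study mentioned in the abstract: the highest-weighted instance is the conformer the model relied on most.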

20 pages, 6672 KB  
Article
Embedding-Graph-Neural-Network for Transient NOx Emissions Prediction
by Yun Chen, Chengwei Liang, Dengcheng Liu, Qingren Niu, Xinke Miao, Guangyu Dong, Liguang Li, Shanbin Liao, Xiaoci Ni and Xiaobo Huang
Energies 2023, 16(1), 3; https://doi.org/10.3390/en16010003 - 20 Dec 2022
Cited by 4 | Viewed by 2987
Abstract
Recently, Artificial Intelligence (AI) methodologies such as Long Short-Term Memory (LSTM) have been widely considered promising tools for engine performance calibration, especially for engine emission performance prediction and optimization, and the Transformer is also gradually being applied to sequence prediction. To carry out high-precision engine control and calibration, predicting long-time-step emission sequences is required. However, LSTM suffers from vanishing gradients on very long input and output sequences, and the Transformer cannot reflect the dynamic features of historic emission information that derive from cycle-by-cycle engine combustion events, which leads to low accuracy and weak algorithm adaptability due to the inherent limitations of the encoder-decoder structure. In this paper, considering the highly nonlinear relation between the multi-dimensional engine operating parameters and the engine emission data outputs, an Embedding-Graph-Neural-Network (EGNN) model was developed, combining a self-attention mechanism with the adaptive graph generation part of the GNN to capture the relationship between the sequences, improve the ability to predict long-time-step sequences, and reduce the number of parameters to simplify the network structure. Then, a sensor embedding method was adopted to make the model adapt to the data characteristics of different sensors, so as to reduce the impact of experimental hardware on prediction accuracy. The experimental results show that under long-time-step forecasting conditions, the prediction error of our model decreased by 31.04% on average compared with five other baseline models, which demonstrates that the EGNN model can potentially be used in future engine calibration procedures.
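
The self-attention-based adaptive graph generation described above can be sketched as building a learned adjacency matrix from a row-softmax over scaled dot products of sensor embeddings. This is a generic sketch under that assumption, not the paper's EGNN code:

```python
import math

def adaptive_adjacency(emb):
    """Self-attention-style adaptive graph generation: the learned adjacency
    is a row-softmax over scaled dot products of sensor embeddings, so edge
    weights between sensors are inferred from data rather than fixed."""
    d = len(emb[0])
    adj = []
    for qi in emb:
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in emb]
        mx = max(scores)                    # stable softmax per row
        exps = [math.exp(s - mx) for s in scores]
        z = sum(exps)
        adj.append([e / z for e in exps])
    return adj
```

Because the adjacency is recomputed from the embeddings, swapping in sensor-specific embeddings (the sensor embedding method mentioned above) automatically reshapes the graph for different hardware setups.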
