Efficient Graph Algorithms in Machine Learning

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Combinatorial Optimization, Graph, and Network Algorithms".

Deadline for manuscript submissions: closed (6 July 2020) | Viewed by 28229

Special Issue Editors


Dr. Andreas Loukas
Guest Editor
Signal Processing Laboratory, École polytechnique fédérale de Lausanne (EPFL), Lausanne, Switzerland
Interests: data analysis; machine learning; signal processing; graph algorithms

Dr. Nicolas Tremblay
Guest Editor
Centre national de la recherche scientifique (CNRS), Grenoble, France
Interests: efficient sampling algorithms for machine learning; graph signal processing; community detection

Special Issue Information

Dear Colleagues,

There is currently a large gap between the size of the graph data we possess and the scale at which the machine learning algorithms we use can process them. For instance, though social networks can grow to include many millions of nodes and billions of edges, most machine learning experiments on graph data are limited to graphs that are orders of magnitude smaller. This is particularly problematic given that the capacity of most learning algorithms to generalize beyond the training set increases monotonically with the size of the data they are trained on. We argue that, to fulfill the potential of machine learning on graph data, we need learning algorithms that circumvent computational and memory complexity bottlenecks without compromising solution quality. This Special Issue aims to gather such research contributions.

Both original contributions and review articles will be considered. Submitted articles may focus on any machine learning problem involving graph data, such as:

  • semi-supervised learning (node and edge classification)
  • graph classification and embedding (e.g., with graph neural networks or graph kernels)
  • unsupervised learning problems (e.g., clustering/community detection, dimensionality reduction, compression, link prediction)
  • graph signal processing (filtering, sampling, fast transforms, inverse problems on graphs)
  • graph inference and construction
  • graph decompositions
  • graph recommender systems
  • generative models for graphs and graph data
  • new application domains (learnable simulators, protein interaction prediction, brain network analysis, point cloud processing)
  • learned heuristics for hard graph-theoretic problems

Submissions may also utilize any approximation or heuristic cost-saving scheme, such as sampling (e.g., coresets), structure exploitation (e.g., sparsity, communities), randomized linear algebra, sparsification, coarsening, etc.
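As a toy illustration of one such scheme, consider the following Python sketch (function and variable names are our own), which sparsifies a graph by sampling edges with probability proportional to their weight and importance-reweighting the survivors, so that the sparsified graph matches the original in expectation. Spectral sparsification proper would sample by effective resistance instead of raw weight.

    import numpy as np
    import scipy.sparse as sp

    def sparsify_by_edge_sampling(A, q, rng=None):
        """Unbiased sparsifier: draw q edge samples (with replacement)
        with probability proportional to edge weight, then reweight the
        surviving edges so the sparsifier matches A in expectation.

        A : symmetric scipy.sparse adjacency matrix, nonnegative weights
        q : number of edge samples to draw
        """
        rng = rng or np.random.default_rng()
        T = sp.triu(A, k=1).tocoo()            # each undirected edge once
        p = T.data / T.data.sum()              # sampling distribution
        counts = rng.multinomial(q, p)         # draws per edge
        kept = counts > 0
        # importance weights: E[new weight] = old weight
        w = counts[kept] * T.data[kept] / (q * p[kept])
        H = sp.coo_matrix((w, (T.row[kept], T.col[kept])), shape=A.shape)
        return H + H.T                          # re-symmetrize

    # Example: sparsify a random weighted graph with ~5000 edge draws
    A = sp.random(1000, 1000, density=0.05, random_state=0)
    A = abs(A + A.T)
    H = sparsify_by_edge_sampling(A, q=5000)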

Articles that provide strong evidence (theoretical or empirical) of algorithmic efficiency beyond the state of the art will be of particular interest. The considered gains may be in terms of computational complexity, space complexity, sample complexity, or opportunity for parallelism.

Dr. Andreas Loukas
Dr. Nicolas Tremblay
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning on graphs
  • graph signal processing
  • geometric deep learning
  • graph reduction
  • graph inference

Published Papers (7 papers)


Research

16 pages, 1812 KiB  
Article
Representing Deep Neural Networks Latent Space Geometries with Graphs
by Carlos Lassance, Vincent Gripon and Antonio Ortega
Algorithms 2021, 14(2), 39; https://doi.org/10.3390/a14020039 - 27 Jan 2021
Cited by 7 | Viewed by 2954
Abstract
Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks. The core principle of DL methods consists of training composite architectures in an end-to-end fashion, where inputs are associated with outputs trained to optimize an objective function. Because of their compositional nature, DL architectures naturally exhibit several intermediate representations of the inputs, which belong to so-called latent spaces. When treated individually, these intermediate representations are most of the time unconstrained during the learning process, as it is unclear which properties should be favored. However, when processing a batch of inputs concurrently, the corresponding set of intermediate representations exhibits relations (what we call a geometry) on which desired properties can be sought. In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems. In more detail, we propose to represent geometries by constructing similarity graphs from the intermediate representations obtained when processing a batch of inputs. By constraining these Latent Geometry Graphs (LGGs), we address the three following problems: (i) reproducing the behavior of a teacher architecture is achieved by mimicking its geometry, (ii) designing efficient embeddings for classification is achieved by targeting specific geometries, and (iii) robustness to deviations on inputs is achieved via enforcing smooth variation of geometry between consecutive latent spaces. Using standard vision benchmarks, we demonstrate the ability of the proposed geometry-based methods to solve the considered problems.
(This article belongs to the Special Issue Efficient Graph Algorithms in Machine Learning)
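As a rough illustration of the core construction (the RBF kernel and the distillation loss below are our own assumptions, not necessarily the paper's exact choices), a latent geometry graph can be built from the latent vectors of a batch and compared across networks:

    import numpy as np

    def latent_geometry_graph(Z, sigma=1.0):
        """Similarity graph over a batch of intermediate representations.

        Z     : (batch, dim) array of latent vectors at one layer
        sigma : RBF bandwidth (an assumption; kernel choices vary)
        """
        sq = np.sum(Z**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T  # squared distances
        W = np.exp(-d2 / (2 * sigma**2))              # RBF similarities
        np.fill_diagonal(W, 0.0)                      # no self-loops
        return W

    # Distillation-style use: penalize the geometry mismatch between the
    # student's and the teacher's latent graphs on the same batch.
    def geometry_distillation_loss(Z_student, Z_teacher):
        return np.mean((latent_geometry_graph(Z_student)
                        - latent_geometry_graph(Z_teacher)) ** 2)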

22 pages, 14856 KiB  
Article
Spectrum-Adapted Polynomial Approximation for Matrix Functions with Applications in Graph Signal Processing
by Tiffany Fan, David I. Shuman, Shashanka Ubaru and Yousef Saad
Algorithms 2020, 13(11), 295; https://doi.org/10.3390/a13110295 - 13 Nov 2020
Cited by 2 | Viewed by 3543
Abstract
We propose and investigate two new methods to approximate f(A)b for large, sparse, Hermitian matrices A. Computations of this form play an important role in numerous signal processing and machine learning tasks. The main idea behind both methods is to first estimate the spectral density of A, and then find polynomials of a fixed order that better approximate the function f on areas of the spectrum with a higher density of eigenvalues. Compared to state-of-the-art methods such as the Lanczos method and truncated Chebyshev expansion, the proposed methods tend to provide more accurate approximations of f(A)b at lower polynomial orders, and for matrices A with a large number of distinct interior eigenvalues and a small spectral width. We also explore the application of these techniques to (i) fast estimation of the norms of localized graph spectral filter dictionary atoms, and (ii) fast filtering of time-vertex signals.
(This article belongs to the Special Issue Efficient Graph Algorithms in Machine Learning)
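For context, the sketch below implements the baseline the paper improves upon: a truncated Chebyshev expansion of f(A)b. It assumes the spectrum of A lies in a known interval [lam_min, lam_max] (in practice estimated, e.g., from a few Lanczos steps); the paper's spectrum-adapted method instead weights the polynomial fit by an estimated spectral density.

    import numpy as np

    def chebyshev_fAb(A, b, f, lam_min, lam_max, order=30):
        """Approximate f(A) @ b with a degree-(order-1) Chebyshev expansion.
        Assumes A is Hermitian with spectrum inside [lam_min, lam_max]
        and order >= 2. A enters only through matrix-vector products.
        """
        a = (lam_max - lam_min) / 2.0              # affine map to [-1, 1]
        c = (lam_max + lam_min) / 2.0
        # Chebyshev coefficients of f, via Chebyshev-Gauss quadrature
        theta = np.pi * (np.arange(order) + 0.5) / order
        fx = f(a * np.cos(theta) + c)
        coeffs = 2.0 / order * np.cos(np.outer(np.arange(order), theta)) @ fx
        # three-term recurrence: T_{k+1}(B)b = 2 B T_k(B)b - T_{k-1}(B)b
        Bv = lambda v: (A @ v - c * v) / a
        t_prev, t_cur = b, Bv(b)
        y = 0.5 * coeffs[0] * t_prev + coeffs[1] * t_cur
        for k in range(2, order):
            t_prev, t_cur = t_cur, 2 * Bv(t_cur) - t_prev
            y = y + coeffs[k] * t_cur
        return y

    # Example: heat-kernel filtering exp(-2 L) b on a graph Laplacian L,
    # with lam_min = 0 and lam_max <= 2 * max degree:
    # y = chebyshev_fAb(L, b, lambda x: np.exp(-2 * x), 0.0, lam_max)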

21 pages, 1513 KiB  
Article
Spikyball Sampling: Exploring Large Networks via an Inhomogeneous Filtered Diffusion
by Benjamin Ricaud, Nicolas Aspert and Volodymyr Miz
Algorithms 2020, 13(11), 275; https://doi.org/10.3390/a13110275 - 30 Oct 2020
Cited by 2 | Viewed by 2582
Abstract
Studying real-world networks such as social networks or web networks is a challenge. These networks often combine a complex, highly connected structure with a large size. We propose a new approach for large-scale networks that is able to automatically sample user-defined relevant parts of a network. Starting from a few selected places in the network and a reduced set of expansion rules, the method adopts a filtered breadth-first-search approach that expands through edges and nodes matching these properties. Moreover, the expansion is performed over a random subset of neighbors at each step to further mitigate the overwhelming number of connections that may exist in large graphs. This carries the image of a “spiky” expansion. We show that this approach generalizes and extends previous exploration sampling methods, such as Snowball or Forest Fire. We demonstrate its ability to capture groups of nodes with high interactions while discarding weakly connected nodes that are often numerous in social networks and may hide important structures.
(This article belongs to the Special Issue Efficient Graph Algorithms in Machine Learning)
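A stripped-down version of the exploration step can be sketched as follows; the property filters and the weighted neighbor selection of the actual Spikyball method are omitted here, leaving only the "random subset per hop" mechanism:

    import random
    import networkx as nx

    def spiky_expansion(G, seeds, hops=3, keep_ratio=0.3, rng=random):
        """Filtered-diffusion sampler in the spirit of Spikyball: a
        breadth-first expansion that explores only a random fraction of
        the newly discovered neighbors at each hop.
        """
        sampled = set(seeds)
        frontier = set(seeds)
        for _ in range(hops):
            neighbors = set()
            for u in frontier:
                neighbors.update(G.neighbors(u))
            new = list(neighbors - sampled)
            if not new:
                break
            k = max(1, int(keep_ratio * len(new)))
            frontier = set(rng.sample(new, k))   # the "spiky" restriction
            sampled |= frontier
        return G.subgraph(sampled).copy()

    # Example on a synthetic scale-free graph
    G = nx.barabasi_albert_graph(10_000, 5, seed=1)
    S = spiky_expansion(G, seeds=[0], hops=4, keep_ratio=0.2)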

19 pages, 1369 KiB  
Article
Online Topology Inference from Streaming Stationary Graph Signals with Partial Connectivity Information
by Rasoul Shafipour and Gonzalo Mateos
Algorithms 2020, 13(9), 228; https://doi.org/10.3390/a13090228 - 9 Sep 2020
Cited by 18 | Viewed by 2745
Abstract
We develop online graph learning algorithms from streaming network data. Our goal is to track the (possibly) time-varying network topology, and effect memory and computational savings by processing the data on-the-fly as they are acquired. The setup entails observations modeled as stationary graph signals generated by local diffusion dynamics on the unknown network. Moreover, we may have a priori information on the presence or absence of a few edges as in the link prediction problem. The stationarity assumption implies that the observations’ covariance matrix and the so-called graph shift operator (GSO—a matrix encoding the graph topology) commute under mild requirements. This motivates formulating the topology inference task as an inverse problem, whereby one searches for a sparse GSO that is structurally admissible and approximately commutes with the observations’ empirical covariance matrix. For streaming data, said covariance can be updated recursively, and we show online proximal gradient iterations can be brought to bear to efficiently track the time-varying solution of the inverse problem with quantifiable guarantees. Specifically, we derive conditions under which the GSO recovery cost is strongly convex and use this property to prove that the online algorithm converges to within a neighborhood of the optimal time-varying batch solution. Numerical tests illustrate the effectiveness of the proposed graph learning approach in adapting to streaming information and tracking changes in the sought dynamic network.
(This article belongs to the Special Issue Efficient Graph Algorithms in Machine Learning)
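A simplified batch version of the underlying inverse problem can be sketched with a projected proximal-gradient (ISTA) loop, as below. The recursive covariance updates, admissibility constraints, and tracking analysis of the paper are omitted, and the step-size bound is a crude assumption of ours:

    import numpy as np

    def soft_threshold(X, tau):
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def infer_gso(C, known_mask, known_vals, lam=0.1, iters=500):
        """Sparse GSO recovery sketch:
            minimize_S  ||S C - C S||_F^2 + lam * ||S||_1
            subject to  S agreeing with a few known edges.
        Without the known-edge constraint, S = 0 would trivially minimize
        the objective, which is why the a priori connectivity matters.

        C : symmetric empirical covariance of the observed graph signals
        """
        S = np.where(known_mask, known_vals, 0.0)
        # crude Lipschitz bound for the smooth term: 8 * ||C||_2^2
        step = 1.0 / (8 * np.linalg.norm(C, 2) ** 2)
        for _ in range(iters):
            E = S @ C - C @ S                 # commutator residual
            grad = 2 * (E @ C - C @ E)        # gradient (C symmetric)
            S = soft_threshold(S - step * grad, step * lam)
            S = np.where(known_mask, known_vals, S)  # re-impose known edges
            np.fill_diagonal(S, 0.0)          # no self-loops
        return (S + S.T) / 2                  # symmetrize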

26 pages, 2583 KiB  
Article
Fast Spectral Approximation of Structured Graphs with Applications to Graph Filtering
by Mario Coutino, Sundeep Prabhakar Chepuri, Takanori Maehara and Geert Leus
Algorithms 2020, 13(9), 214; https://doi.org/10.3390/a13090214 - 31 Aug 2020
Viewed by 3152
Abstract
To analyze and synthesize signals on networks or graphs, Fourier theory has been extended to irregular domains, leading to a so-called graph Fourier transform. Unfortunately, unlike the traditional Fourier transform, each graph exhibits a different graph Fourier transform. Therefore, to analyze the graph-frequency-domain properties of a graph signal, the graph Fourier modes and graph frequencies must be computed for the graph under study. Although finding these graph frequencies and modes requires a computationally expensive, or even prohibitive, eigendecomposition of the graph, there exist families of graphs whose properties can be exploited for an approximate fast graph spectrum computation. In this work, we aim to identify these families and to provide a divide-and-conquer approach for computing an approximate spectral decomposition of the graph. Using the same decomposition, results on reducing the complexity of graph filtering are derived. These results provide an attempt to leverage the underlying topological properties of graphs in order to devise general computational models for graph signal processing.
(This article belongs to the Special Issue Efficient Graph Algorithms in Machine Learning)
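A naive instance of the divide-and-conquer idea is sketched below: partition the nodes into weakly interconnected blocks (e.g., communities), eigendecompose the Laplacian of each block, and patch the results together while ignoring cut edges. This is only a caricature of the paper's approach, but it shows where the savings come from: the eigendecomposition cost drops from cubic in n to the sum of the block costs.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse import csgraph

    def blockwise_spectrum(A, blocks):
        """Approximate eigenpairs of the graph Laplacian by exact
        eigendecomposition of each diagonal block, ignoring cut edges.
        Reasonable only when the blocks are weakly interconnected.

        A      : symmetric sparse adjacency matrix
        blocks : list of index arrays partitioning the n nodes
        """
        L = sp.csr_matrix(csgraph.laplacian(sp.csr_matrix(A)))
        n = A.shape[0]
        lams, U = np.empty(n), np.zeros((n, n))
        for idx in blocks:
            Lb = L[idx, :][:, idx].toarray()   # block Laplacian
            w, V = np.linalg.eigh(Lb)          # exact block spectrum
            lams[idx] = w
            U[np.ix_(idx, idx)] = V            # embed block modes in R^n
        order = np.argsort(lams)
        return lams[order], U[:, order]

    # Approximate spectral filtering with a filter response h:
    # y = U @ (h(lams) * (U.T @ x))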

33 pages, 1731 KiB  
Article
Fused Gromov-Wasserstein Distance for Structured Objects
by Titouan Vayer, Laetitia Chapel, Remi Flamary, Romain Tavenard and Nicolas Courty
Algorithms 2020, 13(9), 212; https://doi.org/10.3390/a13090212 - 31 Aug 2020
Cited by 25 | Viewed by 9172
Abstract
Optimal transport theory has recently found many applications in machine learning thanks to its capacity to meaningfully compare various machine learning objects that are viewed as distributions. The Kantorovitch formulation, leading to the Wasserstein distance, focuses on the features of the elements of the objects, but treats them independently, whereas the Gromov–Wasserstein distance focuses on the relations between the elements, depicting the structure of the object, yet discarding its features. In this paper, we study the Fused Gromov–Wasserstein distance that extends the Wasserstein and Gromov–Wasserstein distances in order to encode simultaneously both the feature and structure information. We provide the mathematical framework for this distance in the continuous setting, prove its metric and interpolation properties, and provide a concentration result for the convergence of finite samples. We also illustrate and interpret its use in various applications, where structured objects are involved.
(This article belongs to the Special Issue Efficient Graph Algorithms in Machine Learning)
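The POT (Python Optimal Transport) library ships an implementation of this distance; assuming its API (which may vary slightly across versions), a minimal usage example on synthetic placeholder data reads:

    import numpy as np
    import ot  # POT: pip install pot

    rng = np.random.default_rng(0)
    n1, n2, d = 10, 15, 4
    # structure matrices (e.g., shortest-path distances) of the two graphs
    C1 = np.abs(rng.normal(size=(n1, n1))); C1 = (C1 + C1.T) / 2
    C2 = np.abs(rng.normal(size=(n2, n2))); C2 = (C2 + C2.T) / 2
    F1, F2 = rng.normal(size=(n1, d)), rng.normal(size=(n2, d))  # features
    p, q = ot.unif(n1), ot.unif(n2)          # node weight histograms

    M = ot.dist(F1, F2)                      # feature cost matrix
    # alpha trades off structure (Gromov-Wasserstein term, alpha -> 1)
    # against features (Wasserstein term, alpha -> 0)
    T, log = ot.gromov.fused_gromov_wasserstein(
        M, C1, C2, p, q, loss_fun="square_loss", alpha=0.5, log=True)
    print("FGW distance:", log["fgw_dist"])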

22 pages, 3322 KiB  
Article
Hierarchical and Unsupervised Graph Representation Learning with Loukas’s Coarsening
by Louis Béthune, Yacouba Kaloga, Pierre Borgnat, Aurélien Garivier and Amaury Habrard
Algorithms 2020, 13(9), 206; https://doi.org/10.3390/a13090206 - 21 Aug 2020
Cited by 1 | Viewed by 2970
Abstract
We propose a novel algorithm for unsupervised graph representation learning with attributed graphs. It combines three advantages that address current limitations of the literature: (i) the model is inductive: it can embed new graphs without re-training in the presence of new data; (ii) the method takes into account both micro-structures and macro-structures by looking at the attributed graphs at different scales; (iii) the model is end-to-end differentiable: it is a building block that can be plugged into deep learning pipelines and allows for back-propagation. We show that combining a coarsening method having strong theoretical guarantees with mutual information maximization suffices to produce high-quality embeddings. We evaluate them on classification tasks with common benchmarks of the literature, and show that our algorithm is competitive with the state of the art among unsupervised graph representation learning methods.
(This article belongs to the Special Issue Efficient Graph Algorithms in Machine Learning)
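For intuition about the coarsening component, the sketch below performs one level of a simpler scheme, greedy heavy-edge matching; Loukas's coarsening, used by the paper, instead chooses the contractions so that spectral properties of the original graph are provably preserved.

    import numpy as np
    import scipy.sparse as sp

    def heavy_edge_coarsen(A):
        """One coarsening level: match each node with its heaviest
        unmatched neighbor and contract the matched pairs.

        Returns (A_coarse, P), with P the (n_coarse, n) assignment matrix.
        """
        A = sp.csr_matrix(A)
        n = A.shape[0]
        match = -np.ones(n, dtype=int)
        for u in np.random.permutation(n):     # random order reduces bias
            if match[u] != -1:
                continue
            row = A.getrow(u)
            best, best_w = -1, 0.0
            for v, w in zip(row.indices, row.data):
                if v != u and match[v] == -1 and w > best_w:
                    best, best_w = v, w
            match[u] = u if best == -1 else best
            if best != -1:
                match[best] = u
        # map each matched pair (or singleton) to one coarse node id
        roots, cluster = {}, np.empty(n, dtype=int)
        for u in range(n):
            r = min(u, match[u])
            cluster[u] = roots.setdefault(r, len(roots))
        P = sp.coo_matrix((np.ones(n), (cluster, np.arange(n))),
                          shape=(len(roots), n)).tocsr()
        A_c = (P @ A @ P.T).tolil()
        A_c.setdiag(0)                          # drop contraction self-loops
        return A_c.tocsr(), P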
