VAE-Sim: A Novel Molecular Similarity Measure Based on a Variational Autoencoder

Molecular similarity is an elusive but core "unsupervised" cheminformatics concept, yet different "fingerprint" encodings of molecular structures return very different similarity values, even when using the same similarity metric. Each encoding may be of value when applied to other problems with objective or target functions, implying that a priori none is "better" than the others, nor than encoding-free metrics such as maximum common substructure (MCSS). We here introduce a novel approach to molecular similarity, in the form of a variational autoencoder (VAE). This learns the joint distribution p(x,z), where z is a latent vector and x are the (same) input/output data. It takes the form of a "bowtie"-shaped artificial neural network. In the middle is a "bottleneck" or latent layer, in which inputs are transformed into, and represented as, a vector of numbers (encoding), with a reverse process (decoding) seeking to return the SMILES string that was the input. We train a VAE on over six million druglike molecules and natural products (including over one million in the final holdout set). Distances between the VAE latent vectors provide a novel metric for molecular similarity that is easily and rapidly calculated. We describe the method and its application to a typical similarity problem in cheminformatics.


Introduction
The concept of molecular similarity lies at the core of cheminformatics [1][2][3]. It implies that molecules of "similar" structure tend to have similar properties. Thus, a typical question can be formulated as follows: "given a molecule of interest M, possibly showing some kind of chemical activity, find me the nearest 50 molecules from a potentially huge online collection to purchase that are most similar to M so I can assess their behaviour in a relevant quantitative-structure-activity (QSAR) analysis".
The most common strategies for assessing molecular similarity involve encoding the molecule as a vector of numbers, such that the vectors encoding two molecules may be compared according to their Euclidean or other distance. In the case of binary strings, the Jaccard or Tanimoto similarity (TS) is commonly used [4] as a metric (between zero and one). One means for obtaining such a vector for a molecule is to calculate from the structure (or measure) various properties of the molecule ("descriptors" [5][6][7]), such as clogP or total polar surface area, and then to concatenate them. However, a more common strategy for obtaining the encoding vector of numbers is simply to use structural features directly and to encode them as so-called molecular fingerprints [8][9][10][11][12][13][14][15][16][17]. Well-known examples include MACCS [18], atom pairs [19], torsion [20], extended connectivity [21], functional class [22], circular [23], and so on. The similarities so encoded can then also be compared as their Jaccard or Tanimoto similarities. Sometimes a "difference" or "distance" is discussed, formulated as 1-TS (a true metric). An excellent and widely used framework for performing all of this is RDKit (Python v3.6.8) (www.rdkit.org/) [24], which presently contains nine methods for producing molecular fingerprints. The problem comes from the fact that the "most similar" molecules to a target molecule often differ wildly, both as judged by their structures observable by eye and quantitatively in terms of the value of the TS of the different fingerprints [25]. As a very small and simple dataset, we take the set of molecules observed by Dickens and colleagues [26] to inhibit the transporter-mediated uptake of the second-generation atypical antipsychotic drug clozapine. These are olanzapine, chlorpromazine, quetiapine, prazosin, lamotrigine, indatraline, verapamil and rhein.
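In practice one would compute fingerprints and their Tanimoto similarities with RDKit itself; the metric, however, is simple enough to state directly. A minimal pure-Python sketch (the bit patterns below are made up for illustration, not real MACCS keys):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two binary fingerprints,
    given as sets of the indices of the "on" bits."""
    a, b = set(fp_a), set(fp_b)
    if not a and not b:
        return 1.0  # convention: two empty fingerprints are identical
    return len(a & b) / len(a | b)

# Illustrative (made-up) bit patterns, not real fingerprint keys:
fp1 = {3, 17, 42, 108, 166}
fp2 = {3, 17, 55, 108}
print(round(tanimoto(fp1, fp2), 3))  # 3 shared bits / 6 distinct bits = 0.5
```

The corresponding distance 1-TS is then the "true metric" mentioned above.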
We assessed the top 50 FDA-approved drugs in terms of their structural similarity to clozapine using the nine RDKit encodings, with the results shown in Table 1 and Figure 1 (see also Supplementary Figure S1). Only the first four of these inhibitors even appear within the top 50 for any encoding, and only olanzapine appears for each of them. By contrast, the most potent inhibitor is prazosin (which is not even wholly of the same drug class, being both a treatment for anxiety and a high-blood-pressure-lowering agent); however, it appears in the top 50 for only one encoding (torsion), and then with a Tanimoto similarity of just 0.37. That said, visual inspection of their Kekulised structures does show a substantial common substructure between prazosin and clozapine (marked in Figure 1). It is clear that the similarities, as judged by standard fingerprint encodings, are highly variable, and are prone to both false negatives and false positives when it comes to attacking the question as set down above. What we need is a different kind of strategy.
Table 1. Tanimoto similarity to clozapine using nine different RDKit encodings and their ability to inhibit clozapine transport (data extracted from [26]). A shaded cell means that the molecule was not judged to be in the "top 50" using that encoding.

The typical structure of a QSAR type of problem is provided in Figure 2A, where a series of molecules represented as SMILES strings [27] are encoded as molecular fingerprints and used to learn a nonlinear mapping to produce an output in the form of a classification or regression estimation. The architecture of this is implicitly in the form of a multilayer perceptron (a classical neural network [28][29][30][31]), in which weights are modified ("trained") to provide a mapping from input SMILES to a numerical output. Our fundamental problem stems from the fact that these types of encoding are one-way: the SMILES string can generate the molecular fingerprint, but the molecular fingerprint cannot generate the SMILES. Put another way, it is the transition from a world of discrete objects (here molecules) with categorical representations (here SMILES strings) to one of continuous representations (vectors of weights) that is seemingly irreversible in this representation.
One key element is the means by which we can go from a non-numerical representation (such as SMILES or similar [32]) to a numerical representation or "embedding" (this, as we shall see, is typically constituted by vectors of numbers in the nodes and weights of multilayer neural networks) [33][34][35][36][37][38]. Deep learning has also been used for the encoding step of 2D chemical structures [39,40].
VAEs are latent-variable generative models that define a joint density p_θ(x,z) between some observed data x ∈ ℝ^d_x and unobserved or latent variables z ∈ ℝ^d_z [77], given some model parameters θ. They use a variational posterior (also referred to as an encoder), q_ϕ(z|x), with variational parameters ϕ, to construct the latent variables, and a combination of p(z) and p(x|z) to create a decoder that has the opposite effect. Learning the posterior directly is computationally intractable, so the generic deep learning strategy is to train a neural network to approximate it. The original "error" backpropagated was based on the Kullback-Leibler (KL) divergence between the desired and the predicted output distributions, together with a log-likelihood reconstruction error [67]. A very great many variants of both architectures and divergence metrics have been proposed since then (not all discernibly better [78]), and it is a very active field (e.g., [63,64,[79][80][81][82][83]]). Since tuning is necessarily domain-specific [84], and most work is in the processing of images and natural languages rather than of molecules, we merely mention a couple, such as transformers (e.g., [85,86]) and others (e.g., [87,88]). Crucial to such autoencoders (which can also be used for data visualisation [89]) is the concept of a bottleneck layer, which, as a series of nodes of lower dimensionality than its predecessors or successors, serves to extract or represent [56] the crucial features of the input molecules that are nonetheless sufficient to admit their reconstruction. Indeed, such strategies are sometimes referred to as representational learning.
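The KL term of the usual VAE objective has a closed form when, as is common, the posterior is taken to be a diagonal Gaussian and the prior a standard normal. A minimal sketch of that term (the function name is our own; this is illustrative, not the paper's implementation):

```python
import math

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL divergence KL(q(z|x) || p(z)) for a diagonal-Gaussian
    posterior q = N(mu, diag(exp(logvar))) against a standard-normal prior.
    This is the regularisation term of the usual VAE objective (ELBO);
    the full loss adds a reconstruction (log-likelihood) term."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

# When the posterior equals the prior (mu = 0, sigma = 1), the KL term vanishes:
print(kl_to_standard_normal([0.0, 0.0], [0.0, 0.0]))  # 0.0
```

The term penalises latent codes that drift away from the prior, which is what keeps the latent space smooth enough to be useful for similarity and generation.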
A higher-level version of the above might state that a good variational autoencoder will project a set of discrete molecules into a continuous latent space, in which any given molecule is represented by the vector of output values of the nodes in the bottleneck layer when it (or its SMILES representation) is applied to the encoder as an input. As with the commonest neural net training systems (but cf. [90][91][92][93][94]), we use backpropagation to update the network so as to minimise the difference between the predicted and the desired output, subject to any other constraints that we may apply. We also recognise the importance of various forms of regularisation, all of which are designed to prevent overfitting [49,[95][96][97][98]].
Because the outputs of the nodes in the bottleneck layer both (i) encode the molecule of interest and (ii) effectively represent where molecules lie in the chemical space on which the network has been trained, a simple metric of the similarity between two molecules is clearly the Euclidean or other comparable distance (e.g., cosine distance) between these vectors. This provides a novel type of similarity encoding, one that in a sense reflects the whole chemical space on which the system has been trained and that we suspect may be of general utility. We might refer to this encoding as the "essence of molecules" (EM) encoding, but here, we refer to it as VAE-Sim.
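Both distances are elementary to compute over latent vectors. A self-contained sketch (the 4-dimensional vectors below are made up, standing in for the 100-dimensional bottleneck outputs of two molecules):

```python
import math

def euclidean_distance(u, v):
    """Euclidean distance between two latent vectors of equal length."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_distance(u, v):
    """1 - cosine similarity; zero for parallel (non-zero) vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Toy "latent vectors" (values are made up for illustration):
z1 = [0.2, -1.1, 0.7, 0.0]
z2 = [0.3, -0.9, 0.5, 0.1]
print(round(euclidean_distance(z1, z2), 3))
```

Euclidean distance is a true metric on the latent space, so nearest-neighbour queries against a large collection reduce to a single pass over precomputed vectors.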

Results
Autoencoders that use SMILES as inputs can return three kinds of outputs: (i) the correct SMILES output, mirroring the input and/or translating into the input molecular structure (referred to as "perfect"); (ii) an incorrect output encoding a molecule different from the input but that is still legal SMILES (and hence will return a valid molecule), referred to as "good"; and (iii) an output that is simply not legal SMILES. In practice, our VAE after training returned more than 95% valid SMILES on the test (holdout) set, so those outputs that were invalid could simply be filtered out without significant loss of performance.
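The filtering step is straightforward: decode, test validity, keep the valid fraction. In practice one would use a real parser such as RDKit's Chem.MolFromSmiles as the validity test; the stand-in check below only verifies balanced parentheses and brackets, and is deliberately crude and illustrative:

```python
def crude_smiles_check(s):
    """Very rough stand-in for a real SMILES validity check (in practice one
    would use e.g. RDKit's Chem.MolFromSmiles). Here we only verify that
    parentheses and square brackets balance, so this is illustrative only."""
    depth = {'(': 0, '[': 0}
    pairs = {')': '(', ']': '['}
    for ch in s:
        if ch in depth:
            depth[ch] += 1
        elif ch in pairs:
            depth[pairs[ch]] -= 1
            if depth[pairs[ch]] < 0:
                return False  # closing bracket with no opener
    return all(d == 0 for d in depth.values())

# Hypothetical decoder outputs; the second has a truncated branch:
decoded = ["CC(=O)Oc1ccccc1C(=O)O", "CC(=O)Oc1ccccc1C(=O", "c1ccccc1"]
valid = [s for s in decoded if crude_smiles_check(s)]
print(len(valid), "of", len(decoded), "outputs kept")
```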
Specifically, we partitioned the dataset, using 50% for training. Following training, each molecule (SMILES) could be associated with a normalised vector of 100 dimensions, and the Euclidean distance between any two such vectors could be calculated.
As previously described [25], we compared the similarities between all drugs and all metabolites using the datasets made available in [99]. We here focus on just the MACCS and Patterned encodings of RDKit, and compare them with the normalised Euclidean distances according to the latent vector obtained from the VAE. As before, we rank-ordered each drug in terms of its closest similarity to any metabolite. First, Figure 3A (reading from right to left) shows the Tanimoto similarities for the Patterned and MACCS fingerprints, as well as the VAE-Sim values as judged by two metrics. The first, labelled E-Sim (Equation (1)), is the Euclidean similarity based on the raw 100-dimensional hidden vectors, while the second, EU-Sim (Equation (2)), used the Uniform Manifold Approximation and Projection (UMAP) dimension reduction algorithm [106,107]; the first two UMAP dimensions were used for purposes of visualisation. Clearly, as with other encodings, the measures do not at all follow the same course, and the rank ordering may be modified according to the similarity measure used. Figure 3B,C show the "all-vs.-all" heatmaps for two of the encodings, indicating again that the VAE-Sim encoding falls away considerably more quickly, i.e., that similarities are judged in a certain sense more "locally". Figure 4A shows the Patterned similarity for the "most similar" metabolite for each drug (using TS) compared to that for VAE-Sim (using Euclidean distance), while Figure 4B shows the same for the MACCS encoding. These, again, illustrate how the new encoding provides a quite different readout from the standard fingerprint encodings.
Finally, we used our new metrics to determine the similarity to clozapine of other drugs. Figure 5 shows the two similarity scores based on VAE-Sim, calculated as in Figure 3. Gratifyingly, and while structural similarity is, in part, in the eye of the beholder, a variety of structurally and functionally related antipsychotic drugs such as loxapine, mirtazapine and quetiapine were indeed among the most similar to clozapine, while others not previously considered (such as the antihistamines ketotifen and alcaftadine, and the anti-inflammatory COX inhibitor ketorolac) were also suggested as being similar, providing support for the orthogonal utility of the new VAE-Sim metric. However, the rather promiscuous nature of clozapine binding (e.g., [108,109]), along with that of many of the other drugs (e.g., [110][111][112][113][114][115][116]), means that this is not the place to delve deeper.
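The rank-ordering of drugs by their nearest metabolite can be sketched as follows. We do not reproduce the exact E-Sim formula (Equation (1)) here, so the sketch ranks by raw Euclidean distance; the names and 2-D vectors are made up purely for illustration:

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def rank_drugs_by_nearest_metabolite(drug_vecs, metab_vecs):
    """For each drug, find its closest metabolite in latent space, then
    rank-order the drugs by that closest distance (smallest first).
    drug_vecs / metab_vecs: dicts mapping names to latent vectors."""
    nearest = {}
    for drug, dv in drug_vecs.items():
        name, dist = min(((m, euclidean(dv, mv)) for m, mv in metab_vecs.items()),
                         key=lambda t: t[1])
        nearest[drug] = (name, dist)
    return sorted(nearest.items(), key=lambda t: t[1][1])

# Made-up 2-D latent vectors purely for illustration:
drugs = {"drugA": [0.0, 0.0], "drugB": [5.0, 5.0]}
metabs = {"metX": [0.1, 0.0], "metY": [4.0, 4.0]}
for drug, (met, d) in rank_drugs_by_nearest_metabolite(drugs, metabs):
    print(drug, "->", met, round(d, 3))
```

An "all-vs.-all" heatmap, as in Figure 3B,C, is simply the full drug-by-metabolite distance matrix rather than only each row's minimum.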
Figure 3. Some of the "most similar" drugs are labelled, as are some of those in Table 1. (B) Structures of some of the drugs mentioned, together with their Euclidean distances as judged by VAE-Sim.

Methods
We considered and tested grammar-based and junction-tree methods such as those used by Kajino [35], which exploited some of the ideas developed by Dai [117], Kusner [118] and by Jin and their colleagues [34]. However, our preferred method as described here used one-hot encoding as set out by Gómez-Bombarelli and colleagues [74]. We varied the number of molecules in the training process from ca. 250,000 to over 6 million; the large number of possible hyperparameters would have led to a combinatorial explosion, so exhaustive search was (and is) not possible. The final architecture used here (shown in Figure 2C) required 6 days' training on a single-GPU machine. It involved a convolutional neural network (CNN) encoder; neither dropout nor pooling was used. The optimiser was ADAM [119], the learning rate was fixed at 0.0001, parameters were initialised using the "Xavier uniform" scheme [120], and the batch size was 128. This was implemented in Python (v3.8.5) using the PyTorch library (https://pytorch.org/). Most of the pre- and post-processing cheminformatics workflows were written in the KNIME environment (see [121]).
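The one-hot encoding step itself is simple: each SMILES character becomes a one-hot row over a fixed alphabet, padded to a fixed maximum length. A minimal sketch (the tiny alphabet and pad symbol are our own assumptions; a real model would use the full SMILES character set and length seen in training):

```python
def one_hot_smiles(smiles, alphabet, max_len):
    """One-hot encode a SMILES string over a fixed character alphabet,
    right-padding with a dedicated pad symbol up to max_len.
    Returns a max_len x (len(alphabet) + 1) list of 0/1 rows."""
    pad = " "  # assumed padding symbol, appended to the alphabet
    chars = alphabet + [pad]
    index = {c: i for i, c in enumerate(chars)}
    padded = smiles.ljust(max_len, pad)[:max_len]
    matrix = []
    for ch in padded:
        row = [0] * len(chars)
        row[index[ch]] = 1  # raises KeyError for characters outside the alphabet
        matrix.append(row)
    return matrix

# Tiny illustrative alphabet; a real model would use the full SMILES charset:
alpha = list("CNO()=c1")
m = one_hot_smiles("C=O", alpha, 5)
print(len(m), len(m[0]))  # 5 rows, 9 columns (8 symbols + pad)
```

The resulting max_len-by-alphabet matrix is what a CNN encoder of the kind described above would consume as its input.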

Discussion
Molecular similarity is at the core of much of cheminformatics (e.g., [3,8,[122][123][124][125]]), but is an elusive concept. Our chief interest typically lies in supervised methods such as QSARs, where we use knowledge of paired structures and activities to form a model that allows us to select new structures with potentially desirable activities. Modern modelling methods such as feedforward artificial neural networks based on multilayer perceptrons are very powerful (and they can in fact fit any nonlinear function, the principle of "universal approximation" [126,127]). Under these circumstances, it is usually possible to learn a QSAR using any of the standard fingerprints. However, what we are focused on here is a purely unsupervised representation of the structures themselves (cf. [37], which used substructures), and the question of which of these are the "most similar" to a query molecule of interest. Such unsupervised methods may be taken to include any kinds of unsupervised clustering too (e.g., [128][129][130][131][132]). As with any kind of system of this type, the "closeness" is a function of the weighting of any individual features, and it is perhaps not surprising that the different fingerprint methods give vastly different similarities, even when judged by rank order (e.g., [25] and above). One similarity measure that is independent of any fingerprint encoding is represented by the maximum common substructure (MCSS). However, by definition, the MCSS uses only part of a molecule; it is also computationally demanding [101,102], such that "all-against-all" comparisons such as those presented here are out of the question for large numbers of molecules.
Here, we leveraged a new method that uses only the canonical SMILES encoding of the molecules themselves, leading to its representation as a 100-element vector. Simple Euclidean distances can be used to obtain a metric of similarity that, unlike MCSS, is rapidly calculated for any new molecule, even against the entire set of molecules used in the development of the latent space.
In addition, unlike any of the other methods described, methods such as VAEs are generative: moving around in the latent space and applying the vector so created to the decoder allows for the generation of entirely new molecules (e.g., [41-45,48,50,63,65,73,74,133]). This opens up a considerable area of chemical exploration, even in the absence of any knowledge of bioactivities.

What Determines the Extent to Which VAEs Can Generate Novel Examples?
The ability of variational autoencoders to generalise is considered to be based on learning a certain "neighbourhood" around each of the training examples [77,134], seen as a manifold of lower dimensionality than the dimensionality of the input space [56]. Put another way, "the reconstruction obtained from an optimal decoder of a VAE is a convex combination of examples in the training data" [135]. On this basis, an effect of training set size on the improvement of generalisation (here defined simply as being able to return an accurate answer for a molecule not in the training set) is to be expected, and our ability to generalise (as judged by test set error) improved as the number of molecules increased up to a few million. However, although we did not explore this, it is possible that our default architecture was simply too large for the smaller numbers of molecules, as excessive "capacity" can cause a loss of generalisation ability [135]. This, of course, leaves open the details of the size and "closeness" of that neighbourhood, how it varies with the encoding used (our original problem) and what features are used in practice to determine that neighbourhood. The network described here took nearly a week to train on a well-equipped GPU-based machine, and exhaustive analysis of hyperparameters was not possible. Consequently, and because the importance of local density will vary as a function of the position and nature of the relevant chemical space, we are not going to pursue these questions here. What is important is (i) that we could indeed learn to navigate these chemical spaces, and (ii) that the VAE approach permits a straightforward and novel estimation of molecular similarity.
Supplementary Materials: The following are available online at http://www.mdpi.com/1420-3049/25/15/3446/s1, Figure S1: Molecules similar to clozapine as judged by molecular fingerprint encodings.

Funding: Present funding includes part of the EPSRC project SuSCoRD (EP/S004963/1), partly sponsored by AkzoNobel. DBK is also funded by the Novo Nordisk Foundation (grant NNF10CC1016517). Tim Roberts is funded by the NIHR UCLH Biomedical Research Centre.

Conflicts of Interest:
The authors declare that they have no conflicts of interest.