Preprint
Brief Report

On Addressing the Limitations of Graph Neural Networks


This version is not peer-reviewed

Submitted: 02 July 2023
Posted: 04 July 2023

Abstract
This report gives a comprehensive summary of two problems of graph convolutional networks (GCNs), over-smoothing and heterophily, and outlines future directions to explore.
Keywords: graph neural networks; over-smoothing; heterophily
Subject: Computer Science and Mathematics - Artificial Intelligence and Machine Learning

1. Introduction

Many real-world problems can be modeled as graphs. Recently, neural network based approaches have achieved significant progress on large, complex, graph-structured problems [9,12,17,21,24,34,50]. Inspired by the success of Convolutional Neural Networks (CNNs) [31] in computer vision [33], graph convolution defined in the graph Fourier domain stands out as the key operator and one of the most powerful tools for applying machine learning to graph problems. Despite their high expressive power, GCNs still suffer from several difficulties: for example, the over-smoothing problem prevents deep GCNs from sufficiently exploiting multi-scale information, and the heterophily problem makes graph-aware models underperform their graph-agnostic counterparts. This report summarizes the methods we have proposed to address these challenges and puts forward some research problems we will investigate.
To fully explain the above problems, Section 1.1 first introduces the notation and background knowledge of graph networks. In Section 2, we discuss the loss of expressive power of deep graph neural networks (GNNs) and propose the snowball and truncated Krylov architectures to address it; in Section 3, we analyze the heterophily problem for existing GNNs and propose the Adaptive Channel Mixing (ACM) architecture to address it.

Main Contribution

In Section 2, we first point out that the output of a deep GCN with ReLU activation suffers from a loss-of-rank problem under certain conditions, which causes deep GCNs to lose expressive power. We then prove that Tanh is better at preserving the rank of the output and verify this claim with numerical tests. Next, we show how to deepen GCNs in block Krylov form and propose the snowball and truncated Krylov networks, which perform better than state-of-the-art (SOTA) models on semi-supervised node classification tasks on 3 benchmark datasets. In addition, in Section 2.2 we point out that a weight initialization scheme specifically tailored to GCNs is a promising direction for addressing over-smoothing efficiently. In Section 3, we first illustrate the insufficiency of the current homophily metrics and propose aggregation homophily, based on a new similarity matrix. We then show the advantage of the new homophily metric over the existing ones on synthetic graphs. Based on the similarity matrix, we define the diversification distinguishability of a node and demonstrate why high-pass filters can help to address the heterophily problem. To include both low-pass and high-pass filters in GNNs, we extend the filterbank method and propose the ACM and ACMII frameworks, which boost the performance of baseline GNNs on heterophilous graphs.

1.1. Notation and Background Knowledge

Suppose we have an undirected graph $\mathcal{G}=(\mathcal{V},\mathcal{E},A)$, where $\mathcal{V}$ is the node set with $|\mathcal{V}|=N$; $\mathcal{E}$ is the edge set without self-loops; $A\in\mathbb{R}^{N\times N}$ is the symmetric adjacency matrix with $A_{ij}=1$ if and only if $e_{ij}\in\mathcal{E}$ and $A_{ij}=0$ otherwise; $D$ is the diagonal degree matrix, i.e. $D_{ii}=\sum_j A_{ij}$, and $\mathcal{N}_i=\{j: e_{ij}\in\mathcal{E}\}$ is the neighborhood set of node $i$. A graph signal is a vector $\mathbf{x}\in\mathbb{R}^N$ defined on $\mathcal{V}$, where $x_i$ is defined on node $i$. We also have a feature matrix $X\in\mathbb{R}^{N\times F}$ whose columns are graph signals; each node $i$ has a corresponding feature vector $X_{i:}$ of dimension $F$, which is the $i$-th row of $X$. We denote by $Z\in\mathbb{R}^{N\times C}$ the label encoding matrix, where $Z_{i:}$ is the one-hot encoding of the label of node $i$ and $C$ is the total number of classes.
The (combinatorial) graph Laplacian is defined as $L = D - A$, which is a symmetric positive semi-definite (SPSD) matrix [7]. Its eigendecomposition gives $L = U\Lambda U^T$, where the columns of $U\in\mathbb{R}^{N\times N}$ are orthonormal eigenvectors, namely the graph Fourier basis, $\Lambda=\mathrm{diag}(\lambda_1,\dots,\lambda_N)$ with $\lambda_1\le\cdots\le\lambda_N$, and these eigenvalues are also called frequencies. The graph Fourier transform of the graph signal $\mathbf{x}$ is defined as $\mathbf{x}_{\mathcal{F}} = U^{-1}\mathbf{x} = U^T\mathbf{x} = [\mathbf{u}_1^T\mathbf{x},\dots,\mathbf{u}_N^T\mathbf{x}]^T$, where $\mathbf{u}_i^T\mathbf{x}$ is the component of $\mathbf{x}$ in the direction of $\mathbf{u}_i$.
Some variants of the graph Laplacian are commonly used, e.g. the symmetric normalized Laplacian $L_{\mathrm{sym}} = D^{-1/2} L D^{-1/2} = I - D^{-1/2} A D^{-1/2}$ and the random walk normalized Laplacian $L_{\mathrm{rw}} = D^{-1}L = I - D^{-1}A$. The eigenvalues of $L_{\mathrm{rw}}$ and $L_{\mathrm{sym}}$ are the same and lie in $[0,2)$, and their corresponding eigenvectors satisfy $\mathbf{u}_{\mathrm{rw}}^i = D^{-1/2}\mathbf{u}_{\mathrm{sym}}^i$.
The affinity (transition) matrices can be derived from the Laplacians, e.g. $A_{\mathrm{rw}} = I - L_{\mathrm{rw}} = D^{-1}A$ and $A_{\mathrm{sym}} = I - L_{\mathrm{sym}} = D^{-1/2} A D^{-1/2}$. Then $\lambda_i(A_{\mathrm{rw}}) = \lambda_i(A_{\mathrm{sym}}) = 1 - \lambda_i(L_{\mathrm{sym}}) = 1 - \lambda_i(L_{\mathrm{rw}}) \in (-1,1]$. Renormalized affinity and Laplacian matrices are introduced in [24] as $\hat{A}_{\mathrm{sym}} = \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}$ and $\hat{L}_{\mathrm{sym}} = I - \hat{A}_{\mathrm{sym}}$, where $\tilde{A} \equiv A + I$ and $\tilde{D} \equiv D + I$; this essentially adds a self-loop to each node and is widely used in the Graph Convolutional Network (GCN) as follows:

$$Y = \mathrm{softmax}\big(\hat{A}_{\mathrm{sym}}\,\mathrm{ReLU}(\hat{A}_{\mathrm{sym}} X W_0)\, W_1\big) \qquad (1)$$
where $W_0\in\mathbb{R}^{F\times F_1}$ and $W_1\in\mathbb{R}^{F_1\times O}$ are parameter matrices. GCN can learn by minimizing the following cross-entropy loss

$$\mathcal{L} = -\mathrm{trace}(Z^T \log Y).$$
The random walk renormalized matrix $\hat{A}_{\mathrm{rw}} = \tilde{D}^{-1}\tilde{A}$ can also be applied in GCN and has the same eigenvalues as $\hat{A}_{\mathrm{sym}}$. The corresponding Laplacian is defined as $\hat{L}_{\mathrm{rw}} = I - \hat{A}_{\mathrm{rw}}$. Specifically, the nature of the random walk matrix makes $\hat{A}_{\mathrm{rw}}$ behave as a mean aggregator, $(\hat{A}_{\mathrm{rw}}\mathbf{x})_i = \sum_{j\in\{\mathcal{N}_i \cup i\}} x_j/(D_{ii}+1)$, which is applied in [17] and is important for bridging the gap between spatial- and spectral-based graph convolution methods.
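As a concrete illustration of Eq. (1), the following minimal NumPy sketch builds $\hat{A}_{\mathrm{sym}}$ from a toy adjacency matrix and runs the two-layer GCN forward pass. The helper names and the random data are our own illustrative choices, not part of any specific library.

```python
import numpy as np

def renormalized_adjacency(A):
    """A_hat_sym = D_tilde^{-1/2} (A + I) D_tilde^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])
    d_tilde = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d_tilde))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def softmax(H):
    H = H - H.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(H)
    return e / e.sum(axis=1, keepdims=True)

def gcn_forward(A, X, W0, W1):
    """Two-layer GCN of Eq. (1): softmax(A_hat ReLU(A_hat X W0) W1)."""
    A_hat = renormalized_adjacency(A)
    H = np.maximum(A_hat @ X @ W0, 0.0)     # ReLU hidden layer
    return softmax(A_hat @ H @ W1)          # per-node class probabilities

# toy example: 4 nodes on a path graph, 3 features, 2 classes
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
Y = gcn_forward(A, X, rng.normal(size=(3, 8)), rng.normal(size=(8, 2)))
```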

2. Loss of Expressive Power of Deep Graph Neural Networks

One major problem of the existing GCNs is the low expressive power limited by their shallow learning mechanisms [61,66]. There are mainly two reasons why an architecture that is scalable in depth has not been achieved yet. First, this problem is difficult: considering graph convolution as a special form of Laplacian smoothing [32], networks with multiple convolutional layers will suffer from an over-smoothing problem that makes the representation of even distant nodes indistinguishable [66]. Second, some people think it is unnecessary: for example, [4] states that it is not necessary for the label information to totally traverse the entire graph and one can operate on the multi-scale coarsened input graph and obtain the same flow of information as GCNs with more layers. Acknowledging the difficulty, we hold on to the objective of deepening GCNs since the desired compositionality will yield easy articulation and consistent performance for problems with different scales.
In Section 2.1, we first analyze the limits of deep GCNs brought by over-smoothing and the activation functions. Then, we show that any graph convolution with a well-defined analytic spectral filter can be written as a product of a block Krylov matrix and a learnable parameter matrix in a special form. Based on this, we propose two GCN architectures that leverage multi-scale information in different ways and are scalable in depth, with stronger expressive powers and abilities to extract richer representations of graph-structured data. For empirical validation, we test different instances of the proposed architectures on multiple node classification tasks. The results show that even the simplest instance of the architectures achieves state-of-the-art performance, and the complex ones achieve surprisingly higher performance. In Section 2.2, we propose to study an over-smoothing problem and give some ideas.

2.1. A Stronger Multi-Scale Deep GNN with Truncated Krylov Architecture

Suppose we deepen GCN in the same way as [24,32]; we then have

$$Y = \mathrm{softmax}\Big(\hat{A}_{\mathrm{sym}}\,\mathrm{ReLU}\big(\cdots \hat{A}_{\mathrm{sym}}\,\mathrm{ReLU}\big(\hat{A}_{\mathrm{sym}}\,\mathrm{ReLU}(\hat{A}_{\mathrm{sym}} X W_0)\, W_1\big) W_2 \cdots\big) W_n\Big) \triangleq \mathrm{softmax}(Y') \qquad (3)$$
For this architecture, without considering the ReLU activation function, [32] shows that $Y$ will converge to a space spanned by the eigenvectors of $\hat{A}_{\mathrm{sym}}$ with eigenvalue 1. Taking the activation function into consideration, our analyses of (3) can be summarized in the following theorems (see proofs in the appendix of [44]).
Theorem 1. 
Suppose that $\mathcal{G}$ has $k$ connected components. Let $X\in\mathbb{R}^{N\times F}$ be any feature matrix and let $W_j$ be any non-negative parameter matrix with $\|W_j\|_2 \le 1$ for $j=0,1,\dots$. If $\mathcal{G}$ has no bipartite components, then in (3), as $n\to\infty$, $\mathrm{rank}(Y') \le k$.
Theorem 2. 
Suppose the $n$-dimensional vectors $\mathbf{x}$ and $\mathbf{y}$ are independently sampled from a continuous distribution and the activation function $\mathrm{Tanh}(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}$ is applied to $[\mathbf{x},\mathbf{y}]$ pointwise; then

$$P\big(\mathrm{rank}(\mathrm{Tanh}([\mathbf{x},\mathbf{y}])) = \mathrm{rank}([\mathbf{x},\mathbf{y}])\big) = 1$$
Theorem 1 shows that, even with ReLU, if we simply deepen GCN as in (3), the extracted features will degrade under certain conditions, i.e. $Y'$ only contains the stationary information of the graph structure and loses all the local information in the nodes as a result of being smoothed. In addition, from the proof we see that the pointwise ReLU transformation is a conspirator. Theorem 2 tells us that Tanh is better at keeping linear independence among column features. We design a numerical experiment on synthetic data to test, under a 100-layer GCN architecture, how activation functions affect the rank of the output in each hidden layer during the feedforward process. As Figure 1a shows, the rank of the hidden features decreases rapidly with ReLU, while fluctuating little under Tanh; even the identity function preserves rank better than ReLU. We therefore propose to replace ReLU with Tanh.
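The rank experiment behind Figure 1a can be reproduced in spirit with a short script such as the sketch below: propagate a random feature matrix through a deep stack of smoothing layers and record the numerical rank of the hidden features under different activations. The graph size, feature width and rank tolerance are illustrative assumptions, not the exact settings of the original experiment.

```python
import numpy as np

def hidden_ranks(A_hat, X, activation, n_layers=100):
    """Track rank of hidden features through n_layers smoothing layers."""
    ranks, H = [], X
    for _ in range(n_layers):
        W = np.random.randn(H.shape[1], H.shape[1]) / np.sqrt(H.shape[1])
        H = activation(A_hat @ H @ W)
        ranks.append(np.linalg.matrix_rank(H, tol=1e-6))
    return ranks

# usage (A_hat and X built as in the previous snippet):
# ranks_relu = hidden_ranks(A_hat, X, lambda z: np.maximum(z, 0.0))
# ranks_tanh = hidden_ranks(A_hat, X, np.tanh)
# ranks_id   = hidden_ranks(A_hat, X, lambda z: z)
```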
Besides the activation function, to find a way to deepen GCN, we first show that any graph convolution with a well-defined analytic spectral filter defined on $\hat{A}_{\mathrm{sym}}\in\mathbb{R}^{N\times N}$ can be written as a product of a block Krylov matrix with a learnable parameter matrix in a specific form. Based on this, we propose the snowball network and the truncated Krylov network.
We take $\mathbb{S} = \mathbb{R}^{F\times F}$. Given a set of block vectors $\{X_k\}_{k=1}^m \subset \mathbb{R}^{N\times F}$, the $\mathbb{S}$-span of $\{X_k\}_{k=1}^m$ is defined as $\mathrm{span}^{\mathbb{S}}\{X_1,\dots,X_m\} = \{\sum_{k=1}^m X_k C_k : C_k \in \mathbb{S}\}$. Then, the order-$m$ block Krylov subspace with respect to the matrix $A\in\mathbb{R}^{N\times N}$, the block vector $B\in\mathbb{R}^{N\times F}$ and the vector space $\mathbb{S}$, and its corresponding block Krylov matrix, are respectively defined as

$$\mathcal{K}_m^{\mathbb{S}}(A,B) \triangleq \mathrm{span}^{\mathbb{S}}\{B, AB, \dots, A^{m-1}B\}, \qquad K_m(A,B) \triangleq [B, AB, \dots, A^{m-1}B] \in \mathbb{R}^{N\times mF}.$$
It is shown in [11,15] that there exists a smallest $m$ such that for any $k\ge m$, $A^k B \in \mathcal{K}_m^{\mathbb{S}}(A,B)$, where $m$ depends on $A$ and $B$.
Let $\rho(\hat{A}_{\mathrm{sym}})$ denote the spectral radius of $\hat{A}_{\mathrm{sym}}$ and suppose $\rho(\hat{A}_{\mathrm{sym}}) < R$, where $R$ is the radius of convergence of a real analytic scalar function $g$. Based on the above definitions and conclusions, the graph convolution can be written as

$$g(\hat{A}_{\mathrm{sym}})X = \sum_{n=0}^{\infty} \frac{g^{(n)}(0)}{n!}\hat{A}_{\mathrm{sym}}^n X = \big[X, \hat{A}_{\mathrm{sym}}X, \dots, \hat{A}_{\mathrm{sym}}^{m-1}X\big]\big[(\Gamma_0^{\mathbb{S}})^T, (\Gamma_1^{\mathbb{S}})^T, \dots, (\Gamma_{m-1}^{\mathbb{S}})^T\big]^T \triangleq K_m(\hat{A}_{\mathrm{sym}}, X)\,\Gamma^{\mathbb{S}}$$
where $\Gamma_i^{\mathbb{S}}\in\mathbb{R}^{F\times F}$ for $i=0,\dots,m-1$ are parameter matrix blocks and $\Gamma^{\mathbb{S}}\in\mathbb{R}^{mF\times F}$. Then, a graph convolutional layer can generally be written as

$$g(\hat{A}_{\mathrm{sym}})X W = K_m(\hat{A}_{\mathrm{sym}}, X)\,\Gamma^{\mathbb{S}} W = K_m(\hat{A}_{\mathrm{sym}}, X)\, W^{\mathbb{S}}$$

where $W\in\mathbb{R}^{F\times O}$ is a parameter matrix and $W^{\mathbb{S}} \triangleq \Gamma^{\mathbb{S}} W \in\mathbb{R}^{mF\times O}$. The essential number of learnable parameters is $mF\times O$.
The block Krylov form provides insight into why an architecture that concatenates multi-scale features in each layer boosts the expressive power of GCN. Based on this idea, we propose the snowball and truncated block Krylov architectures [44] shown in Figure 2, where we stack multi-scale information in each layer. From the performance comparison on semi-supervised node classification tasks with different label percentages in Table 1, we can see that the proposed models consistently perform better than the state-of-the-art models, especially when there are fewer labeled nodes. See detailed experimental results in [44].
Table 1. Accuracy without validation. For each column, the greener the cell, the better the performance; the redder, the worse. If our methods achieve better performance than all others, the corresponding cell is in bold.
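A hedged PyTorch sketch of the snowball idea is given below: every layer takes the concatenation of all previous hidden representations as input, so multi-scale information is stacked layer by layer. The exact output head, activation placement and hyperparameters in [44] differ; this only illustrates the concatenation pattern.

```python
import torch
import torch.nn as nn

class Snowball(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes, n_layers=3):
        super().__init__()
        # layer l sees the concatenation [H_0, ..., H_l], hence the growing input width
        self.layers = nn.ModuleList(
            [nn.Linear(in_dim + l * hid_dim, hid_dim) for l in range(n_layers)]
        )
        self.out = nn.Linear(in_dim + n_layers * hid_dim, n_classes)

    def forward(self, A_hat, X):
        feats = [X]                                    # H_0 = X
        for layer in self.layers:
            H = torch.tanh(A_hat @ layer(torch.cat(feats, dim=1)))
            feats.append(H)                            # stack multi-scale features
        return self.out(torch.cat(feats, dim=1))       # logits for cross entropy
```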

2.2. Future Works on Over-Smoothing

Weight Initialization for GNNs

Even without aggregation in each hidden layer, an NN with a deep architecture still suffers from vanishing activation variances and vanishing back-propagated gradient variances [13], which make the training of deep NNs hard. In the last decade, designing new parameter initialization methods has proved effective [13,18] at addressing this variance reduction problem during the feedforward and backpropagation processes. This motivates us to investigate variance propagation in GNNs and analyze whether the current weight initialization methods are suitable for GNNs. To this end, we can show that the vanishing variance caused by the aggregation operation in GNNs is more serious than in NNs. Designing a new parameter initialization scheme for GNNs is potentially a feasible way to address this problem and empirically achieves promising performance [45]. We propose such a method in this subsection.
The current initialization scheme of GNNs still follows the Xavier initialization [13], i.e. $W_i \sim U\big[-\sqrt{6/(n_i+n_{i+1})},\ \sqrt{6/(n_i+n_{i+1})}\big]$, or the He (Kaiming) initialization [18], i.e. $W_i \sim N(0, 2/n_i)$, which are designed for the traditional multilayer perceptron (MLP), where $W_i$ is the parameter matrix of layer $i$ and $n_i$ is the number of hidden units of layer $i$. These two initialization methods are derived by studying the variance propagation between layers during the feedforward and backpropagation processes. In GNNs, these two processes differ by an extra multiplication by the aggregation operator $\hat{A}$. To analyze the variance propagation, we use a deep GCN as an example with $\hat{A}=\hat{A}_{\mathrm{rw}}$ and decompose it as follows,
$$Y_0 = X,\quad H_1 = \hat{A}_{\mathrm{rw}} X W_0,\quad Y_1 = f(H_1),\qquad H_{l+1} = \hat{A}_{\mathrm{rw}} Y_l W_l,\quad Y_{l+1} = f(H_{l+1}),\quad l=1,\dots,n,$$
$$Y = \mathrm{softmax}(\hat{A}_{\mathrm{rw}} Y_n W_n) \triangleq \mathrm{softmax}(H_{n+1}),\qquad \mathcal{L} = -\mathrm{trace}(Z^T \log Y) \qquad (6)$$
where $H_l, Y_l \in \mathbb{R}^{N\times F_l}$, $W_l \in \mathbb{R}^{F_l\times F_{l+1}}$; $Z\in\mathbb{R}^{N\times C}$ is the ground truth matrix with a one-hot label vector in each row. Then the gradient propagates in the following way,
$$\frac{\partial \mathcal{L}}{\partial H_l} = \frac{\partial \mathcal{L}}{\partial Y_l} \odot f'(H_l),\qquad \frac{\partial \mathcal{L}}{\partial W_{l-1}} = Y_{l-1}^T \hat{A}_{\mathrm{rw}} \frac{\partial \mathcal{L}}{\partial H_l},\qquad \frac{\partial \mathcal{L}}{\partial Y_{l-1}} = \hat{A}_{\mathrm{rw}} \frac{\partial \mathcal{L}}{\partial H_l} W_{l-1}^T \qquad (7)$$

Variance Analysis: Forward View

Consider element $(i,j)$ of the matrix $H_{l+1}$ during the feedforward process in (6),

$$(H_{l+1})_{ij} = (\hat{A}_{\mathrm{rw}})_{i,:}\, Y_l\, (W_l)_{:,j} = \sum_{t=1}^{F_l}\sum_{k=1}^{N} (\hat{A}_{\mathrm{rw}})_{ik} (Y_l)_{kt} (W_l)_{tj},\qquad Y_{l+1} = f(H_{l+1}),\quad l=1,\dots,n$$
Suppose we have a linear activation function such as that proposed in [60]; each element in $W_l$ is i.i.d. initialized with $\mathbb{E}(W_l)_{ij}=0$; $\mathbb{E}(Y_l)_{kt}=0$ and all elements in $Y_l$ are independent. Then, $\mathrm{Var}(Y_{l+1})_{ij} = \mathrm{Var}(Y_{l+1})$ can be written as

$$\mathrm{Var}\Big(\sum_{t=1}^{F_l}\sum_{k=1}^{N}(\hat{A}_{\mathrm{rw}})_{ik}(Y_l)_{kt}(W_l)_{tj}\Big) = \sum_{t=1}^{F_l}\sum_{k=1}^{N}\mathrm{Var}\big((\hat{A}_{\mathrm{rw}})_{ik}(Y_l)_{kt}(W_l)_{tj}\big) = \frac{F_l}{d_i+1}\,\mathrm{Var}(Y_l)\,\mathrm{Var}(W_l) \qquad (8)$$
Suppose each element in $Y_l$ shares the same variance, denoted $\mathrm{Var}(Y_l)$. To prevent the variance from vanishing between layers, i.e. to have $\mathrm{Var}(Y_{l+1}) = \mathrm{Var}(Y_l)$, from (8) we approximately need (see the computation in Appendix A.2.1)

$$\mathrm{Var}(W_l) = \frac{d_i+1}{F_l}$$
This tells us that the variance of $W_l$ depends on the degree of a node; but since the parameter matrix is shared by all nodes, we cannot design a node-specific initialization scheme. Thus, we make a compromise over the nodes as follows,

$$\mathrm{Var}(W_l) = \frac{\sum_{i=1}^{N}(d_i+1)}{N F_l} = \frac{1+\text{average node degree}}{F_l}$$
Another option is to use a weighted average, taking the node degree as the weight of each node. In this way, we have

$$\mathrm{Var}(W_l) = \sum_{i=1}^{N} \frac{d_i+1}{\sum_{j=1}^{N}(d_j+1)}\cdot\frac{d_i+1}{F_l} = \frac{\sum_{i=1}^{N}(d_i+1)^2}{\big(\sum_{i=1}^{N}(d_i+1)\big) F_l}$$

Variance Analysis: Backward View

Under the same assumptions as in the forward view, and supposing the elements in $\frac{\partial\mathcal{L}}{\partial H_l}$ and $\frac{\partial\mathcal{L}}{\partial W_{l-1}}$ are independent of each other and have zero mean, from (7) we approximately have (see the computation in Appendix A.2.2)

$$\frac{\partial\mathcal{L}}{\partial H_l} = \frac{\partial\mathcal{L}}{\partial Y_l} = \hat{A}_{\mathrm{rw}} \frac{\partial\mathcal{L}}{\partial H_{l+1}} W_l^T,\qquad \frac{\partial\mathcal{L}}{\partial W_{l-1}} = Y_{l-1}^T \hat{A}_{\mathrm{rw}} \frac{\partial\mathcal{L}}{\partial H_l}$$
Then,

$$\Big(\frac{\partial\mathcal{L}}{\partial H_l}\Big)_{ij} = \sum_{t=1}^{F_{l+1}}\sum_{k=1}^{N}(\hat{A}_{\mathrm{rw}})_{ik}\Big(\frac{\partial\mathcal{L}}{\partial H_{l+1}}\Big)_{kt}(W_l^T)_{tj},\qquad \Big(\frac{\partial\mathcal{L}}{\partial W_{l-1}}\Big)_{ij} = \big((\hat{A}_{\mathrm{rw}} Y_{l-1})_{:,i}\big)^T \Big(\frac{\partial\mathcal{L}}{\partial H_l}\Big)_{:,j} = \sum_{k=1}^{N}\Big(\sum_{t=1}^{N}(\hat{A}_{\mathrm{rw}})_{kt}(Y_{l-1})_{ti}\Big)\Big(\frac{\partial\mathcal{L}}{\partial H_l}\Big)_{kj}$$

Thus,

$$\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial H_l}\Big)_{ij} = \mathrm{Var}\Big(\sum_{t=1}^{F_{l+1}}\sum_{k=1}^{N}(\hat{A}_{\mathrm{rw}})_{ik}\Big(\frac{\partial\mathcal{L}}{\partial H_{l+1}}\Big)_{kt}(W_l^T)_{tj}\Big) = \frac{F_{l+1}}{d_i+1}\,\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial H_{l+1}}\Big)\mathrm{Var}(W_l)$$
$$\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial W_{l-1}}\Big)_{ij} = \mathrm{Var}\Big(\sum_{k=1}^{N}\Big(\sum_{t=1}^{N}(\hat{A}_{\mathrm{rw}})_{kt}(Y_{l-1})_{ti}\Big)\Big(\frac{\partial\mathcal{L}}{\partial H_l}\Big)_{kj}\Big) = \sum_{k=1}^{N}\frac{1}{d_k+1}\,\mathrm{Var}(Y_{l-1})\,\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial H_l}\Big)$$

To keep the gradient variance stable across layers, the backward view analogously suggests

$$\mathrm{Var}(W_l) = \frac{\sum_{i=1}^{N}(d_i+1)}{N F_{l+1}} \approx \frac{1+\text{average node degree}}{F_{l+1}}$$
From (9) and (15), $\mathrm{Var}\big(\frac{\partial\mathcal{L}}{\partial H_l}\big)$ and $\mathrm{Var}(Y_{l-1})$ can be approximately written as

$$\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial H_l}\Big) \approx \frac{N F_{l+1}}{\sum_{i=1}^{N}(d_i+1)}\,\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial H_{l+1}}\Big)\mathrm{Var}(W_l) \approx \mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial H_{n+1}}\Big)\prod_{l'=l+1}^{n+1}\frac{N F_{l'}}{\sum_{i=1}^{N}(d_i+1)}\,\mathrm{Var}(W_{l'-1})$$
$$\mathrm{Var}(Y_{l-1}) \approx \frac{N F_{l-2}}{\sum_{k}(d_k+1)}\,\mathrm{Var}(Y_{l-2})\,\mathrm{Var}(W_{l-2}) \approx \mathrm{Var}(Y_0)\prod_{l'=0}^{l-2}\frac{N F_{l'}}{\sum_{k}(d_k+1)}\,\mathrm{Var}(W_{l'})$$
From (15), if each $\mathrm{Var}(W_l)$ equals $\mathrm{Var}(W)$ and each $F_l$ equals $F$, then

$$\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial W_{l-1}}\Big)_{ij} \approx \sum_{k=1}^{N}\frac{1}{d_k+1}\,\mathrm{Var}(Y_0)\,\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial H_{n+1}}\Big)\Big(\frac{N F}{\sum_k (d_k+1)}\,\mathrm{Var}(W)\Big)^{n}$$
Combined with (11), we can set the variance of the parameter matrix as

$$\mathrm{Var}(W_l) \approx \frac{2\sum_{i=1}^{N}(d_i+1)}{N (F_l + F_{l+1})} = \frac{2(1+\text{average node degree})}{F_l + F_{l+1}} \qquad (19)$$

Thus, each element in $W_l$ can be drawn from $N\Big(0,\ \frac{2(1+\text{average node degree})}{F_l+F_{l+1}}\Big)$.
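A small helper illustrating this initialization is sketched below, assuming a NumPy workflow; the function name gnn_init and its interface are our own and are not part of any existing library.

```python
import numpy as np

def gnn_init(fan_in, fan_out, degrees):
    """Draw a (fan_in, fan_out) weight matrix from Eq. (19):
    N(0, 2 * (1 + average node degree) / (fan_in + fan_out))."""
    avg_degree = float(np.mean(degrees))              # average node degree of the graph
    std = np.sqrt(2.0 * (1.0 + avg_degree) / (fan_in + fan_out))
    return np.random.normal(0.0, std, size=(fan_in, fan_out))

# usage: W0 = gnn_init(F_in, F_hidden, degrees=A.sum(axis=1))
```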

Adaptive ReLU (AdaReLU) Activation Function

To satisfy the assumption that the activation function is linear at the beginning of the training process while still learning a nonlinear function during training, we design the following adaptive ReLU (AdaReLU) activation function

$$f(x_i) = \begin{cases} \beta_i x_i, & \text{if } x_i > 0 \\ \alpha_i x_i, & \text{if } x_i \le 0 \end{cases}$$
where $\alpha_i$ and $\beta_i$ are learnable parameters initialized to 1. If $\alpha_i = \alpha$ and $\beta_i=\beta$ for all $i$, we have channel-shared AdaReLU; otherwise we have channel-wise AdaReLU. Preliminary experimental results show that channel-wise AdaReLU works better than channel-shared AdaReLU.
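A minimal PyTorch sketch of AdaReLU is shown below; the module name and the channel_shared flag are our own conventions, not an existing API.

```python
import torch
import torch.nn as nn

class AdaReLU(nn.Module):
    """f(x_i) = beta_i * x_i if x_i > 0 else alpha_i * x_i,
    with both slopes learnable and initialized to 1 (linear at the start)."""
    def __init__(self, num_channels, channel_shared=False):
        super().__init__()
        n = 1 if channel_shared else num_channels
        self.alpha = nn.Parameter(torch.ones(n))   # slope for x <= 0
        self.beta = nn.Parameter(torch.ones(n))    # slope for x > 0

    def forward(self, x):
        return torch.where(x > 0, self.beta * x, self.alpha * x)
```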
There is some experimental evidence [45] that controlling the variance flow by an initialization like (19) can relieve the performance decrease of deep GCNs, but more tests and hyperparameter tuning still need to be done, as does more theoretical analysis of variance propagation.

3. GNNs on Heterophily Graphs

GNNs are considered an extension of basic neural networks (NNs) that additionally make use of the graph structure based on a relational inductive bias (the homophily assumption), rather than treating the nodes as collections of independent and identically distributed (i.i.d.) samples. Though GNNs are believed to outperform basic NNs on real-world tasks, it is found that in some cases the graph-aware models have little performance gain or even underperform graph-agnostic models [6,46,51,69,71]. One of the main reasons for this performance degradation is believed to be heterophily, i.e. the situation where connected nodes tend to have different labels [69,71]. The heterophily challenge has received attention recently, and an increasing number of models have been put forward to analyze [39,43] and address it [6,36,41,42,64,69,70].
In this section, we first introduce the most commonly used homophily metrics in Section 3.1. Then, in Section 3.2, we show that not all cases of heterophily are harmful for GNNs and propose new metrics based on a similarity matrix which considers the influence of both graph structure and input features on GNNs. The new metrics demonstrate advantages over the commonly used homophily metrics in tests on synthetic graphs. From the metrics and the observations, we find that some cases of harmful heterophily can be addressed by a diversification operation, whose effectiveness is proved in Section 3.3. With this fact and knowledge of filterbanks, we propose the Adaptive Channel Mixing (ACM) framework in Section 3.4 to adaptively exploit the aggregation, diversification and identity channels in each GNN layer to address harmful heterophily. We validate the ACM-augmented baselines on real-world node classification tasks: they consistently achieve significant performance gains and exceed the state-of-the-art GNNs on most of the tasks without incurring significant computational burden. In Section 3.5, we review prior work on addressing heterophily and explain how it differs from the ACM framework. The limitations of the diversification operation and remaining challenges of the heterophily problem are discussed in Section 3.6.

3.1. Metrics of Homophily

The metrics of homophily are defined by considering different relations between node labels and the graph structure defined by the adjacency matrix. There are three commonly used homophily metrics: edge homophily [1,70], node homophily [51], and class homophily [35], defined as follows:
$$H_{\mathrm{edge}}(\mathcal{G}) = \frac{\big|\{e_{uv} \mid e_{uv}\in\mathcal{E},\ Z_{u,:}=Z_{v,:}\}\big|}{|\mathcal{E}|},\qquad H_{\mathrm{node}}(\mathcal{G}) = \frac{1}{|\mathcal{V}|}\sum_{v\in\mathcal{V}} \frac{\big|\{u \mid u\in\mathcal{N}_v,\ Z_{u,:}=Z_{v,:}\}\big|}{d_v},$$
$$H_{\mathrm{class}}(\mathcal{G}) = \frac{1}{C-1}\sum_{k=1}^{C}\Big[h_k - \frac{\big|\{v \mid Z_{v,k}=1\}\big|}{N}\Big]_{+},\qquad h_k = \frac{\sum_{v\in\mathcal{V}} \big|\{u \mid Z_{v,k}=1,\ u\in\mathcal{N}_v,\ Z_{u,:}=Z_{v,:}\}\big|}{\sum_{v\in\{v\mid Z_{v,k}=1\}} d_v} \qquad (20)$$
where $[a]_+ = \max(a,0)$ and $h_k$ is the class-wise homophily metric [35]. They all lie in the range $[0,1]$; a value close to 1 corresponds to strong homophily while a value close to 0 indicates strong heterophily. $H_{\mathrm{edge}}(\mathcal{G})$ measures the proportion of edges that connect two nodes in the same class; $H_{\mathrm{node}}(\mathcal{G})$ evaluates the average proportion of edge-label consistency over all nodes; $H_{\mathrm{class}}(\mathcal{G})$ tries to avoid the sensitivity to imbalanced classes, which can make $H_{\mathrm{edge}}$ misleadingly large. The above definitions are all based on graph-label consistency and imply that inconsistency has a harmful effect on the performance of GNNs. With this in mind, we will show a counterexample that illustrates the insufficiency of the above metrics and propose new metrics in the following subsection.
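For concreteness, the edge and node homophily metrics in (20) can be computed with a few lines of NumPy, as in the sketch below (class homophily is omitted for brevity); the function names are our own.

```python
import numpy as np

def edge_homophily(A, y):
    """Fraction of edges whose endpoints share a label; A binary symmetric, y integer labels."""
    src, dst = np.nonzero(np.triu(A))                 # each undirected edge once, no self-loops
    return float(np.mean(y[src] == y[dst]))

def node_homophily(A, y):
    """Average, over nodes with neighbors, of the fraction of same-label neighbors."""
    scores = []
    for v in range(A.shape[0]):
        nbrs = np.nonzero(A[v])[0]
        if len(nbrs) > 0:
            scores.append(np.mean(y[nbrs] == y[v]))
    return float(np.mean(scores))
```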

3.2. Analysis of Heterophily and Aggregation Homophily Metric

Heterophily is believed to be harmful for message-passing based GNNs [6,51,70] because, intuitively, features of nodes in different classes will be falsely mixed, leaving nodes indistinguishable [70]. Nevertheless, this is not always the case: e.g. the bipartite graph shown in Figure 3 is highly heterophilous according to the homophily metrics in (20), but after mean aggregation the nodes in classes 1 and 2 only exchange colors and are still distinguishable. The authors of [6] also point out the insufficiency of $H_{\mathrm{node}}$ with examples showing that different graph topologies with the same $H_{\mathrm{node}}$ can carry different label information.
To analyze to what extent the graph structure can affect the output of a GNN, we first simplify the GCN by removing its nonlinearity, as in [60]. Let $\hat{A}\in\mathbb{R}^{N\times N}$ denote a general aggregation operator. Then Equation (1) can be simplified as

$$Y = \mathrm{softmax}(\hat{A} X W) \triangleq \mathrm{softmax}(Y')$$
After each gradient descent step $\Delta W = -\gamma \frac{d\mathcal{L}}{dW}$, where $\gamma$ is the learning rate, the update of $Y'$ is

$$\Delta Y' = \hat{A}X\,\Delta W = -\gamma\,\hat{A}X\frac{d\mathcal{L}}{dW} \propto \hat{A}X X^T\hat{A}^T(Z-Y) = S(\hat{A},X)(Z-Y)$$

where $S(\hat{A},X) \triangleq \hat{A}X(\hat{A}X)^T$ is a post-aggregation node similarity matrix and $Z-Y$ is the prediction error matrix. The update direction of node $i$ is essentially a weighted sum of the prediction errors, i.e. $\Delta (Y')_{i,:} = \sum_{j\in\mathcal{V}} S(\hat{A},X)_{i,j}(Z-Y)_{j,:}$.
To study the effect of heterophily, we first define the aggregation similarity score as follows.
Definition 1. 
Aggregation similarity score
$$S_{\mathrm{agg}}\big(S(\hat{A},X)\big) = \frac{1}{|\mathcal{V}|}\Big|\big\{v \,\big|\, \mathrm{Mean}_u\{S(\hat{A},X)_{v,u} \mid Z_{u,:}=Z_{v,:}\} \ge \mathrm{Mean}_u\{S(\hat{A},X)_{v,u} \mid Z_{u,:}\ne Z_{v,:}\}\big\}\Big| \qquad (23)$$

where $\mathrm{Mean}_u\{\cdot\}$ takes the average over $u$ of a given multiset of values or variables.
$S_{\mathrm{agg}}(S(\hat{A},X))$ measures the proportion of nodes $v\in\mathcal{V}$ that put relatively larger similarity weights on nodes of the same class than on nodes of other classes after aggregation. It is easy to see that $S_{\mathrm{agg}}(S(\hat{A},X))\in[0,1]$, but in practice we observe that on most datasets $S_{\mathrm{agg}}(S(\hat{A},X))\ge 0.5$. Based on this observation, we rescale (23) to the following modified aggregation similarity for practical usage,

$$S_{\mathrm{agg}}^{M}\big(S(\hat{A},X)\big) = \big[2\,S_{\mathrm{agg}}\big(S(\hat{A},X)\big) - 1\big]_{+}$$
In order to measure the consistency between labels and graph structure without considering node features, and to make a fair comparison with the existing homophily metrics in (20), we define the graph ($\mathcal{G}$) aggregation ($\hat{A}$) homophily and its modified version as

$$H_{\mathrm{agg}}(\mathcal{G}) = S_{\mathrm{agg}}\big(S(\hat{A},Z)\big),\qquad H_{\mathrm{agg}}^{M}(\mathcal{G}) = S_{\mathrm{agg}}^{M}\big(S(\hat{A},Z)\big)$$
In practice, we only check $H_{\mathrm{agg}}(\mathcal{G})$ when $H_{\mathrm{agg}}^{M}(\mathcal{G})=0$. As Figure 3 shows, when $\hat{A}=\hat{A}_{\mathrm{rw}}$ we have $H_{\mathrm{agg}}(\mathcal{G}) = H_{\mathrm{agg}}^{M}(\mathcal{G}) = 1$. Thus, the new metric reflects the fact that nodes in classes 1 and 2 are still highly distinguishable after aggregation, while the other metrics mentioned before fail to capture this information and misleadingly give the value 0. This shows the advantage of $H_{\mathrm{agg}}(\mathcal{G})$ and $H_{\mathrm{agg}}^{M}(\mathcal{G})$, which additionally consider the information from the aggregation operator $\hat{A}$ and the similarity matrix.
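The sketch below illustrates how $H_{\mathrm{agg}}(\mathcal{G})$ can be computed from an adjacency matrix and integer labels by forming $S(\hat{A}_{\mathrm{rw}}, Z)$ explicitly; it is a dense, illustrative implementation under our own naming, not optimized code.

```python
import numpy as np

def aggregation_homophily(A, y, n_classes):
    """H_agg(G): fraction of nodes whose mean post-aggregation similarity to
    same-class nodes is at least that to other-class nodes (Eqs. (23)-(25) with X = Z)."""
    A_tilde = A + np.eye(A.shape[0])
    A_rw = A_tilde / A_tilde.sum(axis=1, keepdims=True)    # renormalized random-walk affinity
    Z = np.eye(n_classes)[y]                               # one-hot label matrix
    S = (A_rw @ Z) @ (A_rw @ Z).T                          # post-aggregation similarity matrix
    same = (y[:, None] == y[None, :])
    hits = 0
    for v in range(A.shape[0]):
        mean_same = S[v, same[v]].mean()
        mean_diff = S[v, ~same[v]].mean() if (~same[v]).any() else -np.inf
        hits += mean_same >= mean_diff
    return hits / A.shape[0]    # H_agg; the modified version is max(2 * H_agg - 1, 0)
```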

Comparison of Homophily Metrics on Synthetic Graphs

To comprehensively compare $H_{\mathrm{agg}}^{M}(\mathcal{G})$ with the metrics in (20) in terms of how they reveal the influence of graph structure on GNN performance, we generate synthetic graphs ($d$-regular graphs with edge homophily varied from 0.005 to 0.95) and evaluate SGC with 1-hop aggregation (SGC-1) [60] and GCN [24] on them.
The performance of SGC-1 and GCN is expected to increase monotonically with a proper and informative homophily metric. However, Figure 4a–c shows that the performance curves under $H_{\mathrm{edge}}(\mathcal{G})$, $H_{\mathrm{node}}(\mathcal{G})$ and $H_{\mathrm{class}}(\mathcal{G})$ are U-shaped, while Figure 4d reveals a nearly monotonic curve with only a little numerical perturbation around 1. This indicates that $H_{\mathrm{agg}}^{M}(\mathcal{G})$ describes how the graph structure affects the performance of SGC-1 and GCN more appropriately and adequately than the existing metrics.

3.3. How Diversification Operation Helps with Harmful Heterophily

We first consider the example shown in Figure 5. From $S(\hat{A},X)$, nodes 1 and 3 assign relatively large positive weights to nodes in class 2 after aggregation, which makes nodes 1 and 3 hard to distinguish from nodes in class 2. Despite this, we can still distinguish nodes 1,3 from nodes 4,5,6,7 by considering their neighborhood differences: nodes 1 and 3 are different from most of their neighbors, while nodes 4,5,6,7 are similar to most of their neighbors. This indicates that, in some cases, although some nodes become similar after aggregation, they are still distinguishable via their surrounding dissimilarities. This leads us to use the diversification operation, i.e. the high-pass (HP) filter $I-\hat{A}$ [10] (introduced in the next subsection), to extract the information of neighborhood differences and address harmful heterophily. As $S(I-\hat{A},X)$ in Figure 5 shows, nodes 1 and 3 assign negative weights to nodes 4,5,6,7 after the diversification operation, i.e. nodes 1 and 3 treat nodes 4,5,6,7 as negative samples and move away from them during backpropagation. Based on this example, we define diversification distinguishability as follows, to measure the proportion of nodes for which the diversification operation is potentially helpful.
Definition 2. 
Diversification Distinguishability (DD) based on $S(I-\hat{A},X)$.
Given $S(I-\hat{A},X)$, a node $v$ is diversification distinguishable if the following two conditions are satisfied at the same time:

$$1.\ \ \mathrm{Mean}_u\big\{S(I-\hat{A},X)_{v,u} \,\big|\, u\in\mathcal{V},\ Z_{u,:}=Z_{v,:}\big\} > 0; \qquad 2.\ \ \mathrm{Mean}_u\big\{S(I-\hat{A},X)_{v,u} \,\big|\, u\in\mathcal{V},\ Z_{u,:}\ne Z_{v,:}\big\} \le 0$$

Then, the graph diversification distinguishability value is defined as

$$\mathrm{DD}_{\hat{A},X}(\mathcal{G}) = \frac{1}{|\mathcal{V}|}\big|\{v \mid v \text{ is diversification distinguishable}\}\big|$$
We can see that $\mathrm{DD}_{\hat{A},X}(\mathcal{G})\in[0,1]$. The effectiveness of the diversification operation can be proved for binary classification problems under certain conditions based on Definition 2, leading us to:
Theorem 3. 
Suppose $X = Z$ and $\hat{A} = \hat{A}_{\mathrm{rw}}$. Then, for a binary classification problem, i.e. $C = 2$, all nodes are diversification distinguishable, i.e. $\mathrm{DD}_{\hat{A},Z}(\mathcal{G}) = 1$.

Theorem 3 theoretically demonstrates the importance of the diversification operation for extracting the high-frequency information of a graph signal [10]. Combined with the aggregation operation, which is a low-pass filter [10,48], we obtain a filterbank that uses both aggregation and diversification operations to distinctively extract the low- and high-frequency information from graph signals. We introduce filterbanks in the next subsection.

3.4. Filterbank and Adaptive Channel Mixing (ACM) GNN Framework

Filterbank

For the graph signal $\mathbf{x}$ defined on $\mathcal{G}$, a 2-channel linear (analysis) filterbank [10] includes a pair of low-pass (LP) and high-pass (HP) filters $H_{\mathrm{LP}}, H_{\mathrm{HP}}$, where $H_{\mathrm{LP}}$ and $H_{\mathrm{HP}}$ retain the low-frequency and high-frequency content of $\mathbf{x}$, respectively. Filterbanks with $H_{\mathrm{LP}} + H_{\mathrm{HP}} = I$ do not lose any information of the input signal, i.e. they have the perfect reconstruction property [10].
However, most existing GNNs use a uni-channel filtering architecture [17,24,58] with either $H_{\mathrm{LP}}$ or $H_{\mathrm{HP}}$, which only partially preserves the input information. Generally, the Laplacian matrices ($L_{\mathrm{sym}}$, $L_{\mathrm{rw}}$, $\hat{L}_{\mathrm{sym}}$, $\hat{L}_{\mathrm{rw}}$) can be regarded as HP filters [10] and the affinity matrices ($A_{\mathrm{sym}}$, $A_{\mathrm{rw}}$, $\hat{A}_{\mathrm{sym}}$, $\hat{A}_{\mathrm{rw}}$) can be treated as LP filters [16,48]. Moreover, MLPs can be viewed as owning a special identity filterbank with matrix $I$ that satisfies $H_{\mathrm{LP}} + H_{\mathrm{HP}} = I + 0 = I$.

Filterbank in Spatial Form

Filterbank methods can also be extended to spatial GNNs. Formally, at the node level, left-multiplying $\mathbf{x}$ by $H_{\mathrm{LP}}$ and $H_{\mathrm{HP}}$ performs the aggregation and diversification operations, respectively. For example, suppose $H_{\mathrm{LP}} = \hat{A}$ and $H_{\mathrm{HP}} = I - \hat{A}$; then for node $i$ we have

$$(H_{\mathrm{LP}}\mathbf{x})_i = \sum_{j\in\{\mathcal{N}_i\cup i\}} \hat{A}_{i,j} x_j,\qquad (H_{\mathrm{HP}}\mathbf{x})_i = x_i - \sum_{j\in\{\mathcal{N}_i\cup i\}} \hat{A}_{i,j} x_j \qquad (28)$$
where $\hat{A}_{i,j}$ is the connection weight between nodes $i$ and $j$. To leverage the HP and identity channels in GNNs, we propose the Adaptive Channel Mixing (ACM) framework, which can be applied to many baseline GNNs. We use GCN as an example and introduce the ACM framework in matrix form, using $H_{\mathrm{LP}}$ and $H_{\mathrm{HP}}$ to represent general LP and HP filters. The ACM framework consists of the following 3 steps,
Step 1. Feature extraction for each channel:
$$\text{Option 1: } H_L^l = \mathrm{ReLU}\big(H_{\mathrm{LP}} H^{l-1} W_L^{l-1}\big),\quad H_H^l = \mathrm{ReLU}\big(H_{\mathrm{HP}} H^{l-1} W_H^{l-1}\big),\quad H_I^l = \mathrm{ReLU}\big(I H^{l-1} W_I^{l-1}\big);$$
$$\text{Option 2: } H_L^l = H_{\mathrm{LP}}\,\mathrm{ReLU}\big(H^{l-1} W_L^{l-1}\big),\quad H_H^l = H_{\mathrm{HP}}\,\mathrm{ReLU}\big(H^{l-1} W_H^{l-1}\big),\quad H_I^l = I\,\mathrm{ReLU}\big(H^{l-1} W_I^{l-1}\big);$$
$$W_L^{l-1}, W_H^{l-1}, W_I^{l-1} \in \mathbb{R}^{F_{l-1}\times F_l};$$
Step 2. Feature-based weight learning:
$$\tilde{\alpha}_L^l = \sigma\big(H_L^l \tilde{W}_L^l\big),\quad \tilde{\alpha}_H^l = \sigma\big(H_H^l \tilde{W}_H^l\big),\quad \tilde{\alpha}_I^l = \sigma\big(H_I^l \tilde{W}_I^l\big),\qquad \tilde{W}_L^l, \tilde{W}_H^l, \tilde{W}_I^l \in \mathbb{R}^{F_l\times 1},$$
$$\big[\alpha_L^l, \alpha_H^l, \alpha_I^l\big] = \mathrm{Softmax}\big(\big[\tilde{\alpha}_L^l, \tilde{\alpha}_H^l, \tilde{\alpha}_I^l\big] W_{\mathrm{Mix}}^l / T\big),\qquad W_{\mathrm{Mix}}^l \in \mathbb{R}^{3\times 3},\ T\in\mathbb{R} \text{ is the temperature};$$
Step 3. Node-wise channel mixing:
$$H^l = \mathrm{diag}(\alpha_L^l)\, H_L^l + \mathrm{diag}(\alpha_H^l)\, H_H^l + \mathrm{diag}(\alpha_I^l)\, H_I^l.$$
The framework with option 1 in step 1 is the ACM framework and with option 2 is the ACMII framework. ACM(II)-GCN first performs distinct feature extraction for the 3 channels. After being processed by a set of filterbanks, the 3 filtered components $H_L^l, H_H^l, H_I^l$ are obtained. Different nodes may have different needs for the information in the 3 channels; e.g. in Figure 5, nodes 1 and 3 demand high-frequency information while node 2 only needs low-frequency information. To adaptively exploit information from the different channels, ACM(II)-GCN learns row-wise (node-wise) feature-conditioned weights to combine the 3 channels. ACM(II) can easily be plugged into spatial GNNs by replacing $H_{\mathrm{LP}}$ and $H_{\mathrm{HP}}$ with the aggregation and diversification operations as shown in (28).
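The following PyTorch sketch shows one ACM-GCN layer with option 1 of step 1; bias terms, dropout and other engineering details of the released models are omitted, and the attribute names are our own simplifications of the three steps above.

```python
import torch
import torch.nn as nn

class ACMGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, temperature=3.0):
        super().__init__()
        self.W_L = nn.Linear(in_dim, out_dim, bias=False)   # low-pass channel weights
        self.W_H = nn.Linear(in_dim, out_dim, bias=False)   # high-pass channel weights
        self.W_I = nn.Linear(in_dim, out_dim, bias=False)   # identity channel weights
        self.att = nn.ModuleList([nn.Linear(out_dim, 1) for _ in range(3)])
        self.W_mix = nn.Linear(3, 3, bias=False)
        self.T = temperature

    def forward(self, H_lp, H_hp, H):
        # Step 1 (option 1): per-channel feature extraction
        h_L = torch.relu(H_lp @ self.W_L(H))
        h_H = torch.relu(H_hp @ self.W_H(H))
        h_I = torch.relu(self.W_I(H))
        # Step 2: node-wise, feature-conditioned mixing weights
        a = torch.cat([torch.sigmoid(att(h)) for att, h in
                       zip(self.att, (h_L, h_H, h_I))], dim=1)        # N x 3
        a = torch.softmax(self.W_mix(a) / self.T, dim=1)
        # Step 3: node-wise channel mixing
        return a[:, 0:1] * h_L + a[:, 1:2] * h_H + a[:, 2:3] * h_I
```

For spatial GNNs, H_lp and H_hp would simply be the aggregation operator $\hat{A}$ and $I-\hat{A}$ as in (28).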

Complexity

The number of learnable parameters in layer $l$ of ACM(II)-GCN is $3F_{l-1}(F_l+1)+9$, while it is $F_{l-1}F_l$ in GCN. The computation of steps 1–3 takes $NF_l(8+6F_{l-1}) + 2F_l\big(\mathrm{nnz}(H_{\mathrm{LP}}) + \mathrm{nnz}(H_{\mathrm{HP}})\big) + 18N$ flops, while a GCN layer takes $2NF_{l-1}F_l + 2F_l\,\mathrm{nnz}(H_{\mathrm{LP}})$ flops, where $\mathrm{nnz}(\cdot)$ is the number of non-zero elements.

Performance Comparison

We implement SGC [60] with 1 hop and 2 hops (SGC-1, SGC-2), GCNII [5], GCNII* [5], GCN [24] and snowball networks with 2 and 3 layers (snowball-2, snowball-3), and apply them within the ACM or ACMII framework: we use $\hat{A}_{\mathrm{rw}}$ as the LP filter and the corresponding HP filter is $I-\hat{A}_{\mathrm{rw}}$. We compare them with several baseline and SOTA GNN models: MLP with 2 layers (MLP-2), GAT [58], APPNP [25], GPRGNN [6], H2GCN [70], MixHop [1], GCN+JK [24,35,63], GAT+JK [35,58,63], FAGCN [2], GraphSAGE [17] and Geom-GCN [51]. Besides the 9 benchmark datasets Cornell, Wisconsin, Texas, Film, Chameleon, Squirrel, Cora, Citeseer and Pubmed used in [51], we further test the above models on a new benchmark dataset, Deezer-Europe, proposed in [35]. On each dataset used in [51], we test the models 10 times following the same early stopping strategy, random data splitting method and Adam [23] optimizer as used in GPRGNN [6]. For Deezer-Europe, we test the above models 5 times with the same early stopping strategy, fixed splits and AdamW [37] used in [35].
To better visualize the performance boost and the comparison with SOTA models, in Figure 6 we plot bar charts of the test accuracy of the SOTA models, 3 selected baselines (GCN, snowball-2, snowball-3) and their ACM- and ACMII-augmented versions on the 6 most commonly used heterophilous benchmark datasets (see [40] for the full results and comparison). We can see that after being applied within the ACM or ACMII framework, the performance of the 3 baseline models is significantly boosted on all tasks and can achieve SOTA performance. Especially on Cornell, Texas, Film and Squirrel, the augmented models significantly outperform the current SOTA models. Overall, this suggests that the ACM and ACMII frameworks can help GNNs generalize better on node classification tasks on heterophilous graphs.

3.5. Prior Work

We discuss relevant work on GNNs addressing the heterophily challenge. The authors of [1] acknowledge the difficulty of learning on graphs with weak homophily and propose MixHop to extract features from multi-hop neighborhoods to obtain more information. Geom-GCN [51] precomputes unsupervised node embeddings and uses the graph structure defined by geometric relationships in the embedding space to define a bi-level aggregation process. The authors of [20] propose measurements based on feature smoothness and label smoothness that are potentially helpful for guiding GNNs on heterophilous graphs. H2GCN [70] combines 3 key designs to address heterophily: (1) ego- and neighbor-embedding separation; (2) higher-order neighborhoods; (3) a combination of intermediate representations. CPGNN [69] models label correlations with a compatibility matrix, which is beneficial in heterophily settings, and propagates a prior belief estimation into GNNs using the compatibility matrix. FBGNN [47] first proposed using a filterbank to address the heterophily problem, but it does not fully explain the insight behind HP filters and lacks the identity channel and the node-wise channel mixing mechanism. FAGCN [2] learns edge-level aggregation weights, as GAT [58] does, but allows the weights to be negative, which enables the network to capture the high-frequency components of graph signals. GPRGNN [6] uses learnable weights that can be both positive and negative for feature propagation, which allows GPRGNN to adapt to the heterophily structure of a graph and to handle both the high- and low-frequency parts of graph signals.

3.6. Future Work

Limitation of Diversification Operation

The diversification operation does not work well in all harmful heterophily cases. For example, consider an imbalanced dataset where several small clusters with distinctive labels are densely connected to a large cluster. In this case, the surrounding differences of the nodes in the small clusters are similar, i.e. the neighborhood differences mainly come from their connections to the same large cluster, which can make the diversification operation fail to discriminate them. Thus, the ACM framework is not able to handle all heterophily cases.
From Figure 4, we can see that GNNs consistently perform well in the high-homophily region. This reveals the fact that all homophily cases are helpful and suggests that, instead of using a fixed adjacency matrix, we can learn a new adjacency matrix with a different homophily level. With this in mind, we design an architecture with an additional adjacency learner, as shown in Figure 7: instead of using a fixed predefined adjacency matrix, we learn an adjacency matrix with edges that reveal the label similarity between nodes, i.e. homophily. This adjacency learner should ideally be trained end-to-end. Preliminary experimental results (not included in this report) with a GCN and a pretrained adjacency learner suggest that this method is promising, although some stability issues need to be fixed.

Exploring Different Ways for Adjacency Candidate Selection

Some tricks can be explored when we are selecting the adjacency candidates for the adjacency learner:
  • Sample or select (top-$k_1$) nodes from the complementary graph, put them together with the predefined neighborhood set to form the adjacency candidate set, then sample or select (top-$k_2$) adjacency candidates for training. Try to train this end-to-end.
  • Model the candidate selection process as a multi-armed bandit problem and find an efficient way to learn to select good candidates from the complementary graph. A pseudo-count can be used to prevent selecting the same nodes repeatedly.

4. Graph Representation Learning for Reinforcement Learning

4.1. Markov Decision Process (MDP)

An MDP is a framework for modeling the learning process in which an agent learns from interaction with the environment [56,67,68]. The interaction happens in discrete time steps $t=0,1,2,3,\dots$. At step $t$, given a state $S_t=s_t\in\mathcal{S}$, the agent picks an action $a_t\in\mathcal{A}(s_t)$ according to a policy $\pi(\cdot\mid s_t)$, which is a rule for choosing actions given a state. Then, at time $t+1$, the environmental dynamics $p:\mathcal{S}\times\mathbb{R}\times\mathcal{A}\times\mathcal{S}\to[0,1]$ take the agent to a new state $S_{t+1}=s_{t+1}\in\mathcal{S}$ and provide a numerical reward $R_{t+1}=r_{t+1}(s_t,a_t,s_{t+1})\in\mathbb{R}$. Such a sequence of interactions gives a trajectory $\tau = \{S_0,A_0,R_1,S_1,A_1,R_2,S_2,A_2,R_3,\dots\}$. The objective is to find an optimal policy that maximizes the expected long-term discounted cumulative reward $V^{\pi}(s) = \mathbb{E}_{\pi}\big[\sum_{k=0}^{\infty}\gamma^k R_{t+k+1} \mid S_t = s\big]$ for each state $s$, where $\gamma$ is the discount factor.
For a given policy $\pi$, finding its value function $V^{\pi}$ is equivalent to solving the following linear system,

$$V^{\pi} = \mathbf{r}^{\pi} + \gamma P^{\pi} V^{\pi}$$

where $V^{\pi} = [V^{\pi}(s)]_{s\in\mathcal{S}}^T \in\mathbb{R}^{|\mathcal{S}|}$, $\mathbf{r}^{\pi} = [r^{\pi}(s)]_{s\in\mathcal{S}}^T \in\mathbb{R}^{|\mathcal{S}|}$ and $P^{\pi} = [P^{\pi}(s'\mid s)]_{s,s'\in\mathcal{S}} \in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{S}|}$. The state transition matrix $P^{\pi}$ essentially defines a graph structure over states, and the reward vector $\mathbf{r}^{\pi}$ is a signal defined on this graph. Thus, solving for the value function can be considered a (supervised or semi-supervised) node regression task over a graph. Besides solving for $V^{\pi}$, the graph structure can also be used for reward propagation and representation learning in Reinforcement Learning (RL) [26,27,28].
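As a concrete example of this view, the sketch below solves the Bellman linear system for a small fixed-policy chain by treating $P^{\pi}$ as the weighted adjacency of the state graph; the toy transition matrix and rewards are illustrative.

```python
import numpy as np

def policy_evaluation(P_pi, r_pi, gamma=0.9):
    """Solve V_pi = r_pi + gamma * P_pi V_pi, i.e. V_pi = (I - gamma P_pi)^{-1} r_pi."""
    n = P_pi.shape[0]
    return np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)

# toy 3-state chain with an absorbing rewarding state
P_pi = np.array([[0.9, 0.1, 0.0],
                 [0.0, 0.9, 0.1],
                 [0.0, 0.0, 1.0]])
r_pi = np.array([0.0, 0.0, 1.0])
V_pi = policy_evaluation(P_pi, r_pi)
```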

4.2. Graph Representation Learning for MDP

Treating an MDP as a graph is an old but never outdated idea. Traditional methods use the graph Laplacian for a fixed policy to estimate $V^{\pi}$, e.g. proto-value functions [49]. In addition to value function estimation, [28] proposes to use a GCN to learn potential-based reward shaping, which can accelerate the agent's learning process.
The above methods both construct the graph from sampled trajectory data. With modern Graph Representation Learning (GRL) methods, e.g. node embedding methods [3] and link prediction methods [52,54,57], we can learn to reconstruct the underlying graph (adjacency matrix) from sampled data more efficiently. Label propagation [53], a commonly used algorithm for graph semi-supervised learning, can also be helpful for efficient reward propagation. In Section 4.3, we introduce the potential of using GRL for reward propagation and representation learning in reinforcement learning.

4.3. Reinforcement Learning with Graph Representation Learning

In this section, we describe how to represent a Markov Decision Process (MDP) with a graph and introduce two possible ways of using graph representation learning to address problems defined on MDPs.
Each state can be treated as a node of a graph, the transition probability between each pair of nodes (an element of the state transition matrix) can be represented by the edge (or edge weight) between them, and the value function is a function defined on each node of the graph. The details (for finite MDPs) are given in matrix form as follows [38,59]:
  • Denote the $|\mathcal{S}||\mathcal{A}|\times|\mathcal{S}|$ environment transition matrix by $P$, where
    $$P_{sa,s'} = \sum_{r} p(s', r \mid s, a)$$
    and $P_{sa,s'}\ge 0$, $\sum_{s'} P_{sa,s'} = 1$ for all $s,a$. Note that $P$ is not a square matrix.
  • We rewrite the policy $\pi$ as an $|\mathcal{S}|\times|\mathcal{S}||\mathcal{A}|$ matrix $\Pi$, where $\Pi_{s,s'a} = \pi(a\mid s)$ if $s'=s$ and $0$ otherwise:
    $$\Pi = \mathrm{diag}\big(\pi(\cdot\mid s_1)^T,\dots,\pi(\cdot\mid s_{|\mathcal{S}|})^T\big)$$
    where $\pi(\cdot\mid s_i)^T$ is an $|\mathcal{A}|$-dimensional row vector. From this definition, one can quickly verify that the matrix product $\Pi P$ gives the $|\mathcal{S}|\times|\mathcal{S}|$ state-to-state transition matrix $P^{\pi}$ (asymmetric) induced by the policy $\pi$ in the environment $P$, and the $|\mathcal{S}||\mathcal{A}|\times|\mathcal{S}||\mathcal{A}|$ matrix product $P\Pi$ gives the state-action-to-state-action transition matrix (asymmetric) induced by the policy $\pi$ in the environment $P$.
  • We denote the $|\mathcal{S}||\mathcal{A}|\times 1$ reward vector by $\mathbf{r}$, whose entry $\mathbf{r}(sa)$ specifies the expected reward obtained when taking action $a$ in state $s$, i.e.
    $$\mathbf{r}(sa) = \mathbb{E}[r\mid s,a] = \sum_{s'\in\mathcal{S}} P_{sa,s'}\, r(s,a,s').$$
  • The state value function and state-action value function can be represented as
    $$V^{\pi} = \sum_{i=0}^{\infty}\gamma^{i}(\Pi P)^{i}\,\Pi\mathbf{r} = \Pi\mathbf{r} + \gamma\,\Pi P V^{\pi} \in\mathbb{R}^{|\mathcal{S}|\times 1},\qquad Q^{\pi} = \sum_{i=0}^{\infty}\gamma^{i}(P\Pi)^{i}\,\mathbf{r} = \mathbf{r} + \gamma\,P\Pi Q^{\pi} \in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|\times 1}$$
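The matrix-form quantities above can be checked numerically with a short NumPy sketch such as the following; build_Pi and evaluate are our own helper names, and the sizes are illustrative.

```python
import numpy as np

def build_Pi(pi):
    """Build the |S| x |S||A| block-diagonal policy matrix from pi(a|s) given as an |S| x |A| array."""
    S, A = pi.shape
    Pi = np.zeros((S, S * A))
    for s in range(S):
        Pi[s, s * A:(s + 1) * A] = pi[s]      # row block for state s
    return Pi

def evaluate(P, pi, r, gamma=0.9):
    """Solve V_pi = (I - gamma Pi P)^{-1} Pi r and Q_pi = (I - gamma P Pi)^{-1} r.
    P: |S||A| x |S| environment transition matrix, r: |S||A| expected reward vector."""
    Pi = build_Pi(pi)
    S = pi.shape[0]
    V = np.linalg.solve(np.eye(S) - gamma * Pi @ P, Pi @ r)
    Q = np.linalg.solve(np.eye(P.shape[0]) - gamma * P @ Pi, r)
    return V, Q
```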

4.3.1. Learn Reward Propagation as Label Propagation

The sampling process in an MDP can be considered a random walk on a graph, because the relation (edge) between each pair of states is essentially a transition probability. Discovering the underlying graph of an MDP can help us leverage the correlation between states to learn the value function or to do efficient exploration in sparse-reward environments.
Usually, the graph is constructed from the trajectory data, i.e. the pairwise state transition data, but once the policy is updated we need to reconstruct the graph. With graph embedding methods for link prediction tasks, e.g. DeepWalk [52], node2vec [14] and LINE [57], we can learn graph reconstruction by inferring some unobserved transitions. More specifically, instead of learning $P^{\pi}(s'\mid s)$ for a fixed policy $\pi$, we can learn the state-action transition probability $P(s'\mid s,a)$, which is independent of $\pi$. In this way, we can make use of all historical trajectory data regardless of whether the policy changes. Once we are given a policy, we can infer the graph by combining $\pi(a\mid s)$ and $P(s'\mid s,a)$.

4.3.2. Graph Embedding as Auxiliary Task for Representation Learning

Learning auxiliary tasks has been shown to be helpful for state representation learning [22], which is critical for an agent to learn a good policy. Among these methods, the successor representation has been shown to be theoretically and empirically important for learning good state representations [8,29]. Modeling the successor triplet $(s,a,s')$ of an MDP is essentially equivalent to modeling the triplet (head, relation, tail) in a knowledge graph, and there exist many algorithms in the knowledge graph embedding community that address the triplet embedding problem, e.g. TransE [3], RotatE [55], QuatE [65] and DihEdral [62]. These methods can be borrowed to learn richer representations for RL tasks.

Appendix A. Calculation of Variances

Appendix A.1. Background

We first decompose the deep GCN architecture as follows
$$Y_0 = X,\quad H_1 = \hat{A} X W_0,\quad Y_1 = f(H_1),\qquad H_{l+1} = \hat{A} Y_l W_l,\quad Y_{l+1} = f(H_{l+1}),\quad l=1,\dots,n,$$
$$Y = \mathrm{softmax}(\hat{A} Y_n W_n) \triangleq \mathrm{softmax}(H_{n+1}),\qquad \mathcal{L} = -\mathrm{trace}(Z^T\log Y)$$
where $H_l, Y_l\in\mathbb{R}^{N\times F_l}$, $W_l\in\mathbb{R}^{F_l\times F_{l+1}}$; $Z\in\mathbb{R}^{N\times C}$ is the ground truth matrix with a one-hot label vector $Z_{i,:}$ in each row, $C$ is the number of classes, and $\mathcal{L}$ is the scalar loss. Then the gradient propagates in the following way
$$\text{Output: }\ \frac{\partial\mathcal{L}}{\partial H_{n+1}} = \mathrm{softmax}(H_{n+1}) - Z,\qquad \frac{\partial\mathcal{L}}{\partial W_n} = Y_n^T \hat{A}\frac{\partial\mathcal{L}}{\partial H_{n+1}},\qquad \frac{\partial\mathcal{L}}{\partial Y_n} = \hat{A}\frac{\partial\mathcal{L}}{\partial H_{n+1}} W_n^T$$
$$\text{Hidden: }\ \frac{\partial\mathcal{L}}{\partial H_l} = \frac{\partial\mathcal{L}}{\partial Y_l}\odot f'(H_l),\qquad \frac{\partial\mathcal{L}}{\partial W_{l-1}} = Y_{l-1}^T\hat{A}\frac{\partial\mathcal{L}}{\partial H_l},\qquad \frac{\partial\mathcal{L}}{\partial Y_{l-1}} = \hat{A}\frac{\partial\mathcal{L}}{\partial H_l} W_{l-1}^T$$
where $\odot$ is the Hadamard product. The gradient propagation of GCN differs from that of a multilayer perceptron (MLP) by an extra multiplication by $\hat{A}$ when the gradient signal flows through $Y_l$.

Appendix A.2. Variance Analysis

Appendix A.2.1. Forward View

$$H_{l+1} = \hat{A} Y_l W_l,\qquad (H_{l+1})_{ij} = \hat{A}_{i,:} Y_l (W_l)_{:,j} = \sum_{t=1}^{F_l}\sum_{k=1}^{N}\hat{A}_{ik}(Y_l)_{kt}(W_l)_{tj},\qquad Y_{l+1} = f(H_{l+1}),\quad l=1,\dots,n$$
Suppose the activation function is the identity function, as in [60], all elements in $W_l$ share the same variance, all elements in $X$ are independent and share the same variance, and $\mathbb{E}(Y_l)_{kt}=0$ and $\mathbb{E}(W_l)_{ij}=0$. Then $\mathrm{Var}(Y_{l+1})_{ij}$ can be written as
$$\mathrm{Var}\Big(\sum_{t=1}^{F_l}\sum_{k=1}^{N}\hat{A}_{ik}(Y_l)_{kt}(W)_{tj}\Big) = \sum_{t=1}^{F_l}\sum_{k=1}^{N}\mathrm{Var}\big(\hat{A}_{ik}(Y_l)_{kt}(W)_{tj}\big) = F_l(d_i+1)\cdot\frac{1}{(d_i+1)^2}\mathrm{Var}(Y_l)\mathrm{Var}(W) = \frac{F_l}{d_i+1}\mathrm{Var}(Y_l)\mathrm{Var}(W) = \mathrm{Var}(Y_{l+1})$$
Then, if we want $\mathrm{Var}(Y_{l+1}) = \mathrm{Var}(Y_l)$, we will have

$$\mathrm{Var}(W) = \frac{d_i+1}{F_l}$$
Since the parameter matrix is shared by all nodes, we cannot design a node-specific initialization scheme for $W$. Thus, we initialize each element of $W$ using the average value as follows

$$\mathrm{Var}(W) = \frac{\sum_{i=1}^{N}(d_i+1)}{N F_l} = \frac{1+\text{average node degree}}{F_l}$$
If we relax the assumption $\mathbb{E}(Y_l)_{kt}=0$ and assume it is nonzero, then, as shown in [18], if we use the ReLU activation function we will have

$$\mathrm{Var}(W) = \frac{2\sum_{i=1}^{N}(d_i+1)}{N F_l} = \frac{2(1+\text{average node degree})}{F_l}$$
If we consider the correlation between nodes and keep the assumption that each dimension of the feature (the columns of $X$) is independent and $\mathbb{E}(X)\ne 0$, we will have

$$\mathrm{Var}(H_{l+1})_{ij} = \mathrm{Var}\Big(\sum_{t=1}^{F_l}\sum_{k=1}^{N}\hat{A}_{ik}(Y_l)_{kt}(W)_{tj}\Big) = F_l\cdot\mathrm{Var}\Big(\sum_{k=1}^{N}\hat{A}_{ik}(Y_l)_{k}W\Big) = F_l\cdot\mathbb{E}\Big(\sum_{k=1}^{N}\hat{A}_{ik}(Y_l)_{k}\Big)^2\mathrm{Var}(W)$$
$$= \frac{F_l}{(d_i+1)^2}\,\mathbb{E}\Big(\sum_{k\in\{\mathcal{N}_i\cup i\}}(Y_l)_k\Big)^2\mathrm{Var}(W) = \frac{F_l}{(d_i+1)^2}\,\mathbb{E}\Big(\sum_{k\in\{\mathcal{N}_i\cup i\}}(Y_l)_k^2 + \sum_{k,j\in\{\mathcal{N}_i\cup i\},\,k\ne j}(Y_l)_k(Y_l)_j\Big)\mathrm{Var}(W)$$
Since

$$\mathbb{E}\big[(Y_l)_k(Y_l)_j\big] = \mathrm{Cov}\big((Y_l)_k,(Y_l)_j\big) + \mathbb{E}(Y_l)_k\,\mathbb{E}(Y_l)_j$$
we can make several reasonable assumptions about $\mathrm{Cov}\big((Y_l)_k,(Y_l)_j\big)$ to get different results:
  • The adjacency matrix with self-loops can be considered a prior covariance matrix, and thus a reasonable assumption is $\mathrm{Cov}\big((Y_l)_k,(Y_l)_j\big) = \mathrm{Var}(Y_l)_k = \mathrm{Var}(Y_l)_j$.
  • Consider the symmetric normalized $\hat{A}$ as a prior covariance matrix; then $\mathrm{Cov}\big((Y_l)_k,(Y_l)_j\big) = \sqrt{\mathrm{Var}(Y_l)_k\,\mathrm{Var}(Y_l)_j} = \mathrm{Var}(Y_l)_k$.
These assumptions all lead us to

$$\mathbb{E}\big[(Y_l)_k(Y_l)_j\big] = \mathrm{Var}(Y_l)_k + \mathbb{E}^2(Y_l)_k = \mathbb{E}\big[(Y_l)_k^2\big]$$
Thus we have

$$\mathrm{Var}(H_{l+1})_{ij} = F_l\cdot\frac{1}{(d_i+1)^2}\,\mathbb{E}\big[(d_i+1)^2 (Y_l)_k^2\big]\mathrm{Var}(W) = F_l\cdot\mathbb{E}\big[(Y_l)_k^2\big]\mathrm{Var}(W) = F_l\cdot\frac{1}{2}\mathrm{Var}(H_l)\,\mathrm{Var}(W)$$

Thus

$$\mathrm{Var}(W) = \frac{2}{F_l}$$
Suppose instead $\mathbb{E}\big[(Y_l)_k(Y_l)_j\big]\ne 0$ and $\mathrm{Cov}\big((Y_l)_k,(Y_l)_j\big)\ne 0$. Since $\mathrm{Cov}\big((Y_l)_k,(Y_l)_j\big)\le\sqrt{\mathrm{Var}(Y_l)_k\,\mathrm{Var}(Y_l)_j} = \mathrm{Var}(Y_l)_k$, we have $\mathbb{E}\big[(Y_l)_k(Y_l)_j\big]\le\mathbb{E}\big[(Y_l)_k^2\big]$, and we assume $\mathbb{E}\big[(Y_l)_k(Y_l)_j\big] = \alpha\,\mathbb{E}\big[(Y_l)_k^2\big]$ with $\alpha\in[0,1]$. Then,

$$\mathrm{Var}(H_{l+1})_{ij} = F_l\cdot\frac{1}{(d_i+1)^2}(d_i+1)(\alpha d_i+1)\,\mathbb{E}\big[(Y_l)_k^2\big]\mathrm{Var}(W) = F_l\cdot\frac{\alpha d_i+1}{d_i+1}\,\mathbb{E}\big[(Y_l)_k^2\big]\mathrm{Var}(W) = F_l\cdot\frac{\alpha d_i+1}{2(d_i+1)}\mathrm{Var}(H_l)\,\mathrm{Var}(W)$$
Thus,

$$\mathrm{Var}(W) = \frac{2(d_i+1)}{F_l(\alpha d_i+1)}$$

$(d_i+1)/(\alpha d_i+1)$ can be considered the effective degree of node $i$, and an estimation is

$$\widehat{\mathrm{Var}}(W) = \frac{\sum_i 2(d_i+1)/(\alpha d_i+1)}{N F_l} = \frac{2\times\text{average effective node degree}}{F_l}$$

Appendix A.2.2. Backward View

Under the linear assumption, we have $H_l = Y_l$ and

$$\frac{\partial\mathcal{L}}{\partial H_l} = \frac{\partial\mathcal{L}}{\partial Y_l} = \hat{A}\frac{\partial\mathcal{L}}{\partial H_{l+1}} W_l^T,\qquad \frac{\partial\mathcal{L}}{\partial W_{l-1}} = Y_{l-1}^T\hat{A}\frac{\partial\mathcal{L}}{\partial H_l}$$
We have

$$\Big(\frac{\partial\mathcal{L}}{\partial H_l}\Big)_{ij} = \sum_{t=1}^{F_{l+1}}\sum_{k=1}^{N}\hat{A}_{ik}\Big(\frac{\partial\mathcal{L}}{\partial H_{l+1}}\Big)_{kt}(W_l^T)_{tj},\qquad \Big(\frac{\partial\mathcal{L}}{\partial W_{l-1}}\Big)_{ij} = \big((\hat{A}Y_{l-1})_{:,i}\big)^T\Big(\frac{\partial\mathcal{L}}{\partial H_l}\Big)_{:,j} = \sum_{k=1}^{N}\Big(\sum_{t=1}^{N}\hat{A}_{kt}(Y_{l-1})_{ti}\Big)\Big(\frac{\partial\mathcal{L}}{\partial H_l}\Big)_{kj}$$
And

$$\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial H_l}\Big)_{ij} = \mathrm{Var}\Big(\sum_{t=1}^{F_{l+1}}\sum_{k=1}^{N}\hat{A}_{ik}\Big(\frac{\partial\mathcal{L}}{\partial H_{l+1}}\Big)_{kt}(W_l^T)_{tj}\Big) = \frac{F_{l+1}}{d_i+1}\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial H_{l+1}}\Big)\mathrm{Var}(W_l) \approx \frac{N F_{l+1}}{\sum_i(d_i+1)}\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial H_{l+1}}\Big)\mathrm{Var}(W_l) \approx \mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial H_{n+1}}\Big)\prod_{l'=l}^{n}\frac{N F_{l'+1}}{\sum_i(d_i+1)}\mathrm{Var}(W_{l'})$$
$$\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial W_{l-1}}\Big)_{ij} = \mathrm{Var}\Big(\sum_{k=1}^{N}\Big(\sum_{t=1}^{N}\hat{A}_{kt}(Y_{l-1})_{ti}\Big)\Big(\frac{\partial\mathcal{L}}{\partial H_l}\Big)_{kj}\Big) = \sum_{k=1}^{N}\mathrm{Var}\Big(\sum_{t=1}^{N}\hat{A}_{kt}(Y_{l-1})_{ti}\Big)\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial H_l}\Big)_{kj} = \sum_{k=1}^{N}\frac{1}{d_k+1}\mathrm{Var}(Y_{l-1})\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial H_l}\Big)$$
From (A4), $\mathrm{Var}(Y_{l-1})$ can be approximately written as

$$\mathrm{Var}(Y_{l-1}) = \mathrm{Var}(H_{l-1}) \approx \frac{N F_{l-2}}{\sum_k(d_k+1)}\mathrm{Var}(Y_{l-2})\mathrm{Var}(W) \approx \mathrm{Var}(Y_0)\prod_{l'=0}^{l-2}\frac{N F_{l'}}{\sum_k(d_k+1)}\mathrm{Var}(W)$$
Then

$$\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial W_{l-1}}\Big)_{ij} \approx \sum_{k=1}^{N}\frac{1}{d_k+1}\,\mathrm{Var}(Y_0)\,\mathrm{Var}\Big(\frac{\partial\mathcal{L}}{\partial H_{n+1}}\Big)\prod_{l'=0,\,l'\ne l-1}^{n}\frac{N F_{l'}}{\sum_k(d_k+1)}\mathrm{Var}(W)$$

Appendix A.3. Energy Analysis

Another way to design the weight initialization is from the perspective of the flow of energy. Under the linear assumption, and supposing we can do a QR factorization of the weight matrices to make them orthogonal with $W^T W = W W^T = \alpha I$, we have

$$\mathrm{trace}\big(H_{l+1}^T H_{l+1}\big) = \mathrm{trace}\big((\hat{A}Y_l W_l)^T\hat{A}Y_l W_l\big) = \mathrm{trace}\big(W_l^T Y_l^T\hat{A}^T\hat{A}Y_l W_l\big) = \mathrm{trace}\big(W_l W_l^T Y_l^T\hat{A}^T\hat{A}Y_l\big)$$
$$= \alpha\sum_{i=1}^{N}\lambda_i^2\,\mathrm{trace}\big(Y_l^T\mathbf{u}_i\mathbf{u}_i^T Y_l\big) = \alpha\sum_{i=1}^{N}\lambda_i^2\,\big\|\mathbf{u}_i^T Y_l\big\|_2^2 = \sum_{i=1}^{N}\big\|\mathbf{u}_i^T H_{l+1}\big\|_2^2$$
Suppose all $\|\mathbf{u}_i^T Y_l\|_2^2$ and $\|\mathbf{u}_i^T H_{l+1}\|_2^2$ are equal; then

$$\alpha = \frac{N}{\sum_{i=1}^{N}\lambda_i^2} = \frac{N}{\|\hat{A}\|_F^2} = \frac{N}{\sum_{i=1}^{N}\frac{1}{d_i+1}}$$

If we use the ReLU activation function, we have

$$\alpha = \frac{2N}{\sum_{i=1}^{N}\lambda_i^2} = \frac{2N}{\|\hat{A}\|_F^2} = \frac{2N}{\sum_{i=1}^{N}\frac{1}{d_i+1}}$$

References

  1. S. Abu-El-Haija, B. Perozzi, A. Kapoor, N. Alipourfard, K. Lerman, H. Harutyunyan, G. Ver Steeg, and A. Galstyan. Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In international conference on machine learning, pages 21–29. PMLR, 2019.
  2. D. Bo, X. Wang, C. Shi, and H. Shen. Beyond low-frequency information in graph convolutional networks. arXiv preprint, arXiv:2101.00797, 2021.
  3. A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems, pages 2787–2795, 2013.
  4. M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. Geometric deep learning: Going beyond euclidean data. arXiv, abs/1611.08097, 2016.
  5. M. Chen, Z. Wei, Z. Huang, B. Ding, and Y. Li. Simple and deep graph convolutional networks. arXiv preprint arXiv:2007.02133, 2020.
  6. E. Chien, J. Peng, P. Li, and O. Milenkovic. Adaptive universal generalized pagerank graph neural network. In International Conference on Learning Representations, 2021. https://openreview.net/forum
  7. F. R. Chung and F. C. Graham. Spectral graph theory. Number 92. American Mathematical Soc., 1997.
  8. P. Dayan. Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5(4):613–624, 1993.
  9. M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. arXiv, abs/1606.09375, 2016.
  10. V. N. Ekambaram. Graph structured data viewed through a fourier lens. University of California, Berkeley, 2014.
  11. A. Frommer, K. Lund, and D. B. Szyld. Block Krylov subspace methods for functions of matrices. Electronic Transactions on Numerical Analysis, 47:100–126, 2017.
  12. J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1263–1272. JMLR. org, 2017.
  13. X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256, 2010.
  14. A. Grover and J. Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855–864. ACM, 2016.
  15. M. H. Gutknecht and T. Schmelzer. The block grade of a block krylov space. Linear Algebra and its Applications, 430(1):174–185, 2009.
  16. W. L. Hamilton. Graph representation learning. Synthesis Lectures on Artifical Intelligence and Machine Learning, 14(3):1–159. 2020.
  17. W. L. Hamilton, R. Ying, and J. Leskovec. Inductive representation learning on large graphs. arXiv, abs/1706.02216, 2017.
  18. K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026–1034, 2015.
  19. G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527–1554, 2006.
  20. Y. Hou, J. Zhang, J. Cheng, K. Ma, R. T. Ma, H. Chen, and M.-C. Yang. Measuring and improving the use of graph information in graph neural networks. In International Conference on Learning Representations, 2019.
  21. C. Hua, S. Luan, Q. Zhang, and J. Fu. Graph neural networks intersect probabilistic graphical models: A survey. arXiv preprint, arXiv:2206.06089, 2022.
  22. M. Jaderberg, V. Mnih, W. M. Czarnecki, T. Schaul, J. Z. Leibo, D. Silver, and K. Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint, arXiv:1611.05397.2016.
  23. D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint, arXiv:1412.6980.2014.
  24. T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. arXiv, abs/1609.02907, 2016.
  25. J. Klicpera, A. Bojchevski, and S. Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. arXiv preprint, arXiv:1810.05997.2018.
  26. M. Klissarov and D. Precup. Diffusion-based approximate value functions. In the 35th international conference on Machine learning ECA Workshop, 2018.
  27. M. Klissarov and D. Precup. Graph convolutional networks as reward shaping functions. In ICLR 2019, Representation Learning on Graphs and Manifolds Workshop, 2019.
  28. M. Klissarov and D. Precup. Reward propagation using graph convolutional networks. arXiv preprint, arXiv:2010.02474.2020.
  29. T. D. Kulkarni, A. Saeedi, S. Gautam, and S. J. Gershman. Deep successor reinforcement learning. arXiv preprint, arXiv:1606.02396.2016.
  30. Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436, 2015.
  31. Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  32. Q. Li, Z. Han, and X.Wu. Deeper insights into graph convolutional networks for semi-supervised learning. arXiv, abs/1801.07606, 2018.
  33. R. Li, S. Wang, F. Zhu, and J. Huang. Adaptive graph convolutional neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
34. R. Liao, Z. Zhao, R. Urtasun, and R. S. Zemel. LanczosNet: Multi-scale deep graph convolutional networks. arXiv preprint arXiv:1901.01484, 2019.
35. D. Lim, X. Li, F. Hohne, and S.-N. Lim. New benchmarks for learning on non-homophilous graphs. arXiv preprint arXiv:2104.01404, 2021.
36. M. Liu, Z. Wang, and S. Ji. Non-local graph neural networks. arXiv preprint arXiv:2005.14612, 2020.
37. I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
38. S. Luan, X.-W. Chang, and D. Precup. Revisit policy optimization in matrix form. arXiv preprint arXiv:1909.09186, 2019.
39. S. Luan, C. Hua, Q. Lu, J. Zhu, X.-W. Chang, and D. Precup. When do we need GNN for node classification? arXiv preprint arXiv:2210.16979, 2022.
40. S. Luan, C. Hua, Q. Lu, J. Zhu, M. Zhao, S. Zhang, X.-W. Chang, and D. Precup. Is heterophily a real nightmare for graph neural networks on performing node classification?
41. S. Luan, C. Hua, Q. Lu, J. Zhu, M. Zhao, S. Zhang, X.-W. Chang, and D. Precup. Is heterophily a real nightmare for graph neural networks to do node classification? arXiv preprint arXiv:2109.05641, 2021.
42. S. Luan, C. Hua, Q. Lu, J. Zhu, M. Zhao, S. Zhang, X.-W. Chang, and D. Precup. Revisiting heterophily for graph neural networks. Advances in neural information processing systems, 35:1362–1375, 2022.
43. S. Luan, C. Hua, M. Xu, Q. Lu, J. Zhu, X.-W. Chang, J. Fu, J. Leskovec, and D. Precup. When do graph neural networks help with node classification: Investigating the homophily principle on node distinguishability. arXiv preprint arXiv:2304.14274, 2023.
  44. S. Luan, M. Zhao, X.-W. Chang, and D. Precup. Break the ceiling: Stronger multi-scale deep graph convolutional networks. Advances in neural information processing systems, 32, 2019.
45. S. Luan, M. Zhao, X.-W. Chang, and D. Precup. Training matters: Unlocking potentials of deeper graph convolutional neural networks. arXiv preprint arXiv:2008.08838, 2020.
46. S. Luan, M. Zhao, C. Hua, X.-W. Chang, and D. Precup. Complete the missing half: Augmenting aggregation filtering with diversification for graph convolutional networks. arXiv preprint arXiv:2008.08844, 2020.
47. S. Luan, M. Zhao, C. Hua, X.-W. Chang, and D. Precup. Complete the missing half: Augmenting aggregation filtering with diversification for graph convolutional neural networks. arXiv preprint arXiv:2212.10822, 2022.
48. T. Maehara. Revisiting graph neural networks: All we have is low-pass filters. arXiv preprint arXiv:1905.09550, 2019.
  49. S. Mahadevan. Proto-value functions: Developmental reinforcement learning. In Proceedings of the 22nd international conference on Machine learning, pages 553–560. ACM, 2005.
  50. F. Monti, D. Boscaini, J. Masci, E. Rodola, J. Svoboda, and M. M. Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5115–5124, 2017.
51. H. Pei, B. Wei, K. C.-C. Chang, Y. Lei, and B. Yang. Geom-GCN: Geometric graph convolutional networks. arXiv preprint arXiv:2002.05287, 2020.
  52. B. Perozzi, R. Al-Rfou, and S. Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701–710. ACM, 2014.
  53. U. N. Raghavan, R. Albert, and S. Kumara. Near linear time algorithm to detect community structures in large-scale networks. Physical review E, 76(3):036106, 2007.
  54. L. F. Ribeiro, P. H. Saverese, and D. R. Figueiredo. struc2vec: Learning node representations from structural identity. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pages 385–394, 2017.
55. Z. Sun, Z.-H. Deng, J.-Y. Nie, and J. Tang. RotatE: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197, 2019.
56. R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2018.
  57. J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei. Line: Large-scale information network embedding. In Proceedings of the 24th international conference on world wide web, pages 1067–1077. International World Wide Web Conferences Steering Committee, 2015.
58. P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
  59. T. Wang, M. Bowling, and D. Schuurmans. Dual representations for dynamic programming and reinforcement learning. In 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, pages 44–51. IEEE, 2007.
60. F. Wu, T. Zhang, A. H. d. Souza Jr, C. Fifty, T. Yu, and K. Q. Weinberger. Simplifying graph convolutional networks. arXiv preprint arXiv:1902.07153, 2019.
61. Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu. A comprehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596, 2019.
62. C. Xu and R. Li. Relation embedding with dihedral group in knowledge graph. arXiv preprint arXiv:1906.00687, 2019.
  63. K. Xu, C. Li, Y. Tian, T. Sonobe, K.-i. Kawarabayashi, and S. Jegelka. Representation learning on graphs with jumping knowledge networks. In J. Dy and A. Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 5453–5462. PMLR, 10–15 Jul 2018.
64. Y. Yan, M. Hashemi, K. Swersky, Y. Yang, and D. Koutra. Two sides of the same coin: Heterophily and oversmoothing in graph convolutional neural networks. arXiv preprint arXiv:2102.06462, 2021.
65. S. Zhang, Y. Tay, L. Yao, and Q. Liu. Quaternion knowledge graph embedding. arXiv preprint arXiv:1904.10281, 2019.
  66. S. Zhang, H. Tong, J. Xu, and R. Maciejewski. Graph convolutional networks: Algorithms, applications and open challenges. In International Conference on Computational Social Networks, pages 79–91. Springer, 2018.
  67. M. Zhao, Z. Liu, S. Luan, S. Zhang, D. Precup, and Y. Bengio. A consciousness-inspired planning agent for model-based reinforcement learning. Advances in neural information processing systems, 34:1569–1581, 2021.
68. M. Zhao, S. Luan, I. Porada, X.-W. Chang, and D. Precup. Meta-learning state-based eligibility traces for more sample-efficient policy evaluation. arXiv preprint arXiv:1904.11439, 2019.
69. J. Zhu, R. A. Rossi, A. Rao, T. Mai, N. Lipka, N. K. Ahmed, and D. Koutra. Graph neural networks with heterophily. arXiv preprint arXiv:2009.13566, 2020.
  70. J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. Advances in Neural Information Processing Systems, 33, 2020.
71. J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra. Generalizing graph neural networks beyond homophily. arXiv preprint arXiv:2006.11468, 2020.
1. The expressive power of a well-designed deep neural network (NN) architecture is expected to grow as the network depth increases [19,30].
2. For simplicity, the independence assumption is borrowed directly from [13]; theoretically it is too strong for GNNs, and we will try to relax it in future work.
3. The terms "channel-shared" and "channel-wise" are borrowed from [18]; they indicate whether a single learnable parameter is shared across all feature dimensions or a separate parameter is learned for each dimension.
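As a rough illustration (a minimal sketch, assuming the parametric ReLU of [18] as the activation and a hidden representation $H \in \mathbb{R}^{N \times F}$): the channel-shared variant learns one slope $a$ for all $F$ feature dimensions, whereas the channel-wise variant learns one slope $a_j$ per dimension,
\[
\sigma_{\mathrm{shared}}(H)_{ij} = \max(H_{ij},0) + a\,\min(H_{ij},0),\ a \in \mathbb{R};
\qquad
\sigma_{\mathrm{wise}}(H)_{ij} = \max(H_{ij},0) + a_j\,\min(H_{ij},0),\ a \in \mathbb{R}^{F}.
\]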
4. The authors of [35] did not name this homophily metric; we name it class homophily based on its definition.
5. A similar J-shaped curve is found in [70], although a different data generation process is used there, and that work does not mention the insufficiency of edge homophily.
6. In graph signal processing, an additional synthesis filter [10] is required to form a 2-channel filterbank. A synthesis filter is not needed in our framework, so we do not introduce it in this paper.
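For concreteness, a minimal sketch of such a 2-channel (analysis-only) filterbank, assuming for illustration that the low-pass (LP) filter is the renormalized affinity matrix $\hat{A}$ and the high-pass (HP) filter is its complement:
\[
\hat{A} = \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2},\quad \tilde{A} = A + I,\ \tilde{D}_{ii} = \sum_{j}\tilde{A}_{ij};
\qquad
H_{\mathrm{LP}} = \hat{A}X \ \text{(aggregation)},\quad H_{\mathrm{HP}} = (I-\hat{A})X \ \text{(diversification)}.
\]
The LP channel smooths each node's features toward its neighborhood, while the HP channel retains the difference between a node and its neighborhood; in the ACM framework [42,47] these channels (together with an identity channel) are mixed with adaptively learned node-wise weights, and no synthesis filter is applied.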
7. From this perspective, we should not treat the trajectory as sequential data, because states on a graph do not necessarily have an ordered relation, even for a directed graph. Although the observations appear to be ordered, we actually only have transition relations between states.
Figure 1. Changes in the number of independent features with the increment of network depth.
Figure 2. Snowball and Truncated Krylov architectures.
Figure 3. Example of harmless heterophily.
Figure 4. Comparison of baseline performance under different homophily metrics.
Figure 5. Example of how the HP filter addresses harmful heterophily.
Figure 6. Comparison of SOTA models (magenta), selected baseline GNNs (red), and their ACM (green) and ACMII (blue) augmented models on 6 selected datasets. The black line and the error bar indicate the standard deviation. The symbol "↑" indicates the improvement of the best ACM- or ACMII-augmented baseline over the SOTA models.
Figure 7. GNN with adjacency learner.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.