A research team from Google Brain, Columbia University, and the University of Oxford proposes Graph Kernel Attention Transformers (GKATs), a new class of graph neural networks that achieves greater expressive power than state-of-the-art GNNs while reducing computational burden.

For a quick read, see: Graph Kernel Attention Transformers: Toward Expressive and Scalable Graph Processing.

The paper Graph Kernel Attention Transformers is on arXiv.


