Transformers Are Graph Neural Networks


[Submitted on 27 Jun 2025]


Abstract: We establish connections between the Transformer architecture, originally introduced for natural language processing, and Graph Neural Networks (GNNs) for representation learning on graphs. We show how Transformers can be viewed as message-passing GNNs operating on fully connected graphs of tokens, where the self-attention mechanism captures the relative importance of all tokens with respect to each other, and positional encodings provide hints about sequential ordering or structure. Thus, Transformers are expressive set-processing networks that learn relationships among input elements without being constrained by an a priori graph structure. Despite this mathematical connection to GNNs, Transformers are implemented via dense matrix operations that are significantly more efficient on modern hardware than sparse message passing. This leads to the perspective that Transformers are GNNs currently winning the hardware lottery.
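A minimal sketch of the correspondence the abstract describes (not code from the paper; the function and variable names are illustrative): single-head self-attention computed over all token pairs, read as message passing on a fully connected graph whose nodes are the tokens.

```python
import numpy as np

def self_attention_as_message_passing(X, Wq, Wk, Wv):
    """Single-head self-attention, viewed as message passing on a
    fully connected graph whose nodes are the input tokens.

    X          : (n_tokens, d_model) node features (token embeddings)
    Wq, Wk, Wv : (d_model, d_head) projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # per-node queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # edge weights for every token pair
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn = attn / attn.sum(axis=-1, keepdims=True)  # softmax over all "neighbours" (every token)
    return attn @ V                             # aggregate weighted messages from all tokens

# Toy usage: 4 tokens, model width 8, head width 8
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention_as_message_passing(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one updated representation per token/node
```

Because the attention weights are dense over all token pairs, the whole update is a handful of dense matrix multiplications, which is the efficiency point the abstract makes about the "hardware lottery".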

Submission history

From: Chaitanya K. Joshi
[v1] Fri, 27 Jun 2025 10:15:33 UTC (316 KB)
