Table 7 Comparison of GNN models with advantages, disadvantages, and application areas [30]

From: A review of graph neural networks: concepts, architectures, techniques, challenges, datasets, applications, and future directions

Model: Graph Convolutional Network (GCN)

Features/characteristics:

• Propagates information through neighbors

• A simple, foundational method

• Uses a linear transformation

• Homophily assumption (focuses on immediate neighbors)

• Smoothness assumption (neighbors should have similar representations)

Message passing mechanism: Fixed weighted sum

Attention mechanism: No attention

Aggregation strategy: Aggregation shown in

Scalability: Limited

Use cases:

• Node classification

• Semi-supervised learning

• Recommendation systems

Advantages:

• Simplicity and interpretability

• Stable training; often requires fewer epochs

Disadvantages:

• Limited expressive power

• Inability to capture local structures

• No edge features
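The GCN row above can be made concrete with a minimal NumPy sketch of one layer, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W): a fixed, degree-normalized weighted sum over immediate neighbors, with no attention and no edge features. The graph and weights below are toy values for illustration, not from the reviewed paper.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 @ H @ W).

    A: (n, n) adjacency matrix, H: (n, f_in) node features,
    W: (f_in, f_out) weight matrix (the only learned parameters).
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Tiny 3-node path graph 0 - 1 - 2 with one-hot features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)
W = np.ones((3, 2))  # toy weights; in practice learned by gradient descent
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

Because the weighting is fixed by the (normalized) graph structure, the two end nodes of the path, which see symmetric neighborhoods, receive identical representations — illustrating the smoothness assumption noted in the table.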

Model: Graph Attention Network (GAT)

Features/characteristics:

• Learns a weight for each neighbor's message

• Supports both transductive and inductive learning, i.e., it can generalize to graph structures not seen during training

• Can handle varying neighborhood sizes

Message passing mechanism: Weighted sum with learned weights

Attention mechanism: Self-attention

Aggregation strategy: Aggregation

Scalability: Moderate

Use cases:

• Node classification

• Link prediction

• Any task requiring localized information

Advantages:

• Captures fine-grained relationships

• Improved performance on tasks requiring attention to specific neighbors

Disadvantages:

• Computationally more expensive than GCN

• More sensitive to hyperparameters
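The GAT row's "weighted sum with learned weights" can be sketched in NumPy as a single attention head: each edge gets a LeakyReLU-activated logit from a learned vector `a` applied to the concatenated transformed features, softmax-normalized over each node's neighborhood. All parameters here are random toy values for illustration.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(A, H, W, a):
    """One single-head GAT layer (illustrative sketch).

    A: (n, n) adjacency with self-loops already added,
    H: (n, f_in) features, W: (f_in, f_out) transform,
    a: (2 * f_out,) attention parameter vector.
    """
    n = A.shape[0]
    Z = H @ W                                  # transform all node features
    scores = np.full((n, n), -np.inf)          # -inf masks non-edges
    for i in range(n):
        for j in range(n):
            if A[i, j] > 0:
                # attention logit for edge (i, j)
                scores[i, j] = leaky_relu(a @ np.concatenate([Z[i], Z[j]]))
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=1, keepdims=True)    # softmax per neighborhood
    return alpha @ Z                             # learned weighted sum

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float) + np.eye(3)  # path graph + self-loops
H = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 2))
a = rng.normal(size=(4,))
out = gat_layer(A, H, W, a)
print(out.shape)  # (3, 2)
```

The per-edge scoring loop is what makes GAT costlier than GCN's fixed normalization, and the extra parameters (`a`, the LeakyReLU slope, number of heads) are the hyperparameter sensitivity the table mentions.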

Model: GraphSAGE

Features/characteristics:

• Aggregates information from sampled neighbors

• The number of model parameters is independent of the number of graph nodes, which lets GraphSAGE handle larger graphs

• Handles both supervised and unsupervised tasks

• Sample-based approach with random or predefined sampling strategies

Message passing mechanism: Fixed weighted sum

Attention mechanism: No attention

Aggregation strategy: Aggregation

Scalability: Highly scalable

Use cases:

• Node classification

• Link prediction

• Tasks where scalability is crucial

Advantages:

• Scales to large graphs

• Flexible sampling strategies

• Suitable for graphs with varying node degrees

Disadvantages:

• Limited ability to capture global graph structure

• May require more epochs to train effectively
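The GraphSAGE row can be sketched as a NumPy layer with a mean aggregator over a fixed-size sample of neighbors. Note that the learned parameters (`W_self`, `W_neigh`) depend only on feature dimensions, not on the number of nodes — the property the table credits for its scalability. Graph, features, and weights are toy values for illustration.

```python
import numpy as np

def sage_layer(adj_lists, H, W_self, W_neigh, num_samples, rng):
    """One GraphSAGE layer with mean aggregation over sampled neighbors.

    adj_lists: dict mapping node id -> list of neighbor ids,
    H: (n, f_in) node features,
    W_self, W_neigh: (f_in, f_out) weights (independent of graph size).
    """
    out = np.zeros((H.shape[0], W_self.shape[1]))
    for v, neighbors in adj_lists.items():
        # fixed-size neighbor sample (with replacement, so small
        # neighborhoods still yield num_samples messages)
        sampled = rng.choice(neighbors, size=num_samples, replace=True)
        h_neigh = H[sampled].mean(axis=0)            # mean aggregator
        out[v] = np.maximum(0.0, H[v] @ W_self + h_neigh @ W_neigh)
    return out

rng = np.random.default_rng(0)
adj = {0: [1], 1: [0, 2], 2: [1]}    # 3-node path graph
H = rng.normal(size=(3, 4))
W_self = rng.normal(size=(4, 2))
W_neigh = rng.normal(size=(4, 2))
out = sage_layer(adj, H, W_self, W_neigh, num_samples=2, rng=rng)
print(out.shape)  # (3, 2)
```

Because each node only ever sees a bounded sample of its neighborhood, per-node cost is constant regardless of graph size — but, as the disadvantages note, the sampling discards global structure and adds variance that can slow convergence.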