
Table 9 Different Domains with their Tasks in Graph Neural Networks

From: A review of graph neural networks: concepts, architectures, techniques, challenges, datasets, applications, and future directions

Columns: Refs. | Technology domain | Task | Details | GNN model applied

[78] (2021) | Natural Language Processing | Text Sentiment Analysis

Details: The authors introduced a multi-level graph neural network (MLGNN) tailored to text sentiment analysis. The approach captures both local and global features by using node-connection windows of varying sizes at different levels, and it integrates a scaled dot-product attention mechanism as the message-passing step, fusing the features of individual word nodes in the graph.

GNN model applied: MLGNN, GAT
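The scaled dot-product attention message passing used by MLGNN can be illustrated with a minimal sketch (this is not the authors' implementation; the function name and toy word graph below are invented for the example, and attention is restricted to graph neighbors by masking):

```python
import numpy as np

def scaled_dot_product_attention_mp(H, A):
    """One message-passing step: scaled dot-product attention scores,
    masked to the edges of the word graph given by adjacency matrix A."""
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d)           # pairwise attention logits
    scores = np.where(A > 0, scores, -1e9)  # mask out non-neighbors
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = weights / weights.sum(axis=1, keepdims=True)  # row softmax
    return weights @ H                      # aggregate neighbor features

# toy word graph: 3 word nodes in a chain, with self-loops
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
H = np.random.default_rng(0).normal(size=(3, 4))
H_new = scaled_dot_product_attention_mp(H, A)
```

With only self-loops in the adjacency matrix, each node attends solely to itself and the features pass through unchanged, which is a convenient sanity check for the masking.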

[79] (2022) | Natural Language Processing | Text Classification

Details: GNNs were chosen for their aptness in handling 2D vectors, which aligns with the two-dimensional nature of the text representations used. A Self-Organizing Map (SOM) was employed to determine the closest neighbors within the graphs and to compute the actual distances between these neighboring elements.

GNN model applied: GNN; Self-Organizing Map (for calculating distances)
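A rough sketch of how a SOM can supply nearest neighbors and distances follows. This is an assumption-laden simplification, not the cited method: it uses a winner-take-all update only (a full SOM also updates grid neighbors of the winning unit), and all names and hyperparameters are invented for the example:

```python
import numpy as np

def train_som(data, grid_w=2, grid_h=2, epochs=50, lr=0.5, seed=0):
    """Train a tiny Self-Organizing Map; returns the unit weight vectors.
    Simplified: winner-take-all updates with a linearly decaying rate."""
    rng = np.random.default_rng(seed)
    units = rng.normal(size=(grid_w * grid_h, data.shape[1]))
    for epoch in range(epochs):
        for x in data:
            bmu = np.argmin(((units - x) ** 2).sum(axis=1))  # best-matching unit
            units[bmu] += lr * (epochs - epoch) / epochs * (x - units[bmu])
    return units

def closest_neighbors(x, units):
    """Euclidean distance from x to every SOM unit, sorted nearest-first."""
    d = np.sqrt(((units - x) ** 2).sum(axis=1))
    order = np.argsort(d)
    return order, d[order]

# two well-separated clusters of points
data = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
units = train_som(data)
order, dists = closest_neighbors(np.zeros(2), units)
```

After training, the units act as prototypes, so the nearest-unit lookup yields both the closest neighbors and their actual distances.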

[80] (2023) | Natural Language Processing | Question Generation

Details: A graph is created from the input text, where nodes represent words or phrases and edges capture their relationships. An auto-encoder model compresses the graph, retaining its key information, and this compressed representation is used to generate context-relevant questions by selecting nodes and edges dynamically.

GNN model applied: Context-aware auto-encoded graph neural model

[81]–[83] (2022) | Computer Vision | Graph Construction

Details: Three methods are used for graph construction:

1. Segmenting the image or video frame into uniform grid sections, with each grid section serving as an individual vertex of the visual graph

2. Utilizing preprocessed structures, such as scene graphs, for direct vertex representation

3. Incorporating semantic information to group visually similar pixel features into the same vertex

GNN model applied: GNN
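The first construction method (uniform grid sections as vertices) can be sketched concretely; this is a minimal illustration under the assumption of a single-channel image, with each patch summarized by its mean intensity and edges linking spatially adjacent patches:

```python
import numpy as np

def grid_graph(image, grid=4):
    """Split an image into grid x grid patches; each patch becomes a
    vertex (mean-intensity feature), with edges between adjacent patches."""
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    feats, edges = [], []
    for i in range(grid):
        for j in range(grid):
            patch = image[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            feats.append(patch.mean())       # one feature per grid vertex
            v = i * grid + j
            if j + 1 < grid:
                edges.append((v, v + 1))     # right neighbor
            if i + 1 < grid:
                edges.append((v, v + grid))  # bottom neighbor
    return np.array(feats), edges

img = np.arange(64, dtype=float).reshape(8, 8)
feats, edges = grid_graph(img, grid=4)
```

A 4 x 4 grid yields 16 vertices and 24 undirected adjacency edges (12 horizontal, 12 vertical).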

[67] (2021), [68] (2020) | Computer Vision | 3D Object Detection

Details: Image and video understanding; 3D object detection in a point cloud.

GNN model applied: GCN, GAT
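Applying a GCN or GAT to a point cloud presupposes a graph over the points; a common choice (assumed here, not taken from the cited works) is a k-nearest-neighbor graph, which can be sketched as:

```python
import numpy as np

def knn_graph(points, k=2):
    """Build a k-nearest-neighbor graph over a 3D point cloud: each point
    becomes a vertex with directed edges to its k closest other points."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))   # pairwise Euclidean distances
    np.fill_diagonal(dist, np.inf)        # exclude self-edges
    nbrs = np.argsort(dist, axis=1)[:, :k]
    return [(i, int(j)) for i in range(len(points)) for j in nbrs[i]]

pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [10, 10, 10]])
edges = knn_graph(pts, k=2)
```

The resulting edge list can then feed any message-passing layer; even the distant outlier point is connected to its two least-distant neighbors.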

[84] (2021) | Bioinformatics | Multispecies Protein Function Prediction

Details: DeepGraphGO is a semi-supervised deep learning approach that harnesses the strengths of both protein sequence and network data through a graph neural network. It has three key features:

1. InterPro signatures for the input representation vector

2. Multiple graph convolutional network (GCN) layers

3. A multispecies strategy

GNN model applied: DeepGraphGO
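The stacked GCN layers at the core of such models follow the standard graph convolution rule; a minimal single-layer sketch (toy adjacency and random weights, not DeepGraphGO's actual network) is:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
    i.e. symmetrically normalized neighbor averaging, then a linear map."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
H = np.random.default_rng(1).normal(size=(3, 5))   # node features
W = np.random.default_rng(2).normal(size=(5, 4))   # layer weights
H1 = gcn_layer(A, H, W)   # stack several such layers in practice
```

Stacking several of these layers lets each protein node aggregate information from progressively larger network neighborhoods.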

[85] (2022) | Bioinformatics | Link Prediction in Biomedical Networks

Details:

1. Leveraging a GCN to extract node-specific features from both sequence and structural data

2. Employing a GCN-based encoder to enhance the node features by effectively capturing inter-node dependencies within the network

3. Pre-training the node features on graph reconstruction tasks as a foundational step

GNN model applied: Pre-Training Graph Neural Networks (PT-GNN)
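The graph-reconstruction pre-training signal in step 3 can be sketched in the style of a graph auto-encoder (an assumed formulation, not necessarily PT-GNN's exact objective): decode edge probabilities from node embeddings and score them against the observed adjacency.

```python
import numpy as np

def reconstruction_loss(A, Z):
    """Graph auto-encoder style pre-training loss: decode edge
    probabilities as sigmoid(Z @ Z.T) and score against adjacency A
    with binary cross-entropy."""
    probs = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))
    eps = 1e-9
    return -np.mean(A * np.log(probs + eps)
                    + (1 - A) * np.log(1 - probs + eps))

# two disconnected node pairs (self-loops included)
A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
Z_good = 3.0 * np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
Z_bad = np.zeros((4, 2))   # uninformative embeddings
```

Embeddings that separate the two pairs score a lower reconstruction loss than uninformative ones, which is exactly the gradient signal pre-training exploits.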

[86] (2022) | Bioinformatics | Predicting Drug–Protein Interactions

Details: The network is optimized with supervised signals derived from the downstream task, the DPI prediction itself. By propagating information within the drug-protein association network, the graph neural network captures network-level insights spanning a variety of drugs and proteins, combining network-level information with learning-based techniques.

GNN model applied: Bridge Drug–Protein Interactions (Bridge-DPI)

[87] (2022) | Bioinformatics | Predicting Molecular Associations

Details: LR-GNN employs a graph convolutional network (GCN) encoder to obtain node embeddings. To represent the relationships between molecules, a propagation rule encapsulates the node embeddings at each GCN-encoder layer, forming the link representation (LR).

GNN model applied: Link Representation GNN (LR-GNN)
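The idea of fusing per-layer node embeddings into a link representation can be sketched as follows. This is an illustrative approximation, not LR-GNN's actual propagation rule: a mean-aggregation encoder stands in for the GCN, and each node pair is fused by an elementwise product per layer.

```python
import numpy as np

def gcn_propagate(A, H):
    """Mean-aggregation propagation step (a stand-in for the GCN encoder)."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    return (A_hat / A_hat.sum(axis=1, keepdims=True)) @ H

def link_representation(A, X, layers=2):
    """Collect node embeddings from every encoder layer and fuse each
    node pair (i, j) into one vector: the concatenation, over layers,
    of the elementwise product of the pair's embeddings."""
    H, per_layer = X, []
    for _ in range(layers):
        H = gcn_propagate(A, H)
        per_layer.append(H)
    n = X.shape[0]
    return np.stack([np.concatenate([L[i] * L[j] for L in per_layer])
                     for i in range(n) for j in range(n)]).reshape(n, n, -1)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.default_rng(3).normal(size=(3, 4))
LR = link_representation(A, X)
```

Because the elementwise product is commutative, the link representation is symmetric in the node pair, which suits undirected molecular association prediction.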