By: Haoxian Chen, RIG Inc. Intern Researcher

A graph is a data structure comprising nodes and edges, where a node represents an object (e.g., a user or a molecule) and an edge represents the relationship between two nodes. Graph Neural Networks (GNNs) have recently become popular because of the great expressive power of graph data [1]. Thanks to this expressiveness, GNNs are extensively applied to model complex, large networks and to generate predictions on nodes, edges, or even whole graphs. It was not until recently, however, that GNNs became popular: conventional neural networks, which work well for structured data (e.g., images and text), struggle to model the unstructured topology of graphs [2].

*Figure 1. Citation of the article “The graph neural network model” [1]*

*Figure 2. Structured grid data (left) and unstructured graph data (right), credit to [2]*

Although GNNs are a relatively new and active research topic, many powerful models have been proposed and shown to perform well on a wide range of tasks. The breakthrough of Graph Convolutional Networks (GCNs) [3] in 2016 adapted convolutional operations to unstructured graph data, solving the difficulty of modeling graphs with neural networks and opening up a new research direction. The main idea of GCN is to leverage the topological structure of the graph and aggregate information from neighboring nodes, a step also called message passing; this is how convolution on graph data is realized. The methodology is illustrated in Figure 3 below. The learned node representations can then be fed to ordinary neural networks for any type of prediction task.
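The message-passing idea above can be sketched in a few lines of NumPy. This is a minimal, illustrative implementation of a single GCN layer following the propagation rule of [3], H' = ReLU(D^-1/2 (A+I) D^-1/2 H W); the function name and toy graph are ours, not from any library.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One GCN layer: aggregate neighbor features with symmetric
    normalization, then apply a linear map and ReLU (Kipf & Welling [3])."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                     # add self-loops
    deg = a_hat.sum(axis=1)                     # node degrees of A + I
    d_inv_sqrt = np.diag(deg ** -0.5)           # D^{-1/2}
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt    # normalized adjacency
    return np.maximum(a_norm @ features @ weight, 0)  # message passing + ReLU

# Toy graph: 3 nodes in a path, edges 0-1 and 1-2
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.eye(3)                                   # one-hot node features
rng = np.random.default_rng(0)
w = rng.standard_normal((3, 4))                 # learnable weight (random here)
h = gcn_layer(adj, x, w)
print(h.shape)                                  # each node gets a 4-dim embedding
```

Stacking several such layers lets information propagate multiple hops across the graph.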

*Figure 3. Graph convolutional neural network mechanism explanation, credit to [3]*

As mentioned previously, GNNs remain an active research area largely because graph data can expressively model many use cases, such as social networks, chemical structures, fraud detection, computer vision, and natural language processing. Some examples are described below:

**Social network** Social networks are one of the most important applications of GNNs [4]. In a social network, users and their interactions can be represented as nodes and edges, so any structure involving users and their interactions can be modeled directly. Concrete examples include human behavior detection, citation networks, traffic control, recommender systems, program verification, logical reasoning, and transaction fraud detection.

*Figure 4. Social network modeled by graph data, credit to [4]*

**Chemical structure** GNNs are also widely applied in chemistry and biology [5]. For example, one can treat proteins as graphs and use GNNs to estimate the similarity of their functions by comparing the two proteins’ structures. Specifically, GNNs can detect topological patterns in the graphs, such as amino-acid motifs, and use them to determine similarity [5].

**Semi-supervised learning** Additionally, semi-supervised learning is one of the main capabilities of GNNs. One only needs to generate labels for a small number of nodes; a GNN can then be trained on these labeled nodes and generate predictions for the remaining unlabeled nodes in the graph.
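The key trick in this semi-supervised setup is that the training loss is computed only over the labeled nodes, while unlabeled nodes still participate through message passing. A minimal sketch of such a masked loss (the function name and toy data are ours):

```python
import numpy as np

def masked_cross_entropy(logits, labels, labeled_mask):
    """Cross-entropy averaged over labeled nodes only; labels of
    unlabeled nodes are ignored entirely."""
    z = logits - logits.max(axis=1, keepdims=True)            # stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_node = -log_probs[np.arange(len(labels)), labels]     # per-node loss
    return per_node[labeled_mask].mean()                      # average over labeled set

# 4 nodes, 2 classes; only nodes 0 and 1 carry labels
logits = np.array([[2.0, 0.1], [0.2, 1.5], [0.3, 0.3], [1.0, 0.0]])
labels = np.array([0, 1, 0, 0])                # entries for unlabeled nodes are dummies
mask = np.array([True, True, False, False])
print(round(masked_cross_entropy(logits, labels, mask), 4))
```

Gradients of this loss still flow to every node’s representation, because the labeled nodes’ logits depend on their unlabeled neighbors via the GNN layers.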

Apart from ordinary graph data, a heterogeneous network is a more general type of graph in which nodes and edges can have different types (an edge’s type may also depend on the types of its two end nodes). A heterogeneous graph is therefore a multimodal, multi-typed interconnected dataset, and representation learning on such generic data is much harder than on homogeneous graphs. Current efforts on representation learning for heterogeneous graphs fall into several categories, including meta-path-based similarity, node classification and clustering, and recommendation [6]. In terms of message-passing methods, many state-of-the-art models have been adapted to heterogeneous networks. For instance, GCN was adapted to R-GCN [7], which keeps most of the architecture unchanged but gives each relation type in the heterogeneous network its own convolutional weight matrix. Attention-based GNNs are also becoming more and more impactful; the Heterogeneous graph Attention Network (HAN) was proposed to perform attention computation along meta-paths [8]. Many more neural-network-based representation learning algorithms have been proposed for heterogeneous networks.
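The R-GCN idea of one weight matrix per relation type can be sketched as follows. This is an illustrative NumPy simplification (mean aggregation per relation plus a self-loop term; names and toy relations are ours), not the full model of [7], which also includes basis decomposition of the weights.

```python
import numpy as np

def rgcn_layer(adj_by_rel, features, weight_by_rel, w_self):
    """R-GCN-style layer [7]: messages are aggregated separately per
    relation type with a relation-specific weight matrix W_r, then
    summed together with a self-connection term."""
    out = features @ w_self                       # self-connection
    for rel, adj in adj_by_rel.items():
        deg = adj.sum(axis=1, keepdims=True)      # out-degree under this relation
        deg[deg == 0] = 1.0                       # avoid divide-by-zero
        out += (adj / deg) @ features @ weight_by_rel[rel]  # mean-aggregate per relation
    return np.maximum(out, 0)                     # ReLU

# Toy heterogeneous graph: 3 nodes, two relation types (names are illustrative)
adj = {
    "follows": np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float),
    "pays":    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=float),
}
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
weights = {r: rng.standard_normal((4, 2)) for r in adj}   # one W_r per relation
h = rgcn_layer(adj, x, weights, rng.standard_normal((4, 2)))
print(h.shape)
```

Keeping a separate weight matrix per relation lets the model learn that, say, a “pays” edge should influence a node’s representation differently from a “follows” edge.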

RIG’s Dynamic Trust™ is an important technique for trust evaluation. In principle, Dynamic Trust™ is built upon a heterogeneous multi-agent network and dynamically predicts the trust level of every agent or device. NIMBUS™ serves as an essential part of Dynamic Trust™, acting as a continually updated repository for agent-related data. In practice, Dynamic Trust™ has extensive applications, for example social network verification, fraud detection, and supply chain management. A promising direction is therefore to apply heterogeneous GNNs to Dynamic Trust™: agents of different types can be modeled as heterogeneous nodes in the graph, their data stored in NIMBUS™ can serve as node features, and interaction data between agents can serve as edge features. In this way, we can not only realize dynamic updates of the GNN model but also improve the accuracy of the generated trust levels, yielding a more secure and efficient AI-powered trust evaluation system.
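To make the proposed mapping concrete, the sketch below shows one way such agent and interaction data could be organized as a heterogeneous graph. All type and field names here (`AgentNode`, `InteractionEdge`, etc.) are hypothetical illustrations, not part of the Dynamic Trust™ or NIMBUS™ APIs.

```python
from dataclasses import dataclass, field

@dataclass
class AgentNode:
    agent_id: str
    agent_type: str       # e.g. "user", "device", "service" -- heterogeneous node types
    features: list        # per-agent data pulled from a repository such as NIMBUS

@dataclass
class InteractionEdge:
    src: str
    dst: str
    edge_type: str        # e.g. "transacts_with", "authenticates"
    features: list        # interaction attributes (frequency, recency, ...)

@dataclass
class HeteroGraph:
    nodes: dict = field(default_factory=dict)   # agent_id -> AgentNode
    edges: list = field(default_factory=list)

    def add_node(self, node):
        self.nodes[node.agent_id] = node

    def add_edge(self, edge):
        self.edges.append(edge)

g = HeteroGraph()
g.add_node(AgentNode("u1", "user", [0.2, 0.9]))
g.add_node(AgentNode("d1", "device", [1.0, 0.0]))
g.add_edge(InteractionEdge("u1", "d1", "authenticates", [5, 0.3]))
print(len(g.nodes), len(g.edges))
```

A graph built this way could then be fed to a heterogeneous GNN such as the R-GCN-style layer sketched earlier, with one weight matrix per `edge_type`.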

In summary, beyond GNNs there are many more AI/ML models whose advantages we can study and leverage to enhance Dynamic Trust™, thanks to its great compatibility. In the future, Dynamic Trust™ can therefore be applied to many more areas and deliver significantly more efficient and accurate results.

References:

[1] Scarselli, Franco, et al. “The graph neural network model.” *IEEE Transactions on Neural Networks* 20.1 (2008): 61-80.

[2] Zhou, Jie, et al. “Graph neural networks: A review of methods and applications.” *AI Open* 1 (2020): 57-81.

[3] Kipf, Thomas N., and Max Welling. “Semi-supervised classification with graph convolutional networks.” *arXiv preprint* arXiv:1609.02907 (2016).

[4] Hong, S. (2021, June 28). An introduction to graph neural network (GNN) for analysing structured data. Medium. https://towardsdatascience.com/an-introduction-to-graph-neural-network-gnn-for-analysing-structured-data-afce79f4cfdc.

[5] Borgwardt, Karsten M., et al. “Protein function prediction via graph kernels.” *Bioinformatics* 21.suppl_1 (2005): i47-i56.

[6] Yang, Carl, et al. “Heterogeneous network representation learning: A unified framework with survey and benchmark.” *IEEE Transactions on Knowledge and Data Engineering* (2020).

[7] Schlichtkrull, Michael, et al. “Modeling relational data with graph convolutional networks.” *European semantic web conference*. Springer, Cham, 2018.

[8] Wang, Xiao, et al. “Heterogeneous graph attention network.” *The World Wide Web Conference*. 2019.