Learning about Graph Network

Recently, under the guidance of my teacher Zhang Wei, I came into contact with graph networks. In 2018, the paper "Relational inductive biases, deep learning, and graph networks", co-authored by 27 scientists from DeepMind, Google Brain, MIT, and the University of Edinburgh, drew wide attention.

Although I have been studying machine learning for a long time, I still feel like a layman, so I am reading this paper with admiration.

The abstract is the soul of a paper. From the abstract, we learn that reaching human-like abilities remains very difficult in the development of artificial intelligence. The masters from DeepMind and the other institutions give their thoughts and results: ”We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between “hand-engineering” and “end-to-end” learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias—the graph network—which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. As a companion to this paper, we have also released an open-source software library for building graph networks, with demonstrations of how to use them in practice.”

Next comes the authors' introduction to the topic of graph networks. It feels very philosophical, so I can only try to think about the problem simply. In fact, "relational inductive biases" appear inadvertently throughout machine learning. The paper says that the implicit relational inductive bias in a fully connected layer is very weak, while a convolutional layer imposes two important relational inductive biases: locality and translation invariance. In a recurrent layer, the rule for combining entities takes a step's inputs and hidden state as arguments to update the hidden state. Sets are a natural representation for systems described by entities whose order is undefined or irrelevant; in particular, their relational inductive bias comes not from the presence of something, but from its absence. In general, graphs support arbitrary (pairwise) relational structures. In the authors' words, computations over graphs confer a strong relational inductive bias beyond what convolutional and recurrent layers provide.
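To make "locality and translation invariance" concrete, here is a small sketch comparing a fully connected layer with a 1-D convolution. The function `conv1d_valid` is my own toy implementation for illustration, not part of the paper or its library:

```python
import numpy as np

# A fully connected layer relating 8 inputs to 8 outputs needs an
# independent weight for every (input, output) pair: 8 * 8 = 64 weights,
# and no assumption about which inputs are related.
dense_w = np.random.randn(8, 8)

# A 1-D convolution with a kernel of size 3 reuses the same 3 weights at
# every position: locality (each output sees only 3 neighbours) and
# translation invariance (the same rule applies everywhere).
kernel = np.array([0.25, 0.5, 0.25])

def conv1d_valid(x, k):
    """Slide the kernel over x, producing len(x) - len(k) + 1 outputs."""
    n = len(k)
    return np.array([x[i:i + n] @ k for i in range(len(x) - n + 1)])

x = np.arange(8.0)
dense_out = dense_w @ x              # every output depends on every input
conv_out = conv1d_valid(x, kernel)   # each output depends on 3 neighbours

print(dense_w.size, kernel.size)     # 64 vs 3 parameters
print(conv_out)                      # [1. 2. 3. 4. 5. 6.]
```

The point of the comparison is that the convolution's weight sharing is exactly a relational inductive bias: it hard-codes the assumption that nearby entities are related in the same way everywhere.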

The main computational unit in the GN framework is the GN block, whose input and output are both graphs. GN continues to use the concept of a 3-tuple: a graph is defined as (u, V, E), a global attribute, a set of nodes, and a set of edges. The steps of computation in a full GN block are as follows:

[Figure: steps of computation in a full GN block]

The specific calculation process is not covered here.
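Even so, the order of the updates can be sketched in a few lines: update every edge, aggregate a node's incoming edges and update the node, then aggregate everything and update the global attribute. This is a minimal sketch following the paper's computation order; the function names `phi_e`, `phi_v`, `phi_u` and the choice of summation as the aggregator are my own illustrative assumptions, not a definitive implementation:

```python
import numpy as np

def gn_block(nodes, edges, senders, receivers, u, phi_e, phi_v, phi_u):
    """One pass of a GN block over a graph (u, V, E).

    nodes:     (n_nodes, d) array of node attributes
    edges:     (n_edges, d) array of edge attributes
    senders:   sender node index of each edge
    receivers: receiver node index of each edge
    u:         (d,) global attribute
    Aggregations use an elementwise sum, one common choice.
    """
    # 1. Update every edge from its attribute, its endpoint nodes, and u.
    new_edges = np.stack([
        phi_e(edges[k], nodes[senders[k]], nodes[receivers[k]], u)
        for k in range(len(edges))
    ])
    # 2./3. For each node, aggregate its incoming updated edges, then update it.
    new_nodes = []
    for i in range(len(nodes)):
        incoming = new_edges[np.asarray(receivers) == i]
        e_bar_i = incoming.sum(axis=0) if len(incoming) else np.zeros_like(new_edges[0])
        new_nodes.append(phi_v(e_bar_i, nodes[i], u))
    new_nodes = np.stack(new_nodes)
    # 4./5./6. Aggregate all edges and all nodes, then update the global attribute.
    new_u = phi_u(new_edges.sum(axis=0), new_nodes.sum(axis=0), u)
    return new_nodes, new_edges, new_u

# Toy usage: two nodes, one edge from node 0 to node 1, and simple
# additive update functions (again, purely illustrative choices).
phi_e = lambda e, v_s, v_r, u: e + v_s + v_r + u
phi_v = lambda e_bar, v, u: v + e_bar
phi_u = lambda e_bar, v_bar, u: u + e_bar + v_bar

nodes = np.array([[1., 0.], [0., 1.]])
edges = np.array([[1., 1.]])
v2, e2, u2 = gn_block(nodes, edges, [0], [1], np.zeros(2),
                      phi_e, phi_v, phi_u)
# e2 -> [[2. 2.]]; v2 -> [[1. 0.], [2. 3.]]; u2 -> [5. 5.]
```

Note how the information flows upward: edges see nodes and the global, nodes see aggregated edges, and the global sees everything, which is why the output is again a full graph.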

GN uses graphs as both input and output, which makes the data representation far more expressive. According to the paper, the update functions inside a GN block can be implemented with machine learning models such as neural networks. Many existing architectures, such as MPNN (message-passing neural networks) and NLNN (non-local neural networks), can be expressed as variants or special cases of graph networks.
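As a quick illustration of how a variant fits the framework, an MPNN-style message-passing step is essentially a GN block with no global attribute: the message function plays the role of the edge update. This standalone sketch uses my own illustrative function names, not the paper's notation:

```python
import numpy as np

def mpnn_step(nodes, senders, receivers, message_fn, update_fn):
    """One MPNN-style step: compute a message per edge, sum the messages
    arriving at each node, then update every node. No global attribute."""
    messages = [message_fn(nodes[s], nodes[r])
                for s, r in zip(senders, receivers)]
    new_nodes = []
    for i in range(len(nodes)):
        # Sum of incoming messages (0 if the node receives none).
        m = sum(msg for msg, r in zip(messages, receivers) if r == i)
        new_nodes.append(update_fn(nodes[i], m))
    return np.stack(new_nodes)

# Toy usage: one edge from node 0 to node 1; the message is just the
# sender's attribute, and a node adds what it receives.
nodes = np.array([[1.], [2.]])
out = mpnn_step(nodes, senders=[0], receivers=[1],
                message_fn=lambda v_s, v_r: v_s,
                update_fn=lambda v, m: v + m)
# out -> [[1.], [3.]]: node 1 has absorbed node 0's message
```

Dropping the global attribute and the edge outputs is exactly the kind of specialization the paper uses to show that earlier graph architectures are instances of the GN block.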

Ah, while reading the paper I got absorbed and forgot to take notes. In short, when I was dizzy at the end of the reading, an idea suddenly came to me: a graph network is built on top of a network, taking as input a graph that combines nodes and edges. Of course, after such combination, training, and inference, the result must also be a graph. As far as I can tell, it does not easily degenerate into a simpler structure.

For the time being, I understand a graph network as a collection of collections, but it should give rise to many more wonderful algorithms and applications. Go to the authors' GitHub and get started!