In mathematics and computer science, a finite graph is often represented by its adjacency matrix. The adjacency matrix of a directed or undirected graph (with n vertices, say) is the n-by-n matrix whose entry in row i and column j gives the number of edges from the i-th to the j-th vertex. (An alternative way to store graphs in computers, especially graphs with few edges, is given by the incidence matrix.)
The modified adjacency matrix is generated by replacing all entries greater than 1 in the adjacency matrix by 1.
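As a rough illustration of both definitions (the small multigraph below is hypothetical, not taken from this article), the adjacency matrix can be built from an edge list and then capped at 1 to obtain the modified adjacency matrix; a sketch in Python:

    import numpy as np

    # Hypothetical directed multigraph on 3 vertices; the edge (1, 2) occurs twice.
    edges = [(0, 1), (1, 2), (1, 2), (2, 0)]
    n = 3

    A = np.zeros((n, n), dtype=int)
    for i, j in edges:
        A[i, j] += 1              # entry (i, j) counts the edges from vertex i to vertex j

    modified = np.minimum(A, 1)   # replace every entry greater than 1 by 1

    print(A)         # rows: [0 1 0], [0 0 2], [1 0 0]
    print(modified)  # rows: [0 1 0], [0 0 1], [1 0 0]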
Examples

[Figure: the example graph and its adjacency matrix (not reproduced here)]
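As a hypothetical stand-in (this small graph is not the one from the original figure): the undirected graph on the vertices 1, 2, 3 and 4 with edges {1,2}, {1,3}, {2,3} and {3,4} has the adjacency matrix

    0 1 1 0
    1 0 1 0
    1 1 0 1
    0 0 1 0

For instance, the entry in row 3 and column 4 is 1 because there is exactly one edge between vertices 3 and 4.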
Properties

The adjacency matrix of undirected graphs is always symmetric.
Suppose two directed or undirected graphs G1 and G2 with adjacency matrices A1 and A2 are given. G1 and G2 are isomorphic if and only if there exists a permutation matrix P such that
P A1 P^−1 = A2.
In particular, A1 and A2 are similar and therefore have the same minimal polynomial, characteristic polynomial, eigenvalues, determinant and trace. These quantities can thus serve as isomorphism invariants of graphs.
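As a sketch of how this looks in practice (the graph, the relabelling and all variable names below are made up for illustration), one can verify a concrete relabelling via the permutation-matrix identity above and compare spectra as an isomorphism invariant:

    import numpy as np

    # Hypothetical undirected graph A1 on 3 vertices; A2 is the same graph with the
    # vertices relabelled by the permutation 0 -> 1, 1 -> 2, 2 -> 0.
    A1 = np.array([[0, 1, 0],
                   [1, 0, 1],
                   [0, 1, 0]])
    perm = [1, 2, 0]
    P = np.eye(3, dtype=int)[perm]       # permutation matrix of the relabelling
    A2 = P @ A1 @ P.T                    # P A1 P^-1, since P^-1 = P^T for a permutation matrix

    # The permutation matrix witnesses the isomorphism:
    print(np.array_equal(P @ A1 @ P.T, A2))                             # True

    # Isomorphic graphs have the same spectrum (eigenvalues with multiplicities):
    print(np.allclose(np.linalg.eigvalsh(A1), np.linalg.eigvalsh(A2)))  # True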
If A is the adjacency matrix of the directed or undirected graph G, then the matrix A^n (i.e. the matrix product of n copies of A) has an interesting interpretation: the entry in row i and column j gives the number of (directed or undirected) walks of length n from vertex i to vertex j (a walk, unlike a path, may revisit vertices and edges).
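A minimal sketch of this interpretation (the graph and the chosen length are arbitrary): the (i, j) entry of the n-th matrix power can be checked against a brute-force enumeration of walks:

    import numpy as np
    from itertools import product

    # Hypothetical directed graph on 3 vertices.
    A = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]])
    length = 4

    # Entry (i, j) of A^length counts the walks of length `length` from i to j.
    power = np.linalg.matrix_power(A, length)

    # Brute force: count vertex sequences from 0 to 2 that follow edges of the graph.
    walks = sum(
        all(A[s[k], s[k + 1]] for k in range(length))
        for s in product(range(3), repeat=length + 1)
        if s[0] == 0 and s[-1] == 2
    )
    print(power[0, 2], walks)   # both equal 2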
If the directed graph G has no directed cycles (that is, if G is a directed acyclic graph), then its adjacency matrix A is nilpotent and the matrix I−A (where I denotes the n-by-n identity matrix) is invertible. In this case the inverse (I−A)^−1 has the following interpretation: the entry in row i and column j gives the number of directed paths from vertex i to vertex j (which is always finite, because there are no directed cycles). This can be understood using the geometric series for matrices:

(I−A)^−1 = I + A + A^2 + A^3 + ...

The series has only finitely many nonzero terms, because A^k = 0 for all k ≥ n when G is acyclic, and the entry in row i and column j of A^k counts the directed paths of length k from vertex i to vertex j.
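A minimal sketch, assuming a small made-up directed acyclic graph: because A is nilpotent, the geometric series is a finite sum and matches the matrix inverse:

    import numpy as np

    # Hypothetical directed acyclic graph on 4 vertices (all edges go from a lower
    # to a higher index, so there are no directed cycles and A is nilpotent).
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]])
    n = A.shape[0]
    I = np.eye(n, dtype=int)

    # Finite geometric series I + A + A^2 + ... (A^k = 0 for every k >= n).
    series = sum(np.linalg.matrix_power(A, k) for k in range(n))

    # Entry (i, j) of (I - A)^-1 counts the directed paths from vertex i to vertex j.
    inverse = np.linalg.inv(I - A)

    print(series)                          # e.g. the (0, 3) entry is 3: three paths from 0 to 3
    print(np.round(inverse).astype(int))   # identical to the series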