Inferring network structure

graph-tool includes algorithms to identify the large-scale structure of networks in the inference submodule. Here we explain the basic functionality with self-contained examples.

Background: Nonparametric statistical inference

A common task when analyzing networks is to characterize their structures in simple terms, often by dividing the nodes into modules or “communities”.

A principled approach to perform this task is to formulate generative models that include the idea of “modules” in their descriptions, which then can be detected by inferring the model parameters from data. More precisely, given the partition \(\boldsymbol b = \{b_i\}\) of the network into \(B\) groups, where \(b_i\in[0,B-1]\) is the group membership of node \(i\), we define a model that generates a network \(\boldsymbol G\) with a probability

(1)\[P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)\]

where \(\boldsymbol\theta\) are additional model parameters. Therefore, if we observe a network \(\boldsymbol G\), the probability that it was generated by a given partition \(\boldsymbol b\) is obtained via the Bayesian posterior

(2)\[P(\boldsymbol b | \boldsymbol G) = \frac{\sum_{\boldsymbol\theta}P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)}{P(\boldsymbol G)}\]

where \(P(\boldsymbol\theta, \boldsymbol b)\) is the prior probability of the model parameters, and

(3)\[P(\boldsymbol G) = \sum_{\boldsymbol\theta,\boldsymbol b}P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)\]

is called the model evidence. The particular types of model that will be considered here have “hard constraints”, such that there is only one choice of the remaining parameters \(\boldsymbol\theta\) that is compatible with the generated network, so that Eq. (2) simplifies to

(4)\[P(\boldsymbol b | \boldsymbol G) = \frac{P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)}{P(\boldsymbol G)}\]

with \(\boldsymbol\theta\) above being the only choice compatible with \(\boldsymbol G\) and \(\boldsymbol b\). The inference procedures considered here will consist in either finding a network partition that maximizes Eq. (4), or sampling different partitions according to its posterior probability.

As we will show below, this approach will also enable the comparison of different models according to statistical evidence (a.k.a. model selection).

Minimum description length (MDL)

We note that Eq. (4) can be written as

\[P(\boldsymbol b | \boldsymbol G) = \frac{\exp(-\Sigma)}{P(\boldsymbol G)}\]

where

(5)\[\Sigma = -\ln P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b) - \ln P(\boldsymbol\theta, \boldsymbol b)\]

is called the description length of the network \(\boldsymbol G\). It measures the amount of information required to describe the data, if we encode it using the particular parametrization of the generative model given by \(\boldsymbol\theta\) and \(\boldsymbol b\), as well as the parameters themselves. Therefore, maximizing the posterior probability of Eq. (4) is fully equivalent to minimizing Eq. (5), i.e. the so-called minimum description length method. This approach corresponds to an implementation of Occam’s razor, where the simplest model is selected among all possibilities with the same explanatory power. The selection is based on the statistical evidence available, and therefore will not overfit, i.e. mistake stochastic fluctuations for actual structure.

The stochastic block model (SBM)

The stochastic block model is arguably the simplest generative process based on the notion of groups of nodes [holland-stochastic-1983]. The microcanonical formulation [peixoto-nonparametric-2016] of the basic or “traditional” version takes as parameters the partition of the nodes into groups \(\boldsymbol b\) and a \(B\times B\) matrix of edge counts \(\boldsymbol e\), where \(e_{rs}\) is the number of edges between groups \(r\) and \(s\). Given these constraints, the edges are then placed randomly. Hence, nodes that belong to the same group possess the same probability of being connected with other nodes of the network.

An example of a possible parametrization is given in the following figure.

../../_images/sbm-example-ers.svg

Matrix of edge counts \(\boldsymbol e\) between groups.

../../_images/sbm-example.svg

Generated network.
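For concreteness, the following minimal sketch shows how a network could be sampled from such a parametrization. It assumes a recent graph-tool version that provides the generate_sbm() function in the generation module (older versions may lack it), as well as numpy imported as np:

b = np.random.randint(0, 3, 300)             # random partition of N=300 nodes into B=3 groups
ers = np.array([[200,  20,  10],
                [ 20, 180,  30],             # matrix of edge counts e_rs between groups
                [ 10,  30, 160]])
u = gt.generate_sbm(b, ers, micro_ers=True)  # micro_ers=True requests the counts e_rs exactly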

Note

We emphasize that no constraints are imposed on what kind of modular structure is allowed. Hence, we can detect the putatively typical pattern of “community structure”, i.e. when nodes are connected mostly to other nodes of the same group, if it happens to be the most likely network description, but we can also detect a large multiplicity of other patterns, such as bipartiteness, core-periphery, and many others, all under the same inference framework.

Although quite general, the traditional model assumes that the edges are placed randomly inside each group, and as such the nodes that belong to the same group have very similar degrees. As it turns out, this is often a poor model for many networks, which possess highly heterogeneous degree distributions. A better model for such networks is the degree-corrected stochastic block model [karrer-stochastic-2011], which is defined just like the traditional model, but with the degree sequence \(\boldsymbol k = \{k_i\}\) of the graph as an additional set of parameters (assuming again a microcanonical formulation [peixoto-nonparametric-2016]).

The nested stochastic block model

The regular SBM has a drawback when applied to very large networks. Namely, it cannot be used to find relatively small groups in very large networks: The maximum number of groups that can be found scales as \(B_{\text{max}}\sim\sqrt{N}\), where \(N\) is the number of nodes in the network, if Bayesian inference is performed [peixoto-parsimonious-2013]. In order to circumvent this, we need to replace the noninformative priors used above with a hierarchy of priors and hyperpriors, which amounts to a nested SBM, where the groups themselves are clustered into groups, and the matrix \(\boldsymbol e\) of edge counts is itself generated by another SBM, and so on recursively [peixoto-hierarchical-2014].

../../_images/nested-diagram.svg

Example of a nested SBM with three levels.

In addition to being able to find small groups in large networks, this model also provides a multilevel hierarchical description of the network, which describes its structure at multiple scales.

Inferring the best partition

The simplest and most efficient approach is to find the best partition of the network by maximizing Eq. (4) according to some version of the model. This is obtained via the functions minimize_blockmodel_dl() or minimize_nested_blockmodel_dl(), which employ an agglomerative multilevel Markov chain Monte Carlo (MCMC) algorithm [peixoto-efficient-2014].

We focus first on the non-nested model, and we illustrate its use with a network of American football teams, which we load from the collection module:

g = gt.collection.data["football"]
print(g)

which yields

<Graph object, undirected, with 115 vertices and 613 edges at 0x...>

We then fit the traditional model by calling

state = gt.minimize_blockmodel_dl(g, deg_corr=False)

This returns a BlockState object that includes the inference results.

Note

The inference algorithm used is stochastic by nature, and may return a slightly different answer each time it is run. This may be due to the fact that there are alternative partitions with similar likelihoods, or that the optimum is difficult to find. Note that the inference problem here is, in general, NP-hard, hence there is no known efficient algorithm that is guaranteed to always find the best answer.

Because of this, typically one would call the algorithm many times, and select the partition with the largest posterior likelihood of Eq. (4), or equivalently, the minimum description length of Eq. (5). The description length of a fit can be obtained with the entropy() method. See also Hierarchical partitions below.
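For example, a minimal sketch of this strategy:

state = gt.minimize_blockmodel_dl(g, deg_corr=False)
for i in range(9):                        # perform 10 fits in total
    s = gt.minimize_blockmodel_dl(g, deg_corr=False)
    if s.entropy() < state.entropy():     # keep the smallest description length
        state = s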

We can visualize the obtained partition via the draw method, which functions as a convenience wrapper to the graph_draw() function

state.draw(pos=g.vp.pos, output="football-sbm-fit.svg")

which yields the following image.

../../_images/football-sbm-fit.svg

Stochastic block model inference of a network of American college football teams. The colors correspond to inferred group membership of the nodes.

We can obtain the group memberships as a PropertyMap on the vertices via the get_blocks method:

b = state.get_blocks()
r = b[10]   # group membership of vertex 10
print(r)

which yields:

3
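Since b is an ordinary vertex property map, we can also inspect it in bulk. For instance, a minimal sketch (assuming numpy imported as np, as in later examples) that counts the number of nodes in each group:

sizes = np.bincount(b.a)   # b.a exposes the property map values as a numpy array
print(sizes)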

We may also access the matrix of edge counts between groups via get_matrix

e = state.get_matrix()

# matshow() and savefig() below are assumed to come from matplotlib,
# e.g. via "from matplotlib.pyplot import *"
matshow(e.todense())
savefig("football-edge-counts.svg")
../../_images/football-edge-counts.svg

Matrix of edge counts between groups.

We may obtain the same matrix of edge counts as a graph, which has internal edge and vertex property maps with the edge and vertex counts, respectively:

bg = state.get_bg()
ers = bg.ep.count    # edge counts
nr = bg.vp.count     # node counts
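For instance, we can list the number of edges between every pair of connected groups by iterating over the edges of the block graph:

for e in bg.edges():
    print(e.source(), e.target(), ers[e])   # group pair and edge count between them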

Hierarchical partitions

The inference of the nested family of SBMs is done in a similar manner, but we must use instead the minimize_nested_blockmodel_dl() function. We illustrate its use with the neural network of the C. elegans worm:

g = gt.collection.data["celegansneural"]
print(g)

which yields

<Graph object, directed, with 297 vertices and 2359 edges at 0x...>

A hierarchical fit of the degree-corrected model is performed as follows.

state = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

The object returned is an instance of a NestedBlockState class, which encapsulates the results. We can again draw the resulting hierarchical clustering using the draw() method:

state.draw(output="celegans-hsbm-fit.svg")
../../_images/celegans-hsbm-fit.svg

Most likely hierarchical partition of the neural network of the C. elegans worm according to the nested degree-corrected SBM.

Note

If the output parameter to draw() is omitted, an interactive visualization is performed, where the user can re-order the hierarchy nodes using the mouse and pressing the r key.

A summary of the inferred hierarchy can be obtained with the print_summary() method, which shows the number of nodes and groups in all levels:

state.print_summary()
l: 0, N: 297, B: 13
l: 1, N: 13, B: 5
l: 2, N: 5, B: 2
l: 3, N: 2, B: 1

The hierarchical levels themselves are represented by individual BlockState instances obtained via the get_levels() method:

levels = state.get_levels()
for s in levels:
    print(s)
<BlockState object with 13 blocks (13 nonempty), degree-corrected, for graph <Graph object, directed, with 297 vertices and 2359 edges at 0x...>, at 0x...>
<BlockState object with 5 blocks (5 nonempty), for graph <Graph object, directed, with 13 vertices and 105 edges at 0x...>, at 0x...>
<BlockState object with 2 blocks (2 nonempty), for graph <Graph object, directed, with 5 vertices and 21 edges at 0x...>, at 0x...>
<BlockState object with 1 blocks (1 nonempty), for graph <Graph object, directed, with 2 vertices and 4 edges at 0x...>, at 0x...>

This means that we can inspect the hierarchical partition just as before:

r = levels[0].get_blocks()[46]    # group membership of node 46 in level 0
print(r)
r = levels[1].get_blocks()[r]     # group membership of node 46 in level 1
print(r)
r = levels[2].get_blocks()[r]     # group membership of node 46 in level 2
print(r)
2
1
0
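Equivalently, the membership of a node can be propagated up the hierarchy with a simple loop:

r = 46
for s in levels:
    r = s.get_blocks()[r]   # the group at this level is the node index at the next
    print(r)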

Model selection

As mentioned above, one can select the best model according to the choice that yields the smallest description length. For instance, in the case of the C. elegans network we have

g = gt.collection.data["celegansneural"]

state_ndc = gt.minimize_nested_blockmodel_dl(g, deg_corr=False)
state_dc  = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

print("Non-degree-corrected DL:\t", state_ndc.entropy())
print("Degree-corrected DL:\t", state_dc.entropy())
Non-degree-corrected DL:      8507.97432099
Degree-corrected DL:  8228.11609772

Since it yields the smallest description length, the degree-corrected fit should be preferred. The statistical significance of the choice can be assessed by inspecting the posterior odds ratio [peixoto-nonparametric-2016]

\[\begin{split}\Lambda &= \frac{P(\boldsymbol b, \mathcal{H}_\text{NDC} | \boldsymbol G)}{P(\boldsymbol b, \mathcal{H}_\text{DC} | \boldsymbol G)} \\ &= \frac{P(\boldsymbol G, \boldsymbol b | \mathcal{H}_\text{NDC})}{P(\boldsymbol G, \boldsymbol b | \mathcal{H}_\text{DC})}\times\frac{P(\mathcal{H}_\text{NDC})}{P(\mathcal{H}_\text{DC})} \\ &= \exp(-\Delta\Sigma)\end{split}\]

where \(\mathcal{H}_\text{NDC}\) and \(\mathcal{H}_\text{DC}\) correspond to the non-degree-corrected and degree-corrected model hypotheses (assumed to be equally likely a priori), respectively, and \(\Delta\Sigma\) is the difference between the description lengths of both fits. In our particular case, we have

print(u"ln Λ: ", state_dc.entropy() - state_ndc.entropy())
ln Λ:  -279.858223272

The precise threshold that should be used to decide when to reject a hypothesis is subjective and context-dependent, but the value above implies that the particular degree-corrected fit is around \(e^{280} \sim 10^{121}\) times more likely than the non-degree-corrected one, and hence it can be safely concluded that it provides a substantially better fit.

Although it is often true that the degree-corrected model provides a better fit for many empirical networks, there are also exceptions. For example, for the American football network above, we have:

g = gt.collection.data["football"]

state_ndc = gt.minimize_nested_blockmodel_dl(g, deg_corr=False)
state_dc  = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

print("Non-degree-corrected DL:\t", state_ndc.entropy())
print("Degree-corrected DL:\t", state_dc.entropy())
print(u"ln Λ:\t\t\t", state_ndc.entropy() - state_dc.entropy())
Non-degree-corrected DL:      1749.51938237
Degree-corrected DL:  1780.57671694
ln Λ:                          31.0573345685

Hence, with a posterior odds ratio of \(\Lambda \sim e^{31} \sim 10^{13}\) in favor of the non-degree-corrected model, the degree-corrected variant seems to be an unnecessarily complex description for this network.

Averaging over models

When analyzing empirical networks, one should be open to the possibility that there will be more than one fit of the SBM with similar posterior probabilities. In such situations, one should sample partitions from the posterior distribution, instead of simply finding its maximum. One can then compute quantities that are averaged over the different model fits, weighted according to their posterior probabilities.

Full support for model averaging is implemented in graph-tool via an efficient Markov chain Monte Carlo (MCMC) algorithm [peixoto-efficient-2014]. It works by attempting to move nodes into different groups with specific probabilities, and accepting or rejecting such moves so that, after a sufficiently long time, the partitions will be observed with the desired posterior probability. The algorithm is designed so that its run-time is independent of the number of groups being used in the model, and hence it is suitable for use on very large networks.

In order to perform such moves, one needs again to operate with BlockState or NestedBlockState instances, and call their mcmc_sweep() methods. For example, the following will perform 1000 sweeps of the algorithm with the network of characters in the novel Les Misérables, starting from a random partition into 20 groups

g = gt.collection.data["lesmis"]

state = gt.BlockState(g, B=20)   # This automatically initializes the state
                                 # with a random partition into B=20
                                 # nonempty groups; the user could
                                 # also pass an arbitrary initial
                                 # partition using the 'b' parameter.

# If we work with the above state object, we will be restricted to
# partitions into at most B=20 groups. But since we want to consider
# an arbitrary number of groups in the range [1, N], we transform it
# into a state with B=N groups (where N-20 will be empty).

state = state.copy(B=g.num_vertices())

# Now we run 1,000 sweeps of the MCMC

dS, nmoves = state.mcmc_sweep(niter=1000)

print("Change in description length:", dS)
print("Number of accepted vertex moves:", nmoves)
Change in description length: -355.396342...
Number of accepted vertex moves: 4561

Note

Starting from a random partition is rarely the best option, since it may take a long time to equilibrate; it was done above simply as an illustration of how to initialize BlockState by hand. Instead, a much better option in practice is to start from the “ground state” obtained with minimize_blockmodel_dl(), e.g.

state = gt.minimize_blockmodel_dl(g)
state = state.copy(B=g.num_vertices())
dS, nmoves = state.mcmc_sweep(niter=1000)

print("Change in description length:", dS)
print("Number of accepted vertex moves:", nmoves)
Change in description length: 7.34234097...
Number of accepted vertex moves: 3939

Although the above is sufficient to implement model averaging, there is a convenience function called mcmc_equilibrate() that is intended to simplify the detection of equilibration, by keeping track of the maximum and minimum values of the description length encountered, as well as how many sweeps have been made without a “record breaking” event. For example,

# We will accept equilibration if 10 sweeps are completed without a
# record breaking event, 2 consecutive times.

gt.mcmc_equilibrate(state, wait=10, nbreaks=2, mcmc_args=dict(niter=10), verbose=True)

will output:

niter:     1  count:    0  breaks:  0  min_S: 709.95524  max_S: 726.36140  S: 726.36140  ΔS:      16.4062  moves:    57
niter:     2  count:    1  breaks:  0  min_S: 709.95524  max_S: 726.36140  S: 721.68682  ΔS:     -4.67459  moves:    67
niter:     3  count:    0  breaks:  0  min_S: 709.37313  max_S: 726.36140  S: 709.37313  ΔS:     -12.3137  moves:    47
niter:     4  count:    1  breaks:  0  min_S: 709.37313  max_S: 726.36140  S: 711.61100  ΔS:      2.23787  moves:    57
niter:     5  count:    2  breaks:  0  min_S: 709.37313  max_S: 726.36140  S: 716.08147  ΔS:      4.47047  moves:    28
niter:     6  count:    3  breaks:  0  min_S: 709.37313  max_S: 726.36140  S: 712.93940  ΔS:     -3.14207  moves:    47
niter:     7  count:    4  breaks:  0  min_S: 709.37313  max_S: 726.36140  S: 712.38780  ΔS:    -0.551596  moves:    46
niter:     8  count:    5  breaks:  0  min_S: 709.37313  max_S: 726.36140  S: 718.00449  ΔS:      5.61668  moves:    40
niter:     9  count:    0  breaks:  0  min_S: 709.37313  max_S: 731.89940  S: 731.89940  ΔS:      13.8949  moves:    50
niter:    10  count:    0  breaks:  0  min_S: 707.07048  max_S: 731.89940  S: 707.07048  ΔS:     -24.8289  moves:    45
niter:    11  count:    1  breaks:  0  min_S: 707.07048  max_S: 731.89940  S: 711.91030  ΔS:      4.83982  moves:    31
niter:    12  count:    2  breaks:  0  min_S: 707.07048  max_S: 731.89940  S: 726.56358  ΔS:      14.6533  moves:    56
niter:    13  count:    3  breaks:  0  min_S: 707.07048  max_S: 731.89940  S: 731.77165  ΔS:      5.20807  moves:    72
niter:    14  count:    4  breaks:  0  min_S: 707.07048  max_S: 731.89940  S: 707.08606  ΔS:     -24.6856  moves:    57
niter:    15  count:    0  breaks:  0  min_S: 707.07048  max_S: 735.85102  S: 735.85102  ΔS:      28.7650  moves:    65
niter:    16  count:    1  breaks:  0  min_S: 707.07048  max_S: 735.85102  S: 707.29116  ΔS:     -28.5599  moves:    43
niter:    17  count:    0  breaks:  0  min_S: 702.18860  max_S: 735.85102  S: 702.18860  ΔS:     -5.10256  moves:    39
niter:    18  count:    1  breaks:  0  min_S: 702.18860  max_S: 735.85102  S: 716.40444  ΔS:      14.2158  moves:    55
niter:    19  count:    2  breaks:  0  min_S: 702.18860  max_S: 735.85102  S: 703.51896  ΔS:     -12.8855  moves:    32
niter:    20  count:    3  breaks:  0  min_S: 702.18860  max_S: 735.85102  S: 714.30455  ΔS:      10.7856  moves:    34
niter:    21  count:    4  breaks:  0  min_S: 702.18860  max_S: 735.85102  S: 707.26722  ΔS:     -7.03733  moves:    25
niter:    22  count:    5  breaks:  0  min_S: 702.18860  max_S: 735.85102  S: 730.23976  ΔS:      22.9725  moves:    21
niter:    23  count:    6  breaks:  0  min_S: 702.18860  max_S: 735.85102  S: 730.56562  ΔS:     0.325858  moves:    59
niter:    24  count:    0  breaks:  0  min_S: 702.18860  max_S: 738.45136  S: 738.45136  ΔS:      7.88574  moves:    60
niter:    25  count:    0  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 740.29015  ΔS:      1.83879  moves:    88
niter:    26  count:    1  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 720.86367  ΔS:     -19.4265  moves:    68
niter:    27  count:    2  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 723.60308  ΔS:      2.73941  moves:    48
niter:    28  count:    3  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 732.81310  ΔS:      9.21002  moves:    44
niter:    29  count:    4  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 729.62283  ΔS:     -3.19028  moves:    62
niter:    30  count:    5  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 730.15676  ΔS:     0.533935  moves:    59
niter:    31  count:    6  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 728.27350  ΔS:     -1.88326  moves:    65
niter:    32  count:    7  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 732.19406  ΔS:      3.92056  moves:    57
niter:    33  count:    8  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 730.53906  ΔS:     -1.65500  moves:    72
niter:    34  count:    9  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 725.59638  ΔS:     -4.94268  moves:    72
niter:    35  count:    0  breaks:  1  min_S: 733.07687  max_S: 733.07687  S: 733.07687  ΔS:      7.48049  moves:    54
niter:    36  count:    0  breaks:  1  min_S: 728.56326  max_S: 733.07687  S: 728.56326  ΔS:     -4.51361  moves:    57
niter:    37  count:    0  breaks:  1  min_S: 728.56326  max_S: 755.55140  S: 755.55140  ΔS:      26.9881  moves:    83
niter:    38  count:    0  breaks:  1  min_S: 728.56326  max_S: 761.09434  S: 761.09434  ΔS:      5.54294  moves:    96
niter:    39  count:    0  breaks:  1  min_S: 713.60740  max_S: 761.09434  S: 713.60740  ΔS:     -47.4869  moves:    71
niter:    40  count:    1  breaks:  1  min_S: 713.60740  max_S: 761.09434  S: 713.98904  ΔS:     0.381637  moves:    67
niter:    41  count:    2  breaks:  1  min_S: 713.60740  max_S: 761.09434  S: 729.22460  ΔS:      15.2356  moves:    68
niter:    42  count:    3  breaks:  1  min_S: 713.60740  max_S: 761.09434  S: 724.70143  ΔS:     -4.52317  moves:    69
niter:    43  count:    0  breaks:  1  min_S: 703.51896  max_S: 761.09434  S: 703.51896  ΔS:     -21.1825  moves:    40
niter:    44  count:    0  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 702.85027  ΔS:    -0.668696  moves:    33
niter:    45  count:    1  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 722.46508  ΔS:      19.6148  moves:    49
niter:    46  count:    2  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 714.77930  ΔS:     -7.68578  moves:    62
niter:    47  count:    3  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 722.04551  ΔS:      7.26621  moves:    55
niter:    48  count:    4  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 708.96879  ΔS:     -13.0767  moves:    37
niter:    49  count:    5  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 714.84009  ΔS:      5.87130  moves:    37
niter:    50  count:    6  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 718.28558  ΔS:      3.44549  moves:    55
niter:    51  count:    7  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 720.86398  ΔS:      2.57840  moves:    44
niter:    52  count:    8  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 710.93672  ΔS:     -9.92726  moves:    45
niter:    53  count:    9  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 735.06773  ΔS:      24.1310  moves:    28
niter:    54  count:   10  breaks:  2  min_S: 702.85027  max_S: 761.09434  S: 738.16756  ΔS:      3.09983  moves:   115

Note that the value of wait above was made purposefully low so that the output would not be overly long. The most appropriate value requires experimentation, but a typically good value is wait=1000.

The function mcmc_equilibrate() accepts a callback argument that takes an optional function to be invoked after each call to mcmc_sweep(). This function should accept a single parameter which will contain the actual BlockState instance. We will use this in the example below to collect the posterior vertex marginals, i.e. the posterior probability that a node belongs to a given group:

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))

pv = None

def collect_marginals(s):
    global pv
    pv = s.collect_vertex_marginals(pv)

# Now we collect the marginals for exactly 100,000 sweeps
gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_marginals)

# Now the node marginals are stored in property map pv. We can
# visualize them as pie charts on the nodes:
state.draw(pos=g.vp.pos, vertex_shape="pie", vertex_pie_fractions=pv,
           edge_gradient=None, output="lesmis-sbm-marginals.svg")
../../_images/lesmis-sbm-marginals.svg

Marginal probabilities of group memberships of the network of characters in the novel Les Misérables, according to the degree-corrected SBM. The pie fractions on the nodes correspond to the probability of being in the group associated with the respective color.
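From pv we can also extract point estimates. For instance, a minimal sketch that selects the most likely group of each node (assuming, as above, that pv accumulates per-node group counts):

b_max = g.new_vertex_property("int")
for v in g.vertices():
    b_max[v] = np.argmax(pv[v])   # group with the largest marginal count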

We can also obtain a marginal probability of the number of groups itself, as follows.

h = np.zeros(g.num_vertices() + 1)

def collect_num_groups(s):
    B = s.get_nonempty_B()
    h[B] += 1

# Now we collect the marginal distribution for exactly 100,000 sweeps
gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_num_groups)
../../_images/lesmis-B-posterior.svg

Marginal posterior likelihood of the number of nonempty groups for the network of characters in the novel Les Misérables, according to the degree-corrected SBM.
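The figure above can be obtained from h directly; a minimal sketch, assuming the same matplotlib (pylab) environment used for the earlier plots:

Bs = np.arange(len(h))
figure()
bar(Bs[h > 0], h[h > 0] / h.sum())   # normalize the collected counts
xlabel("$B$")
ylabel(r"$P(B|\boldsymbol G)$")
savefig("lesmis-B-posterior.svg")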

Hierarchical partitions

We can also perform model averaging using the nested SBM, which will give us a distribution over hierarchies. The whole procedure is fairly analogous, but now we make use of NestedBlockState instances.

Note

When using NestedBlockState instances to perform model averaging, they need to be constructed with the option sampling=True.

Here we perform the sampling of hierarchical partitions using the same network as above.

g = gt.collection.data["lesmis"]

state = gt.minimize_nested_blockmodel_dl(g) # Initialize the Markov
                                            # chain from the "ground
                                            # state"

# Before doing model averaging, we need to create a NestedBlockState
# by passing sampling = True.

# We also want to increase the maximum hierarchy depth to L = 10

# We can do both of the above by copying.

bs = state.get_bs()                     # Get hierarchical partition.
bs += [np.zeros(1)] * (10 - len(bs))    # Augment it to L = 10 with
                                        # single-group levels.

state = state.copy(bs=bs, sampling=True)

# Now we run 1000 sweeps of the MCMC

dS, nmoves = state.mcmc_sweep(niter=1000)

print("Change in description length:", dS)
print("Number of accepted vertex moves:", nmoves)
Change in description length: 6.222068...
Number of accepted vertex moves: 7615

Similarly to the non-nested case, we can use mcmc_equilibrate() to do most of the boring work, and we can now obtain vertex marginals on all hierarchical levels:

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))

pv = [None] * len(state.get_levels())

def collect_marginals(s):
    global pv
    pv = [sl.collect_vertex_marginals(pv[l]) for l, sl in enumerate(s.get_levels())]

# Now we collect the marginals for exactly 100,000 sweeps
gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_marginals)

# Now the node marginals for all levels are stored in property map
# list pv. We can visualize the first level as pie charts on the nodes:
state_0 = state.get_levels()[0]
state_0.draw(pos=g.vp.pos, vertex_shape="pie", vertex_pie_fractions=pv[0],
             edge_gradient=None, output="lesmis-nested-sbm-marginals.svg")
../../_images/lesmis-nested-sbm-marginals.svg

Marginal probabilities of group memberships of the network of characters in the novel Les Misérables, according to the nested degree-corrected SBM. The pie fractions on the nodes correspond to the probability of being in the group associated with the respective color.

We can also obtain a marginal probability of the number of groups itself, as follows.

h = [np.zeros(g.num_vertices() + 1) for s in state.get_levels()]

def collect_num_groups(s):
    for l, sl in enumerate(s.get_levels()):
        B = sl.get_nonempty_B()
        h[l][B] += 1

# Now we collect the marginal distribution for exactly 100,000 sweeps
gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_num_groups)
../../_images/lesmis-nested-B-posterior.svg

Marginal posterior likelihood of the number of nonempty groups \(B_l\) at each hierarchy level \(l\) for the network of characters in the novel Les Misérables, according to the nested degree-corrected SBM.

Below we obtain some hierarchical partitions sampled from the posterior distribution.

for i in range(10):
    state.mcmc_sweep(niter=1000)
    state.draw(output="lesmis-partition-sample-%i.svg" % i, empty_branches=False)
../../_images/lesmis-partition-sample-0.svg ../../_images/lesmis-partition-sample-1.svg ../../_images/lesmis-partition-sample-2.svg ../../_images/lesmis-partition-sample-3.svg ../../_images/lesmis-partition-sample-4.svg ../../_images/lesmis-partition-sample-5.svg ../../_images/lesmis-partition-sample-6.svg ../../_images/lesmis-partition-sample-7.svg ../../_images/lesmis-partition-sample-8.svg ../../_images/lesmis-partition-sample-9.svg

Model class selection

When averaging over partitions, we may be interested in evaluating which model class provides a better fit of the data, considering all possible parameter choices. This is done by evaluating the model evidence [peixoto-nonparametric-2016]

\[P(\boldsymbol G) = \sum_{\boldsymbol\theta,\boldsymbol b}P(\boldsymbol G,\boldsymbol\theta, \boldsymbol b) = \sum_{\boldsymbol b}P(\boldsymbol G,\boldsymbol b).\]

This quantity is analogous to a partition function in statistical physics, which we can write more conveniently as a negative free energy by taking its logarithm

(6)\[\begin{split}\ln P(\boldsymbol G) = \underbrace{\sum_{\boldsymbol b}q(\boldsymbol b)\ln P(\boldsymbol G,\boldsymbol b)}_{-\left<\Sigma\right>}\; \underbrace{- \sum_{\boldsymbol b}q(\boldsymbol b)\ln q(\boldsymbol b)}_{\mathcal{S}}\end{split}\]

where

\[q(\boldsymbol b) = \frac{P(\boldsymbol G,\boldsymbol b)}{\sum_{\boldsymbol b'}P(\boldsymbol G,\boldsymbol b')}\]

is the posterior probability of partition \(\boldsymbol b\). The first term of Eq. (6) (the “negative energy”) is minus the average description length \(\left<\Sigma\right>\), weighted according to the posterior distribution. The second term \(\mathcal{S}\) is the entropy of the posterior distribution, and measures, in a sense, the “quality of fit” of the model: If the posterior is very “peaked”, i.e. dominated by a single partition with a very large probability, the entropy will tend to zero. However, if there are many partitions with similar probabilities, meaning that there is no single partition that describes the network uniquely well, it will take a large value instead.

Since the MCMC algorithm samples partitions from the distribution \(q(\boldsymbol b)\), it can be used to compute \(\left<\Sigma\right>\) easily, simply by averaging the description length values encountered by sampling from the posterior distribution many times.

The computation of the posterior entropy \(\mathcal{S}\), however, is significantly more difficult, since it involves measuring the precise value of \(q(\boldsymbol b)\). A direct “brute force” computation of \(\mathcal{S}\) is implemented via collect_partition_histogram() and microstate_entropy(), however this is only feasible for very small networks. For larger networks, we are forced to perform approximations. The simplest is a “mean field” one, where we assume the posterior factorizes as

\[q(\boldsymbol b) \approx \prod_i{q_i(b_i)}\]

where

\[q_i(r) = P(b_i = r | \boldsymbol G)\]

is the marginal group membership distribution of node \(i\). This yields an entropy value given by

\[S \approx -\sum_i\sum_rq_i(r)\ln q_i(r).\]

This approximation should be seen as an upper bound, since any existing correlations between the nodes (which are ignored here) will yield smaller entropy values.
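This quantity can be computed by hand from collected vertex marginals, such as the property map pv obtained previously (the mf_entropy() function used below implements the same computation). A minimal sketch, assuming pv holds unnormalized per-node group counts:

S_mf = 0
for v in g.vertices():
    q = np.array(pv[v], dtype="float")
    q /= q.sum()                  # normalize the collected counts
    q = q[q > 0]                  # discard zeros to avoid log(0)
    S_mf -= np.sum(q * np.log(q))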

A more accurate assumption is called the Bethe approximation [mezard-information-2009], and takes into account the correlation between adjacent nodes in the network,

\[\begin{split}q(\boldsymbol b) \approx \prod_{i<j}q_{ij}(b_i,b_j)^{A_{ij}}\prod_iq_i(b_i)^{1-k_i}\end{split}\]

where \(A_{ij}\) is the adjacency matrix, \(k_i\) is the degree of node \(i\), and

\[q_{ij}(r, s) = P(b_i = r, b_j = s|\boldsymbol G)\]

is the joint group membership distribution of nodes \(i\) and \(j\) (a.k.a. the edge marginals). This yields an entropy value given by

\[\begin{split}S \approx -\sum_{i<j}A_{ij}\sum_{rs}q_{ij}(r,s)\ln q_{ij}(r,s) - \sum_i(1-k_i)\sum_rq_i(r)\ln q_i(r).\end{split}\]

Typically, this approximation yields smaller values than the mean field one, and is generally considered to be superior. However, formally, it depends on the graph being sufficiently locally “tree-like”, and the posterior being indeed strongly correlated with the adjacency matrix itself — two characteristics which do not hold in general. Although the approximation often gives reasonable results even when these conditions do not strictly hold, in some situations when they are strongly violated this approach can yield meaningless values, such as a negative entropy. Therefore, it is useful to compare both approaches whenever possible.

With these approximations, it is possible to estimate the full model evidence efficiently, as we show below, using collect_vertex_marginals(), collect_edge_marginals(), mf_entropy() and bethe_entropy().

g = gt.collection.data["lesmis"]

for deg_corr in [True, False]:
    state = gt.minimize_blockmodel_dl(g, deg_corr=deg_corr)     # Initialize the Markov
                                                                # chain from the "ground
                                                                # state"
    state = state.copy(B=g.num_vertices())

    dls = []         # description length history
    vm = None        # vertex marginals
    em = None        # edge marginals

    def collect_marginals(s):
        global vm, em
        vm = s.collect_vertex_marginals(vm)
        em = s.collect_edge_marginals(em)
        dls.append(s.entropy())

    # Now we collect the marginal distributions for exactly 200,000 sweeps
    gt.mcmc_equilibrate(state, force_niter=20000, mcmc_args=dict(niter=10),
                        callback=collect_marginals)

    S_mf = gt.mf_entropy(g, vm)
    S_bethe = gt.bethe_entropy(g, em)[0]
    L = -mean(dls)

    print("Model evidence for deg_corr = %s:" % deg_corr,
          L + S_mf, "(mean field),", L + S_bethe, "(Bethe)")
Model evidence for deg_corr = True: -575.864972067 (mean field), -802.39062289 (Bethe)
Model evidence for deg_corr = False: -584.307313493 (mean field), -707.827204203 (Bethe)

If we consider the more accurate approximation, the outcome shows a preference for the non-degree-corrected model.

When using the nested model, the approach is entirely analogous. The only difference now is that we have a hierarchical partition \(\{\boldsymbol b_l\}\) in the equations above, instead of simply \(\boldsymbol b\). In order to make the approach tractable, we assume the factorization

\[q(\{\boldsymbol b_l\}) \approx \prod_lq_l(\boldsymbol b_l)\]

where \(q_l(\boldsymbol b_l)\) is the marginal posterior for the partition at level \(l\). For \(q_0(\boldsymbol b_0)\) we may use again either the mean-field or Bethe approximations, but for \(l>0\) only the mean-field approximation is applicable, since the adjacency matrix of the higher levels is not constant. We show below the approach for the same network, using the nested model.

g = gt.collection.data["lesmis"]

L = 10

for deg_corr in [True, False]:
    state = gt.minimize_nested_blockmodel_dl(g, deg_corr=deg_corr)     # Initialize the Markov
                                                                       # chain from the "ground
                                                                       # state"
    bs = state.get_bs()                     # Get hierarchical partition.
    bs += [np.zeros(1)] * (L - len(bs))     # Augment it to L = 10 with
                                            # single-group levels.

    state = state.copy(bs=bs, sampling=True)

    dls = []                               # description length history
    vm = [None] * len(state.get_levels())  # vertex marginals
    em = None                              # edge marginals

    def collect_marginals(s):
        global vm, em
        levels = s.get_levels()
        vm = [sl.collect_vertex_marginals(vm[l]) for l, sl in enumerate(levels)]
        em = levels[0].collect_edge_marginals(em)
        dls.append(s.entropy())

    # Now we collect the marginal distributions for exactly 200,000 sweeps
    gt.mcmc_equilibrate(state, force_niter=20000, mcmc_args=dict(niter=10),
                        callback=collect_marginals)

    S_mf = [gt.mf_entropy(sl.g, vm[l]) for l, sl in enumerate(state.get_levels())]
    S_bethe = gt.bethe_entropy(g, em)[0]
    L_avg = -mean(dls)   # negative average description length; note that we avoid
                         # reusing the name L, which holds the hierarchy depth above

    print("Model evidence for deg_corr = %s:" % deg_corr,
          L_avg + sum(S_mf), "(mean field),", L_avg + S_bethe + sum(S_mf[1:]), "(Bethe)")
Model evidence for deg_corr = True: -346.618790006 (mean field), -601.313781849 (Bethe)
Model evidence for deg_corr = False: -374.614350884 (mean field), -563.256840699 (Bethe)

The results are similar: if we consider the most accurate approximation, the non-degree-corrected model possesses the largest evidence. Note also that we observe a better evidence for the nested models themselves, when compared to the evidence for the non-nested model, which is not surprising, since the non-nested model is a special case of the nested one.

Edge layers and covariates

In many situations, the edges of the network may possess discrete covariates on them, or they may be distributed in discrete “layers”. Extensions to the SBM may be defined for such data, and they can be inferred using the exact same interface shown above, except one should use the LayeredBlockState class, instead of BlockState. This class takes two additional parameters: the ec parameter, which must correspond to an edge PropertyMap with the layer/covariate values on the edges, and the Boolean layers parameter, which if True specifies a layered model, otherwise one with edge covariates.

If we use minimize_blockmodel_dl(), this can be achieved simply by passing the option layers=True as well as the appropriate value of state_args, which will be propagated to LayeredBlockState’s constructor.

For example, consider again the Les Misérables network, where we consider the number of co-appearances between characters as edge covariates.

g = gt.collection.data["lesmis"]

# Note the different meaning of the two 'layers' parameters below: The
# first enables the use of LayeredBlockState, and the second selects
# the 'edge covariates' version.

state = gt.minimize_blockmodel_dl(g, deg_corr=False, layers=True,
                                  state_args=dict(ec=g.ep.value, layers=False))

state.draw(pos=g.vp.pos, edge_color=g.ep.value, edge_gradient=None,
           output="lesmis-sbm-edge-cov.svg")
../../_images/lesmis-sbm-edge-cov.svg

Best fit of the non-degree-corrected SBM with edge covariates for the network of characters in the novel Les Misérables, using the number of co-appearances as edge covariates. The edge colors correspond to the edge covariates.

In the case of the nested model, we should still use the NestedBlockState class, but it must be initialized with the parameter base_type = LayeredBlockState. If we use minimize_nested_blockmodel_dl(), however, it works identically to the above:

state = gt.minimize_nested_blockmodel_dl(g, deg_corr=False, layers=True,
                                         state_args=dict(ec=g.ep.value, layers=False))

state.draw(eprops=dict(color=g.ep.value, gradient=None),
           output="lesmis-nested-sbm-edge-cov.svg")
../../_images/lesmis-nested-sbm-edge-cov.svg

Best fit of the nested non-degree-corrected SBM with edge covariates for the network of characters in the novel Les Misérables, using the number of co-appearances as edge covariates. The edge colors correspond to the edge covariates.

It is possible to perform model averaging of all layered variants exactly as for the regular SBMs shown above.
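For instance, a minimal sketch of equilibrating the layered state obtained above, relying on the fact (stated earlier) that LayeredBlockState exposes the same interface as BlockState:

state = state.copy(B=g.num_vertices())    # allow the number of groups to vary
gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))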

Predicting spurious and missing edges

An important application of generative models is to be able to generalize from observations and make predictions that go beyond what is seen in the data. This is particularly useful when the network we observe is incomplete, or contains errors, i.e. some of the edges are either missing or are outcomes of mistakes in measurement. In this situation, the fit we make of the observed network can help us predict missing or spurious edges in the network [clauset-hierarchical-2008] [guimera-missing-2009].

We do so by dividing the edges into two sets \(\boldsymbol G\) and \(\delta \boldsymbol G\), where the former corresponds to the observed network and the latter either to the missing or spurious edges. In the case of missing edges, we may compute the posterior of \(\delta \boldsymbol G\) as

(7)\[P(\delta \boldsymbol G | \boldsymbol G) = \frac{\sum_{\boldsymbol b}P(\boldsymbol G+\delta \boldsymbol G | \boldsymbol b)P(\boldsymbol b | \boldsymbol G)}{P_{\delta}(\boldsymbol G)}\]

where

\[P_{\delta}(\boldsymbol G) = \sum_{\delta \boldsymbol G}\sum_{\boldsymbol b}P(\boldsymbol G+\delta \boldsymbol G | \boldsymbol b)P(\boldsymbol b | \boldsymbol G)\]

is a normalization constant. Although the value of \(P_{\delta}(\boldsymbol G)\) is difficult to obtain in general (since we need to perform a sum over all possible spurious/missing edges), the numerator of Eq. (7) can be computed by sampling partitions from the posterior, and then inserting or deleting edges from the graph and computing the new likelihood. This means that we can easily compare alternative predictive hypotheses \(\{\delta \boldsymbol G_i\}\) via their likelihood ratios

\[\lambda_i = \frac{P(\delta \boldsymbol G_i | \boldsymbol G)}{\sum_j P(\delta \boldsymbol G_j | \boldsymbol G)} = \frac{\sum_{\boldsymbol b}P(\boldsymbol G+\delta \boldsymbol G_i | \boldsymbol b)P(\boldsymbol b | \boldsymbol G)} {\sum_j \sum_{\boldsymbol b}P(\boldsymbol G+\delta \boldsymbol G_j | \boldsymbol b)P(\boldsymbol b | \boldsymbol G)}\]

which do not depend on the value of \(P_{\delta}(\boldsymbol G)\).

The values \(P(\boldsymbol G+\delta \boldsymbol G | \boldsymbol b)\) can be computed with get_edges_prob(). Hence, we can compute spurious/missing edge probabilities just as if we were collecting marginal distributions when doing model averaging.

Below is an example of predicting the following two edges in the football network, using the nested model (for which we need to replace \(\boldsymbol b\) by \(\{\boldsymbol b_l\}\) in the equations above).

../../_images/football_missing.svg

Two non-existing edges in the football network (in red): \((101,102)\) in the middle, and \((17,56)\) in the upper right region of the figure.

g = gt.collection.data["football"]

missing_edges = [(101, 102), (17, 56)]

L = 10

state = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

bs = state.get_bs()                     # Get hierarchical partition.
bs += [np.zeros(1)] * (L - len(bs))     # Augment it to L = 10 with
                                        # single-group levels.

state = state.copy(bs=bs, sampling=True)

probs = ([], [])

def collect_edge_probs(s):
    p1 = s.get_edges_prob([missing_edges[0]], entropy_args=dict(partition_dl=False))
    p2 = s.get_edges_prob([missing_edges[1]], entropy_args=dict(partition_dl=False))
    probs[0].append(p1)
    probs[1].append(p2)

# Now we collect the probabilities for exactly 10,000 sweeps
gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10),
                    callback=collect_edge_probs)


def get_avg(p):
    # computes ln(mean(exp(p))) in a numerically stable way
    p = np.array(p)
    pmax = p.max()
    p -= pmax
    return pmax + log(exp(p).mean())

p1 = get_avg(probs[0])
p2 = get_avg(probs[1])

p_sum = get_avg([p1, p2]) + log(2)

l1 = p1 - p_sum
l2 = p2 - p_sum

print("likelihood-ratio for %s: %g" % (missing_edges[0], exp(l1)))
print("likelihood-ratio for %s: %g" % (missing_edges[1], exp(l2)))
likelihood-ratio for (101, 102): 0.372308
likelihood-ratio for (17, 56): 0.627692

From this we can conclude that edge \((17, 56)\) is around \(1.7\) times more likely than \((101, 102)\) to be the missing edge.

The prediction using the non-nested model can be performed in an entirely analogous fashion.
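For instance, a minimal sketch of the same estimation using a flat BlockState (with the same functions used in the nested example above):

state = gt.minimize_blockmodel_dl(g, deg_corr=True)
state = state.copy(B=g.num_vertices())

probs = ([], [])

def collect_edge_probs(s):
    probs[0].append(s.get_edges_prob([missing_edges[0]],
                                     entropy_args=dict(partition_dl=False)))
    probs[1].append(s.get_edges_prob([missing_edges[1]],
                                     entropy_args=dict(partition_dl=False)))

gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10),
                    callback=collect_edge_probs)

# The averaged probabilities and likelihood ratios then follow
# exactly as before, via get_avg().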

References

[holland-stochastic-1983] Paul W. Holland, Kathryn Blackmond Laskey, Samuel Leinhardt, “Stochastic blockmodels: First steps”, Social Networks 5(2), 109-137 (1983), DOI: 10.1016/0378-8733(83)90021-7
[karrer-stochastic-2011] Brian Karrer, M. E. J. Newman, “Stochastic blockmodels and community structure in networks”, Phys. Rev. E 83, 016107 (2011), DOI: 10.1103/PhysRevE.83.016107, arXiv: 1008.3926
[peixoto-nonparametric-2016] Tiago P. Peixoto, “Nonparametric Bayesian inference of the microcanonical stochastic block model”, arXiv: 1610.02703
[peixoto-parsimonious-2013] Tiago P. Peixoto, “Parsimonious module inference in large networks”, Phys. Rev. Lett. 110, 148701 (2013), DOI: 10.1103/PhysRevLett.110.148701, arXiv: 1212.4794
[peixoto-hierarchical-2014] Tiago P. Peixoto, “Hierarchical block structures and high-resolution model selection in large networks”, Phys. Rev. X 4, 011047 (2014), DOI: 10.1103/PhysRevX.4.011047, arXiv: 1310.4377
[peixoto-model-2016] Tiago P. Peixoto, “Model selection and hypothesis testing for large-scale network models with overlapping groups”, Phys. Rev. X 5, 011033 (2015), DOI: 10.1103/PhysRevX.5.011033, arXiv: 1409.3059
[peixoto-efficient-2014] Tiago P. Peixoto, “Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models”, Phys. Rev. E 89, 012804 (2014), DOI: 10.1103/PhysRevE.89.012804, arXiv: 1310.4378
[clauset-hierarchical-2008] Aaron Clauset, Cristopher Moore, M. E. J. Newman, “Hierarchical structure and the prediction of missing links in networks”, Nature 453, 98-101 (2008), DOI: 10.1038/nature06830
[guimera-missing-2009] Roger Guimerà, Marta Sales-Pardo, “Missing and spurious interactions and the reconstruction of complex networks”, PNAS 106(52) (2009), DOI: 10.1073/pnas.0908366106
[mezard-information-2009] Marc Mézard, Andrea Montanari, “Information, Physics, and Computation”, Oxford University Press (2009), DOI: 10.1093/acprof:oso/9780198570837.001.0001