Inferring modular network structure#

graph-tool includes algorithms to identify the large-scale structure of networks via statistical inference in the inference submodule. Here we explain the basic functionality with self-contained examples. For a more thorough theoretical introduction to the methods described here, the reader is referred to [peixoto-bayesian-2019].

See also [peixoto-descriptive-2021] and the corresponding blog post for an overall discussion about inferential approaches to structure identification in networks, and how they contrast with descriptive approaches.

Background: Nonparametric statistical inference#

A common task when analyzing networks is to characterize their structures in simple terms, often by dividing the nodes into modules or “communities”.

A principled approach to perform this task is to formulate generative models that include the idea of modules in their descriptions, which then can be detected by inferring the model parameters from data. More precisely, given the partition \(\boldsymbol b = \{b_i\}\) of the network into \(B\) groups, where \(b_i\in[0,B-1]\) is the group membership of node \(i\), we define a model that generates a network \(\boldsymbol A\) with a probability

(1)#\[P(\boldsymbol A|\boldsymbol\theta, \boldsymbol b)\]

where \(\boldsymbol\theta\) are additional model parameters that control how the node partition affects the structure of the network. Therefore, if we observe a network \(\boldsymbol A\), the likelihood that it was generated by a given partition \(\boldsymbol b\) is obtained via the Bayesian posterior probability

(2)#\[P(\boldsymbol b | \boldsymbol A) = \frac{\sum_{\boldsymbol\theta}P(\boldsymbol A|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)}{P(\boldsymbol A)}\]

where \(P(\boldsymbol\theta, \boldsymbol b)\) is the prior probability of the model parameters, and

(3)#\[P(\boldsymbol A) = \sum_{\boldsymbol\theta,\boldsymbol b}P(\boldsymbol A|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)\]

is called the evidence, and corresponds to the total probability of the data summed over all model parameters. The particular types of model that will be considered here have “hard constraints”, such that there is only one choice for the remaining parameters \(\boldsymbol\theta\) that is compatible with the generated network, which means Eq. (2) simplifies to

(4)#\[P(\boldsymbol b | \boldsymbol A) = \frac{P(\boldsymbol A|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)}{P(\boldsymbol A)}\]

with \(\boldsymbol\theta\) above being the only choice compatible with \(\boldsymbol A\) and \(\boldsymbol b\). The inference procedures considered here will consist in either finding a network partition that maximizes Eq. (4), or sampling different partitions according to its posterior probability.

As we will show below, this approach also enables the comparison of different models according to statistical evidence (a.k.a. model selection).

Minimum description length (MDL)#

We note that Eq. (4) can be written as

\[P(\boldsymbol b | \boldsymbol A) = \frac{\exp(-\Sigma)}{P(\boldsymbol A)}\]

where

(5)#\[\Sigma = -\ln P(\boldsymbol A|\boldsymbol\theta, \boldsymbol b) - \ln P(\boldsymbol\theta, \boldsymbol b)\]

is called the description length of the network \(\boldsymbol A\). It measures the amount of information required to describe the data, if we encode it using the particular parametrization of the generative model given by \(\boldsymbol\theta\) and \(\boldsymbol b\), as well as the parameters themselves. Therefore, maximizing the posterior distribution of Eq. (4) is fully equivalent to minimizing the description length of Eq. (5), i.e. to the so-called minimum description length method. This approach corresponds to an implementation of Occam’s razor, where the simplest model is selected among all possibilities with the same explanatory power. The selection is based on the statistical evidence available, and therefore will not overfit, i.e. mistake stochastic fluctuations for actual structure. In particular this means that we will not find modules in networks if they could have arisen simply because of stochastic fluctuations, as they do in fully random graphs [guimera-modularity-2004].
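In graph-tool, the description length of a given parametrization can be computed directly, since the entropy() method of the state objects introduced below returns the value of Eq. (5). A minimal sketch (the dataset and the single-group partition are arbitrary choices, for illustration only):

g = gt.collection.data["football"]
b = g.new_vp("int")              # all zeros: every node in a single group
state = gt.BlockState(g, b=b)
print(state.entropy())           # description length (in nats) of this fit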

The stochastic block model (SBM)#

The stochastic block model is arguably the simplest generative process based on the notion of groups of nodes [holland-stochastic-1983]. The microcanonical formulation [peixoto-nonparametric-2017] of the basic or “traditional” version takes as parameters the partition of the nodes into groups \(\boldsymbol b\) and a \(B\times B\) matrix of edge counts \(\boldsymbol e\), where \(e_{rs}\) is the number of edges between groups \(r\) and \(s\). Given these constraints, the edges are then placed randomly. Hence, nodes that belong to the same group possess the same probability of being connected with other nodes of the network.

An example of a possible parametrization is given in the following figure.

../../_images/sbm-example-ers.svg

Matrix of edge counts \(\boldsymbol e\) between groups.#

../../_images/sbm-example.svg

Generated network.#
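For concreteness, a network can be sampled from a given parametrization using generate_sbm(). The partition and the matrix of edge counts below are made up purely for illustration:

import numpy as np

N, B = 60, 3
b = np.random.randint(0, B, N)               # an arbitrary partition into B groups
ers = np.array([[40,  5,  2],
                [ 5, 30,  3],
                [ 2,  3, 20]])               # matrix of edge counts between groups
u = gt.generate_sbm(b, ers, micro_ers=True)  # sample a graph obeying these counts
print(u)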

Note

With the SBM, no constraints are imposed on what kind of modular structure is allowed, as the matrix of edge counts \(e\) is unconstrained. Hence, we can detect the putatively typical pattern of assortative “community structure”, i.e. when nodes are connected mostly to other nodes of the same group, if it happens to be the most likely network description, but we can also detect a large multiplicity of other patterns, such as bipartiteness, core-periphery, and many others, all under the same inference framework. If you are interested in searching exclusively for assortative structures, see Sec. Assortative community structure.

Although quite general, the traditional model assumes that the edges are placed randomly inside each group, and because of this the nodes that belong to the same group tend to have very similar degrees. As it turns out, this is often a poor model for many networks, which possess highly heterogeneous degree distributions. A better model for such networks is called the degree-corrected stochastic block model [karrer-stochastic-2011], and it is defined just like the traditional model, with the addition of the degree sequence \(\boldsymbol k = \{k_i\}\) of the graph as an additional set of parameters (assuming again a microcanonical formulation [peixoto-nonparametric-2017]).

The nested stochastic block model#

The regular SBM has a drawback when applied to large networks. Namely, it cannot be used to find relatively small groups, as the maximum number of groups that can be found scales as \(B_{\text{max}}=O(\sqrt{N})\), where \(N\) is the number of nodes in the network, if Bayesian inference is performed [peixoto-parsimonious-2013]. In order to circumvent this, we need to replace the noninformative priors used so far with a hierarchy of priors and hyperpriors, which amounts to a nested SBM, where the groups themselves are clustered into groups, and the matrix \(e\) of edge counts is itself generated by another SBM, and so on recursively [peixoto-hierarchical-2014], as illustrated below.

../../_images/nested-diagram.svg

Example of a nested SBM with three levels.#

With this model, the maximum number of groups that can be inferred scales as \(B_{\text{max}}=O(N/\log(N))\). In addition to being able to find small groups in large networks, this model also provides a multilevel hierarchical description of the network. With such a description, we can uncover structural patterns at multiple scales, representing different levels of coarse-graining.

Inferring the best partition#

The simplest and most efficient approach is to find the best partition of the network by maximizing Eq. (4) according to some version of the model. This is obtained via the functions minimize_blockmodel_dl() or minimize_nested_blockmodel_dl(), which employ an agglomerative multilevel Markov chain Monte Carlo (MCMC) algorithm [peixoto-efficient-2014].

We focus first on the non-nested model, and we illustrate its use with a network of American football teams, which we load from the collection module:

g = gt.collection.data["football"]
print(g)

which yields

<Graph object, undirected, with 115 vertices and 613 edges, 4 internal vertex properties, 2 internal graph properties, at 0x...>

We then fit the degree-corrected model by calling:

state = gt.minimize_blockmodel_dl(g)

This returns a BlockState object that includes the inference results.

Note

The inference algorithm used is stochastic by nature, and may return a different answer each time it is run. This may be due to the fact that there are alternative partitions with similar probabilities, or that the optimum is difficult to find. Note that the inference problem here is, in general, NP-Hard, hence there is no efficient algorithm that is guaranteed to always find the best answer.

Because of this, typically one would call the algorithm many times, and select the partition with the largest posterior probability of Eq. (4), or equivalently, the minimum description length of Eq. (5). The description length of a fit can be obtained with the entropy() method. See also Sec. Hierarchical partitions below.
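For instance, a minimal sketch of this procedure (the number of repetitions is arbitrary; in practice it should be as large as is computationally feasible):

# Run the agglomerative heuristic several times and keep the fit with
# the smallest description length.
states = [gt.minimize_blockmodel_dl(g) for i in range(10)]
state = min(states, key=lambda s: s.entropy())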

We can draw the obtained partition via the draw() method, which functions as a convenience wrapper to the graph_draw() function:

state.draw(pos=g.vp.pos, output="football-sbm-fit.svg")

which yields the following image.

../../_images/football-sbm-fit.svg

Stochastic block model inference of a network of American college football teams. The colors correspond to inferred group membership of the nodes.#

We can obtain the group memberships as a PropertyMap on the vertices via the get_blocks method:

b = state.get_blocks()
r = b[10]   # group membership of vertex 10
print(r)

which yields:

99

Note

For reasons of algorithmic efficiency, the group labels returned are not necessarily contiguous, and they may lie in any subset of the range \([0, N-1]\), where \(N\) is the number of nodes in the network.

We may also access the matrix of edge counts between groups via get_matrix

# let us obtain a contiguous range first, which will facilitate
# visualization

b = gt.contiguous_map(state.get_blocks())
state = state.copy(b=b)

e = state.get_matrix()

B = state.get_nonempty_B()
matshow(e.todense()[:B, :B])
savefig("football-edge-counts.svg")
../../_images/football-edge-counts.svg

Matrix of edge counts between groups.#

We may obtain the same matrix of edge counts as a graph, which has internal edge and vertex property maps with the edge and vertex counts, respectively:

bg = state.get_bg()
ers = state.mrs    # edge counts
nr = state.wr      # node counts

Hierarchical partitions#

The inference of the nested family of SBMs is done in a similar manner, but we must use instead the minimize_nested_blockmodel_dl() function. We illustrate its use with the neural network of the C. elegans worm:

g = gt.collection.data["celegansneural"]
print(g)

which has 297 vertices and 2359 edges.

<Graph object, directed, with 297 vertices and 2359 edges, 2 internal vertex properties, 1 internal edge property, 2 internal graph properties, at 0x...>

A hierarchical fit of the degree-corrected model is performed as follows.

state = gt.minimize_nested_blockmodel_dl(g)

The object returned is an instance of a NestedBlockState class, which encapsulates the results. We can again draw the resulting hierarchical clustering using the draw() method:

state.draw(output="celegans-hsbm-fit.pdf")
../../_images/celegans-hsbm-fit.png

Most likely hierarchical partition of the neural network of the C. elegans worm according to the nested degree-corrected SBM.#

Note

If the output parameter to draw() is omitted, an interactive visualization is performed, where the user can re-order the hierarchy nodes using the mouse and pressing the r key.

A summary of the inferred hierarchy can be obtained with the print_summary() method, which shows the number of nodes and groups in all levels:

state.print_summary()
l: 0, N: 297, B: 24
l: 1, N: 24, B: 8
l: 2, N: 8, B: 3
l: 3, N: 3, B: 1
l: 4, N: 1, B: 1

The hierarchical levels themselves are represented by individual BlockState() instances obtained via the get_levels() method:

levels = state.get_levels()
for s in levels:
    print(s)
    if s.get_N() == 1:
        break
<BlockState object with 297 blocks (24 nonempty), degree-corrected, for graph <Graph object, directed, with 297 vertices and 2359 edges, 2 internal vertex properties, 1 internal edge property, 2 internal graph properties, at 0x...>, at 0x...>
<BlockState object with 24 blocks (8 nonempty), for graph <Graph object, directed, with 297 vertices and 271 edges, 2 internal vertex properties, 1 internal edge property, at 0x...>, at 0x...>
<BlockState object with 8 blocks (3 nonempty), for graph <Graph object, directed, with 24 vertices and 49 edges, 2 internal vertex properties, 1 internal edge property, at 0x...>, at 0x...>
<BlockState object with 3 blocks (1 nonempty), for graph <Graph object, directed, with 8 vertices and 9 edges, 2 internal vertex properties, 1 internal edge property, at 0x...>, at 0x...>
<BlockState object with 1 blocks (1 nonempty), for graph <Graph object, directed, with 3 vertices and 1 edge, 2 internal vertex properties, 1 internal edge property, at 0x...>, at 0x...>

This means that we can inspect the hierarchical partition just as before:

r = levels[0].get_blocks()[46]    # group membership of node 46 in level 0
print(r)
r = levels[1].get_blocks()[r]     # group membership of node 46 in level 1
print(r)
r = levels[2].get_blocks()[r]     # group membership of node 46 in level 2
print(r)
248
14
4

Refinements using merge-split MCMC#

The agglomerative algorithm behind minimize_blockmodel_dl() and minimize_nested_blockmodel_dl() has log-linear complexity in the size of the network, and it usually works very well in finding a good estimate of the optimal partition. Nevertheless, it is often still possible to find refinements without starting the whole algorithm from scratch, using a greedy algorithm based on a merge-split MCMC with zero temperature [peixoto-merge-split-2020]. This is achieved by following the instructions in Sec. Sampling from the posterior distribution, while setting the inverse temperature parameter beta to infinity. For example, an equivalent to the above minimization for the C. elegans network is the following:

g = gt.collection.data["celegansneural"]

state = gt.minimize_nested_blockmodel_dl(g)

S1 = state.entropy()

for i in range(1000): # this should be sufficiently large
    state.multiflip_mcmc_sweep(beta=np.inf, niter=10)

S2 = state.entropy()

print("Improvement:", S2 - S1)
Improvement: -81.412028...

Whenever possible, this procedure should be repeated several times, and the result with the smallest description length (obtained via the entropy() method) should be chosen. In more demanding situations, better results still can be obtained, at the expense of a longer computation time, by using the mcmc_anneal() function, which implements simulated annealing:

g = gt.collection.data["celegansneural"]

state = gt.minimize_nested_blockmodel_dl(g)

gt.mcmc_anneal(state, beta_range=(1, 10), niter=1000, mcmc_equilibrate_args=dict(force_niter=10))

Model selection#

As mentioned above, one can select the best model according to the choice that yields the smallest description length [peixoto-model-2016]. For instance, in case of the C. elegans network we have

g = gt.collection.data["celegansneural"]

state_ndc = gt.minimize_nested_blockmodel_dl(g, state_args=dict(deg_corr=False))
state_dc  = gt.minimize_nested_blockmodel_dl(g, state_args=dict(deg_corr=True))

print("Non-degree-corrected DL:\t", state_ndc.entropy())
print("Degree-corrected DL:\t", state_dc.entropy())
Non-degree-corrected DL:      8504.411444...
Degree-corrected DL:  8542.336883...

Since it yields the smallest description length, the non-degree-corrected fit should be preferred. The statistical significance of the choice can be assessed by inspecting the posterior odds ratio [peixoto-nonparametric-2017]

\[\begin{split}\Lambda &= \frac{P(\boldsymbol b, \mathcal{H}_\text{NDC} | \boldsymbol A)}{P(\boldsymbol b, \mathcal{H}_\text{DC} | \boldsymbol A)} \\ &= \frac{P(\boldsymbol A, \boldsymbol b | \mathcal{H}_\text{NDC})}{P(\boldsymbol A, \boldsymbol b | \mathcal{H}_\text{DC})}\times\frac{P(\mathcal{H}_\text{NDC})}{P(\mathcal{H}_\text{DC})} \\ &= \exp(-\Delta\Sigma)\end{split}\]

where \(\mathcal{H}_\text{NDC}\) and \(\mathcal{H}_\text{DC}\) correspond to the non-degree-corrected and degree-corrected model hypotheses (assumed to be equally likely a priori), respectively, and \(\Delta\Sigma\) is the difference of the description length of both fits. In our particular case, we have

print(u"ln \u039b: ", state_dc.entropy() - state_ndc.entropy())
ln Λ:  37.925438...

The precise threshold that should be used to decide when to reject a hypothesis is subjective and context-dependent, but the value above implies that the particular non-degree-corrected fit is around \(\mathrm{e}^{37.9} \approx 10^{16}\) times more likely than the degree-corrected one, and hence it can be safely concluded that it provides a substantially better fit.

Although it is often true that the degree-corrected model provides a better fit for many empirical networks, there are also exceptions. For example, for the American football network above, we have:

g = gt.collection.data["football"]

state_ndc = gt.minimize_nested_blockmodel_dl(g, state_args=dict(deg_corr=False))
state_dc  = gt.minimize_nested_blockmodel_dl(g, state_args=dict(deg_corr=True))

print("Non-degree-corrected DL:\t", state_ndc.entropy())
print("Degree-corrected DL:\t", state_dc.entropy())
print(u"ln \u039b:\t\t\t", state_ndc.entropy() - state_dc.entropy())
Non-degree-corrected DL:      1733.525685...
Degree-corrected DL:  1780.576716...
ln Λ:                         47.051031...

Hence, with a posterior odds ratio of \(\Lambda \approx \mathrm{e}^{47} \approx 10^{20}\) in favor of the non-degree-corrected model, we conclude that the degree-corrected variant is an unnecessarily complex description for this network.

Sampling from the posterior distribution#

When analyzing empirical networks, one should be open to the possibility that there will be more than one fit of the SBM with similar posterior probabilities. In such situations, one should sample partitions from the posterior distribution, rather than simply finding its maximum. One can then compute quantities that are averaged over the different model fits, weighted according to their posterior probabilities.

Full support for model averaging is implemented in graph-tool via an efficient Markov chain Monte Carlo (MCMC) algorithm [peixoto-efficient-2014]. It works by attempting to move nodes into different groups with specific probabilities, and accepting or rejecting such moves so that, after a sufficiently long time, the partitions will be observed with the desired posterior probability. The algorithm is designed so that its run-time (i.e. each sweep of the MCMC) is linear in the number of edges in the network, and independent of the number of groups being used in the model, and hence it is suitable for use on very large networks.

In order to perform such moves, one needs again to operate with BlockState or NestedBlockState instances, and call either their mcmc_sweep() or multiflip_mcmc_sweep() methods. The former implements a simpler MCMC where a single node is moved at a time, whereas the latter is a more efficient version that performs merges and splits [peixoto-merge-split-2020], and should in general be preferred.
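Both methods share the same calling convention; for instance, a minimal sketch of the single-node variant (assuming state is an already constructed BlockState) is:

# Perform 10 sweeps of single-node moves; the return values are the change
# in description length and the number of attempted and accepted moves.
dS, nattempts, nmoves = state.mcmc_sweep(niter=10)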

For example, the following will perform 1000 sweeps of the merge-split algorithm with the network of characters in the novel Les Misérables, starting from the default partition into a single group:

g = gt.collection.data["lesmis"]

state = gt.BlockState(g)   # This automatically initializes the state with a partition
                           # into one group. The user could also pass a higher number
                           # to start with a random partition of a given size, or pass a
                           # specific initial partition using the 'b' parameter.

# Now we run 1,000 sweeps of the MCMC. Note that the number of groups
# is allowed to change, so it will eventually move from the initial
# value of B=1 to whatever is most appropriate for the data.

dS, nattempts, nmoves = state.multiflip_mcmc_sweep(niter=1000)

print("Change in description length:", dS)
print("Number of accepted vertex moves:", nmoves)
Change in description length: -67.747649...
Number of accepted vertex moves: 143407

Although the above is sufficient to implement sampling from the posterior, there is a convenience function called mcmc_equilibrate() that is intended to simplify the detection of equilibration, by keeping track of the maximum and minimum values of description length encountered and how many sweeps have been made without a “record breaking” event. For example,

# We will accept equilibration if 10 sweeps are completed without a
# record breaking event, 2 consecutive times.

gt.mcmc_equilibrate(state, wait=10, nbreaks=2, mcmc_args=dict(niter=10))

Note that the value of wait above was made purposefully low so that this example runs quickly. The most appropriate value requires experimentation, but a typically good value is wait=1000.

The function mcmc_equilibrate() accepts a callback argument that takes an optional function to be invoked after each call to multiflip_mcmc_sweep(). This function should accept a single parameter which will contain the actual BlockState instance. We will use this in the example below to collect the posterior vertex marginals (via PartitionModeState, which disambiguates group labels [peixoto-revealing-2021]), i.e. the posterior probability that a node belongs to a given group:

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))

bs = [] # collect some partitions

def collect_partitions(s):
   global bs
   bs.append(s.b.a.copy())

# Now we collect partitions for exactly 100,000 sweeps, at intervals
# of 10 sweeps:
gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_partitions)

# Disambiguate partitions and obtain marginals
pmode = gt.PartitionModeState(bs, converge=True)
pv = pmode.get_marginal(g)

# Now the node marginals are stored in property map pv. We can
# visualize them as pie charts on the nodes:
state.draw(pos=g.vp.pos, vertex_shape="pie", vertex_pie_fractions=pv,
           output="lesmis-sbm-marginals.svg")
../../_images/lesmis-sbm-marginals.svg

Marginal probabilities of group memberships of the network of characters in the novel Les Misérables, according to the degree-corrected SBM. The pie fractions on the nodes correspond to the probability of being in group associated with the respective color.#

We can also obtain a marginal probability on the number of groups itself, as follows.

h = np.zeros(g.num_vertices() + 1)

def collect_num_groups(s):
    B = s.get_nonempty_B()
    h[B] += 1

# Now we collect partitions for exactly 100,000 sweeps, at intervals
# of 10 sweeps:
gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_num_groups)
../../_images/lesmis-B-posterior.svg

Marginal posterior probability of the number of nonempty groups for the network of characters in the novel Les Misérables, according to the degree-corrected SBM.#

Hierarchical partitions#

We can also perform model averaging using the nested SBM, which will give us a distribution over hierarchies. The whole procedure is fairly analogous, but now we make use of NestedBlockState instances.

Here we perform the sampling of hierarchical partitions using the same network as above.

g = gt.collection.data["lesmis"]

state = gt.NestedBlockState(g)   # By default this creates a state with an initial single-group
                                 # hierarchy of depth ceil(log2(g.num_vertices())).

# Now we run 1000 sweeps of the MCMC

dS, nmoves = 0, 0
for i in range(100):
    ret = state.multiflip_mcmc_sweep(niter=10)
    dS += ret[0]
    nmoves += ret[1]

print("Change in description length:", dS)
print("Number of accepted vertex moves:", nmoves)
Change in description length: -76.639728...
Number of accepted vertex moves: 465959

Warning

When using NestedBlockState, a single call to multiflip_mcmc_sweep() or mcmc_sweep() performs niter sweeps at each hierarchical level once. This means that in order for the chain to equilibrate, we need to call these functions several times, i.e. it is not enough to call it once with a large value of niter.

Similarly to the non-nested case, we can use mcmc_equilibrate() to do most of the boring work, and we can now obtain vertex marginals on all hierarchical levels:

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))

# collect nested partitions
bs = []

def collect_partitions(s):
   global bs
   bs.append(s.get_bs())

# Now we collect the marginals for exactly 100,000 sweeps
gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_partitions)

# Disambiguate partitions and obtain marginals
pmode = gt.PartitionModeState(bs, nested=True, converge=True)
pv = pmode.get_marginal(g)

# Get consensus estimate
bs = pmode.get_max_nested()

state = state.copy(bs=bs)

# We can visualize the marginals as pie charts on the nodes:
state.draw(vertex_shape="pie", vertex_pie_fractions=pv,
           output="lesmis-nested-sbm-marginals.svg")
../../_images/lesmis-nested-sbm-marginals.svg

Marginal probabilities of group memberships of the network of characters in the novel Les Misérables, according to the nested degree-corrected SBM. The pie fractions on the nodes correspond to the probability of being in group associated with the respective color.#

We can also obtain a marginal probability of the number of groups itself, as follows.

h = [np.zeros(g.num_vertices() + 1) for s in state.get_levels()]

def collect_num_groups(s):
    for l, sl in enumerate(s.get_levels()):
       B = sl.get_nonempty_B()
       h[l][B] += 1

# Now we collect the marginal distribution for exactly 100,000 sweeps
gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_num_groups)
../../_images/lesmis-nested-B-posterior.svg

Marginal posterior probability of the number of nonempty groups \(B_l\) at each hierarchy level \(l\) for the network of characters in the novel Les Misérables, according to the nested degree-corrected SBM.#

Below we obtain some hierarchical partitions sampled from the posterior distribution.

for i in range(10):
    for j in range(100):
        state.multiflip_mcmc_sweep(niter=10)
    state.draw(output="lesmis-partition-sample-%i.svg" % i, empty_branches=False)
../../_images/lesmis-partition-sample-0.svg ../../_images/lesmis-partition-sample-1.svg ../../_images/lesmis-partition-sample-2.svg ../../_images/lesmis-partition-sample-3.svg ../../_images/lesmis-partition-sample-4.svg ../../_images/lesmis-partition-sample-5.svg ../../_images/lesmis-partition-sample-6.svg ../../_images/lesmis-partition-sample-7.svg ../../_images/lesmis-partition-sample-8.svg ../../_images/lesmis-partition-sample-9.svg

Characterizing the posterior distribution#

The posterior distribution of partitions can have an elaborate structure, containing multiple possible explanations for the data. In order to summarize it, we can infer the modes of the distribution using ModeClusterState, as described in [peixoto-revealing-2021]. This amounts to identifying clusters of partitions that are very similar to each other, but sufficiently different from those that belong to other clusters. Collectively, such “modes” represent the different stories that the data is telling us through the model. Here is an example, using again the Les Misérables network:

g = gt.collection.data["lesmis"]

state = gt.NestedBlockState(g)

# Equilibration
gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10))

bs = []

def collect_partitions(s):
   global bs
   bs.append(s.get_bs())

# We will collect only 1000 partitions. For more accurate
# results, this number should be increased.
gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10),
                    callback=collect_partitions)

# Infer partition modes
pmode = gt.ModeClusterState(bs, nested=True)

# Minimize the mode state itself
gt.mcmc_equilibrate(pmode, wait=1, mcmc_args=dict(niter=1, beta=np.inf))

# Get inferred modes
modes = pmode.get_modes()

for i, mode in enumerate(modes):
    b = mode.get_max_nested()    # mode's maximum
    pv = mode.get_marginal(g)    # mode's marginal distribution

    print(f"Mode {i} with size {mode.get_M()/len(bs)}")
    state = state.copy(bs=b)
    state.draw(vertex_shape="pie", vertex_pie_fractions=pv,
               output="lesmis-partition-mode-%i.svg" % i)

Running the above code gives us the relative size of each mode, corresponding to their collective posterior probability.

Mode 0 with size 0.625625...
Mode 1 with size 0.309309...
Mode 2 with size 0.065065...

Below are the marginal node distributions representing the partitions that belong to each inferred mode:

../../_images/lesmis-partition-mode-0.svg ../../_images/lesmis-partition-mode-1.svg ../../_images/lesmis-partition-mode-2.svg ../../_images/lesmis-partition-mode-3.svg ../../_images/lesmis-partition-mode-4.svg

Model class selection#

When averaging over partitions, we may be interested in evaluating which model class provides a better fit of the data, considering all possible parameter choices. This is done by evaluating the model evidence summed over all possible partitions [peixoto-nonparametric-2017]:

\[P(\boldsymbol A) = \sum_{\boldsymbol\theta,\boldsymbol b}P(\boldsymbol A,\boldsymbol\theta, \boldsymbol b) = \sum_{\boldsymbol b}P(\boldsymbol A,\boldsymbol b).\]

This quantity is analogous to a partition function in statistical physics, which we can write more conveniently as a negative free energy by taking its logarithm

(6)#\[\ln P(\boldsymbol A) = \underbrace{\sum_{\boldsymbol b}q(\boldsymbol b)\ln P(\boldsymbol A,\boldsymbol b)}_{-\left<\Sigma\right>}\; \underbrace{- \sum_{\boldsymbol b}q(\boldsymbol b)\ln q(\boldsymbol b)}_{\mathcal{S}}\]

where

\[q(\boldsymbol b) = \frac{P(\boldsymbol A,\boldsymbol b)}{\sum_{\boldsymbol b'}P(\boldsymbol A,\boldsymbol b')}\]

is the posterior probability of partition \(\boldsymbol b\). The first term of Eq. (6) (the “negative energy”) is minus the average of description length \(\left<\Sigma\right>\), weighted according to the posterior distribution. The second term \(\mathcal{S}\) is the entropy of the posterior distribution, and measures, in a sense, the “quality of fit” of the model: If the posterior is very “peaked”, i.e. dominated by a single partition with a very large probability, the entropy will tend to zero. However, if there are many partitions with similar probabilities — meaning that there is no single partition that describes the network uniquely well — it will take a large value instead.

Since the MCMC algorithm samples partitions from the distribution \(q(\boldsymbol b)\), it can be used to compute \(\left<\Sigma\right>\) easily, simply by averaging the description length values encountered by sampling from the posterior distribution many times.

The computation of the posterior entropy \(\mathcal{S}\), however, is significantly more difficult, since it involves measuring the precise value of \(q(\boldsymbol b)\). A direct “brute force” computation of \(\mathcal{S}\) is implemented via collect_partition_histogram() and microstate_entropy(); however, this is only feasible for very small networks. For larger networks, we are forced to perform approximations. One possibility is to employ the method described in [peixoto-revealing-2021], based on fitting a mixture “random label” model to the posterior distribution, which allows us to compute its entropy. In graph-tool this is done by using ModeClusterState, as we show in the example below.

from scipy.special import gammaln

g = gt.collection.data["lesmis"]

for deg_corr in [True, False]:
    state = gt.minimize_blockmodel_dl(g, state_args=dict(deg_corr=deg_corr))  # Initialize the Markov
                                                                              # chain from the "ground
                                                                              # state"
    dls = []         # description length history
    bs = []          # partitions

    def collect_partitions(s):
        global bs, dls
        bs.append(s.get_state().a.copy())
        dls.append(s.entropy())

    # Now we collect 2000 partitions; but the larger this is, the
    # more accurate will be the calculation

    gt.mcmc_equilibrate(state, force_niter=2000, mcmc_args=dict(niter=10),
                        callback=collect_partitions)

    # Infer partition modes
    pmode = gt.ModeClusterState(bs)

    # Minimize the mode state itself
    gt.mcmc_equilibrate(pmode, wait=1, mcmc_args=dict(niter=1, beta=np.inf))

    # Posterior entropy
    H = pmode.posterior_entropy()

    # log(B!) term
    logB = mean(gammaln(np.array([len(np.unique(b)) for b in bs]) + 1))

    # Evidence
    L = -mean(dls) + logB + H

    print(f"Model log-evidence for deg_corr = {deg_corr}: {L}")
Model log-evidence for deg_corr = True: -678.153636...
Model log-evidence for deg_corr = False: -673.376428...

The outcome shows a preference for the non-degree-corrected model.

When using the nested model, the approach is entirely analogous. We show below the approach for the same network, using the nested model.

from scipy.special import gammaln

g = gt.collection.data["lesmis"]

for deg_corr in [True, False]:
    state = gt.NestedBlockState(g, state_args=dict(deg_corr=deg_corr))

    # Equilibrate
    gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10))

    dls = []         # description length history
    bs = []          # partitions

    def collect_partitions(s):
        global bs, dls
        bs.append(s.get_state())
        dls.append(s.entropy())

    # Now we collect 2000 partitions; but the larger this is, the
    # more accurate will be the calculation

    gt.mcmc_equilibrate(state, force_niter=2000, mcmc_args=dict(niter=10),
                        callback=collect_partitions)

    # Infer partition modes
    pmode = gt.ModeClusterState(bs, nested=True)

    # Minimize the mode state itself
    gt.mcmc_equilibrate(pmode, wait=1, mcmc_args=dict(niter=1, beta=np.inf))

    # Posterior entropy
    H = pmode.posterior_entropy()

    # log(B!) term
    logB = mean([sum(gammaln(len(np.unique(bl))+1) for bl in b) for b in bs])

    # Evidence
    L = -mean(dls) + logB + H

    print(f"Model log-evidence for deg_corr = {deg_corr}: {L}")
Model log-evidence for deg_corr = True: -664.271777...
Model log-evidence for deg_corr = False: -655.768987...

The results are similar: the non-degree-corrected model possesses the largest evidence. Note also that the evidence for the nested models is larger than that obtained above for the corresponding non-nested models, which is not very surprising, since the non-nested model is a special case of the nested one.

Edge weights and covariates#

Very often networks cannot be completely represented by simple graphs, but instead have arbitrary “weights” \(x_{ij}\) on the edges. Edge weights can be continuous or discrete numbers, and either strictly positive, or both positive and negative, depending on context. The SBM can be extended to cover these cases by treating edge weights as covariates that are sampled from some distribution conditioned on the node partition [aicher-learning-2015] [peixoto-weighted-2017], i.e.

\[P(\boldsymbol x,\boldsymbol A|\boldsymbol b) = P(\boldsymbol x|\boldsymbol A,\boldsymbol b) P(\boldsymbol A|\boldsymbol b),\]

where \(P(\boldsymbol A|\boldsymbol b)\) is the likelihood of the unweighted SBM described previously, and \(P(\boldsymbol x|\boldsymbol A,\boldsymbol b)\) is the integrated likelihood of the edge weights

\[P(\boldsymbol x|\boldsymbol A,\boldsymbol b) = \prod_{r\le s}\int P({\boldsymbol x}_{rs}|\gamma)P(\gamma)\,\mathrm{d}\gamma,\]

where \(P({\boldsymbol x}_{rs}|\gamma)\) is some model for the weights \({\boldsymbol x}_{rs}\) between groups \((r,s)\), conditioned on some parameter \(\gamma\), sampled from its prior \(P(\gamma)\). A hierarchical version of the model can also be implemented by replacing this prior by a nested sequence of priors and hyperpriors, as described in [peixoto-weighted-2017]. The posterior partition distribution is then simply

\[P(\boldsymbol b | \boldsymbol A,\boldsymbol x) = \frac{P(\boldsymbol x|\boldsymbol A,\boldsymbol b) P(\boldsymbol A|\boldsymbol b) P(\boldsymbol b)}{P(\boldsymbol A,\boldsymbol x)},\]

which can be sampled from, or maximized, just like with the unweighted case, but will use the information on the weights to guide the partitions.

A variety of weight models is supported, reflecting different kinds of edge covariates:

Name                    Domain                      Bounds                  Shape
"real-exponential"      Real \((\mathbb{R})\)       \([0,\infty]\)          Exponential
"real-normal"           Real \((\mathbb{R})\)       \([-\infty,\infty]\)    Normal
"discrete-geometric"    Natural \((\mathbb{N})\)    \([0,\infty]\)          Geometric
"discrete-binomial"     Natural \((\mathbb{N})\)    \([0,M]\)               Binomial
"discrete-poisson"      Natural \((\mathbb{N})\)    \([0,\infty]\)          Poisson

In fact, the actual model implements microcanonical versions of these distributions that are asymptotically equivalent, as described in [peixoto-weighted-2017]. These can be combined with arbitrary weight transformations to achieve a large family of associated distributions. For example, to use a log-normal weight model for positive real weights \(\boldsymbol x\), we can use the transformation \(y_{ij} = \ln x_{ij}\) together with the "real-normal" model for \(\boldsymbol y\). To model weights that are positive or negative integers in \(\mathbb{Z}\), we could either subtract the minimum value, \(y_{ij} = x_{ij} - x^*\), with \(x^*=\operatorname{min}_{ij}x_{ij}\), and use any of the above models for non-negative integers in \(\mathbb{N}\), or alternatively, consider the sign as an additional covariate, i.e. \(s_{ij} = [\operatorname{sign}(x_{ij})+1]/2 \in \{0,1\}\), using the Binomial distribution with \(M=1\) (a.k.a. the Bernoulli distribution), and any of the other discrete distributions for the magnitude, \(y_{ij} = \operatorname{abs}(x_{ij})\).

The support for weighted networks is activated by passing the parameters recs and rec_types to BlockState (or OverlapBlockState), that specify the edge covariates (an edge PropertyMap) and their types (a string from the table above), respectively. Note that these parameters expect lists, so that multiple edge weights can be used simultaneously.
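For instance, a minimal sketch of the sign/magnitude decomposition mentioned above, where x stands for a hypothetical edge property map with signed integer weights, and the choice of "discrete-geometric" for the magnitude is just one of the options from the table:

import numpy as np

# Split the signed weights into a sign covariate s in {0, 1} and a
# magnitude covariate y = |x|.
s = g.new_ep("int")
y = g.new_ep("int")
s.a = (np.sign(x.a) + 1) // 2
y.a = np.abs(x.a)

# Both covariates are passed simultaneously; the sign is modeled as a
# Bernoulli variable ("discrete-binomial" with M=1).
state = gt.BlockState(g, recs=[s, y],
                      rec_types=["discrete-binomial", "discrete-geometric"])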

For example, let us consider a network of suspected terrorists involved in the train bombing of Madrid on March 11, 2004 [hayes-connecting-2006]. An edge indicates that a connection between the two persons has been identified, and the weight of the edge (an integer in the range \([0,3]\)) indicates the “strength” of the connection. We can apply the weighted SBM, using a Binomial model for the weights, as follows:

g = gt.collection.ns["train_terrorists"]

# This network contains an internal edge property map with name
# "weight" that contains the strength of interactions. The values
# are integers in the range [0, 3].

state = gt.minimize_nested_blockmodel_dl(g, state_args=dict(recs=[g.ep.weight],
                                                            rec_types=["discrete-binomial"]))

# improve solution with merge-split

for i in range(100):
    ret = state.multiflip_mcmc_sweep(niter=10, beta=np.inf)

state.draw(edge_color=g.ep.weight, ecmap=(matplotlib.cm.inferno, .6),
           eorder=g.ep.weight, edge_pen_width=gt.prop_to_size(g.ep.weight, 2, 8, power=1),
           edge_gradient=[], output="moreno-train-wsbm.pdf")
../../_images/moreno-train-wsbm.png

Best fit of the Binomial-weighted degree-corrected SBM for a network of terror suspects, using the strength of connection as edge covariates. The edge colors and widths correspond to the strengths.#

Model selection#

In order to select the best weighted model, we proceed in the same manner as described in Sec. Model selection. However, when using transformations on continuous weights, we must include the associated scaling of the probability density, as described in [peixoto-weighted-2017].

For example, consider a food web between species in south Florida [ulanowicz-network-2005]. A directed link exists from species \(i\) to \(j\) if an energy flow exists between them, and a weight \(x_{ij}\) on this edge indicates the magnitude of the energy flow (a positive real value, i.e. \(x_{ij}\in [0,\infty]\)). One possibility, therefore, is to use the "real-exponential" model, as follows:

g = gt.collection.ns["foodweb_baywet"]

# This network contains an internal edge property map with name
# "weight" that contains the energy flow between species. The values
# are continuous in the range [0, infinity].

state = gt.minimize_nested_blockmodel_dl(g, state_args=dict(recs=[g.ep.weight],
                                                            rec_types=["real-exponential"]))

# improve solution with merge-split

for i in range(100):
    ret = state.multiflip_mcmc_sweep(niter=10, beta=np.inf)


state.draw(edge_color=gt.prop_to_size(g.ep.weight, power=1, log=True), ecmap=(matplotlib.cm.inferno, .6),
           eorder=g.ep.weight, edge_pen_width=gt.prop_to_size(g.ep.weight, 1, 4, power=1, log=True),
           edge_gradient=[], output="foodweb-wsbm.pdf")
../../_images/foodweb-wsbm.png

Best fit of the exponential-weighted degree-corrected SBM for a food web, using the energy flow as edge covariates (indicated by the edge colors and widths).#

Alternatively, we may consider a transformation of the type

(7)#\[y_{ij} = \ln x_{ij}\]

so that \(y_{ij} \in [-\infty,\infty]\). If we use a model "real-normal" for \(\boldsymbol y\), it amounts to a log-normal model for \(\boldsymbol x\). This can be a better choice if the weights are distributed across many orders of magnitude, or show multi-modality. We can fit this alternative model simply by using the transformed weights:

# Apply the weight transformation
y = g.ep.weight.copy()
y.a = log(y.a)

state_ln = gt.minimize_nested_blockmodel_dl(g, state_args=dict(recs=[y],
                                                               rec_types=["real-normal"]))

# improve solution with merge-split

for i in range(100):
    ret = state_ln.multiflip_mcmc_sweep(niter=10, beta=np.inf)

state_ln.draw(edge_color=gt.prop_to_size(g.ep.weight, power=1, log=True), ecmap=(matplotlib.cm.inferno, .6),
              eorder=g.ep.weight, edge_pen_width=gt.prop_to_size(g.ep.weight, 1, 4, power=1, log=True),
              edge_gradient=[], output="foodweb-wsbm-lognormal.pdf")
../../_images/foodweb-wsbm-lognormal.png

Best fit of the log-normal-weighted degree-corrected SBM for a food web, using the energy flow as edge covariates (indicated by the edge colors and widths).#

At this point, we ask ourselves which of the above models yields the best fit of the data. This is answered by performing model selection via posterior odds ratios just like in Sec. Model selection. However, here we need to take into account the scaling of the probability density incurred by the variable transformation, i.e.

\[P(\boldsymbol x | \boldsymbol A, \boldsymbol b) = P(\boldsymbol y(\boldsymbol x) | \boldsymbol A, \boldsymbol b) \prod_{ij}\left[\frac{\mathrm{d}y_{ij}}{\mathrm{d}x_{ij}}(x_{ij})\right]^{A_{ij}}.\]

In the particular case of Eq. (7), we have

\[\prod_{ij}\left[\frac{\mathrm{d}y_{ij}}{\mathrm{d}x_{ij}}(x_{ij})\right]^{A_{ij}} = \prod_{ij}\frac{1}{x_{ij}^{A_{ij}}}.\]

Therefore, we can compute the posterior odds ratio between both models as:

L1 = -state.entropy()
L2 = -state_ln.entropy() - log(g.ep.weight.a).sum()

print(u"ln \u039b: ", L2 - L1)
ln Λ:  -20.635724...

A value of \(\Lambda \approx \mathrm{e}^{-21} \approx 10^{-9}\), i.e. odds of around \(10^{9}\) in favor of the exponential model, indicates that the log-normal model does not provide a better fit for this particular data.

Posterior sampling#

The procedure to sample from the posterior distribution is identical to what is described in Sec. Sampling from the posterior distribution, but with the appropriate initialization, e.g.:

g = gt.collection.ns["foodweb_baywet"]

state = gt.NestedBlockState(g, state_args=dict(recs=[g.ep.weight], rec_types=["real-exponential"]))

gt.mcmc_equilibrate(state, force_niter=100, mcmc_args=dict(niter=10))

Layered networks#

The edges of the network may be distributed in discrete “layers”, representing distinct types of interactions [peixoto-inferring-2015]. Extensions to the SBM may be defined for such data, and they can be inferred using the exact same interface shown above, except one should use the LayeredBlockState class, instead of BlockState. This class takes two additional parameters: the ec parameter, which must correspond to an edge PropertyMap with the layer/covariate values on the edges, and the Boolean layers parameter, which if True specifies a layered model, otherwise one with categorical edge covariates (not to be confused with the weighted models in Sec. Edge weights and covariates).

If we use minimize_blockmodel_dl(), this can be achieved simply by passing the option layers=True as well as the appropriate value of state_args, which will be propagated to LayeredBlockState’s constructor.
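Equivalently, the layered state can be constructed directly, as in the minimal sketch below, where ec stands for a hypothetical edge property map with the layer membership of each edge:

# layers=True selects the layered variant, rather than categorical edge
# covariates.
state = gt.LayeredBlockState(g, ec=ec, layers=True)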

As an example, let us consider a social network of tribes, where two types of interactions were recorded, amounting to either friendship or enmity [read-cultures-1954]. We may apply the layered model by separating these two types of interactions in two layers:

g = gt.collection.ns["new_guinea_tribes"]

# The edge types are stored in the edge property map "weights".

# Note the different meanings of the two 'layers' parameters below: The
# first enables the use of LayeredBlockState, and the second selects
# the 'edge layers' version (instead of 'edge covariates').

state = gt.minimize_nested_blockmodel_dl(g,
                                         state_args=dict(base_type=gt.LayeredBlockState,
                                                         state_args=dict(ec=g.ep.weight, layers=True)))

state.draw(edge_color=g.ep.weight.copy("double"), edge_gradient=[],
           ecmap=(matplotlib.cm.coolwarm_r, .6), edge_pen_width=5,  eorder=g.ep.weight,
           output="tribes-sbm-edge-layers.svg")
../../_images/tribes-sbm-edge-layers.svg

Best fit of the degree-corrected SBM with edge layers for a network of tribes, with edge layers shown as colors. The groups show two enemy tribes.#

It is possible to perform model averaging of all layered variants exactly like for the regular SBMs as was shown above.

Assortative community structure#

Traditionally, “community structure” in the proper sense refers to groups of nodes that are more connected to each other than to nodes of other communities. The SBM is capable of representing this kind of structure without any problems, but in some circumstances it might make sense to search exclusively for assortative communities [lizhi-statistical-2020]. A version of the SBM that is constrained in this way is called the “planted partition model”, which can be inferred with graph-tool using PPBlockState. This class behaves just like BlockState, therefore all algorithms described in this documentation work in the same way. Below we show how this model can be inferred for the football network considered previously

g = gt.collection.data["football"]

# We can use the same agglomerative heuristic as before, but we need
# to specify PPBlockState as the internal state.

state = gt.minimize_blockmodel_dl(g, state=gt.PPBlockState)

# Now we run 100 sweeps of the MCMC with zero temperature, as a
# refinement. This is often not necessary.

state.multiflip_mcmc_sweep(beta=np.inf, niter=100)

state.draw(pos=g.vp.pos, output="football-pp.svg")
../../_images/football-pp.svg

Best fit of the degree-corrected planted partition model to a network of American college football teams.#

It is possible to perform model comparison with other model variations in the same manner as described in Sec. Model selection above.

Ordered community structure#

The modular structure of directed networks might possess an inherent ordering of the groups, such that most edges flow either “downstream” or “upstream” according to that ordering. The directed version of the SBM will inherently capture this ordering, but it will not be visible from the model parameters — in particular the group labels — since the model is invariant to group permutations. This ordering can be obtained from a modified version of the model [peixoto-ordered-2022], which can be inferred with graph-tool using RankedBlockState. This class behaves just like BlockState, therefore all algorithms described in this documentation work in the same way (including when NestedBlockState is used).

Below we show how this model can be inferred for a faculty_hiring network.

g = gt.collection.ns["faculty_hiring/computer_science"].copy()

# For visualization purposes, it will be more useful to work with a
# weighted graph than with a multigraph, but the results are
# insensitive to this.

ew = gt.contract_parallel_edges(g)

# We will use a nested SBM, with the base state being the ordered SBM.

state = gt.NestedBlockState(g, base_type=gt.RankedBlockState, state_args=dict(eweight=ew))

# The number of iterations below is sufficient for a good estimate of
# the ground state for this network.

for i in range(100):
    state.multiflip_mcmc_sweep(beta=np.inf, niter=10)

# We can use sfdp_layout() to obtain a ranked visualization.

pos = gt.sfdp_layout(g, cooling_step=0.99, multilevel=False, R=50000,
                     rmap=state.levels[0].get_vertex_order())

# Stretch the layout somewhat
for v in g.vertices():
    pos[v][1] *= 2

state.levels[0].draw(pos=pos, edge_pen_width=gt.prop_to_size(ew, 1, 5),
                     output="hiring.pdf")
../../_images/hiring.png

Best fit of the ordered degree-corrected SBM to a faculty hiring network. The vertical position indicates the rank, and the edge color the edge direction: upstream (blue), downstream (red), lateral (grey).#

It is possible to perform model comparison with other model variations in the same manner as described in Sec. Model selection above.

Network reconstruction#

An important application of generative models is to be able to generalize from observations and make predictions that go beyond what is seen in the data. This is particularly useful when the network we observe is incomplete or contains errors, i.e. some of the edges are either missing or are outcomes of mistakes in measurement, or even when the network is not directly observed at all. In this situation, we can use statistical inference to reconstruct the original network. Following [peixoto-reconstructing-2018], if \(\boldsymbol{\mathcal{D}}\) is the observed data, the network can be reconstructed according to the posterior distribution,

(8)#\[P(\boldsymbol A, \boldsymbol b | \boldsymbol{\mathcal{D}}) = \frac{P(\boldsymbol{\mathcal{D}} | \boldsymbol A)P(\boldsymbol A, \boldsymbol b)}{P(\boldsymbol{\mathcal{D}})}\]

where the likelihood \(P(\boldsymbol{\mathcal{D}}|\boldsymbol A)\) models the measurement process, and for the prior \(P(\boldsymbol A, \boldsymbol b)\) we use the SBM as before. This means that when performing reconstruction, we sample both the community structure \(\boldsymbol b\) and the network \(\boldsymbol A\) itself from the posterior distribution. From it, we can obtain the marginal probability of each edge,

\[\pi_{ij} = \sum_{\boldsymbol A, \boldsymbol b}A_{ij}P(\boldsymbol A, \boldsymbol b | \boldsymbol{\mathcal{D}}).\]

Based on the marginal posterior probabilities, the best estimate for the whole underlying network \(\boldsymbol{\hat{A}}\) is given by the maximum of this distribution,

\[\begin{split}\hat A_{ij} = \begin{cases} 1 & \text{ if } \pi_{ij} > \frac{1}{2},\\ 0 & \text{ if } \pi_{ij} < \frac{1}{2}.\\ \end{cases}\end{split}\]

We can also make estimates \(\hat y\) of arbitrary scalar network properties \(y(\boldsymbol A)\) via posterior averages,

\[\begin{split}\begin{align} \hat y &= \sum_{\boldsymbol A, \boldsymbol b}y(\boldsymbol A)P(\boldsymbol A, \boldsymbol b | \boldsymbol{\mathcal{D}}),\\ \sigma^2_y &= \sum_{\boldsymbol A, \boldsymbol b}(y(\boldsymbol A)-\hat y)^2P(\boldsymbol A, \boldsymbol b | \boldsymbol{\mathcal{D}}) \end{align}\end{split}\]

with uncertainty given by \(\sigma_y\). This gives us a complete probabilistic reconstruction framework that fully reflects both the information and the uncertainty in the measurement data. Furthermore, the use of the SBM means that the reconstruction can take advantage of the correlations observed in the data to further inform it, which generally can lead to substantial improvements [peixoto-reconstructing-2018] [peixoto-network-2019].

In graph-tool there is support for reconstruction with the above framework for three measurement processes: 1. Repeated measurements with uniform errors (via MeasuredBlockState), 2. Repeated measurements with heterogeneous errors (via MixedMeasuredBlockState), and 3. Extraneously obtained edge probabilities (via UncertainBlockState), which we describe in the following.

In addition, it is possible to reconstruct networks from observed dynamics, as described in Reconstruction from dynamics.

Measured networks#

This model assumes that the node pairs \((i,j)\) were measured \(n_{ij}\) times, and an edge has been recorded \(x_{ij}\) times, where a missing edge occurs with probability \(p\) and a spurious edge occurs with probability \(q\), uniformly for all node pairs, yielding a likelihood

\[P(\boldsymbol x | \boldsymbol n, \boldsymbol A, p, q) = \prod_{i<j}{n_{ij}\choose x_{ij}}\left[(1-p)^{x_{ij}}p^{n_{ij}-x_{ij}}\right]^{A_{ij}} \left[q^{x_{ij}}(1-q)^{n_{ij}-x_{ij}}\right]^{1-A_{ij}}.\]

In general, \(p\) and \(q\) are not precisely known a priori, so we consider the integrated likelihood

\[P(\boldsymbol x | \boldsymbol n, \boldsymbol A, \alpha,\beta,\mu,\nu) = \int P(\boldsymbol x | \boldsymbol n, \boldsymbol A, p, q) P(p|\alpha,\beta) P(q|\mu,\nu)\;\mathrm{d}p\,\mathrm{d}q\]

where \(P(p|\alpha,\beta)\) and \(P(q|\mu,\nu)\) are Beta distributions, which specify the amount of prior knowledge we have on the noise parameters. An important special case, which is the default unless otherwise specified, is when we are completely agnostic a priori about the noise magnitudes, and all hyperparameters are unity,

\[P(\boldsymbol x | \boldsymbol n, \boldsymbol A) \equiv P(\boldsymbol x | \boldsymbol n, \boldsymbol A, \alpha=1,\beta=1,\mu=1,\nu=1).\]

In this situation the priors \(P(p|\alpha=1,\beta=1)\) and \(P(q|\mu=1,\nu=1)\) are uniform distributions in the interval \([0,1]\).
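If prior knowledge about the noise magnitudes is available, it can be encoded in these hyperparameters. To the best of our knowledge they are exposed via the fn_params and fp_params arguments of MeasuredBlockState; the sketch below uses arbitrary illustrative values, with g, n and x as in the example further down:

# Hypothetical priors: both the missing-edge probability p and the
# spurious-edge probability q are believed a priori to be small.
state = gt.MeasuredBlockState(g, n=n, x=x,
                              fn_params=dict(alpha=1, beta=10),  # prior on p
                              fp_params=dict(mu=1, nu=10))       # prior on q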

Note

Since this approach also makes use of the correlations between edges to inform the reconstruction, as described by the inferred SBM, this means it can also be used when only single measurements have been performed, \(n_{ij}=1\), and the error magnitudes \(p\) and \(q\) are unknown. Since every arbitrary adjacency matrix can be cast in this setting, this method can be used to reconstruct networks for which no error assessments of any kind have been provided.
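For instance, in this setting of a single measurement per node pair and unknown error rates, the state could be initialized from a plain observed graph g alone, as in the following minimal sketch (the defaults n_default=1 and x_default=0 handle all unobserved node pairs):

# Every observed edge is treated as measured once and seen once; every
# non-edge as measured once and never seen.
n = g.new_ep("int", 1)
x = g.new_ep("int", 1)
state = gt.MeasuredBlockState(g, n=n, x=x, n_default=1, x_default=0)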

Below, we illustrate how the reconstruction can be performed with a simple example, using MeasuredBlockState:

g = gt.collection.data["lesmis"].copy()

# pretend we have measured and observed each edge twice

n = g.new_ep("int", 2)   # number of measurements
x = g.new_ep("int", 2)   # number of observations

e = g.edge(11, 36)
x[e] = 1                 # pretend we have observed edge (11, 36) only once

e = g.add_edge(15, 73)
n[e] = 2                 # pretend we have measured non-edge (15, 73) twice,
x[e] = 1                 # but observed it as an edge once.

# We initialize MeasuredBlockState, assuming that each non-edge has
# been measured only once (as opposed to twice for the observed
# edges), as specified by the 'n_default' and 'x_default' parameters.

state = gt.MeasuredBlockState(g, n=n, n_default=1, x=x, x_default=0)

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=100, mcmc_args=dict(niter=10))

# Now we collect the marginals for exactly 100,000 sweeps, at
# intervals of 10 sweeps:

u = None              # marginal posterior edge probabilities
bs = []               # partitions
cs = []               # average local clustering coefficient

def collect_marginals(s):
   global u, bs, cs
   u = s.collect_marginal(u)
   bstate = s.get_block_state()
   bs.append(bstate.levels[0].b.a.copy())
   cs.append(gt.local_clustering(s.get_graph()).fa.mean())

gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_marginals)

eprob = u.ep.eprob
print("Posterior probability of edge (11, 36):", eprob[u.edge(11, 36)])
print("Posterior probability of non-edge (15, 73):", eprob[u.edge(15, 73)] if u.edge(15, 73) is not None else 0.)
print("Estimated average local clustering: %g ± %g" % (np.mean(cs), np.std(cs)))

Which yields the following output:

Posterior probability of edge (11, 36): 0.859885...
Posterior probability of non-edge (15, 73): 0.083008...
Estimated average local clustering: 0.572547 ± 0.004556...

We have a successful reconstruction, where both ambiguous adjacency matrix entries are correctly recovered. The value for the average clustering coefficient is also correctly estimated, and is compatible with the true value \(0.57313675\), within the estimated error.
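As a quick sanity check, the true value quoted above can be computed directly from the original network with the same local_clustering() function used in the callback:

print(gt.local_clustering(gt.collection.data["lesmis"]).fa.mean())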

Below we visualize the maximum marginal posterior estimate of the reconstructed network:

# The maximum marginal posterior estimator can be obtained by
# filtering the edges with probability larger than .5

u = gt.GraphView(u, efilt=u.ep.eprob.fa > .5)

# Mark the recovered true edges as red, and the removed spurious edges as green
ecolor = u.new_ep("vector<double>", val=[0, 0, 0, .6])
for e in u.edges():
    if g.edge(e.source(), e.target()) is None or (e.source(), e.target()) == (11, 36):
        ecolor[e] = [1, 0, 0, .6]
for e in g.edges():
    if u.edge(e.source(), e.target()) is None:
        ne = u.add_edge(e.source(), e.target())
        ecolor[ne] = [0, 1, 0, .6]

# Duplicate the internal block state with the reconstructed network
# u, for visualization purposes.

bstate = state.get_block_state()
bstate = bstate.levels[0].copy(g=u)

# Disambiguate partitions and obtain marginals
pmode = gt.PartitionModeState(bs, converge=True)
pv = pmode.get_marginal(u)

edash = u.new_ep("vector<double>")
edash[u.edge(15, 73)] = [.1, .1, 0]
bstate.draw(pos=u.own_property(g.vp.pos), vertex_shape="pie", vertex_pie_fractions=pv,
            edge_color=ecolor, edge_dash_style=edash, edge_gradient=None,
            output="lesmis-reconstruction-marginals.svg")
../../_images/lesmis-reconstruction-marginals.svg

Reconstructed network of characters in the novel Les Misérables, assuming that each edge has been measured and recorded twice, and each non-edge has been measured only once, with the exception of edge (11, 36), shown in red, and non-edge (15, 73), shown in green, which have been measured twice and recorded as an edge once. Despite the ambiguity, both errors are successfully corrected by the reconstruction. The pie fractions on the nodes correspond to the probability of being in the group associated with the respective color.#

Heterogeneous errors#

In a more general scenario the measurement errors can be different for each node pair, i.e. \(p_{ij}\) and \(q_{ij}\) are the missing and spurious edge probabilities for node pair \((i,j)\). The measurement likelihood then becomes

\[P(\boldsymbol x | \boldsymbol n, \boldsymbol A, \boldsymbol p, \boldsymbol q) = \prod_{i<j}{n_{ij}\choose x_{ij}}\left[(1-p_{ij})^{x_{ij}}p_{ij}^{n_{ij}-x_{ij}}\right]^{A_{ij}} \left[q_{ij}^{x_{ij}}(1-q_{ij})^{n_{ij}-x_{ij}}\right]^{1-A_{ij}}.\]

Since the noise magnitudes are a priori unknown, we consider the integrated likelihood

\[P(\boldsymbol x | \boldsymbol n, \boldsymbol A, \alpha,\beta,\mu,\nu) = \prod_{i<j}\int P(x_{ij} | n_{ij}, A_{ij}, p_{ij}, q_{ij}) P(p_{ij}|\alpha,\beta) P(q_{ij}|\mu,\nu)\;\mathrm{d}p_{ij}\,\mathrm{d}q_{ij}\]

where \(P(p_{ij}|\alpha,\beta)\) and \(P(q_{ij}|\mu,\nu)\) are Beta prior distributions, as before. Instead of pre-specifying the hyperparameters, we include them in the posterior distribution

\[P(\boldsymbol A, \boldsymbol b, \alpha,\beta,\mu,\nu | \boldsymbol x, \boldsymbol n) = \frac{P(\boldsymbol x | \boldsymbol n, \boldsymbol A, \alpha,\beta,\mu,\nu)P(\boldsymbol A, \boldsymbol b)P(\alpha,\beta,\mu,\nu)}{P(\boldsymbol x| \boldsymbol n)},\]

where \(P(\alpha,\beta,\mu,\nu)\propto 1\) is a uniform hyperprior.

Operationally, the inference with this model works similarly to the one with uniform error rates, as we see with the same example:

state = gt.MixedMeasuredBlockState(g, n=n, n_default=1, x=x, x_default=0)

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=200, mcmc_args=dict(niter=10))

# Now we collect the marginals for exactly 100,000 sweeps, at
# intervals of 10 sweeps:

u = None              # marginal posterior edge probabilities
bs = []               # partitions
cs = []               # average local clustering coefficient

gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_marginals)

eprob = u.ep.eprob
print("Posterior probability of edge (11, 36):", eprob[u.edge(11, 36)])
print("Posterior probability of non-edge (15, 73):", eprob[u.edge(15, 73)] if u.edge(15, 73) is not None else 0.)
print("Estimated average local clustering: %g ± %g" % (np.mean(cs), np.std(cs)))

Which yields:

Posterior probability of edge (11, 36): 0.685268...
Posterior probability of non-edge (15, 73): 0.046904...
Estimated average local clustering: 0.567948 ± 0.006440...

The results are very similar to the ones obtained with the uniform model in this case, but can be quite different in situations where a large number of measurements has been performed (see [peixoto-reconstructing-2018] for details).

Extraneous error estimates#

In some situations the edge uncertainties are estimated by means other than repeated measurements, using domain-specific models. Here we consider the general case where the error estimates are extraneously provided as independent edge probabilities \(\boldsymbol Q\),

\[P_Q(\boldsymbol A | \boldsymbol Q) = \prod_{i<j}Q_{ij}^{A_{ij}}(1-Q_{ij})^{1-A_{ij}},\]

where \(Q_{ij}\) is the estimated probability of edge \((i,j)\). Although in principle we could reconstruct networks directly from the above distribution, we can also incorporate it with SBM inference to take advantage of large-scale structures present in the data. We do so by employing Bayes’ rule to extract the noise model from the provided values [martin-structural-2015] [peixoto-reconstructing-2018],

\[\begin{split}\begin{align} P_Q(\boldsymbol Q | \boldsymbol A) &= \frac{P_Q(\boldsymbol A | \boldsymbol Q)P_Q(\boldsymbol Q)}{P_Q(\boldsymbol A)},\\ & = P_Q(\boldsymbol Q) \prod_{i<j} \left(\frac{Q_{ij}}{\bar Q}\right)^{A_{ij}}\left(\frac{1-Q_{ij}}{1-\bar Q}\right)^{1-A_{ij}}, \end{align}\end{split}\]

where \(\bar Q = \sum_{i<j}Q_{ij}/{N\choose 2}\) is the estimated network density, and \(P_Q(\boldsymbol Q)\) is an unknown prior for \(\boldsymbol Q\), which can remain unspecified as it has no effect on the posterior distribution. With the above, we can reconstruct the network based on the posterior distribution,

\[P(\boldsymbol A, \boldsymbol b | \boldsymbol Q) = \frac{P_Q(\boldsymbol Q | \boldsymbol A)P(\boldsymbol A, \boldsymbol b)}{P(\boldsymbol Q)}\]

where \(P(\boldsymbol A, \boldsymbol b)\) is the joint SBM distribution used before. Note that this reconstruction will be different from the one obtained directly from the original estimation, i.e.

\[P(\boldsymbol A | \boldsymbol Q) = \sum_{\boldsymbol b}P(\boldsymbol A, \boldsymbol b | \boldsymbol Q) \neq P_Q(\boldsymbol A | \boldsymbol Q).\]

This is because the posterior \(P(\boldsymbol A | \boldsymbol Q)\) will take into consideration the correlations found in the data, as captured by the inferred SBM structure, as further evidence for the existence and non-existence of edges. We illustrate this with an example similar to the one considered previously, where two adjacency matrix entries with the same ambiguous edge probability \(Q_{ij}=1/2\) are correctly reconstructed as edge and non-edge, due to the joint SBM inference:

g = gt.collection.data["lesmis"].copy()

N = g.num_vertices()
E = g.num_edges()

q = g.new_ep("double", .98)   # edge uncertainties

e = g.edge(11, 36)
q[e] = .5                     # ambiguous true edge

e = g.add_edge(15, 73)
q[e] = .5                     # ambiguous spurious edge

# We initialize UncertainBlockState, assuming that each non-edge
# has an uncertainty of q_default, chosen to preserve the expected
# density of the original network:

q_default = (E - q.a.sum()) / ((N * (N - 1))/2 - E)
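# (i.e. q_default is chosen such that
#  q.a.sum() + q_default * (N * (N - 1) / 2 - E) == E,
#  so that the expected number of edges under P_Q matches the observed E)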

state = gt.UncertainBlockState(g, q=q, q_default=q_default)

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=100, mcmc_args=dict(niter=10))

# Now we collect the marginals for exactly 100,000 sweeps, at
# intervals of 10 sweeps:

u = None              # marginal posterior edge probabilities
bs = []               # partitions
cs = []               # average local clustering coefficient

def collect_marginals(s):
   global bs, u, cs
   u = s.collect_marginal(u)
   bstate = s.get_block_state()
   bs.append(bstate.levels[0].b.a.copy())
   cs.append(gt.local_clustering(s.get_graph()).fa.mean())

gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_marginals)

eprob = u.ep.eprob
print("Posterior probability of edge (11, 36):", eprob[u.edge(11, 36)])
print("Posterior probability of non-edge (15, 73):", eprob[u.edge(15, 73)] if u.edge(15, 73) is not None else 0.)
print("Estimated average local clustering: %g ± %g" % (np.mean(cs), np.std(cs)))

The above yields the output:

Posterior probability of edge (11, 36): 0.798679...
Posterior probability of non-edge (15, 73): 0.012601...
Estimated average local clustering: 0.537222 ± 0.021772...

The reconstruction is accurate, despite the two ambiguous entries having the same measurement probability. The reconstructed network is visualized below.

# The maximum marginal posterior estimator can be obtained by
# filtering the edges with probability larger than .5

u = gt.GraphView(u, efilt=u.ep.eprob.fa > .5)

# Mark the recovered true edges as red, and the removed spurious edges as green
ecolor = u.new_ep("vector<double>", val=[0, 0, 0, .6])
edash = u.new_ep("vector<double>")
for e in u.edges():
    if g.edge(e.source(), e.target()) is None or (e.source(), e.target()) == (11, 36):
        ecolor[e] = [1, 0, 0, .6]

for e in g.edges():
    if u.edge(e.source(), e.target()) is None:
        ne = u.add_edge(e.source(), e.target())
        ecolor[ne] = [0, 1, 0, .6]
        if (e.source(), e.target()) == (15, 73):
            edash[ne] = [.1, .1, 0]

bstate = state.get_block_state()
bstate = bstate.levels[0].copy(g=u)

# Disambiguate partitions and obtain marginals
pmode = gt.PartitionModeState(bs, converge=True)
pv = pmode.get_marginal(u)

bstate.draw(pos=u.own_property(g.vp.pos), vertex_shape="pie", vertex_pie_fractions=pv,
            edge_color=ecolor, edge_dash_style=edash, edge_gradient=None,
            output="lesmis-uncertain-reconstruction-marginals.svg")
../../_images/lesmis-uncertain-reconstruction-marginals.svg

Reconstructed network of characters in the novel Les Misérables, assuming that each edge has a measurement probability of \(.98\). Edge (11, 36), shown in red, and non-edge (15, 73), shown in green, both have probability \(0.5\). Despite the ambiguity, both errors are successfully corrected by the reconstruction. The pie fractions on the nodes correspond to the probability of being in the group associated with the respective color.#

Latent Poisson multigraphs#

Even in situations where measurement errors can be neglected, it can still be useful to assume a given network is the outcome of a “hidden” multigraph model, i.e. more than one edge between nodes is allowed, but then its multiedges are “erased” by transforming them into simple edges. In this way, it is possible to construct generative models that can better handle situations where the underlying network possesses heterogeneous density, such as strong community structure and broad degree distributions [peixoto-latent-2020]. This can be incorporated into the scheme of Eq. (8) by considering the data to be the observed simple graph, \(\boldsymbol{\mathcal{D}} = \boldsymbol G\). We proceed in the same way as in the previous reconstruction scenarios, but using instead LatentMultigraphBlockState.
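To make the erasure operation concrete, the following minimal sketch (independent of the example that follows) collapses a small latent multigraph into the simple graph that would actually be observed:

g = gt.Graph(directed=False)
g.add_edge_list([(0, 1), (0, 1), (1, 2)])   # latent multigraph with a double edge
gt.remove_parallel_edges(g)                 # "erase" the multiplicities
print(g.num_edges())                        # -> 2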

For example, in the following we will obtain the community structure and latent multiedges of a network of political books:

g = gt.collection.data["polbooks"]

state = gt.LatentMultigraphBlockState(g)

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=100, mcmc_args=dict(niter=10))

# Now we collect the marginals for exactly 100,000 sweeps, at
# intervals of 10 sweeps:

u = None              # marginal posterior multigraph
bs = []               # partitions

def collect_marginals(s):
   global bs, u
   u = s.collect_marginal_multigraph(u)
   bstate = state.get_block_state()
   bs.append(bstate.levels[0].b.a.copy())

gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_marginals)

# compute average multiplicities

ew = u.new_ep("double")
w = u.ep.w
wcount = u.ep.wcount
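# w[e] holds the multiplicity values sampled for edge e, and wcount[e]
# how many times each value was seen, so the loop below computes a
# weighted average multiplicity per edge.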
for e in u.edges():
    ew[e] = (wcount[e].a * w[e].a).sum() / wcount[e].a.sum()

bstate = state.get_block_state()
bstate = bstate.levels[0].copy(g=u)

# Disambiguate partitions and obtain marginals
pmode = gt.PartitionModeState(bs, converge=True)
pv = pmode.get_marginal(u)

bstate.draw(pos=u.own_property(g.vp.pos), vertex_shape="pie", vertex_pie_fractions=pv,
            edge_pen_width=gt.prop_to_size(ew, .1, 8, power=1), edge_gradient=None,
            output="polbooks-erased-poisson.svg")
../../_images/polbooks-erased-poisson.svg

Reconstructed latent Poisson degree-corrected SBM for a network of political books, showing the marginal mean edge multiplicities as line thickness. The pie fractions on the nodes correspond to the probability of being in the group associated with the respective color.#

Latent triadic closures#

Another useful reconstruction scenario is when we assume our observed network is the outcome of a mixture of different edge placement mechanisms. One example is the combination of triadic closure with community structure [peixoto-disentangling-2022]. This approach can be used to separate the effects of triangle formation from node homophily, which are typically conflated. We proceed in the same way as in the previous reconstruction scenarios, but using instead LatentClosureBlockState.

For example, in the following we will obtain the community structure and latent closure edges of a network of political books:

g = gt.collection.data["polbooks"]

state = gt.LatentClosureBlockState(g, L=10)

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=100, mcmc_args=dict(niter=10))

# Now we collect the marginals for exactly 100,000 sweeps, at
# intervals of 10 sweeps:

us = None             # marginal posterior graphs
bs = []               # partitions

def collect_marginals(s):
   global bs, us
   us = s.collect_marginal(us)
   bstate = state.bstate
   bs.append(bstate.levels[0].b.a.copy())

gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_marginals)

u = us[0]             # marginal seminal edges

# Disambiguate partitions and obtain marginals
pmode = gt.PartitionModeState(bs, converge=True)
pv = pmode.get_marginal(u)

bstate = state.bstate.levels[0].copy(g=u)

# edge width
ew = u.ep.eprob.copy()
ew.a = abs(ew.a - .5)
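# (the width reflects how far the marginal probability is from 1/2, i.e.
#  how decisively an edge is classified as seminal or closure)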

# get a color map (matplotlib is needed here)
import matplotlib.colors
clrs = [(1, 0, 0, 1.0),
        (0, 0, 0, 1.0)]
red_cm = matplotlib.colors.LinearSegmentedColormap.from_list("Set3", clrs)

# draw red edge last
eorder = u.ep.eprob.copy()
eorder.a *= -1

bstate.draw(pos=u.own_property(g.vp.pos), vertex_shape="pie", vertex_pie_fractions=pv,
            edge_pen_width=gt.prop_to_size(ew, .1, 4, power=1),
            edge_gradient=None, edge_color=u.ep.eprob, ecmap=red_cm,
            eorder=eorder, output="polbooks-closure.svg")
../../_images/polbooks-closure.svg

Reconstructed degree-corrected SBM with latent closure edges for a network of political books, showing the marginal probability of an edge being due to triadic closure as the color red. The pie fractions on the nodes correspond to the probability of being in the group associated with the respective color.#

Triadic closure can also be used to perform uncertain network reconstruction, using MeasuredClosureBlockState, in a manner analogous to what was done in Measured networks:

g = gt.collection.data["lesmis"].copy()

# pretend we have measured and observed each edge twice

n = g.new_ep("int", 2)   # number of measurements
x = g.new_ep("int", 2)   # number of observations

e = g.edge(11, 36)
x[e] = 1                 # pretend we have observed edge (11, 36) only once

e = g.add_edge(15, 73)
n[e] = 2                 # pretend we have measured non-edge (15, 73) twice,
x[e] = 1                 # but observed it as an edge once.

# We initialize MeasuredClosureBlockState, assuming that each non-edge has
# been measured only once (as opposed to twice for the observed
# edges), as specified by the 'n_default' and 'x_default' parameters.

state = gt.MeasuredClosureBlockState(g, n=n, n_default=1, x=x, x_default=0, L=10)

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=100, mcmc_args=dict(niter=10))

# Now we collect the marginals for exactly 100,000 sweeps, at
# intervals of 10 sweeps:

us = None             # marginal posterior edge probabilities
bs = []               # partitions
cs = []               # average local clustering coefficient

def collect_marginals(s):
   global us, bs, cs
   us = s.collect_marginal(us)
   bstate = s.get_block_state()
   bs.append(bstate.levels[0].b.a.copy())
   cs.append(gt.local_clustering(s.get_graph()).fa.mean())

gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_marginals)

u = us[-1]
eprob = u.ep.eprob
print("Posterior probability of edge (11, 36):", eprob[u.edge(11, 36)])
print("Posterior probability of non-edge (15, 73):", eprob[u.edge(15, 73)] if u.edge(15, 73) is not None else 0.)
print("Estimated average local clustering: %g ± %g" % (np.mean(cs), np.std(cs)))

Which yields the following output:

Posterior probability of edge (11, 36): 0.993999...
Posterior probability of non-edge (15, 73): 0.021802...
Estimated average local clustering: 0.575943 ± 0.007209...

Reconstruction from dynamics#

In some cases direct measurements of the edges of a network are either impossible, or can be done only at significant experimental cost. In such situations, we are required to infer the network of interactions from the observed functional behavior [peixoto-network-2019]. In graph-tool this can be done for epidemic spreading (via EpidemicsBlockState), for the kinetic Ising model (via IsingGlauberBlockState and CIsingGlauberBlockState), and for the equilibrium Ising model (via PseudoIsingBlockState and PseudoCIsingBlockState). We consider the general reconstruction framework outlined above, where the observed data \(\mathcal{D}\) in Eq. (8) are a time series generated by one of the supported processes. Just like before, the posterior distribution includes not only the adjacency matrix, but also the parameters of the dynamical model and of the SBM that is used as a prior.

For example, in the case of a SIS epidemics, where \(\sigma_i(t)=1\) means node \(i\) is infected at time \(t\), or \(0\) otherwise, the likelihood for a time-series \(\boldsymbol\sigma\) is

\[P(\boldsymbol\sigma|\boldsymbol A,\boldsymbol\beta,\gamma)=\prod_t\prod_iP(\sigma_i(t)|\boldsymbol\sigma(t-1)),\]

where

\[P(\sigma_i(t)|\boldsymbol\sigma(t-1)) = f(e^{m_i(t-1)}, \sigma_i(t))^{1-\sigma_i(t-1)} \times f(\gamma,\sigma_i(t))^{\sigma_i(t-1)}\]

is the transition probability for node \(i\) at time \(t\), with \(f(p,\sigma) = (1-p)^{\sigma}p^{1-\sigma}\), and where

\[m_i(t) = \sum_jA_{ij}\ln(1-\beta_{ij})\sigma_j(t)\]

is the contribution from all neighbors of node \(i\) to its infection probability at time \(t\). In the equations above the value \(\beta_{ij}\) is the probability of an infection via an existing edge \((i,j)\), and \(\gamma\) is the \(1\to 0\) recovery probability. With these additional parameters, the full posterior distribution for the reconstruction becomes

\[P(\boldsymbol A,\boldsymbol b,\boldsymbol\beta|\boldsymbol\sigma) = \frac{P(\boldsymbol\sigma|\boldsymbol A,\boldsymbol\beta,\gamma)P(\boldsymbol A|\boldsymbol b)P(\boldsymbol b)P(\boldsymbol\beta)}{P(\boldsymbol\sigma|\gamma)}.\]

Since \(\beta_{ij}\in[0,1]\), we use the uniform prior \(P(\boldsymbol\beta)=1\). Note also that the recovery probability \(\gamma\) plays no role in the reconstruction algorithm, since its term in the likelihood does not involve \(\boldsymbol A\) (and hence gets cancelled out in the denominator \(P(\boldsymbol\sigma|\gamma)=P(\gamma|\boldsymbol\sigma)P(\boldsymbol\sigma)/P(\gamma)\)). This means the above posterior only depends on the infection events \(0\to 1\), and thus is also valid, without any modifications, for all epidemic variants SI, SIR, SEIR, etc., since the infection events occur with the same probability in all these models.
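As a didactic illustration (not part of graph-tool's API), the transition probability above can be written directly in a few lines, taking the state vectors and model parameters as plain numpy arrays:

import numpy as np

def transition_prob(sigma_prev, sigma_i_t, i, A, beta, gamma):
    """P(sigma_i(t) | sigma(t-1)) for the SIS likelihood above."""
    f = lambda p, s: (1 - p) ** s * p ** (1 - s)
    # contribution of the infected neighbors of i to its infection probability
    m_i = (A[i] * np.log(1 - beta[i]) * sigma_prev).sum()
    if sigma_prev[i] == 0:     # susceptible: possible infection
        return f(np.exp(m_i), sigma_i_t)
    else:                      # infected: possible recovery
        return f(gamma, sigma_i_t)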

The example below shows how to perform reconstruction from an epidemic process.

# We will first simulate the dynamics with a given network
g = gt.collection.data["dolphins"]

# The algorithm accepts multiple independent time-series for the
# reconstruction. We will generate 100 SI cascades starting from a
# random node each time, and uniform infection probability 0.7.

ss = []
for i in range(100):
    si_state = gt.SIState(g, beta=.7)
    s = [si_state.get_state().copy()]
    for j in range(10):
        si_state.iterate_sync()
        s.append(si_state.get_state().copy())
    # Each time series should be represented as a single vector-valued
    # vertex property map with the states for each node at each time.
    s = gt.group_vector_property(s)
    ss.append(s)

# Prepare the initial state of the reconstruction as an empty graph
u = g.copy()
u.clear_edges()
ss = [u.own_property(s) for s in ss]   # time series properties need to be 'owned' by graph u

# Create reconstruction state
rstate = gt.EpidemicsBlockState(u, s=ss, beta=None, r=1e-6, global_beta=.1,
                                nested=False, aE=g.num_edges())

# Now we collect the marginals for exactly 100,000 sweeps, at
# intervals of 10 sweeps:

gm = None
bs = []
betas = []

def collect_marginals(s):
   global gm, bs
   gm = s.collect_marginal(gm)
   bs.append(s.bstate.b.a.copy())
   betas.append(s.params["global_beta"])

gt.mcmc_equilibrate(rstate, force_niter=10000, mcmc_args=dict(niter=10, xstep=0),
                    callback=collect_marginals)

print("Posterior similarity: ", gt.similarity(g, gm, g.new_ep("double", 1), gm.ep.eprob))
print("Inferred infection probability: %g ± %g" % (mean(betas), std(betas)))

# Disambiguate partitions and obtain marginals
pmode = gt.PartitionModeState(bs, converge=True)
pv = pmode.get_marginal(gm)

gt.graph_draw(gm, gm.own_property(g.vp.pos), vertex_shape="pie", vertex_color="black",
              vertex_pie_fractions=pv, vertex_pen_width=1,
              edge_pen_width=gt.prop_to_size(gm.ep.eprob, 0, 5),
              eorder=gm.ep.eprob, output="dolphins-posterior.svg")

The reconstruction can accurately recover the hidden network and the infection probability:

Posterior similarity:  0.9874594311217187
Inferred infection probability: 0.687336 ± 0.0587325

The figure below shows the reconstructed network and the inferred community structure.

../../_images/dolphins-posterior.svg

Reconstructed network of associations between 62 dolphins, from the dynamics of a SI epidemic model, using the degree-corrected SBM as a latent prior. The edge thickness corresponds to the marginal posterior probability of each edge, and the node pie charts to the marginal posterior distribution of the node partition.#

Edge prediction as binary classification#

A more traditional approach to the prediction of missing and spurious edges formulates it as a supervised binary classification task, where the edge/non-edge scores are computed by fitting a generative model to the observed data, and computing their probabilities under that model [clauset-hierarchical-2008] [guimera-missing-2009]. In this setting, one typically omits any explicit model of the measurement process (hence intrinsically assuming it to be uniform), and as a consequence of the overall setup, only relative probabilities between individual missing and spurious edges can be produced, instead of the full posterior distribution considered in the last section. Since this limits the overall network reconstruction, and does not yield confidence intervals, it is a less powerful approach. Nevertheless, it is a popular procedure, which can also be performed with graph-tool, as we describe in the following.

We set up the classification task by dividing the edges/non-edges into two sets \(\boldsymbol A\) and \(\delta \boldsymbol A\), where the former corresponds to the observed network and the latter either to the missing or spurious edges. We may compute the posterior of \(\delta \boldsymbol A\) as [valles-catala-consistencies-2018]

(9)#\[P(\delta \boldsymbol A | \boldsymbol A) \propto \sum_{\boldsymbol b}\frac{P(\boldsymbol A \cup \delta\boldsymbol A| \boldsymbol b)}{P(\boldsymbol A| \boldsymbol b)}P(\boldsymbol b | \boldsymbol A)\]

up to a normalization constant. (Note that the posterior of Eq. (9) cannot be used to sample the reconstruction \(\delta \boldsymbol A\), as it is not informative of the overall network density, i.e. the absolute number of missing and spurious edges. It can, however, be used to compare different reconstructions with the same density.) Although the normalization constant is difficult to obtain in general (since we need to perform a sum over all possible spurious/missing edges), the numerator of Eq. (9) can be computed by sampling partitions from the posterior, and then inserting or deleting edges from the graph and computing the new likelihood. This means that we can easily compare alternative predictive hypotheses \(\{\delta \boldsymbol A_i\}\) via their likelihood ratios

\[\lambda_i = \frac{P(\delta \boldsymbol A_i | \boldsymbol A)}{\sum_j P(\delta \boldsymbol A_j | \boldsymbol A)}\]

which do not depend on the normalization constant.
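Numerically, the likelihood ratios are best evaluated in log-space; the following is one possible sketch (using scipy.special.logsumexp, which is not required by the example below, where an equivalent helper get_avg() is defined instead):

import numpy as np
from scipy.special import logsumexp

def likelihood_ratios(log_probs):
    # log_probs[i] is an estimate of ln P(dA_i | A); returns the ratios lambda_i
    log_probs = np.asarray(log_probs)
    return np.exp(log_probs - logsumexp(log_probs))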

The values \(P(\delta \boldsymbol A | \boldsymbol A, \boldsymbol b)\) can be computed with get_edges_prob(). Hence, we can compute spurious/missing edge probabilities just as if we were collecting marginal distributions when doing model averaging.

Below is an example for predicting the two following edges in the football network, using the nested model (for which we need to replace \(\boldsymbol b\) by \(\{\boldsymbol b_l\}\) in the equations above).

../../_images/football_missing.svg

Two non-existing edges in the football network (in red): \((101,102)\) in the middle, and \((17,56)\) in the upper right region of the figure.#

g = gt.collection.data["football"]

missing_edges = [(101, 102), (17, 56)]

L = 10

state = gt.minimize_nested_blockmodel_dl(g)

probs = ([], [])

def collect_edge_probs(s):
    p1 = s.get_edges_prob([missing_edges[0]], entropy_args=dict(partition_dl=False))
    p2 = s.get_edges_prob([missing_edges[1]], entropy_args=dict(partition_dl=False))
    probs[0].append(p1)
    probs[1].append(p2)

# Now we collect the probabilities for exactly 100,000 sweeps
gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_edge_probs)


def get_avg(p):
   p = np.array(p)
   pmax = p.max()
   p -= pmax
   return pmax + np.log(np.exp(p).mean())

p1 = get_avg(probs[0])
p2 = get_avg(probs[1])

p_sum = get_avg([p1, p2]) + np.log(2)

l1 = p1 - p_sum
l2 = p2 - p_sum

print("likelihood-ratio for %s: %g" % (missing_edges[0], exp(l1)))
print("likelihood-ratio for %s: %g" % (missing_edges[1], exp(l2)))
likelihood-ratio for (101, 102): 0.0203792
likelihood-ratio for (17, 56): 0.979621

From this we can conclude that edge \((17, 56)\) is more likely than \((101, 102)\) to be a missing edge.

The prediction using the non-nested model can be performed in an entirely analogous fashion.
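For instance, a minimal sketch of the non-nested case, assuming the same missing_edges and collect_edge_probs() setup as above (here get_edges_prob() is simply called on a flat BlockState instead):

state = gt.minimize_blockmodel_dl(g)   # fit of the non-nested SBM

gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_edge_probs)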

References#

[peixoto-bayesian-2019]

Tiago P. Peixoto, “Bayesian stochastic blockmodeling”, Advances in Network Clustering and Blockmodeling, edited by P. Doreian, V. Batagelj, A. Ferligoj, (Wiley, New York, 2019) DOI: 10.1002/9781119483298.ch11 [sci-hub, @tor], arXiv: 1705.10225

[peixoto-descriptive-2021]

Tiago P. Peixoto, “Descriptive vs. inferential community detection: pitfalls, myths and half-truths”, arXiv: 2112.00183

[holland-stochastic-1983]

Paul W. Holland, Kathryn Blackmond Laskey, Samuel Leinhardt, “Stochastic blockmodels: First steps”, Social Networks Volume 5, Issue 2, Pages 109-137 (1983). DOI: 10.1016/0378-8733(83)90021-7 [sci-hub, @tor]

[karrer-stochastic-2011]

Brian Karrer, M. E. J. Newman “Stochastic blockmodels and community structure in networks”, Phys. Rev. E 83, 016107 (2011). DOI: 10.1103/PhysRevE.83.016107 [sci-hub, @tor], arXiv: 1008.3926

[peixoto-nonparametric-2017]

Tiago P. Peixoto, “Nonparametric Bayesian inference of the microcanonical stochastic block model”, Phys. Rev. E 95 012317 (2017). DOI: 10.1103/PhysRevE.95.012317 [sci-hub, @tor], arXiv: 1610.02703

[peixoto-parsimonious-2013]

Tiago P. Peixoto, “Parsimonious module inference in large networks”, Phys. Rev. Lett. 110, 148701 (2013). DOI: 10.1103/PhysRevLett.110.148701 [sci-hub, @tor], arXiv: 1212.4794.

[peixoto-hierarchical-2014]

Tiago P. Peixoto, “Hierarchical block structures and high-resolution model selection in large networks”, Phys. Rev. X 4, 011047 (2014). DOI: 10.1103/PhysRevX.4.011047 [sci-hub, @tor], arXiv: 1310.4377.

[peixoto-model-2016]

Tiago P. Peixoto, “Model selection and hypothesis testing for large-scale network models with overlapping groups”, Phys. Rev. X 5, 011033 (2016). DOI: 10.1103/PhysRevX.5.011033 [sci-hub, @tor], arXiv: 1409.3059.

[peixoto-inferring-2015]

Tiago P. Peixoto, “Inferring the mesoscale structure of layered, edge-valued and time-varying networks”, Phys. Rev. E 92, 042807 (2015). DOI: 10.1103/PhysRevE.92.042807 [sci-hub, @tor], arXiv: 1504.02381

[aicher-learning-2015]

Christopher Aicher, Abigail Z. Jacobs, and Aaron Clauset, “Learning Latent Block Structure in Weighted Networks”, Journal of Complex Networks 3(2). 221-248 (2015). DOI: 10.1093/comnet/cnu026 [sci-hub, @tor], arXiv: 1404.0431

[peixoto-weighted-2017]

Tiago P. Peixoto, “Nonparametric weighted stochastic block models”, Phys. Rev. E 97, 012306 (2018), DOI: 10.1103/PhysRevE.97.012306 [sci-hub, @tor], arXiv: 1708.01432

[peixoto-efficient-2014]

Tiago P. Peixoto, “Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models”, Phys. Rev. E 89, 012804 (2014). DOI: 10.1103/PhysRevE.89.012804 [sci-hub, @tor], arXiv: 1310.4378

[peixoto-merge-split-2020]

Tiago P. Peixoto, “Merge-split Markov chain Monte Carlo for community detection”, Phys. Rev. E 102, 012305 (2020), DOI: 10.1103/PhysRevE.102.012305 [sci-hub, @tor], arXiv: 2003.07070

[peixoto-revealing-2021]

Tiago P. Peixoto, “Revealing consensus and dissensus between network partitions”, Phys. Rev. X 11 021003 (2021) DOI: 10.1103/PhysRevX.11.021003 [sci-hub, @tor], arXiv: 2005.13977

[lizhi-statistical-2020]

Lizhi Zhang, Tiago P. Peixoto, “Statistical inference of assortative community structures”, Phys. Rev. Research 2 043271 (2020), DOI: 10.1103/PhysRevResearch.2.043271 [sci-hub, @tor], arXiv: 2006.14493

[peixoto-ordered-2022]

Tiago P. Peixoto, “Ordered community detection in directed networks”, Phys. Rev. E 106, 024305 (2022), DOI: 10.1103/PhysRevE.106.024305 [sci-hub, @tor], arXiv: 2203.16460

[peixoto-reconstructing-2018]

Tiago P. Peixoto, “Reconstructing networks with unknown and heterogeneous errors”, Phys. Rev. X 8 041011 (2018). DOI: 10.1103/PhysRevX.8.041011 [sci-hub, @tor], arXiv: 1806.07956

[peixoto-disentangling-2022]

Tiago P. Peixoto, “Disentangling homophily, community structure and triadic closure in networks”, Phys. Rev. X 12, 011004 (2022), DOI: 10.1103/PhysRevX.12.011004 [sci-hub, @tor], arXiv: 2101.02510

[peixoto-network-2019]

Tiago P. Peixoto, “Network reconstruction and community detection from dynamics”, Phys. Rev. Lett. 123 128301 (2019), DOI: 10.1103/PhysRevLett.123.128301 [sci-hub, @tor], arXiv: 1903.10833

[peixoto-latent-2020]

Tiago P. Peixoto, “Latent Poisson models for networks with heterogeneous density”, Phys. Rev. E 102 012309 (2020) DOI: 10.1103/PhysRevE.102.012309 [sci-hub, @tor], arXiv: 2002.07803

[martin-structural-2015]

Travis Martin, Brian Ball, M. E. J. Newman, “Structural inference for uncertain networks”, Phys. Rev. E 93, 012306 (2016). DOI: 10.1103/PhysRevE.93.012306 [sci-hub, @tor], arXiv: 1506.05490

[clauset-hierarchical-2008]

Aaron Clauset, Cristopher Moore, M. E. J. Newman, “Hierarchical structure and the prediction of missing links in networks”, Nature 453, 98-101 (2008). DOI: 10.1038/nature06830 [sci-hub, @tor]

[guimera-missing-2009]

Roger Guimerà, Marta Sales-Pardo, “Missing and spurious interactions and the reconstruction of complex networks”, PNAS vol. 106 no. 52 (2009). DOI: 10.1073/pnas.0908366106 [sci-hub, @tor]

[valles-catala-consistencies-2018]

Toni Vallès-Català, Tiago P. Peixoto, Roger Guimerà, Marta Sales-Pardo, “Consistencies and inconsistencies between model selection and link prediction in networks”, Phys. Rev. E 97 062316 (2018), DOI: 10.1103/PhysRevE.97.062316 [sci-hub, @tor], arXiv: 1705.07967

[guimera-modularity-2004]

Roger Guimerà, Marta Sales-Pardo, and Luís A. Nunes Amaral, “Modularity from fluctuations in random graphs and complex networks”, Phys. Rev. E 70, 025101(R) (2004), DOI: 10.1103/PhysRevE.70.025101 [sci-hub, @tor]

[hayes-connecting-2006]

Brian Hayes, “Connecting the dots. can the tools of graph theory and social-network studies unravel the next big plot?”, American Scientist, 94(5):400-404, 2006. http://www.jstor.org/stable/27858828

[ulanowicz-network-2005]

Robert E. Ulanowicz, and Donald L. DeAngelis. “Network analysis of trophic dynamics in south florida ecosystems.” US Geological Survey Program on the South Florida Ecosystem 114 (2005). https://fl.water.usgs.gov/PDF_files/ofr99_181_gerould.pdf#page=125

[read-cultures-1954]

Kenneth E. Read, “Cultures of the Central Highlands, New Guinea”, Southwestern J. of Anthropology, 10(1):1-43 (1954). DOI: 10.1086/soutjanth.10.1.3629074 [sci-hub, @tor]
