Inferring modular network structure#
graph-tool
includes algorithms to identify the large-scale structure
of networks via statistical inference in the
inference
submodule. Here we explain the basic
functionality with self-contained examples. For a more thorough
theoretical introduction to the methods described here, the reader is
referred to [peixoto-bayesian-2019].
See also [peixoto-descriptive-2023] and the corresponding blog post for an overall discussion about inferential approaches to structure identification in networks, and how they contrast with descriptive approaches.
Background: Nonparametric statistical inference#
A common task when analyzing networks is to characterize their structures in simple terms, often by dividing the nodes into modules or “communities”.
A principled approach to perform this task is to formulate generative models that include the idea of modules in their descriptions, which then can be detected by inferring the model parameters from data. More precisely, given the partition \(\boldsymbol b = \{b_i\}\) of the network into \(B\) groups, where \(b_i\in[0,B-1]\) is the group membership of node \(i\), we define a model that generates a network \(\boldsymbol A\) with a probability
\[P(\boldsymbol A|\boldsymbol\theta, \boldsymbol b) \qquad (1)\]
where \(\boldsymbol\theta\) are additional model parameters that control how the node partition affects the structure of the network. Therefore, if we observe a network \(\boldsymbol A\), the likelihood that it was generated by a given partition \(\boldsymbol b\) is obtained via the Bayesian posterior probability
\[P(\boldsymbol b | \boldsymbol A) = \frac{\sum_{\boldsymbol\theta}P(\boldsymbol A|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)}{P(\boldsymbol A)} \qquad (2)\]
where \(P(\boldsymbol\theta, \boldsymbol b)\) is the prior probability of the model parameters, and
\[P(\boldsymbol A) = \sum_{\boldsymbol\theta,\boldsymbol b}P(\boldsymbol A|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b) \qquad (3)\]
is called the evidence, and corresponds to the total probability of the data summed over all model parameters. The particular types of models that will be considered here have “hard constraints”, such that there is only one choice for the remaining parameters \(\boldsymbol\theta\) that is compatible with the generated network, which means Eq. (2) simplifies to
\[P(\boldsymbol b | \boldsymbol A) = \frac{P(\boldsymbol A|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)}{P(\boldsymbol A)} \qquad (4)\]
with \(\boldsymbol\theta\) above being the only choice compatible with \(\boldsymbol A\) and \(\boldsymbol b\). The inference procedures considered here will consist in either finding a network partition that maximizes Eq. (4), or sampling different partitions according to its posterior probability.
As we will show below, this approach also enables the comparison of different models according to statistical evidence (a.k.a. model selection).
Minimum description length (MDL)#
We note that Eq. (4) can be written as
\[P(\boldsymbol b | \boldsymbol A) = \frac{\exp(-\Sigma)}{P(\boldsymbol A)}\]
where
\[\Sigma = -\ln P(\boldsymbol A|\boldsymbol\theta, \boldsymbol b) - \ln P(\boldsymbol\theta, \boldsymbol b) \qquad (5)\]
is called the description length of the network \(\boldsymbol A\). It measures the amount of information required to describe the data, if we encode it using the particular parametrization of the generative model given by \(\boldsymbol\theta\) and \(\boldsymbol b\), as well as the parameters themselves. Therefore, maximizing the posterior distribution of Eq. (4) is fully equivalent to the so-called minimum description length method. This approach corresponds to an implementation of Occam’s razor, where the simplest model is selected among all possibilities with the same explanatory power. The selection is based on the statistical evidence available, and therefore will not overfit, i.e. mistake stochastic fluctuations for actual structure. In particular, this means that we will not find modules in networks if they could have arisen simply because of stochastic fluctuations, as they do in fully random graphs [guimera-modularity-2004].
The stochastic block model (SBM)#
The stochastic block model is arguably the simplest generative process based on the notion of groups of nodes [holland-stochastic-1983]. The microcanonical formulation [peixoto-nonparametric-2017] of the basic or “traditional” version takes as parameters the partition of the nodes into groups \(\boldsymbol b\) and a \(B\times B\) matrix of edge counts \(\boldsymbol e\), where \(e_{rs}\) is the number of edges between groups \(r\) and \(s\). Given these constraints, the edges are then placed randomly. Hence, nodes that belong to the same group possess the same probability of being connected with other nodes of the network.
An example of a possible parametrization is given in the following figure.
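In code, such a parametrization can also be written down explicitly and used to sample networks. The following is a minimal sketch using graph-tool’s generate_sbm() function; the partition and edge counts below are purely illustrative:
import numpy as np
import graph_tool.all as gt

b = np.repeat([0, 1, 2], 30)              # partition: three groups of 30 nodes
ers = np.array([[200,  20,  10],          # matrix of edge counts between groups
                [ 20, 180,  30],          # (diagonal entries follow graph-tool's
                [ 10,  30, 220]])         # convention of counting in-group edge
                                          # endpoints twice)
u = gt.generate_sbm(b, ers, micro_ers=True)   # sample a graph honoring these counts
print(u)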
Note
With the SBM, no constraints are imposed on what kind of modular structure is allowed, as the matrix of edge counts \(e\) is unconstrained. Hence, we can detect the putatively typical pattern of assortative “community structure”, i.e. when nodes are connected mostly to other nodes of the same group, if it happens to be the most likely network description, but we can also detect a large multiplicity of other patterns, such as bipartiteness, core-periphery, and many others, all under the same inference framework. If you are interested in searching exclusively for assortative structures, see Sec. Assortative community structure.
Although quite general, the traditional model assumes that the edges are placed randomly inside each group, and because of this the nodes that belong to the same group tend to have very similar degrees. As it turns out, this is often a poor model for many networks, which possess highly heterogeneous degree distributions. A better model for such networks is called the degree-corrected stochastic block model [karrer-stochastic-2011], and it is defined just like the traditional model, with the addition of the degree sequence \(\boldsymbol k = \{k_i\}\) of the graph as an additional set of parameters (assuming again a microcanonical formulation [peixoto-nonparametric-2017]).
The nested stochastic block model#
The regular SBM has a drawback when applied to large networks. Namely, it cannot be used to find relatively small groups, as the maximum number of groups that can be found scales as \(B_{\text{max}}=O(\sqrt{N})\), where \(N\) is the number of nodes in the network, if Bayesian inference is performed [peixoto-parsimonious-2013]. In order to circumvent this, we need to replace the noninformative priors used with a hierarchy of priors and hyperpriors, which amounts to a nested SBM, where the groups themselves are clustered into groups, and the matrix \(\boldsymbol e\) of edge counts is itself generated from another SBM, and so on recursively [peixoto-hierarchical-2014], as illustrated below.
With this model, the maximum number of groups that can be inferred scales as \(B_{\text{max}}=O(N/\log(N))\). In addition to being able to find small groups in large networks, this model also provides a multilevel hierarchical description of the network. With such a description, we can uncover structural patterns at multiple scales, representing different levels of coarse-graining.
Inferring the best partition#
The simplest and most efficient approach is to find the best
partition of the network by maximizing Eq. (4)
according to some version of the model. This is obtained via the
functions minimize_blockmodel_dl()
or
minimize_nested_blockmodel_dl()
, which
employ an agglomerative multilevel Markov chain Monte Carlo (MCMC) algorithm
[peixoto-efficient-2014].
We focus first on the non-nested model, and we illustrate its use with a
network of American football teams, which we load from the
collection
module:
g = gt.collection.data["football"]
print(g)
which yields
<Graph object, undirected, with 115 vertices and 613 edges, 4 internal vertex properties, 2 internal graph properties, at 0x...>
We then fit the degree-corrected model by calling:
state = gt.minimize_blockmodel_dl(g)
This returns a BlockState
object that
includes the inference results.
Note
The inference algorithm used is stochastic by nature, and may return a different answer each time it is run. This may be due to the fact that there are alternative partitions with similar probabilities, or that the optimum is difficult to find. Note that the inference problem here is, in general, NP-Hard, hence there is no efficient algorithm that is guaranteed to always find the best answer.
Because of this, typically one would call the algorithm many times,
and select the partition with the largest posterior probability of
Eq. (4), or equivalently, the minimum description
length of Eq. (5). The description length of a fit can be
obtained with the entropy()
method. See also Sec. Hierarchical partitions below.
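As a minimal sketch of this strategy, we can simply repeat the minimization a few times and keep the result with the smallest value of entropy():
states = [gt.minimize_blockmodel_dl(g) for i in range(10)]   # repeated runs
state = min(states, key=lambda s: s.entropy())               # keep the best fit
print("Best description length:", state.entropy())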
We may perform a drawing of the partition obtained via the draw method, which functions as a convenience wrapper to the graph_draw() function:
state.draw(pos=g.vp.pos, output="football-sbm-fit.svg")
which yields the following image.
We can obtain the group memberships as a
PropertyMap
on the vertices via the
get_blocks
method:
b = state.get_blocks()
r = b[10] # group membership of vertex 10
print(r)
which yields:
76
Note
For reasons of algorithmic efficiency, the group labels returned are not necessarily contiguous, and they may lie in any subset of the range \([0, N-1]\), where \(N\) is the number of nodes in the network.
We may also access the matrix of edge counts between groups via
get_matrix
# let us obtain a contiguous range first, which will facilitate
# visualization
b = gt.contiguous_map(state.get_blocks())
state = state.copy(b=b)
e = state.get_matrix()
B = state.get_nonempty_B()
matshow(e.todense()[:B, :B])
savefig("football-edge-counts.svg")
We may obtain the same matrix of edge counts as a graph, which has internal edge and vertex property maps with the edge and vertex counts, respectively:
bg = state.get_bg()
ers = state.mrs # edge counts
nr = state.wr # node counts
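For instance, we can list the edge counts between pairs of groups by iterating over the edges of this block graph:
for e in bg.edges():
    if ers[e] > 0:
        print(e.source(), e.target(), ers[e])   # group pair and edge count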
Hierarchical partitions#
The inference of the nested family of SBMs is done in a similar manner,
but we must use instead the
minimize_nested_blockmodel_dl()
function. We
illustrate its use with the neural network of the C. elegans worm:
g = gt.collection.data["celegansneural"]
print(g)
which yields:
<Graph object, directed, with 297 vertices and 2359 edges, 2 internal vertex properties, 1 internal edge property, 2 internal graph properties, at 0x...>
A hierarchical fit of the degree-corrected model is performed as follows.
state = gt.minimize_nested_blockmodel_dl(g)
The object returned is an instance of a
NestedBlockState
class, which
encapsulates the results. We can again draw the resulting hierarchical
clustering using the
draw()
method:
state.draw(output="celegans-hsbm-fit.svg")
Tip
If the output
parameter to
draw()
is omitted, an
interactive visualization is performed, where the user can re-order
the hierarchy nodes using the mouse and pressing the r
key.
A summary of the inferred hierarchy can be obtained with the
print_summary()
method,
which shows the number of nodes and groups in all levels:
state.print_summary()
l: 0, N: 297, B: 21
l: 1, N: 21, B: 6
l: 2, N: 6, B: 2
l: 3, N: 2, B: 1
l: 4, N: 1, B: 1
The hierarchical levels themselves are represented by individual
BlockState()
instances obtained via the
get_levels()
method:
levels = state.get_levels()
for s in levels:
    print(s)
    if s.get_N() == 1:
        break
<BlockState object with 297 blocks (21 nonempty), degree-corrected, for graph <Graph object, directed, with 297 vertices and 2359 edges, 2 internal vertex properties, 1 internal edge property, 2 internal graph properties, at 0x...>, at 0x...>
<BlockState object with 21 blocks (6 nonempty), for graph <Graph object, directed, with 297 vertices and 218 edges, 2 internal vertex properties, 1 internal edge property, at 0x...>, at 0x...>
<BlockState object with 6 blocks (2 nonempty), for graph <Graph object, directed, with 21 vertices and 36 edges, 2 internal vertex properties, 1 internal edge property, at 0x...>, at 0x...>
<BlockState object with 2 blocks (1 nonempty), for graph <Graph object, directed, with 6 vertices and 4 edges, 2 internal vertex properties, 1 internal edge property, at 0x...>, at 0x...>
<BlockState object with 1 blocks (1 nonempty), for graph <Graph object, directed, with 2 vertices and 1 edge, 2 internal vertex properties, 1 internal edge property, at 0x...>, at 0x...>
This means that we can inspect the hierarchical partition just as before:
r = levels[0].get_blocks()[46] # group membership of node 46 in level 0
print(r)
r = levels[1].get_blocks()[r] # group membership of node 46 in level 1
print(r)
r = levels[2].get_blocks()[r] # group membership of node 46 in level 2
print(r)
260
14
5
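The same walk up the hierarchy can be written as a loop, collecting the group membership of node 46 at every level:
v = 46
path = []
for s in levels:
    v = s.get_blocks()[v]   # group membership at the current level
    path.append(int(v))
    if s.get_N() == 1:
        break
print(path)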
Refinements using merge-split MCMC#
The agglomerative algorithm behind
minimize_blockmodel_dl()
and
minimize_nested_blockmodel_dl()
has
a log-linear complexity on the size of the network, and it usually works
very well in finding a good estimate of the optimum
partition. Nevertheless, it’s often still possible to find refinements
without starting the whole algorithm from scratch using a greedy
algorithm based on a merge-split MCMC with zero temperature
[peixoto-merge-split-2020]. This is achieved by following the
instructions in Sec. Sampling from the posterior distribution, while setting the inverse
temperature parameter beta
to infinity. For example, an equivalent
to the above minimization for the C. elegans network is the following:
g = gt.collection.data["celegansneural"]
state = gt.minimize_nested_blockmodel_dl(g)
S1 = state.entropy()
for i in range(1000): # this should be sufficiently large
    state.multiflip_mcmc_sweep(beta=np.inf, niter=10)
S2 = state.entropy()
print("Improvement:", S2 - S1)
Improvement: -251.532600...
Whenever possible, this procedure should be repeated several times, and
the result with the smallest description length (obtained via the
entropy()
method)
should be chosen. In more demanding situations, better results still can
be obtained, at the expense of a longer computation time, by using the
mcmc_anneal()
function, which
implements simulated annealing:
g = gt.collection.data["celegansneural"]
state = gt.minimize_nested_blockmodel_dl(g)
gt.mcmc_anneal(state, beta_range=(1, 10), niter=1000, mcmc_equilibrate_args=dict(force_niter=10))
Model selection#
As mentioned above, one can select the best model according to the choice that yields the smallest description length [peixoto-model-2016]. For instance, in case of the political blogs network we have
g = gt.collection.ns["polblogs"]
state_ndc = gt.minimize_nested_blockmodel_dl(g, state_args=dict(deg_corr=False))
state_dc = gt.minimize_nested_blockmodel_dl(g, state_args=dict(deg_corr=True))
print("Non-degree-corrected DL:\t", state_ndc.entropy())
print("Degree-corrected DL:\t", state_dc.entropy())
Non-degree-corrected DL: 63114.098664...
Degree-corrected DL: 61540.920016...
Since it yields the smallest description length, the degree-corrected fit should be preferred. The statistical significance of the choice can be assessed by inspecting the posterior odds ratio [peixoto-nonparametric-2017]
\[\Lambda = \frac{P(\boldsymbol b, \mathcal{H}_\text{NDC}|\boldsymbol A)}{P(\boldsymbol b, \mathcal{H}_\text{DC}|\boldsymbol A)} = \frac{P(\boldsymbol A, \boldsymbol b|\mathcal{H}_\text{NDC})}{P(\boldsymbol A, \boldsymbol b|\mathcal{H}_\text{DC})} = \mathrm{e}^{-\Delta\Sigma}\]
where \(\mathcal{H}_\text{NDC}\) and \(\mathcal{H}_\text{DC}\) correspond to the non-degree-corrected and degree-corrected model hypotheses (assumed to be equally likely a priori), respectively, and \(\Delta\Sigma\) is the difference of the description length of both fits. In our particular case, we have
print(u"ln \u039b: ", state_dc.entropy() - state_ndc.entropy())
ln Λ: -1573.178648...
The precise threshold that should be used to decide when to reject a hypothesis is subjective and context-dependent, but the value above implies that the particular degree-corrected fit is around \(\mathrm{e}^{1573} \approx 10^{683}\) times more likely than the non-degree corrected one, and hence it can be safely concluded that it provides a substantially better fit.
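As a quick sanity check of the magnitude quoted above, the log odds ratio can be converted to a base-10 exponent directly:
lnL = state_dc.entropy() - state_ndc.entropy()   # ln Λ, as printed above
print("log10 Λ ≈", lnL / np.log(10))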
Although it is often true that the degree-corrected model provides a better fit for many empirical networks, there are also exceptions. For example, for the American football network above, we have:
g = gt.collection.data["football"]
state_ndc = gt.minimize_nested_blockmodel_dl(g, state_args=dict(deg_corr=False))
state_dc = gt.minimize_nested_blockmodel_dl(g, state_args=dict(deg_corr=True))
print("Non-degree-corrected DL:\t", state_ndc.entropy())
print("Degree-corrected DL:\t", state_dc.entropy())
print(u"ln \u039b:\t\t\t", state_ndc.entropy() - state_dc.entropy())
Non-degree-corrected DL: 1733.525685...
Degree-corrected DL: 1782.407946...
ln Λ: -48.882261...
Hence, with a posterior odds ratio of \(\Lambda \approx \mathrm{e}^{-48} \approx 10^{-21}\) in favor of the non-degree-corrected model, we conclude that the degree-corrected variant is an unnecessarily complex description for this network.
Sampling from the posterior distribution#
When analyzing empirical networks, one should be open to the possibility that there will be more than one fit of the SBM with similar posterior probabilities. In such situations, one should sample partitions from the posterior distribution, instead of simply finding its maximum. One can then compute quantities averaged over the different model fits, weighted according to their posterior probabilities.
Full support for model averaging is implemented in graph-tool
via an
efficient Markov chain Monte Carlo (MCMC) algorithm
[peixoto-efficient-2014]. It works by attempting to move nodes into
different groups with specific probabilities, and accepting or
rejecting
such moves so that, after a sufficiently long time, the partitions will
be observed with the desired posterior probability. The algorithm is
designed so that its run-time (i.e. each sweep of the MCMC) is linear in
the number of edges in the network, and independent of the number of
groups being used in the model, and hence is suitable for use on very
large networks.
In order to perform such moves, one needs again to operate with
BlockState
or
NestedBlockState
instances, and call either their
mcmc_sweep()
or
multiflip_mcmc_sweep()
methods. The former implements a simpler MCMC where a single node is
moved at a time, whereas the latter is a more efficient version that
performs merges and splits [peixoto-merge-split-2020], which should be
in general preferred.
For example, the following will perform 1000 sweeps of the algorithm with the network of characters in the novel Les Misérables, starting from the default initial partition into a single group:
g = gt.collection.data["lesmis"]
state = gt.BlockState(g) # This automatically initializes the state with a partition
# into one group. The user could also pass a higher number
# to start with a random partition of a given size, or pass a
# specific initial partition using the 'b' parameter.
# Now we run 1,000 sweeps of the MCMC. Note that the number of groups
# is allowed to change, so it will eventually move from the initial
# value of B=1 to whatever is most appropriate for the data.
dS, nattempts, nmoves = state.multiflip_mcmc_sweep(niter=1000)
print("Change in description length:", dS)
print("Number of accepted vertex moves:", nmoves)
Change in description length: -76.201634...
Number of accepted vertex moves: 129931
Although the above is sufficient to implement sampling from the
posterior, there is a convenience function called
mcmc_equilibrate()
that is intended to
simplify the detection of equilibration, by keeping track of the maximum
and minimum values of description length encountered and how many sweeps
have been made without a “record breaking” event. For example,
# We will accept equilibration if 10 sweeps are completed without a
# record breaking event, 2 consecutive times.
gt.mcmc_equilibrate(state, wait=10, nbreaks=2, mcmc_args=dict(niter=10))
Note that the value of wait above was made purposefully low so that the output would not be overly long. The most appropriate value requires experimentation, but a typically good value is wait=1000.
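When experimenting with this value, it can help to pass verbose=True, so that the progress of the equilibration is printed along the way, e.g.:
gt.mcmc_equilibrate(state, wait=100, nbreaks=2, mcmc_args=dict(niter=10),
                    verbose=True)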
The function mcmc_equilibrate()
accepts
a callback
argument that takes an optional function to be invoked
after each call to
multiflip_mcmc_sweep()
. This
function should accept a single parameter which will contain the actual
BlockState
instance. We will
use this in the example below to collect the posterior vertex marginals
(via PartitionModeState
,
which disambiguates group labels [peixoto-revealing-2021]), i.e. the
posterior probability that a node belongs to a given group:
# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))
bs = [] # collect some partitions
def collect_partitions(s):
    global bs
    bs.append(s.b.a.copy())
# Now we collect partitions for exactly 100,000 sweeps, at intervals
# of 10 sweeps:
gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
callback=collect_partitions)
# Disambiguate partitions and obtain marginals
pmode = gt.PartitionModeState(bs, converge=True)
pv = pmode.get_marginal(g)
# Now the node marginals are stored in property map pv. We can
# visualize them as pie charts on the nodes:
state.draw(pos=g.vp.pos, vertex_shape="pie", vertex_pie_fractions=pv,
output="lesmis-sbm-marginals.svg")
We can also obtain a marginal probability on the number of groups itself, as follows.
h = np.zeros(g.num_vertices() + 1)
def collect_num_groups(s):
    B = s.get_nonempty_B()
    h[B] += 1
# Now we collect partitions for exactly 100,000 sweeps, at intervals
# of 10 sweeps:
gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
callback=collect_num_groups)
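After sampling, normalizing the histogram gives the marginal posterior probability of each group count:
P_B = h / h.sum()
for B, p in enumerate(P_B):
    if p > 0:
        print(B, p)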
Hierarchical partitions#
We can also perform model averaging using the nested SBM, which will
give us a distribution over hierarchies. The whole procedure is fairly
analogous, but now we make use of
NestedBlockState
instances.
Here we perform the sampling of hierarchical partitions using the same network as above.
g = gt.collection.data["lesmis"]
state = gt.NestedBlockState(g) # By default this creates a state with an initial single-group
# hierarchy of depth ceil(log2(g.num_vertices())).
# Now we run 1000 sweeps of the MCMC
dS, nmoves = 0, 0
for i in range(100):
    ret = state.multiflip_mcmc_sweep(niter=10)
    dS += ret[0]
    nmoves += ret[1]
print("Change in description length:", dS)
print("Number of accepted vertex moves:", nmoves)
Change in description length: -63.076803...
Number of accepted vertex moves: 469690
Warning
When using
NestedBlockState
, a
single call to
multiflip_mcmc_sweep()
or
mcmc_sweep()
performs niter
sweeps at each hierarchical level once. This means
that in order for the chain to equilibrate, we need to call these
functions several times, i.e. it is not enough to call it once with a
large value of niter
.
Similarly to the non-nested case, we can use
mcmc_equilibrate()
to do most of the boring
work, and we can now obtain vertex marginals on all hierarchical levels:
# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))
# collect nested partitions
bs = []
def collect_partitions(s):
    global bs
    bs.append(s.get_bs())
# Now we collect the marginals for exactly 100,000 sweeps
gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
callback=collect_partitions)
# Disambiguate partitions and obtain marginals
pmode = gt.PartitionModeState(bs, nested=True, converge=True)
pv = pmode.get_marginal(g)
# Get consensus estimate
bs = pmode.get_max_nested()
state = state.copy(bs=bs)
# We can visualize the marginals as pie charts on the nodes:
state.draw(vertex_shape="pie", vertex_pie_fractions=pv,
output="lesmis-nested-sbm-marginals.svg")
We can also obtain a marginal probability of the number of groups itself, as follows.
h = [np.zeros(g.num_vertices() + 1) for s in state.get_levels()]
def collect_num_groups(s):
    for l, sl in enumerate(s.get_levels()):
        B = sl.get_nonempty_B()
        h[l][B] += 1
# Now we collect the marginal distribution for exactly 100,000 sweeps
gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
callback=collect_num_groups)
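As before, normalizing each histogram gives the marginal posterior distribution of the number of groups at every hierarchical level, e.g.:
for l, hl in enumerate(h):
    if hl.sum() > 0:
        print("Level", l, "-> most likely number of groups:", hl.argmax())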
Below we obtain some hierarchical partitions sampled from the posterior distribution.
for i in range(10):
    for j in range(100):
        state.multiflip_mcmc_sweep(niter=10)
    state.draw(output="lesmis-partition-sample-%i.svg" % i, empty_branches=False)
Characterizing the posterior distribution#
The posterior distribution of partitions can have an elaborate
structure, containing multiple possible explanations for the data. In
order to summarize it, we can infer the modes of the distribution using
ModeClusterState
, as
described in [peixoto-revealing-2021]. This amounts to identifying
clusters of partitions that are very similar to each other, but
sufficiently different from those that belong to other
clusters. Collectively, such “modes” represent the different stories that
the data is telling us through the model. Here is an example using again
the Les Misérables network:
g = gt.collection.data["lesmis"]
state = gt.NestedBlockState(g)
# Equilibration
gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10))
bs = []
def collect_partitions(s):
    global bs
    bs.append(s.get_bs())
# We will collect only 1000 partitions. For more accurate
# results, this number should be increased.
gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10),
callback=collect_partitions)
# Infer partition modes
pmode = gt.ModeClusterState(bs, nested=True)
# Minimize the mode state itself
gt.mcmc_equilibrate(pmode, wait=1, mcmc_args=dict(niter=1, beta=np.inf))
# Get inferred modes
modes = pmode.get_modes()
for i, mode in enumerate(modes):
    b = mode.get_max_nested()    # mode's maximum
    pv = mode.get_marginal(g)    # mode's marginal distribution
    print(f"Mode {i} with size {mode.get_M()/len(bs)}")
    state = state.copy(bs=b)
    state.draw(vertex_shape="pie", vertex_pie_fractions=pv,
               output="lesmis-partition-mode-%i.svg" % i)
Running the above code gives us the relative size of each mode, corresponding to their collective posterior probability.
Mode 0 with size 0.458458...
Mode 1 with size 0.432432...
Mode 2 with size 0.084084...
Mode 3 with size 0.025025...
Below are the marginal node distributions representing the partitions that belong to each inferred mode:
Model class selection#
When averaging over partitions, we may be interested in evaluating which model class provides a better fit of the data, considering all possible parameter choices. This is done by evaluating the model evidence summed over all possible partitions [peixoto-nonparametric-2017]:
\[P(\boldsymbol A) = \sum_{\boldsymbol b}P(\boldsymbol A,\boldsymbol b) = \sum_{\boldsymbol b}P(\boldsymbol A|\boldsymbol b)P(\boldsymbol b).\]
This quantity is analogous to a partition function in statistical physics, which we can write more conveniently as a negative free energy by taking its logarithm
\[\ln P(\boldsymbol A) = \sum_{\boldsymbol b}q(\boldsymbol b)\ln P(\boldsymbol A,\boldsymbol b) - \sum_{\boldsymbol b}q(\boldsymbol b)\ln q(\boldsymbol b) \qquad (6)\]
where
\[q(\boldsymbol b) = \frac{P(\boldsymbol A,\boldsymbol b)}{\sum_{\boldsymbol b'}P(\boldsymbol A,\boldsymbol b')}\]
is the posterior probability of partition \(\boldsymbol b\). The first term of Eq. (6) (the “negative energy”) is minus the average of description length \(\left<\Sigma\right>\), weighted according to the posterior distribution. The second term \(\mathcal{S}\) is the entropy of the posterior distribution, and measures, in a sense, the “quality of fit” of the model: If the posterior is very “peaked”, i.e. dominated by a single partition with a very large probability, the entropy will tend to zero. However, if there are many partitions with similar probabilities — meaning that there is no single partition that describes the network uniquely well — it will take a large value instead.
Since the MCMC algorithm samples partitions from the distribution \(q(\boldsymbol b)\), it can be used to compute \(\left<\Sigma\right>\) easily, simply by averaging the description length values encountered by sampling from the posterior distribution many times.
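For instance, a minimal sketch of this estimate, for any previously constructed state, is to record the description length with a callback and average the collected values:
dls = []   # description length history
gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10),
                    callback=lambda s: dls.append(s.entropy()))
print("<Σ> ≈", np.mean(dls))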
The computation of the posterior entropy \(\mathcal{S}\), however,
is significantly more difficult, since it involves measuring the precise
value of \(q(\boldsymbol b)\). A direct “brute force” computation of
\(\mathcal{S}\) is implemented via
collect_partition_histogram()
and microstate_entropy()
, however
this is only feasible for very small networks. For larger networks, we
are forced to perform approximations. One possibility is to employ the
method described in [peixoto-revealing-2021], based on fitting a
mixture “random label” model to the posterior distribution, which allows
us to compute its entropy. In graph-tool this is done by using
ModeClusterState
, as we
show in the example below.
from scipy.special import gammaln
g = gt.collection.data["lesmis"]
for deg_corr in [True, False]:
    # Initialize the Markov chain from the "ground state"
    state = gt.minimize_blockmodel_dl(g, state_args=dict(deg_corr=deg_corr))
    dls = [] # description length history
    bs = []  # partitions
    def collect_partitions(s):
        global bs, dls
        bs.append(s.get_state().a.copy())
        dls.append(s.entropy())
    # Now we collect 2000 partitions; but the larger this is, the
    # more accurate will be the calculation
    gt.mcmc_equilibrate(state, force_niter=2000, mcmc_args=dict(niter=10),
                        callback=collect_partitions)
    # Infer partition modes
    pmode = gt.ModeClusterState(bs)
    # Minimize the mode state itself
    gt.mcmc_equilibrate(pmode, wait=1, mcmc_args=dict(niter=1, beta=np.inf))
    # Posterior entropy
    H = pmode.posterior_entropy()
    # log(B!) term
    logB = np.mean(gammaln(np.array([len(np.unique(b)) for b in bs]) + 1))
    # Evidence
    L = -np.mean(dls) + logB + H
    print(f"Model log-evidence for deg_corr = {deg_corr}: {L}")
Model log-evidence for deg_corr = True: -679.538941...
Model log-evidence for deg_corr = False: -672.861361...
The outcome shows a slight preference for the non-degree-corrected model.
When using the nested model, the approach is entirely analogous. We show below the approach for the same network, using the nested model.
from scipy.special import gammaln
g = gt.collection.data["lesmis"]
for deg_corr in [True, False]:
    state = gt.NestedBlockState(g, state_args=dict(deg_corr=deg_corr))
    # Equilibrate
    gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10))
    dls = [] # description length history
    bs = []  # partitions
    def collect_partitions(s):
        global bs, dls
        bs.append(s.get_state())
        dls.append(s.entropy())
    # Now we collect 5000 partitions; but the larger this is, the
    # more accurate will be the calculation
    gt.mcmc_equilibrate(state, force_niter=5000, mcmc_args=dict(niter=10),
                        callback=collect_partitions)
    # Infer partition modes
    pmode = gt.ModeClusterState(bs, nested=True)
    # Minimize the mode state itself
    gt.mcmc_equilibrate(pmode, wait=1, mcmc_args=dict(niter=1, beta=np.inf))
    # Posterior entropy
    H = pmode.posterior_entropy()
    # log(B!) term
    logB = np.mean([sum(gammaln(len(np.unique(bl))+1) for bl in b) for b in bs])
    # Evidence
    L = -np.mean(dls) + logB + H
    print(f"Model log-evidence for deg_corr = {deg_corr}: {L}")
Model log-evidence for deg_corr = True: -666.435446...
Model log-evidence for deg_corr = False: -658.156441...
The situation is now inverted: the degree-corrected model possesses the largest evidence. Note also that we observe a better evidence for the nested models themselves, when compared to the evidence for the non-nested models, which is not quite surprising, since the non-nested model is a special case of the nested one.
Edge weights and covariates#
Very often networks cannot be completely represented by simple graphs, but instead have arbitrary “weights” \(x_{ij}\) on the edges. Edge weights can be continuous or discrete numbers, and either strictly positive or taking both positive and negative values, depending on context. The SBM can be extended to cover these cases by treating edge weights as covariates that are sampled from some distribution conditioned on the node partition [aicher-learning-2015] [peixoto-weighted-2017], i.e.
\[P(\boldsymbol x,\boldsymbol A|\boldsymbol b) = P(\boldsymbol x|\boldsymbol A,\boldsymbol b)P(\boldsymbol A|\boldsymbol b),\]
where \(P(\boldsymbol A|\boldsymbol b)\) is the likelihood of the unweighted SBM described previously, and \(P(\boldsymbol x|\boldsymbol A,\boldsymbol b)\) is the integrated likelihood of the edge weights
\[P(\boldsymbol x|\boldsymbol A,\boldsymbol b) = \prod_{r\le s}\int P({\boldsymbol x}_{rs}|\gamma)P(\gamma)\,\mathrm{d}\gamma,\]
where \(P({\boldsymbol x}_{rs}|\gamma)\) is some model for the weights \({\boldsymbol x}_{rs}\) between groups \((r,s)\), conditioned on some parameter \(\gamma\), sampled from its prior \(P(\gamma)\). A hierarchical version of the model can also be implemented by replacing this prior by a nested sequence of priors and hyperpriors, as described in [peixoto-weighted-2017]. The posterior partition distribution is then simply
\[P(\boldsymbol b|\boldsymbol A,\boldsymbol x) = \frac{P(\boldsymbol x|\boldsymbol A,\boldsymbol b)P(\boldsymbol A|\boldsymbol b)P(\boldsymbol b)}{P(\boldsymbol A,\boldsymbol x)},\]
which can be sampled from, or maximized, just like with the unweighted case, but will use the information on the weights to guide the partitions.
A variety of weight models is supported, reflecting different kinds of edge covariates:
Name | Domain | Bounds
---|---|---
"real-exponential" | Real \((\mathbb{R})\) | \([0,\infty]\)
"real-normal" | Real \((\mathbb{R})\) | \([-\infty,\infty]\)
"discrete-geometric" | Natural \((\mathbb{N})\) | \([0,\infty]\)
"discrete-binomial" | Natural \((\mathbb{N})\) | \([0,M]\)
"discrete-poisson" | Natural \((\mathbb{N})\) | \([0,\infty]\)
In fact, the actual model implements microcanonical versions of
these distributions that are asymptotically equivalent, as described in
[peixoto-weighted-2017]. These can be combined with arbitrary weight
transformations to achieve a large family of associated
distributions. For example, to use a log-normal weight model
for positive real weights \(\boldsymbol x\), we can use the
transformation \(y_{ij} = \ln x_{ij}\) together with the
"real-normal"
model for \(\boldsymbol y\). To model weights that
are positive or negative integers in \(\mathbb{Z}\), we could either
subtract the minimum value, \(y_{ij} = x_{ij} - x^*\), with
\(x^*=\operatorname{min}_{ij}x_{ij}\), and use any of the above
models for non-negative integers in \(\mathbb{N}\), or
alternatively, consider the sign as an additional covariate,
i.e. \(s_{ij} = [\operatorname{sign}(x_{ij})+1]/2 \in \{0,1\}\),
using the Binomial distribution with \(M=1\) (a.k.a. the Bernoulli
distribution),
and any of the other discrete distributions for the magnitude,
\(y_{ij} = \operatorname{abs}(x_{ij})\).
The support for weighted networks is activated by passing the parameters
recs
and rec_types
to
BlockState
(or
OverlapBlockState
),
that specify the edge covariates (an edge
PropertyMap
) and their types (a string from the
table above), respectively. Note that these parameters expect lists,
so that multiple edge weights can be used simultaneously.
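Before turning to a concrete dataset, here is a hypothetical sketch of the sign/magnitude decomposition discussed above, assuming a graph g with an integer-valued edge property map g.ep.weight taking both positive and negative values:
x = g.ep.weight
s = g.new_ep("int", vals=(np.sign(x.a) + 1) // 2)   # sign covariate in {0, 1}
y = g.new_ep("int", vals=np.abs(x.a))               # magnitude covariate
state = gt.BlockState(g, recs=[s, y],
                      rec_types=["discrete-binomial", "discrete-geometric"])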
For example, let us consider a network of suspected terrorists involved in the train bombing of Madrid on March 11, 2004 [hayes-connecting-2006]. An edge indicates that a connection between the two persons has been identified, and the weight of the edge (an integer in the range \([0,3]\)) indicates the “strength” of the connection. We can apply the weighted SBM, using a Binomial model for the weights, as follows:
g = gt.collection.ns["train_terrorists"]
# This network contains an internal edge property map with name
# "weight" that contains the strength of interactions. The values are
# integers in the range [0, 3].
state = gt.minimize_nested_blockmodel_dl(g, state_args=dict(recs=[g.ep.weight],
rec_types=["discrete-binomial"]))
# improve solution with merge-split
for i in range(100):
    ret = state.multiflip_mcmc_sweep(niter=10, beta=np.inf)
state.draw(edge_color=g.ep.weight, ecmap=(matplotlib.cm.inferno, .6),
eorder=g.ep.weight, edge_pen_width=gt.prop_to_size(g.ep.weight, 2, 8, power=1),
edge_gradient=[], output="moreno-train-wsbm.svg")
Model selection#
In order to select the best weighted model, we proceed in the same manner as described in Sec. Model selection. However, when using transformations on continuous weights, we must include the associated scaling of the probability density, as described in [peixoto-weighted-2017].
For example, consider a food web between species in south
Florida [ulanowicz-network-2005]. A directed link exists from species
\(i\) to \(j\) if an energy flow exists between them, and a
weight \(x_{ij}\) on this edge indicates the magnitude of the energy
flow (a positive real value, i.e. \(x_{ij}\in [0,\infty]\)). One
possibility, therefore, is to use the "real-exponential"
model, as
follows:
g = gt.collection.ns["foodweb_baywet"]
# This network contains an internal edge property map with name
# "weight" that contains the energy flow between species. The values
# are continuous in the range [0, infinity].
state = gt.minimize_nested_blockmodel_dl(g, state_args=dict(recs=[g.ep.weight],
rec_types=["real-exponential"]))
# improve solution with merge-split
for i in range(100):
    ret = state.multiflip_mcmc_sweep(niter=10, beta=np.inf)
state.draw(edge_color=gt.prop_to_size(g.ep.weight, power=1, log=True), ecmap=(matplotlib.cm.inferno, .6),
eorder=g.ep.weight, edge_pen_width=gt.prop_to_size(g.ep.weight, 1, 4, power=1, log=True),
edge_gradient=[], output="foodweb-wsbm.svg")
Alternatively, we may consider a transformation of the type
\[y_{ij} = \ln x_{ij} \qquad (7)\]
so that \(y_{ij} \in [-\infty,\infty]\). If we use a model
"real-normal"
for \(\boldsymbol y\), it amounts to a log-normal model for
\(\boldsymbol x\). This can be a better choice if the weights are
distributed across many orders of magnitude, or show multi-modality. We
can fit this alternative model simply by using the transformed weights:
# Apply the weight transformation
y = g.ep.weight.copy()
y.a = np.log(y.a)
state_ln = gt.minimize_nested_blockmodel_dl(g, state_args=dict(recs=[y],
rec_types=["real-normal"]))
# improve solution with merge-split
for i in range(100):
    ret = state_ln.multiflip_mcmc_sweep(niter=10, beta=np.inf)
state_ln.draw(edge_color=gt.prop_to_size(g.ep.weight, power=1, log=True), ecmap=(matplotlib.cm.inferno, .6),
eorder=g.ep.weight, edge_pen_width=gt.prop_to_size(g.ep.weight, 1, 4, power=1, log=True),
edge_gradient=[], output="foodweb-wsbm-lognormal.svg")
At this point, we ask ourselves which of the above models yields the best fit of the data. This is answered by performing model selection via posterior odds ratios just like in Sec. Model selection. However, here we need to take into account the scaling of the probability density incurred by the variable transformation, i.e.
\[P(\boldsymbol x|\boldsymbol A,\boldsymbol b) = P(\boldsymbol y(\boldsymbol x)|\boldsymbol A,\boldsymbol b)\prod_{ij}\left[\frac{\mathrm{d}y_{ij}}{\mathrm{d}x_{ij}}\right]^{A_{ij}}.\]
In the particular case of Eq. (7), we have
\[\prod_{ij}\left[\frac{\mathrm{d}y_{ij}}{\mathrm{d}x_{ij}}\right]^{A_{ij}} = \prod_{ij}\frac{1}{x_{ij}^{A_{ij}}}.\]
Therefore, we can compute the posterior odds ratio between both models as:
L1 = -state.entropy()
L2 = -state_ln.entropy() - np.log(g.ep.weight.a).sum()
print(u"ln \u039b: ", L2 - L1)
ln Λ: -70.657644...
A value of \(\Lambda \approx \mathrm{e}^{70} \approx 10^{30}\) in favor of the exponential model indicates that the log-normal model does not provide a better fit for this particular data.
Posterior sampling#
The procedure to sample from the posterior distribution is identical to what is described in Sec. Sampling from the posterior distribution, but with the appropriate initialization, e.g.:
g = gt.collection.ns["foodweb_baywet"]
state = gt.NestedBlockState(g, state_args=dict(recs=[g.ep.weight], rec_types=["real-exponential"]))
gt.mcmc_equilibrate(state, force_niter=100, mcmc_args=dict(niter=10))
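Partitions and marginals can then be collected exactly as in the unweighted case, e.g.:
bs = []    # partitions
gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10),
                    callback=lambda s: bs.append(s.get_bs()))
pmode = gt.PartitionModeState(bs, nested=True, converge=True)
pv = pmode.get_marginal(g)   # posterior group membership marginals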
Layered networks#
The edges of the network may be distributed in discrete “layers”,
representing distinct types of interactions
[peixoto-inferring-2015]. Extensions to the SBM may be defined for such
data, and they can be inferred using the exact same interface shown
above, except one should use the
LayeredBlockState
class, instead of
BlockState
. This class takes
two additional parameters: the ec
parameter, that must correspond to
an edge PropertyMap
with the layer/covariate values
on the edges, and the Boolean layers
parameter, which if True
specifies a layered model, otherwise one with categorical edge
covariates (not to be confused with the weighted models in
Sec. Edge weights and covariates).
If we use minimize_blockmodel_dl()
, this can
be achieved simply by passing the option layers=True
as well as the
appropriate value of state_args
, which will be propagated to
LayeredBlockState
’s constructor.
As an example, let us consider a social network of tribes, where two types of interactions were recorded, amounting to either friendship or enmity [read-cultures-1954]. We may apply the layered model by separating these two types of interactions in two layers:
g = gt.collection.ns["new_guinea_tribes"]
# The edge types are stored in the edge property map "weights".
# Note the different meanings of the two 'layers' parameters below: The
# first enables the use of LayeredBlockState, and the second selects
# the 'edge layers' version (instead of 'edge covariates').
state = gt.minimize_nested_blockmodel_dl(g,
state_args=dict(base_type=gt.LayeredBlockState,
state_args=dict(ec=g.ep.weight, layers=True)))
state.draw(edge_color=g.ep.weight.copy("double"), edge_gradient=[],
ecmap=(matplotlib.cm.coolwarm_r, .6), edge_pen_width=5, eorder=g.ep.weight,
output="tribes-sbm-edge-layers.svg")
It is possible to perform model averaging of all layered variants exactly like for the regular SBMs as was shown above.
Assortative community structure#
Traditionally, “community structure” in the proper
sense refers to groups of nodes that are more connected to each other
than to nodes of other communities. The SBM is capable of representing
this kind of structure without any problems, but in some circumstances
it might make sense to search exclusively for assortative communities
[lizhi-statistical-2020]. A version of the SBM that is constrained in
this way is called the “planted partition model”, which can be inferred
with graph-tool using
PPBlockState
. This
class behaves just like
BlockState
, therefore all
algorithms described in this documentation work in the same way. Below
we show how this model can be inferred for the football network
considered previously
g = gt.collection.data["football"]
# We can use the same agglomerative heuristic as before, but we need
# to specify PPBlockState as the internal state.
state = gt.minimize_blockmodel_dl(g, state=gt.PPBlockState)
# Now we run 100 sweeps of the MCMC with zero temperature, as a
# refinement. This is often not necessary.
state.multiflip_mcmc_sweep(beta=np.inf, niter=100)
state.draw(pos=g.vp.pos, output="football-pp.svg")
It is possible to perform model comparison with other model variations in the same manner as described in Sec. Model selection above.
Ordered community structure#
The modular structure of directed networks might possess an inherent
ordering of the groups, such that most edges flow either “downstream” or
“upstream” according to that ordering. The directed version of the SBM
will inherently capture this ordering, but it will not be visible from
the model parameters — in particular the group labels — since the model is
invariant to group permutations. This ordering can be obtained from a
modified version of the model [peixoto-ordered-2022], which can be
inferred with graph-tool using
RankedBlockState
. This class behaves just
like BlockState
, therefore all algorithms
described in this documentation work in the same way (including when
NestedBlockState
is used).
Below we show how this model can be inferred for a faculty hiring network.
g = gt.collection.ns["faculty_hiring/computer_science"].copy()
# For visualization purposes, it will be more useful to work with a
# weighted graph than with a multigraph, but the results are
# insensitive to this.
ew = gt.contract_parallel_edges(g)
# We will use a nested SBM, with the base state being the ordered SBM.
state = gt.NestedBlockState(g, base_type=gt.RankedBlockState, state_args=dict(eweight=ew))
# The number of iterations below is sufficient for a good estimate of
# the ground state for this network.
for i in range(100):
    state.multiflip_mcmc_sweep(beta=np.inf, niter=10)
# We can use sfdp_layout() to obtain a ranked visualization.
pos = gt.sfdp_layout(g, cooling_step=0.99, multilevel=False, R=20,
rmap=state.levels[0].get_vertex_order(),
groups=state.levels[0].b, gamma=1)
state.levels[0].draw(pos=pos, edge_pen_width=gt.prop_to_size(ew, 1, 5),
output="hiring.svg")
It is possible to perform model comparison with other model variations in the same manner as described in Sec. Model selection above.
References#
Tiago P. Peixoto, “Bayesian stochastic blockmodeling”, Advances in Network Clustering and Blockmodeling, edited by P. Doreian, V. Batagelj, A. Ferligoj (Wiley, New York, 2019). DOI: 10.1002/9781119483298.ch11, arXiv: 1705.10225
Tiago P. Peixoto, “Descriptive vs. inferential community detection in networks: pitfalls, myths and half-truths”, Elements in the Structure and Dynamics of Complex Networks, Cambridge University Press (2023). DOI: 10.1017/9781009118897, arXiv: 2112.00183
Paul W. Holland, Kathryn Blackmond Laskey, Samuel Leinhardt, “Stochastic blockmodels: First steps”, Social Networks 5, 109-137 (1983). DOI: 10.1016/0378-8733(83)90021-7
Brian Karrer, M. E. J. Newman, “Stochastic blockmodels and community structure in networks”, Phys. Rev. E 83, 016107 (2011). DOI: 10.1103/PhysRevE.83.016107, arXiv: 1008.3926
Tiago P. Peixoto, “Nonparametric Bayesian inference of the microcanonical stochastic block model”, Phys. Rev. E 95, 012317 (2017). DOI: 10.1103/PhysRevE.95.012317, arXiv: 1610.02703
Tiago P. Peixoto, “Parsimonious module inference in large networks”, Phys. Rev. Lett. 110, 148701 (2013). DOI: 10.1103/PhysRevLett.110.148701, arXiv: 1212.4794
Tiago P. Peixoto, “Hierarchical block structures and high-resolution model selection in large networks”, Phys. Rev. X 4, 011047 (2014). DOI: 10.1103/PhysRevX.4.011047, arXiv: 1310.4377
Tiago P. Peixoto, “Model selection and hypothesis testing for large-scale network models with overlapping groups”, Phys. Rev. X 5, 011033 (2016). DOI: 10.1103/PhysRevX.5.011033, arXiv: 1409.3059
Tiago P. Peixoto, “Inferring the mesoscale structure of layered, edge-valued and time-varying networks”, Phys. Rev. E 92, 042807 (2015). DOI: 10.1103/PhysRevE.92.042807, arXiv: 1504.02381
Christopher Aicher, Abigail Z. Jacobs, Aaron Clauset, “Learning Latent Block Structure in Weighted Networks”, Journal of Complex Networks 3(2), 221-248 (2015). DOI: 10.1093/comnet/cnu026, arXiv: 1404.0431
Tiago P. Peixoto, “Nonparametric weighted stochastic block models”, Phys. Rev. E 97, 012306 (2018). DOI: 10.1103/PhysRevE.97.012306, arXiv: 1708.01432
Tiago P. Peixoto, “Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models”, Phys. Rev. E 89, 012804 (2014). DOI: 10.1103/PhysRevE.89.012804, arXiv: 1310.4378
Tiago P. Peixoto, “Merge-split Markov chain Monte Carlo for community detection”, Phys. Rev. E 102, 012305 (2020). DOI: 10.1103/PhysRevE.102.012305, arXiv: 2003.07070
Tiago P. Peixoto, “Revealing consensus and dissensus between network partitions”, Phys. Rev. X 11, 021003 (2021). DOI: 10.1103/PhysRevX.11.021003, arXiv: 2005.13977
Lizhi Zhang, Tiago P. Peixoto, “Statistical inference of assortative community structures”, Phys. Rev. Research 2, 043271 (2020). DOI: 10.1103/PhysRevResearch.2.043271, arXiv: 2006.14493
Tiago P. Peixoto, “Ordered community detection in directed networks”, Phys. Rev. E 106, 024305 (2022). DOI: 10.1103/PhysRevE.106.024305, arXiv: 2203.16460