graph_tool.inference
 Statistical inference of generative network models¶
This module contains algorithms for the identification of large-scale network structure via the statistical inference of generative models.
Note
An introduction to the concepts used here, as well as a basic HOWTO is included in the cookbook section: Inferring modular network structure.
Nonparametric stochastic block model inference¶
High-level functions¶
Fit the stochastic block model by minimizing its description length using an agglomerative heuristic. 

Fit the nested stochastic block model by minimizing its description length using an agglomerative heuristic. 
State classes¶
The stochastic block model state of a given graph. 

The overlapping stochastic block model state of a given graph. 

The (possibly overlapping) block state of a given graph, where the edges are divided into discrete layers. 

The nested stochastic block model state of a given graph. 

Obtain the partition of a network according to the Bayesian planted partition model. 

Obtain the partition of a network according to Newman’s modularity. 

This class aggregates several state classes and corresponding inverse-temperature values to implement parallel tempering MCMC. 
Sampling and minimization¶
Equilibrate an MCMC with a given starting state. 

Equilibrate an MCMC at a specified target temperature by performing simulated annealing. 

Equilibrate an MCMC from a starting state with a higher order, by performing successive agglomerative initializations and equilibrations until the desired order is reached, such that metastable states are avoided. 

Equilibrate a multicanonical Monte Carlo sampling using the Wang-Landau algorithm. 

The density of states of a multicanonical Monte Carlo algorithm. 

Find the best order (number of groups) given an initial set of states by performing a one-dimensional minimization, using a Fibonacci (or golden section) search. 

Attempt to find a fit of the nested stochastic block model that minimizes the description length. 
Comparing and manipulating partitions¶
The random label model state for a set of labelled partitions, which attempts to align them with a common group labelling. 

The mixed random label model state for a set of labelled partitions, which attempts to align them inside clusters with a common group labelling. 

Obtain the center of a set of partitions, according to the variation of information metric or reduced mutual information. 

Returns the maximum overlap between partitions, according to an optimal label alignment. 

Returns the hierarchical maximum overlap between nested partitions, according to an optimal recursive label alignment. 

Returns the variation of information between two partitions. 

Returns the mutual information between two partitions. 

Returns the reduced mutual information between two partitions. 

Returns the contingency graph between both partitions. 

Returns a copy of partition 

Returns a copy of partition 

Returns a copy of nested partition 

Returns a copy of partition 

Returns a copy of nested partition 

Find a partition with a maximal overlap to all items of the list of partitions given. 

Find a nested partition with a maximal overlap to all items of the list of nested partitions given. 

Returns a copy of nested partition 
Auxiliary functions¶
Computes the amount of information necessary for the parameters of the traditional blockmodel ensemble, for 

Compute the “mean field” entropy given the vertex block membership marginals. 

Compute the Bethe entropy given the edge block membership marginals. 

Compute microstate entropy given a histogram of partitions. 

Compute the entropy of the marginal latent multigraph distribution. 

Generate a half-edge graph, where each half-edge is represented by a node, and an edge connects the half-edges as in the original graph. 

Get edge gradients corresponding to the block membership at the endpoints of the edges given by the 
Auxiliary classes¶
Histogram of partitions, implemented in C++. 

Histogram of block pairs, implemented in C++. 
Nonparametric network reconstruction¶
State classes¶
Inference state of an erased Poisson multigraph, using the stochastic block model as a prior. 

Inference state of a measured graph, using the stochastic block model as a prior. 

Inference state of a measured graph with heterogeneous errors, using the stochastic block model as a prior. 

Inference state of an uncertain graph, using the stochastic block model as a prior. 

Base state for uncertain network inference. 

Base state for network reconstruction based on dynamical data, using the stochastic block model as a prior. 

Inference state for network reconstruction based on epidemic dynamics, using the stochastic block model as a prior. 

Base state for network reconstruction based on the Ising model, using the stochastic block model as a prior. 

State for network reconstruction based on the Glauber dynamics of the Ising model, using the stochastic block model as a prior. 

State for network reconstruction based on the Glauber dynamics of the continuous Ising model, using the stochastic block model as a prior. 

State for network reconstruction based on the equilibrium configurations of the Ising model, using the pseudo-likelihood approximation and the stochastic block model as a prior. 

State for network reconstruction based on the equilibrium configurations of the continuous Ising model, using the pseudo-likelihood approximation and the stochastic block model as a prior. 
Expectation-maximization Inference¶
Infer latent Poisson multigraph model given an “erased” simple graph. 
Semiparametric stochastic block model inference¶
State classes¶
The parametric, undirected stochastic block model state of a given graph. 
Largescale descriptors¶
Calculate Newman’s (generalized) modularity of a network partition. 
Contents¶

class
graph_tool.inference.blockmodel.
PartitionHist
¶ Histogram of partitions, implemented in C++. Interface supports querying and setting using Vector_int32_t as keys, and ints as values.

asdict
()¶ Return the histogram’s contents as a dict.


class
graph_tool.inference.blockmodel.
BlockPairHist
¶ Histogram of block pairs, implemented in C++. Interface supports querying and setting using pairs of ints as keys, and ints as values.

asdict
()¶ Return the histogram’s contents as a dict.


class
graph_tool.inference.blockmodel.
BlockState
(g, b=None, B=None, eweight=None, vweight=None, recs=[], rec_types=[], rec_params=[], clabel=None, pclabel=None, bfield=None, Bfield=None, deg_corr=True, dense_bg=False, **kwargs)[source]¶ Bases:
object
The stochastic block model state of a given graph.
 Parameters
 g
Graph
Graph to be modelled.
 b
VertexPropertyMap
(optional, default:None
) Initial block labels on the vertices. If not supplied, it will be randomly sampled.
 B
int
(optional, default:None
) Number of blocks (or vertex groups). If not supplied it will be obtained from the parameter
b
. eweight
EdgePropertyMap
(optional, default:None
) Edge multiplicities (for multigraphs or block graphs).
 vweight
VertexPropertyMap
(optional, default:None
) Vertex multiplicities (for block graphs).
 recslist of
EdgePropertyMap
instances (optional, default:[]
) List of real or discrete-valued edge covariates.
 rec_typeslist of edge covariate types (optional, default:
[]
) List of types of edge covariates. The possible types are:
"realexponential"
,"realnormal"
,"discretegeometric"
,"discretepoisson"
or"discretebinomial"
. rec_paramslist of
dict
(optional, default:[]
) Model hyperparameters for edge covariates. This should be a list of
dict
instances, or the string “microcanonical” (the default if nothing is specified). The keys depend on the type of edge covariate:"real-exponential"
or"discretepoisson"
The parameter list is
["r", "theta"]
, corresponding to the parameters of the Gamma prior distribution. If unspecified, the default is the “empirical Bayes” choice:r = 1.0
andtheta
is the global average of the edge covariate."discrete-geometric"
The parameter list is
["alpha", "beta"]
, corresponding to the parameters of the Beta prior distribution. If unspecified, the default is the noninformative choice:alpha = beta = 1.0
"discretebinomial"
The parameter list is
["N", "alpha", "beta"]
, corresponding to the number of trialsN
and the parameters of the Beta prior distribution. If unspecified, the default is the noninformative choice,alpha = beta = 1.0
, andN
is taken to be the maximum edge covariate value."real-normal"
The parameter list is
["m0", "k0", "v0", "nu0"]
corresponding to the normal-inverse-chi-squared prior. If unspecified, the defaults are:m0 = rec.fa.mean()
,k0 = 1
,v0 = rec.fa.std() ** 2
, andnu0 = 3
, whererec
is the corresponding edge covariate property map.
 clabel
VertexPropertyMap
(optional, default:None
) Constraint labels on the vertices. If supplied, vertices with different label values will not be clustered in the same group.
 pclabel
VertexPropertyMap
(optional, default:None
) Partition constraint labels on the vertices. This has the same interpretation as
clabel
, but will be used to compute the partition description length. bfield
VertexPropertyMap
(optional, default:None
) Local field acting as a prior for the node partition. This should be a vector property map of type
vector<double>
, and contain the log-probability for each node to be placed in each group. deg_corr
bool
(optional, default:True
) If
True
, the degree-corrected version of the blockmodel ensemble will be assumed, otherwise the traditional variant will be used. dense_bg
bool
(optional, default:False
) If
True
a dense matrix is used for the block graph, otherwise a sparse matrix will be used.

copy
(self, g=None, eweight=None, vweight=None, b=None, B=None, deg_corr=None, clabel=None, overlap=False, pclabel=None, bfield=None, dense_bg=None, **kwargs)[source]¶ Copies the block state. The parameters override the state properties, and have the same meaning as in the constructor.

get_block_state
(self, b=None, vweight=False, **kwargs)[source]¶ Returns a
BlockState
corresponding to the block graph (i.e. the blocks of the current state become the nodes). The parameters have the same meaning as in the constructor. Ifvweight == True
the nodes of the block state are weighted with the node counts.

get_Be
(self)[source]¶ Returns the effective number of blocks, defined as \(e^{H}\), with \(H=-\sum_r\frac{n_r}{N}\ln \frac{n_r}{N}\), where \(n_r\) is the number of nodes in group \(r\).
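The quantity above is easy to reproduce outside the library; a minimal sketch using only the standard library (the helper effective_B and the example partitions are illustrative, not part of the API):

```python
from collections import Counter
from math import exp, log

def effective_B(b):
    """Effective number of blocks exp(H), where H is the entropy of
    the group-size distribution n_r / N."""
    N = len(b)
    return exp(-sum((n / N) * log(n / N) for n in Counter(b).values()))

print(effective_B([0, 0, 1, 1]))  # ~2 for two balanced groups
```

A single group gives 1, and B perfectly balanced groups give exactly B; unbalanced partitions fall in between.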

get_bclabel
(self, clabel=None)[source]¶ Returns a
VertexPropertyMap
corresponding to constraint labels for the block graph.

get_bpclabel
(self)[source]¶ Returns a
VertexPropertyMap
corresponding to partition constraint labels for the block graph.

get_state
(self)[source]¶ Alias to
get_blocks()
.

get_ers
(self)[source]¶ Returns the edge property map of the block graph which contains the \(e_{rs}\) matrix entries. For undirected graphs, the diagonal values (self-loops) contain \(e_{rr}/2\).

get_er
(self)[source]¶ Returns the vertex property map of the block graph which contains the number \(e_r\) of half-edges incident on block \(r\). If the graph is directed, a pair of property maps is returned, with the number of out-edges \(e^+_r\) and in-edges \(e^-_r\), respectively.

get_nr
(self)[source]¶ Returns the vertex property map of the block graph which contains the block sizes \(n_r\).

entropy
(self, adjacency=True, dl=True, partition_dl=True, degree_dl=True, degree_dl_kind='distributed', edges_dl=True, dense=False, multigraph=True, deg_entropy=True, recs=True, recs_dl=True, beta_dl=1.0, Bfield=True, exact=True, **kwargs)[source]¶ Calculate the entropy (a.k.a. negative log-likelihood) associated with the current block partition.
 Parameters
 adjacency
bool
(optional, default:True
) If
True
, the adjacency term of the description length will be included. dl
bool
(optional, default:True
) If
True
, the description length for the parameters will be included. partition_dl
bool
(optional, default:True
) If
True
, anddl == True
the partition description length will be included. degree_dl
bool
(optional, default:True
) If
True
, anddl == True
the degree sequence description length will be included (for degreecorrected models). degree_dl_kind
str
(optional, default:"distributed"
) This specifies the prior used for the degree sequence. It must be one of:
"uniform"
,"distributed"
(default) or"entropy"
. edges_dl
bool
(optional, default:True
) If
True
, anddl == True
the edge matrix description length will be included. dense
bool
(optional, default:False
) If
True
, the “dense” variant of the entropy will be computed. multigraph
bool
(optional, default:True
) If
True
, the multigraph entropy will be used. deg_entropy
bool
(optional, default:True
) If
True
, the degree entropy term that is independent of the network partition will be included (for degreecorrected models). recs
bool
(optional, default:True
) If
True
, the likelihood for real or discrete-valued edge covariates is computed. recs_dl
bool
(optional, default:True
) If
True
, anddl == True
the edge covariate description length will be included. beta_dl
double
(optional, default:1.
) Prior inverse temperature.
 exact
bool
(optional, default:True
) If
True
, the exact expressions will be used. Otherwise, Stirling’s factorial approximation will be used for some terms.
Notes
The “entropy” of the state is the negative log-likelihood of the microcanonical SBM, which includes the generated graph \(\boldsymbol{A}\) and the model parameters \(\boldsymbol{\theta}\),
\[\begin{split}\Sigma &= -\ln P(\boldsymbol{A},\boldsymbol{\theta}) \\ &= -\ln P(\boldsymbol{A}|\boldsymbol{\theta}) - \ln P(\boldsymbol{\theta}).\end{split}\]This value is also called the description length of the data, and it corresponds to the amount of information required to describe it (in nats).
For the traditional blockmodel (
deg_corr == False
), the model parameters are \(\boldsymbol{\theta} = \{\boldsymbol{e}, \boldsymbol{b}\}\), where \(\boldsymbol{e}\) is the matrix of edge counts between blocks, and \(\boldsymbol{b}\) is the partition of the nodes into blocks. For the degree-corrected blockmodel (deg_corr == True
), we have an additional set of parameters, namely the degree sequence \(\boldsymbol{k}\).For the traditional blockmodel, the model likelihood is
\[\begin{split}P(\boldsymbol{A}|\boldsymbol{e},\boldsymbol{b}) &= \frac{\prod_{r<s}e_{rs}!\prod_re_{rr}!!}{\prod_rn_r^{e_r}}\times \frac{1}{\prod_{i<j}A_{ij}!\prod_iA_{ii}!!},\\ P(\boldsymbol{A}|\boldsymbol{e},\boldsymbol{b}) &= \frac{\prod_{rs}e_{rs}!}{\prod_rn_r^{e_r}}\times \frac{1}{\prod_{ij}A_{ij}!},\end{split}\]for undirected and directed graphs, respectively, where \(e_{rs}\) is the number of edges from block \(r\) to \(s\) (or the number of half-edges for the undirected case when \(r=s\)), and \(n_r\) is the number of vertices in block \(r\).
For the degree-corrected variant the equivalent expressions are
\[\begin{split}P(\boldsymbol{A}|\boldsymbol{e},\boldsymbol{b},\boldsymbol{k}) &= \frac{\prod_{r<s}e_{rs}!\prod_re_{rr}!!}{\prod_re_r!}\times \frac{\prod_ik_i!}{\prod_{i<j}A_{ij}!\prod_iA_{ii}!!},\\ P(\boldsymbol{A}|\boldsymbol{e},\boldsymbol{b},\boldsymbol{k}) &= \frac{\prod_{rs}e_{rs}!}{\prod_re_r^+!\prod_re_r^-!}\times \frac{\prod_ik_i^+!\prod_ik_i^-!}{\prod_{ij}A_{ij}!},\end{split}\]where \(e_r = \sum_se_{rs}\) is the number of half-edges incident on block \(r\), and \(e^+_r = \sum_se_{rs}\) and \(e^-_r = \sum_se_{sr}\) are the numbers of out- and in-edges adjacent to block \(r\), respectively.
If
exact == False
, Stirling’s approximation is used in the above expression.If
dense == True
, the likelihood for the non-degree-corrected model becomes instead\[\begin{split}P(\boldsymbol{A}|\boldsymbol{e},\boldsymbol{b})^{-1} &= \prod_{r<s}{n_rn_s\choose e_{rs}}\prod_r{{n_r\choose 2}\choose e_{rr}/2},\\ P(\boldsymbol{A}|\boldsymbol{e},\boldsymbol{b})^{-1} &= \prod_{rs}{n_rn_s\choose e_{rs}}\end{split}\]if
multigraph == False
, otherwise we replace \({n\choose m}\to\left(\!\!{n\choose m}\!\!\right)\) above, where \(\left(\!\!{n\choose m}\!\!\right) = {n+m-1\choose m}\). A “dense” entropy for the degree-corrected model is not available, and if requested will raise aNotImplementedError
.If
dl == True
, the description length \(\mathcal{L} = -\ln P(\boldsymbol{\theta})\) of the model will be returned as well. The terms \(P(\boldsymbol{e})\) and \(P(\boldsymbol{b})\) are described inmodel_entropy()
.For the degreecorrected model we need to specify the prior \(P(\boldsymbol{k})\) for the degree sequence as well. Here there are three options:
degree_dl_kind == "uniform"
\[P(\boldsymbol{k}|\boldsymbol{e},\boldsymbol{b}) = \prod_r\left(\!\!{n_r\choose e_r}\!\!\right)^{-1}.\]This corresponds to a noninformative prior, where the degrees are sampled from a uniform distribution.
degree_dl_kind == "distributed"
(default)\[P(\boldsymbol{k}|\boldsymbol{e},\boldsymbol{b}) = \prod_r\frac{\prod_k\eta_k^r!}{n_r!} \prod_r q(e_r, n_r)^{-1}\]with \(\eta_k^r\) being the number of nodes with degree \(k\) in group \(r\), and \(q(n,m)\) being the number of partitions of integer \(n\) into at most \(m\) parts.
This corresponds to a prior for the degree sequence conditioned on the degree frequencies, which are themselves sampled from a uniform hyperprior. This option should be preferred in most cases.
degree_dl_kind == "entropy"
\[P(\boldsymbol{k}|\boldsymbol{e},\boldsymbol{b}) \approx \prod_r\exp\left(-n_rH(\boldsymbol{k}_r)\right)\]where \(H(\boldsymbol{k}_r) = -\sum_kp_r(k)\ln p_r(k)\) is the entropy of the degree distribution inside block \(r\).
Note that, differently from the other two choices, this represents only an approximation of the description length. It is meant to be used only for comparison purposes, and should be avoided in practice.
For the directed case, the above expressions are duplicated for the in- and out-degrees.
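Two combinatorial quantities appearing in the priors above are easy to compute directly; a sketch using only the standard library (these helpers are illustrative, not part of the module):

```python
from functools import lru_cache
from math import comb

def multiset(n, m):
    """Multiset coefficient ((n multichoose m)) = C(n + m - 1, m),
    used in the dense likelihood and the "uniform" degree prior."""
    return comb(n + m - 1, m)

@lru_cache(maxsize=None)
def q(n, m):
    """Number of partitions of the integer n into at most m parts,
    appearing in the "distributed" degree prior."""
    if n == 0:
        return 1
    if m == 0 or n < 0:
        return 0
    # either some part equals m (subtract it: q(n - m, m)),
    # or every part is at most m - 1
    return q(n - m, m) + q(n, m - 1)

print(multiset(3, 2))  # 6: aa, ab, ac, bb, bc, cc
print(q(4, 2))         # 3: 4 = 4 = 3+1 = 2+2
```

The memoized recurrence for \(q(n,m)\) makes the "distributed" prior cheap to evaluate for the modest arguments that occur in practice.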
References
 peixotononparametric2017
Tiago P. Peixoto, “Nonparametric Bayesian inference of the microcanonical stochastic block model”, Phys. Rev. E 95 012317 (2017), DOI: 10.1103/PhysRevE.95.012317 [scihub, @tor], arXiv: 1610.02703
 peixotohierarchical2014
Tiago P. Peixoto, “Hierarchical block structures and high-resolution model selection in large networks”, Phys. Rev. X 4, 011047 (2014), DOI: 10.1103/PhysRevX.4.011047 [scihub, @tor], arXiv: 1310.4377.
 peixotoweighted2017
Tiago P. Peixoto, “Nonparametric weighted stochastic block models”, Phys. Rev. E 97, 012306 (2018), DOI: 10.1103/PhysRevE.97.012306 [scihub, @tor], arXiv: 1708.01432

get_matrix
(self)[source]¶ Returns the block matrix (as a sparse
csr_matrix
), which contains the number of edges between each block pair.Warning
This corresponds to the adjacency matrix of the block graph, which by convention includes twice the amount of edges in the diagonal entries if the graph is undirected.
Examples
>>> g = gt.collection.data["polbooks"]
>>> state = gt.BlockState(g, B=5, deg_corr=True)
>>> state.mcmc_sweep(niter=1000)
(...)
>>> m = state.get_matrix()
>>> figure()
<...>
>>> matshow(m.todense())
<...>
>>> savefig("bloc_mat.svg")

virtual_vertex_move
(self, v, s, **kwargs)[source]¶ Computes the entropy difference if vertex
v
is moved to blocks
. The remaining parameters are the same as ingraph_tool.inference.blockmodel.BlockState.entropy()
.

move_vertex
(self, v, s)[source]¶ Move vertex
v
to blocks
.This optionally accepts a list of vertices and blocks to move simultaneously.

remove_vertex
(self, v)[source]¶ Remove vertex
v
from its current group.This optionally accepts a list of vertices to remove.
Warning
This will leave the state inconsistent until the vertex is returned to some other group, or if the same vertex is removed twice.

add_vertex
(self, v, r)[source]¶ Add vertex
v
to blockr
.This optionally accepts a list of vertices and blocks to add.
Warning
This can leave the state in an inconsistent state if a vertex is added twice to the same group.

merge_vertices
(self, u, v)[source]¶ Merge vertex
u
intov
.Warning
This modifies the underlying graph.

sample_vertex_move
(self, v, c=1.0, d=0.1)[source]¶ Sample block membership proposal of vertex
v
according to realvalued sampling parametersc
andd
: For \(c\to 0\) the blocks are sampled according to the local neighborhood and their connections; for \(c\to\infty\) the blocks are sampled randomly. With a probabilityd
, a new (empty) group is sampled.
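The role of the parameters can be sketched generically; the helper below is hypothetical (not graph-tool's API or its actual proposal distribution) and only illustrates how c and d interpolate between neighborhood-guided, uniformly random, and new-group proposals:

```python
import random

def propose_block(neighbor_blocks, B, c=1.0, d=0.1, rng=random):
    """Hypothetical sketch of a block-membership proposal: with
    probability d propose a brand-new (empty) group, labeled B;
    otherwise mix a neighborhood-guided choice with a uniformly
    random one, with the random choice growing more likely as c
    increases (c -> 0: purely local; c -> inf: purely random)."""
    if rng.random() < d:
        return B  # propose a new, currently empty group
    if rng.random() < c / (c + B):
        return rng.randrange(B)          # uniformly random block
    return rng.choice(neighbor_blocks)   # neighborhood-guided block
```

For example, `propose_block([3], 5, c=0.0, d=0.0)` always returns the neighbor's block, while `d=1.0` always proposes the new group.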

get_move_prob
(self, v, s, c=1.0, d=0.1, reverse=False)[source]¶ Compute the probability of a move proposal for vertex
v
to blocks
according to sampling parametersc
andd
, as obtained withgraph_tool.inference.blockmodel.BlockState.sample_vertex_move()
. Ifreverse == True
, the reverse probability of moving the node back from blocks
to its current one is obtained.

get_edges_prob
(self, missing, spurious=[], entropy_args={})[source]¶ Compute the joint logprobability of the missing and spurious edges given by
missing
andspurious
(a list of(source, target)
tuples, orEdge()
instances), together with the observed edges.More precisely, the loglikelihood returned is
\[\ln \frac{P(\boldsymbol G + \delta \boldsymbol G | \boldsymbol b)}{P(\boldsymbol G | \boldsymbol b)}\]where \(\boldsymbol G + \delta \boldsymbol G\) is the modified graph (with missing edges added and spurious edges deleted).
The values in
entropy_args
are passed toentropy()
to calculate the logprobability.

mcmc_sweep
(self, beta=1.0, c=1.0, d=0.01, niter=1, entropy_args={}, allow_vacate=True, sequential=True, deterministic=False, vertices=None, verbose=False, **kwargs)[source]¶ Perform
niter
sweeps of a Metropolis-Hastings acceptance-rejection sampling MCMC to sample network partitions. Parameters
 beta
float
(optional, default:1.
) Inverse temperature.
 c
float
(optional, default:1.
) Sampling parameter
c
for move proposals: For \(c\to 0\) the blocks are sampled according to the local neighborhood of a given node and their block connections; for \(c\to\infty\) the blocks are sampled randomly. Note that the MCMC is guaranteed to be ergodic only for \(c > 0\). d
float
(optional, default:.01
) Probability of selecting a new (i.e. empty) group for a given move.
 niter
int
(optional, default:1
) Number of sweeps to perform. During each sweep, a move attempt is made for each node.
 entropy_args
dict
(optional, default:{}
) Entropy arguments, with the same meaning and defaults as in
graph_tool.inference.blockmodel.BlockState.entropy()
. allow_vacate
bool
(optional, default:True
) Allow groups to be vacated.
 sequential
bool
(optional, default:True
) If
sequential == True
each vertex move attempt is made sequentially, where vertices are visited in random order. Otherwise the moves are attempted by sampling vertices randomly, so that the same vertex can be moved more than once before other vertices have had a chance to move. deterministic
bool
(optional, default:False
) If
sequential == True
anddeterministic == True
the vertices will be visited in deterministic order. vertices
list
of ints (optional, default:None
) If provided, this should be a list of vertices which will be moved. Otherwise, all vertices will be considered.
 verbose
bool
(optional, default:False
) If
verbose == True
, detailed information will be displayed.
 Returns
 dS
float
Entropy difference after the sweeps.
 nattempts
int
Number of vertex moves attempted.
 nmoves
int
Number of vertices moved.
Notes
This algorithm has an \(O(E)\) complexity, where \(E\) is the number of edges (independent of the number of blocks).
References
 peixotoefficient2014
Tiago P. Peixoto, “Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models”, Phys. Rev. E 89, 012804 (2014), DOI: 10.1103/PhysRevE.89.012804 [scihub, @tor], arXiv: 1310.4378
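The acceptance step follows the standard Metropolis-Hastings rule; a generic sketch (not the library's internal code) of how an entropy difference and forward/reverse proposal probabilities, such as those returned by get_move_prob(), combine into an acceptance decision:

```python
import math, random

def mh_accept(dS, beta=1.0, p_forward=1.0, p_reverse=1.0, rng=random):
    """Metropolis-Hastings acceptance: accept a proposed move with
    probability min(1, exp(-beta * dS) * p_reverse / p_forward),
    where dS is the entropy (description length) difference."""
    a = math.exp(-beta * dS) * p_reverse / p_forward
    return a >= 1 or rng.random() < a

# a move that decreases the description length is always accepted
print(mh_accept(-1.0))  # True
```

At \(\beta=1\) this samples partitions in proportion to the posterior; larger \(\beta\) biases the chain toward description-length minima.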

multiflip_mcmc_sweep
(self, beta=1.0, c=1.0, psingle=None, psplit=1, pmerge=1, pmergesplit=1, d=0.01, gibbs_sweeps=10, niter=1, entropy_args={}, accept_stats=None, verbose=False, **kwargs)[source]¶ Perform
niter
sweeps of a Metropolis-Hastings acceptance-rejection sampling MCMC with multiple simultaneous moves to sample network partitions. Parameters
 beta
float
(optional, default:1.
) Inverse temperature.
 c
float
(optional, default:1.
) Sampling parameter
c
for move proposals: For \(c\to 0\) the blocks are sampled according to the local neighborhood of a given node and their block connections; for \(c\to\infty\) the blocks are sampled randomly. Note that the MCMC is guaranteed to be ergodic only for \(c > 0\). psingle
float
(optional, default:None
) Relative probability of proposing a single node move. If
None
, it will be selected as the number of nodes in the graph. psplit
float
(optional, default:1
) Relative probability of proposing a group split.
 pmerge
float
(optional, default:1
) Relative probability of proposing a group merge.
 pmergesplit
float
(optional, default:1
) Relative probability of proposing a merge-split move.
 d
float
(optional, default:.01
) Probability of selecting a new (i.e. empty) group for a given singlenode move.
 gibbs_sweeps
int
(optional, default:10
) Number of sweeps of Gibbs sampling to be performed (i.e. each node is attempted once per sweep) to refine a split proposal.
 niter
int
(optional, default:1
) Number of sweeps to perform. During each sweep, a move attempt is made for each node, on average.
 entropy_args
dict
(optional, default:{}
) Entropy arguments, with the same meaning and defaults as in
graph_tool.inference.blockmodel.BlockState.entropy()
. verbose
bool
(optional, default:False
) If
verbose == True
, detailed information will be displayed.
 Returns
 dS
float
Entropy difference after the sweeps.
 nattempts
int
Number of vertex moves attempted.
 nmoves
int
Number of vertices moved.
Notes
This algorithm has an \(O(E)\) complexity, where \(E\) is the number of edges (independent of the number of blocks).

gibbs_sweep
(self, beta=1.0, niter=1, entropy_args={}, allow_new_group=True, sequential=True, deterministic=False, vertices=None, verbose=False, **kwargs)[source]¶ Perform
niter
sweeps of a rejectionfree Gibbs sampling MCMC to sample network partitions. Parameters
 beta
float
(optional, default:1.
) Inverse temperature.
 niter
int
(optional, default:1
) Number of sweeps to perform. During each sweep, a move attempt is made for each node.
 entropy_args
dict
(optional, default:{}
) Entropy arguments, with the same meaning and defaults as in
graph_tool.inference.blockmodel.BlockState.entropy()
. allow_new_group
bool
(optional, default:True
) Allow the number of groups to increase and decrease.
 sequential
bool
(optional, default:True
) If
sequential == True
each vertex move attempt is made sequentially, where vertices are visited in random order. Otherwise the moves are attempted by sampling vertices randomly, so that the same vertex can be moved more than once before other vertices have had a chance to move. deterministic
bool
(optional, default:False
) If
sequential == True
anddeterministic == True
the vertices will be visited in deterministic order. vertices
list
of ints (optional, default:None
) If provided, this should be a list of vertices which will be moved. Otherwise, all vertices will be considered.
 verbose
bool
(optional, default:False
) If
verbose == True
, detailed information will be displayed.
 Returns
 dS
float
Entropy difference after the sweeps.
 nattempts
int
Number of vertex moves attempted.
 nmoves
int
Number of vertices moved.
Notes
This algorithm has an \(O(E\times B)\) complexity, where \(B\) is the number of blocks, and \(E\) is the number of edges.

multicanonical_sweep
(self, m_state, multiflip=False, **kwargs)[source]¶ Perform
niter
sweeps of a non-Markovian multicanonical sampling using the Wang-Landau algorithm. Parameters
 m_state
MulticanonicalState
MulticanonicalState
instance containing the current state of the WangLandau run. multiflip
bool
(optional, default:False
) If
True
,multiflip_mcmc_sweep()
will be used, otherwisemcmc_sweep()
. **kwargsKeyword parameter list
The remaining parameters will be passed to
multiflip_mcmc_sweep()
ormcmc_sweep()
.
 Returns
 dS
float
Entropy difference after the sweeps.
 nattempts
int
Number of vertex moves attempted.
 nmoves
int
Number of vertices moved.
Notes
This algorithm has an \(O(E)\) complexity, where \(E\) is the number of edges (independent of the number of blocks).
References
 wangefficient2001
Fugao Wang, D. P. Landau, “Efficient, multiple-range random walk algorithm to calculate the density of states”, Phys. Rev. Lett. 86, 2050 (2001), DOI: 10.1103/PhysRevLett.86.2050 [scihub, @tor], arXiv: cond-mat/0011174
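The Wang-Landau idea can be illustrated on a toy system; the sketch below estimates the density of states of the "energy" E = number of heads among n coins, with an assumed geometric schedule for the modification factor (the actual implementation differs in details such as flatness checks, and this is not graph-tool's code):

```python
import math, random

def wang_landau_coins(n=4, sweeps=2000, stages=12, seed=0):
    """Toy Wang-Landau estimate of the density of states for
    E = number of heads among n coins (true values are the binomial
    coefficients; for n = 4: 1, 4, 6, 4, 1)."""
    rng = random.Random(seed)
    state = [0] * n
    logg = {E: 0.0 for E in range(n + 1)}  # running log density of states
    E, f = 0, 1.0
    for _ in range(stages):
        for _ in range(sweeps):
            i = rng.randrange(n)
            E_new = E + (1 if state[i] == 0 else -1)
            # accept with probability min(1, g(E) / g(E_new))
            if logg[E] - logg[E_new] > math.log(rng.random() + 1e-300):
                state[i] ^= 1
                E = E_new
            logg[E] += f  # penalize the visited energy
        f /= 2  # refine the modification factor each stage
    m = min(logg.values())
    return {k: v - m for k, v in logg.items()}

logg = wang_landau_coins()
# estimated log-densities should approximate ln([1, 4, 6, 4, 1])
```

Penalizing visited energies drives the walk toward a flat histogram over all energies, so that the accumulated `logg` converges (up to a constant) to the log density of states.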

multicanonical_B_sweep
(self, m_state, **kwargs)[source]¶ Perform
niter
sweeps of a non-Markovian multicanonical sampling using the Wang-Landau algorithm. Parameters
 m_state
MulticanonicalState
MulticanonicalState
instance containing the current state of the WangLandau run. multiflip
bool
(optional, default:False
) If
True
,multiflip_mcmc_sweep()
will be used, otherwisemcmc_sweep()
. **kwargsKeyword parameter list
The remaining parameters will be passed to
multiflip_mcmc_sweep()
ormcmc_sweep()
.
 Returns
 dS
float
Entropy difference after the sweeps.
 nattempts
int
Number of vertex moves attempted.
 nmoves
int
Number of vertices moved.
Notes
This algorithm has an \(O(E)\) complexity, where \(E\) is the number of edges (independent of the number of blocks).
References
 wangefficient2001
Fugao Wang, D. P. Landau, “Efficient, multiple-range random walk algorithm to calculate the density of states”, Phys. Rev. Lett. 86, 2050 (2001), DOI: 10.1103/PhysRevLett.86.2050 [scihub, @tor], arXiv: cond-mat/0011174

exhaustive_sweep
(self, entropy_args={}, callback=None, density=None, vertices=None, initial_partition=None, max_iter=None)[source]¶ Perform an exhaustive loop over all possible network partitions.
 Parameters
 entropy_args
dict
(optional, default:{}
) Entropy arguments, with the same meaning and defaults as in
graph_tool.inference.blockmodel.BlockState.entropy()
. callbackcallable object (optional, default:
None
) Function to be called for each partition, with three arguments
(S, S_min, b_min)
corresponding to the current entropy value, the minimum entropy value so far, and the corresponding partition, respectively. If not provided, andhist is None
an iterator over the same values will be returned instead. density
tuple
(optional, default:None
) If provided, it should contain a tuple with values
(S_min, S_max, n_bins)
, which will be used to obtain the density of states via a histogram of sizen_bins
. This parameter is ignored unlesscallback is None
. verticesiterable of ints (optional, default:
None
) If provided, this should be a list of vertices which will be moved. Otherwise, all vertices will be considered.
 initial_partitioniterable of ints (optional, default:
None
) If provided, this will provide the initial partition for the iteration.
 max_iter
int
(optional, default:None
) If provided, this will limit the total number of iterations.
 Returns
 statesiterator over (S, S_min, b_min)
If
callback
isNone
andhist
isNone
, the function will return an iterator over(S, S_min, b_min)
corresponding to the the current entropy value, the minimum entropy value so far, and the corresponding partition, respectively. Ss, countspair of
numpy.ndarray
If
callback is None
andhist is not None
, the function will return the values of each bin (Ss
) and the state count of each bin (counts
). b_min
VertexPropertyMap
If
callback is not None
orhist is not None
, the function will also return the partition with the smallest entropy.
Notes
This algorithm has an \(O(B^N)\) complexity, where \(B\) is the number of blocks, and \(N\) is the number of vertices.
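The \(O(B^N)\) cost comes from enumerating every labeling; a minimal sketch of such an enumeration, with a hypothetical score function standing in for the entropy:

```python
from itertools import product

def exhaustive_min(N, B, score):
    """Enumerate all B**N block labelings of N nodes and return the
    minimum of `score` (a stand-in for the entropy) and its argmin."""
    best = min(product(range(B), repeat=N), key=score)
    return score(best), best

# toy score function: the number of distinct labels used
S, b = exhaustive_min(3, 2, lambda b: len(set(b)))
print(S, b)
```

Even for modest N and B the number of labelings explodes (here \(2^3 = 8\)), which is why this routine is practical only for very small systems.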

merge_sweep
(self, nmerges=1, niter=10, entropy_args={}, parallel=True, verbose=False, **kwargs)[source]¶ Perform
niter
merge sweeps, where block nodes are progressively merged together in a manner that least increases the entropy. Parameters
 nmerges
int
(optional, default:1
) Number of block nodes to merge.
 niter
int
(optional, default:10
) Number of merge attempts to perform for each block node, before the best one is selected.
 entropy_args
dict
(optional, default:{}
) Entropy arguments, with the same meaning and defaults as in
graph_tool.inference.blockmodel.BlockState.entropy()
. parallel
bool
(optional, default:True
) If
parallel == True
, the merge candidates are obtained in parallel. verbose
bool
(optional, default:False
) If
verbose == True
, detailed information will be displayed.
 nmerges
 Returns
 dS
float
Entropy difference after the sweeps.
 nattempts
int
Number of attempted merges.
 nmoves
int
Number of vertices merged.
 dS
Notes
This function should only be called for block states, obtained from
graph_tool.inference.blockmodel.BlockState.get_block_state()
. References
 peixoto-efficient-2014
Tiago P. Peixoto, “Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models”, Phys. Rev. E 89, 012804 (2014), DOI: 10.1103/PhysRevE.89.012804, arXiv: 1310.4378

shrink
(self, B, **kwargs)[source]¶ Reduces the order of the current state by progressively merging groups, until only
B
are left. All remaining keyword arguments are passed to graph_tool.inference.blockmodel.BlockState.merge_sweep()
. This function leaves the current state untouched and returns instead a copy with the new partition.

collect_edge_marginals
(self, p=None, update=1)[source]¶ Collect the edge marginal histogram, which counts the number of times the endpoints of each edge have been assigned to a given block pair.
This should be called multiple times, e.g. after repeated runs of the
graph_tool.inference.blockmodel.BlockState.mcmc_sweep()
function. Parameters
 p
EdgePropertyMap
(optional, default:None
) Edge property map with edge marginals to be updated. If not provided, an empty histogram will be created.
 update float (optional, default:
1
) Each call increases the current count by the amount given by this parameter.
 p
 Returns
 p
EdgePropertyMap
Edge property map with updated edge marginals.
 p
Examples
>>> g = gt.collection.data["polbooks"]
>>> state = gt.BlockState(g, B=4, deg_corr=True)
>>> pe = None
>>> state.mcmc_sweep(niter=1000)   # remove part of the transient
(...)
>>> for i in range(1000):
...     ret = state.mcmc_sweep(niter=10)
...     pe = state.collect_edge_marginals(pe)
>>> gt.bethe_entropy(g, pe)[0]
1.7733496...

collect_vertex_marginals
(self, p=None, b=None, unlabel=False, update=1)[source]¶ Collect the vertex marginal histogram, which counts the number of times a node was assigned to a given block.
This should be called multiple times, e.g. after repeated runs of the
graph_tool.inference.blockmodel.BlockState.mcmc_sweep()
function. Parameters
 p
VertexPropertyMap
(optional, default:None
) Vertex property map with vector-type values, storing the previous block membership counts. If not provided, an empty histogram will be created.
 b
VertexPropertyMap
(optional, default:None
) Vertex property map with group partition. If not provided, the state’s partition will be used.
 unlabel bool (optional, default:
False
) If
True
, a canonical labelling of the groups will be used, so that each partition is uniquely represented. update int (optional, default:
1
) Each call increases the current count by the amount given by this parameter.
 p
 Returns
 p
VertexPropertyMap
Vertex property map with vector-type values, storing the accumulated block membership counts.
 p
Examples
>>> g = gt.collection.data["polbooks"]
>>> state = gt.BlockState(g, B=4, deg_corr=True)
>>> pv = None
>>> state.mcmc_sweep(niter=1000)   # remove part of the transient
(...)
>>> for i in range(1000):
...     ret = state.mcmc_sweep(niter=10)
...     pv = state.collect_vertex_marginals(pv)
>>> gt.mf_entropy(g, pv)
22.237735...
>>> gt.graph_draw(g, pos=g.vp["pos"], vertex_shape="pie",
...               vertex_pie_fractions=pv, output="polbooks_blocks_soft_B4.svg")
<...>

collect_partition_histogram
(self, h=None, update=1, unlabel=True)[source]¶ Collect a histogram of partitions.
This should be called multiple times, e.g. after repeated runs of the
graph_tool.inference.blockmodel.BlockState.mcmc_sweep()
function. Parameters
 h
PartitionHist
(optional, default:None
) Partition histogram. If not provided, an empty histogram will be created.
 update float (optional, default:
1
) Each call increases the current count by the amount given by this parameter.
 unlabel bool (optional, default:
True
) If
True
, a canonical labelling of the groups will be used, so that each partition is uniquely represented.
 h
 Returns
 h
PartitionHist
Updated partition histogram.
 h
Examples
>>> g = gt.collection.data["polbooks"]
>>> state = gt.BlockState(g, B=4, deg_corr=True)
>>> ph = None
>>> state.mcmc_sweep(niter=1000)   # remove part of the transient
(...)
>>> for i in range(1000):
...     ret = state.mcmc_sweep(niter=10)
...     ph = state.collect_partition_histogram(ph)
>>> gt.microstate_entropy(ph)
132.254124...

draw
(self, **kwargs)[source]¶ Convenience wrapper to
graph_draw()
that draws the state of the graph as colors on the vertices and edges.

sample_graph
(self, canonical=False, multigraph=True, self_loops=True, max_ent=False, n_iter=1000)[source]¶ Sample a new graph from the fitted model.
 Parameters
 canonical
bool
(optional, default:False
) If
canonical == True
, the graph will be sampled from the maximum-likelihood estimate of the canonical stochastic block model. Otherwise, it will be sampled from the microcanonical model. multigraph
bool
(optional, default:True
) If
True
, parallel edges will be allowed. self_loops
bool
(optional, default:True
) If
True
, self-loops will be allowed. max_ent
bool
(optional, default:False
) If
True
, maximum-entropy model variants will be used. n_iter
int
(optional, default:1000
) Number of iterations used (only relevant if
canonical == False
and max_ent == True
).
 canonical
 Returns
 g
Graph
Generated graph.
 g
Notes
This function is just a convenience wrapper to
generate_sbm()
. However, if max_ent == True
and canonical == False
, it wraps random_rewire()
instead. Examples
>>> g = gt.collection.data["polbooks"]
>>> state = gt.minimize_blockmodel_dl(g, B_max=3)
>>> u = state.sample_graph(canonical=True, self_loops=False, multigraph=False)
>>> ustate = gt.BlockState(u, b=state.b)
>>> state.draw(pos=g.vp.pos, output="polbooks-sbm.svg")
<...>
>>> ustate.draw(pos=u.own_property(g.vp.pos), output="polbooks-sbm-sampled.svg")
<...>
Left: Political books network. Right: Sample from the degree-corrected SBM fitted to the original network.

graph_tool.inference.blockmodel.
model_entropy
(B, N, E, directed=False, nr=None)[source]¶ Computes the amount of information necessary for the parameters of the traditional blockmodel ensemble, for
B
blocks, N
vertices, E
edges, and either a directed or undirected graph. This is equivalently defined as minus the log-likelihood of sampling the parameters from a nonparametric generative model.
A traditional blockmodel is defined as a set of \(N\) vertices which can belong to one of \(B\) blocks, and the matrix \(e_{rs}\) describes the number of edges from block \(r\) to \(s\) (or twice that number if \(r=s\) and the graph is undirected).
For an undirected graph, the number of distinct \(e_{rs}\) matrices is given by,
\[\Omega_m = \left(\!\!{B(B+1)/2 \choose E}\!\!\right)\]and for a directed graph,
\[\Omega_m = \left(\!\!{B^2 \choose E}\!\!\right)\]where \(\left(\!{n \choose k}\!\right) = {n+k-1\choose k}\) is the number of \(k\) combinations with repetitions from a set of size \(n\). Hence, we have the description length of the edge counts
\[-\ln P(\boldsymbol{e}) = \ln \Omega_m.\]For the node partition \(\boldsymbol{b}\) we assume a two-level Bayesian hierarchy, where first the group size histogram is generated, and conditioned on it the partition, which leads to a description length:
\[-\ln P(\boldsymbol{b}) = \ln {N - 1 \choose B - 1} + \ln N! - \sum_r \ln n_r!.\]where \(n_r\) is the number of nodes in block \(r\).
The total information necessary to describe the model is then,
\[-\ln P(\boldsymbol{e}, \boldsymbol{b}) = -\ln P(\boldsymbol{e}) - \ln P(\boldsymbol{b}).\]If
nr
isNone
, it is assumed \(n_r=N/B\). If nr
is False
, the partition term \(\ln P(\boldsymbol{b})\) is omitted entirely. References
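The expressions above can be sketched numerically in plain Python, using log-gamma to evaluate the factorials and binomials (a hypothetical illustration, not graph-tool's implementation):

```python
from math import lgamma

def lbinom(n, k):
    # ln of the binomial coefficient C(n, k)
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def lmultiset(n, k):
    # ln of the multiset coefficient ((n, k)) = C(n + k - 1, k)
    return lbinom(n + k - 1, k)

def model_dl(B, N, E, directed=False, nr=None):
    # -ln P(e): edge counts distributed among B(B+1)/2 (or B^2) block pairs
    S = lmultiset(B * B if directed else B * (B + 1) // 2, E)
    if nr is False:          # omit the partition term entirely
        return S
    if nr is None:           # assume equal group sizes n_r = N / B
        nr = [N / B] * B
    # -ln P(b): group size histogram, plus the partition conditioned on it
    S += lbinom(N - 1, B - 1) + lgamma(N + 1) - sum(lgamma(n + 1) for n in nr)
    return S
```

With B=1 every term vanishes, so model_dl(1, N, E) is zero, as a single-block model carries no information.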
 peixoto-parsimonious-2013
Tiago P. Peixoto, “Parsimonious module inference in large networks”, Phys. Rev. Lett. 110, 148701 (2013), DOI: 10.1103/PhysRevLett.110.148701, arXiv: 1212.4794.
 peixoto-nonparametric-2017
Tiago P. Peixoto, “Nonparametric Bayesian inference of the microcanonical stochastic block model”, Phys. Rev. E 95, 012317 (2017), DOI: 10.1103/PhysRevE.95.012317, arXiv: 1610.02703

graph_tool.inference.blockmodel.
bethe_entropy
(g, p)[source]¶ Compute the Bethe entropy given the edge block membership marginals.
 Parameters
 g
Graph
The graph.
 p
EdgePropertyMap
Edge property map with edge marginals.
 g
 Returns
 H
float
The Bethe entropy value (in nats)
 Hmf
float
The “mean field” entropy value (in nats), as would be returned by the
mf_entropy()
function. pv
VertexPropertyMap
(optional, default:None
) Vertex property map with vector-type values, storing the accumulated block membership counts. These are the node marginals, as would be returned by the
collect_vertex_marginals()
method.
 H
Notes
The Bethe entropy is defined as,
\[H = -\sum_{ij}A_{ij}\sum_{rs}\pi_{ij}(r,s)\ln\pi_{ij}(r,s) - \sum_i(1-k_i)\sum_r\pi_i(r)\ln\pi_i(r),\]where \(\pi_{ij}(r,s)\) is the marginal probability that vertices \(i\) and \(j\) belong to blocks \(r\) and \(s\), respectively, \(\pi_i(r)\) is the marginal probability that vertex \(i\) belongs to block \(r\), and \(k_i\) is the degree of vertex \(i\) (or total degree for directed graphs).
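A minimal sketch of this sum over toy marginals (hypothetical dictionary-based data structures, counting each edge once):

```python
from math import log

def bethe_entropy_sketch(edge_marg, vertex_marg, degree):
    # edge_marg: {(i, j): {(r, s): prob}}, the marginals pi_ij(r, s)
    # vertex_marg: {i: {r: prob}}, the marginals pi_i(r)
    # degree: {i: k_i}
    H = 0.0
    for marg in edge_marg.values():       # -sum_ij sum_rs pi ln pi
        H -= sum(p * log(p) for p in marg.values() if p > 0)
    for i, marg in vertex_marg.items():   # -sum_i (1 - k_i) sum_r pi ln pi
        H -= (1 - degree[i]) * sum(p * log(p) for p in marg.values() if p > 0)
    return H
```

Perfectly concentrated marginals give H = 0; for a single edge with a marginal split evenly over two block pairs, the first term alone contributes ln 2.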
References
 mezard-information-2009
Marc Mézard, Andrea Montanari, “Information, Physics, and Computation”, Oxford Univ Press, 2009. DOI: 10.1093/acprof:oso/9780198570837.001.0001

graph_tool.inference.blockmodel.
mf_entropy
(g, p)[source]¶ Compute the “mean field” entropy given the vertex block membership marginals.
 Parameters
 g
Graph
The graph.
 p
VertexPropertyMap
Vertex property map with vector-type values, storing the accumulated block membership counts.
 g
 Returns
 Hmf
float
The “mean field” entropy value (in nats).
 Hmf
Notes
The “mean field” entropy is defined as,
\[H = -\sum_{i,r}\pi_i(r)\ln\pi_i(r),\]where \(\pi_i(r)\) is the marginal probability that vertex \(i\) belongs to block \(r\).
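A plain-Python sketch of this sum, assuming the input mirrors the normalized per-vertex counts collected by collect_vertex_marginals() (hypothetical format):

```python
from math import log

def mf_entropy_sketch(marginal_counts):
    # marginal_counts: one list of block membership counts per vertex;
    # each row is normalized to the probabilities pi_i(r) before summing
    H = 0.0
    for counts in marginal_counts:
        total = sum(counts)
        H -= sum((c / total) * log(c / total) for c in counts if c > 0)
    return H
```

A vertex split evenly between two blocks contributes ln 2 ≈ 0.693; a fully determined vertex contributes nothing.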
References
 mezard-information-2009
Marc Mézard, Andrea Montanari, “Information, Physics, and Computation”, Oxford Univ Press, 2009. DOI: 10.1093/acprof:oso/9780198570837.001.0001

graph_tool.inference.blockmodel.
microstate_entropy
(h, unlabel=True)[source]¶ Compute microstate entropy given a histogram of partitions.
 Parameters
 h
PartitionHist
(optional, default:None
) Partition histogram.
 unlabel bool (optional, default:
True
) If
True
, a canonical labelling of the groups will be used, so that each partition is uniquely represented. However, the entropy computed will still correspond to the full distribution over labelled partitions, where all permutations are assumed to be equally likely.
 h
 Returns
 H
float
The microstate entropy value (in nats).
 H
Notes
The microstate entropy is defined as,
\[H = -\sum_{\boldsymbol b}p({\boldsymbol b})\ln p({\boldsymbol b}),\]where \(p({\boldsymbol b})\) is the observed frequency of labelled partition \({\boldsymbol b}\).
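Estimating this from a list of sampled partitions can be sketched as follows (hypothetical input format, using observed frequencies for \(p(\boldsymbol b)\)):

```python
from collections import Counter
from math import log

def microstate_entropy_sketch(partitions):
    # partitions: sampled labelled partitions, as hashable tuples
    counts = Counter(partitions)
    n = len(partitions)
    return -sum((c / n) * log(c / n) for c in counts.values())
```

Two partitions observed equally often give ln 2; a chain stuck on a single partition gives zero.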
References
 mezard-information-2009
Marc Mézard, Andrea Montanari, “Information, Physics, and Computation”, Oxford Univ Press, 2009. DOI: 10.1093/acprof:oso/9780198570837.001.0001

class
graph_tool.inference.overlap_blockmodel.
OverlapBlockState
(g, b=None, B=None, recs=[], rec_types=[], rec_params=[], clabel=None, pclabel=None, deg_corr=True, dense_bg=False, **kwargs)[source]¶ Bases:
graph_tool.inference.blockmodel.BlockState
The overlapping stochastic block model state of a given graph.
 Parameters
 g
Graph
Graph to be modelled.
 b
VertexPropertyMap
ornumpy.ndarray
(optional, default:None
) Initial block labels on the vertices or half-edges. If not supplied, it will be randomly sampled. If the value passed is a vertex property map, it will be assumed to be a non-overlapping partition of the vertices. If it is an edge property map, it should contain a vector for each edge, with the block labels at each end point (sorted according to their vertex index, in the case of undirected graphs, otherwise from source to target). If the value is a
numpy.ndarray
, it will be assumed to correspond directly to a partition of the list of half-edges. B
int
(optional, default:None
) Number of blocks (or vertex groups). If not supplied it will be obtained from the parameter
b
. recs list of
EdgePropertyMap
instances (optional, default:[]
) List of real or discrete-valued edge covariates.
 rec_types list of edge covariate types (optional, default:
[]
) List of types of edge covariates. The possible types are:
"real-exponential"
, "real-normal"
, "discrete-geometric"
, "discrete-poisson"
or "discrete-binomial"
. rec_params list of
dict
(optional, default:[]
) Model hyperparameters for edge covariates. This should be a list of
dict
instances. See BlockState
for more details. clabel
VertexPropertyMap
(optional, default:None
) Constraint labels on the vertices. If supplied, vertices with different label values will not be clustered in the same group.
 deg_corr
bool
(optional, default:True
) If
True
, the degree-corrected version of the blockmodel ensemble will be assumed, otherwise the traditional variant will be used. dense_bg
bool
(optional, default:False
) If
True
, a dense matrix is used for the block graph, otherwise a sparse matrix will be used.
 g

copy
(self, g=None, b=None, B=None, deg_corr=None, clabel=None, pclabel=None, **kwargs)[source]¶ Copies the block state. The parameters override the state properties, and have the same meaning as in the constructor. If
overlap=False
, an instance of BlockState
is returned. This is by default a shallow copy.

get_edge_blocks
(self)[source]¶ Returns an edge property map which contains the block label pairs for each edge.

get_overlap_blocks
(self)[source]¶ Returns the mixed membership of each vertex.
 Returns
 bv
VertexPropertyMap
A vector-valued vertex property map containing the block memberships of each node.
 bc_in
VertexPropertyMap
The labelled in-degrees of each node, i.e. how many in-edges belong to each group, in the same order as the
bv
property above. bc_out
VertexPropertyMap
The labelled out-degrees of each node, i.e. how many out-edges belong to each group, in the same order as the
bv
property above. bc_total
VertexPropertyMap
The labelled total degrees of each node, i.e. how many incident edges belong to each group, in the same order as the
bv
property above.
 bv

get_nonoverlap_blocks
(self)[source]¶ Returns a scalar-valued vertex property map with the block mixture represented as a single number.

get_majority_blocks
(self)[source]¶ Returns a scalar-valued vertex property map with the majority block membership of each node.

entropy
(self, adjacency=True, dl=True, partition_dl=True, degree_dl=True, degree_dl_kind='distributed', edges_dl=True, dense=False, multigraph=True, deg_entropy=True, recs=True, recs_dl=True, beta_dl=1.0, exact=True, **kwargs)[source]¶ Calculate the entropy associated with the current block partition.
 Parameters
 adjacency
bool
(optional, default:True
) If
True
, the adjacency term of the description length will be included. dl
bool
(optional, default:True
) If
True
, the description length for the parameters will be included. partition_dl
bool
(optional, default:True
) If
True
, and dl == True
the partition description length will be included. degree_dl
bool
(optional, default:True
) If
True
, and dl == True
the degree sequence description length will be included (for degree-corrected models). degree_dl_kind
str
(optional, default:"distributed"
) This specifies the prior used for the degree sequence. It must be one of:
"uniform"
,"distributed"
(default) or"entropy"
. edges_dl
bool
(optional, default:True
) If
True
, and dl == True
the edge matrix description length will be included. dense
bool
(optional, default:False
) If
True
, the “dense” variant of the entropy will be computed. multigraph
bool
(optional, default:True
) If
True
, the multigraph entropy will be used. deg_entropy
bool
(optional, default:True
) If
True
, the degree entropy term that is independent of the network partition will be included (for degree-corrected models). recs
bool
(optional, default:True
) If
True
, the likelihood for real or discrete-valued edge covariates is computed. recs_dl
bool
(optional, default:True
) If
True
, and dl == True
the edge covariate description length will be included. beta_dl
double
(optional, default:1.
) Prior inverse temperature.
 exact
bool
(optional, default:True
) If
True
, the exact expressions will be used. Otherwise, Stirling’s factorial approximation will be used for some terms.
 adjacency
Notes
The “entropy” of the state is minus the log-likelihood of the microcanonical SBM, which includes the generated graph \(\boldsymbol{A}\) and the model parameters \(\boldsymbol{\theta}\),
\[\begin{split}\mathcal{S} &= -\ln P(\boldsymbol{A},\boldsymbol{\theta}) \\ &= -\ln P(\boldsymbol{A}|\boldsymbol{\theta}) - \ln P(\boldsymbol{\theta}).\end{split}\]This value is also called the description length of the data, and it corresponds to the amount of information required to describe it (in nats).
For the traditional blockmodel (
deg_corr == False
), the model parameters are \(\boldsymbol{\theta} = \{\boldsymbol{e}, \boldsymbol{b}\}\), where \(\boldsymbol{e}\) is the matrix of edge counts between blocks, and \(\boldsymbol{b}\) is the overlapping partition of the nodes into blocks. For the degree-corrected blockmodel (deg_corr == True
), we have an additional set of parameters, namely the labelled degree sequence \(\boldsymbol{k}\). The model likelihood \(P(\boldsymbol{A}|\boldsymbol{\theta})\) is given analogously to the non-overlapping case, as described in
graph_tool.inference.blockmodel.BlockState.entropy()
.If
dl == True
, the description length \(\mathcal{L} = -\ln P(\boldsymbol{\theta})\) of the model will be returned as well. The edge-count prior \(P(\boldsymbol{e})\) is described in model_entropy()
. For the overlapping partition \(P(\boldsymbol{b})\), we have\[-\ln P(\boldsymbol{b}) = \ln\left(\!\!{D \choose N}\!\!\right) + \sum_d \ln {\left(\!\!{{B\choose d}\choose n_d}\!\!\right)} + \ln N! - \sum_{\vec{b}}\ln n_{\vec{b}}!,\]where \(d \equiv |\vec{b}|_1 = \sum_rb_r\) is the mixture size, \(n_d\) is the number of nodes in a mixture of size \(d\), \(D\) is the maximum value of \(d\), and \(n_{\vec{b}}\) is the number of nodes in mixture \(\vec{b}\).
For the degree-corrected model we need to specify the prior \(P(\boldsymbol{k})\) for the labelled degree sequence as well:
\[-\ln P(\boldsymbol{k}) = \sum_r\ln\left(\!\!{m_r \choose e_r}\!\!\right) - \sum_{\vec{b}}\ln P(\boldsymbol{k}|\vec{b}),\]where \(m_r\) is the number of non-empty mixtures which contain type \(r\), and \(P(\boldsymbol{k}|\vec{b})\) is the likelihood of the labelled degree sequence inside mixture \(\vec{b}\). For this term we have three options:
degree_dl_kind == "uniform"
\[P(\boldsymbol{k}|\vec{b}) = \prod_r\left(\!\!{n_{\vec{b}}\choose e^r_{\vec{b}}}\!\!\right)^{-1}.\]degree_dl_kind == "distributed"
\[P(\boldsymbol{k}|\vec{b}) = \prod_{\vec{b}}\frac{\prod_{\vec{k}}\eta_{\vec{k}}^{\vec{b}}!}{n_{\vec{b}}!} \prod_r q(e_{\vec{b}}^r - n_{\vec{b}}, n_{\vec{b}})\]where \(\eta^{\vec{b}}_{\vec{k}}\) is the number of nodes in mixture \(\vec{b}\) with labelled degree \(\vec{k}\), and \(q(n,m)\) is the number of partitions of integer \(n\) into at most \(m\) parts.
degree_dl_kind == "entropy"
\[P(\boldsymbol{k}|\vec{b}) = \prod_{\vec{b}}\exp\left(-n_{\vec{b}}H(\boldsymbol{k}_{\vec{b}})\right)\]where \(H(\boldsymbol{k}_{\vec{b}}) = -\sum_{\vec{k}}p_{\vec{b}}(\vec{k})\ln p_{\vec{b}}(\vec{k})\) is the entropy of the labelled degree distribution inside mixture \(\vec{b}\).
Note that, differently from the other two choices, this represents only an approximation of the description length. It is meant to be used only for comparison purposes, and should be avoided in practice.
For the directed case, the above expressions are duplicated for the in- and out-degrees.
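The quantity \(q(n,m)\) above satisfies the standard recurrence \(q(n,m) = q(n,m-1) + q(n-m,m)\); a memoized sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def q(n, m):
    # number of partitions of the integer n into at most m parts
    if n == 0:
        return 1
    if n < 0 or m == 0:
        return 0
    return q(n, m - 1) + q(n - m, m)
```

For example, q(4, 2) = 3, counting 4, 3+1 and 2+2.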

mcmc_sweep
(self, bundled=False, **kwargs)[source]¶ Perform sweeps of a Metropolis-Hastings rejection sampling MCMC to sample network partitions. If
bundled == True
, the half-edges incident on the same node that belong to the same group are moved together. All remaining parameters are passed to graph_tool.inference.blockmodel.BlockState.mcmc_sweep()
.

shrink
(self, B, **kwargs)[source]¶ Reduces the order of the current state by progressively merging groups, until only
B
are left. All remaining keyword arguments are passed to graph_tool.inference.blockmodel.BlockState.merge_sweep()
. This function leaves the current state untouched and returns instead a copy with the new partition.

draw
(self, **kwargs)[source]¶ Convenience wrapper to
graph_draw()
that draws the state of the graph as colors on the vertices and edges.

graph_tool.inference.overlap_blockmodel.
half_edge_graph
(g, b=None, B=None, rec=None)[source]¶ Generate a half-edge graph, where each half-edge is represented by a node, and an edge connects the half-edges like in the original graph.
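The construction can be sketched in plain Python with integer half-edge ids (a hypothetical representation, not graph-tool's):

```python
def half_edge_graph_sketch(edges):
    # Each endpoint of each edge becomes a half-edge node; the two
    # half-edges of an edge are joined, mirroring the original adjacency.
    he_vertex = []   # he_vertex[h] = original vertex of half-edge h
    he_edges = []
    for u, v in edges:
        a, b = len(he_vertex), len(he_vertex) + 1
        he_vertex += [u, v]
        he_edges.append((a, b))
    return he_vertex, he_edges
```

A path 0-1-2 thus becomes four half-edge nodes joined in two pairs.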

graph_tool.inference.overlap_blockmodel.
augmented_graph
(g, b, node_index, eweight=None)[source]¶ Generates an augmented graph from the half-edge graph
g
partitioned according to b
, where each half-edge belonging to a different group inside each node forms a new node.

graph_tool.inference.overlap_blockmodel.
get_block_edge_gradient
(g, be, cmap=None)[source]¶ Get edge gradients corresponding to the block membership at the endpoints of the edges given by the
be
edge property map. Parameters
 g
Graph
The graph.
 be
EdgePropertyMap
Vector-valued edge property map with the block membership at each endpoint.
 cmap
matplotlib.colors.Colormap
(optional, default:default_cm
) Color map used to construct the gradient.
 g
 Returns
 cp
EdgePropertyMap
A vector-valued edge property map containing a color gradient.
 cp

class
graph_tool.inference.layered_blockmodel.
LayeredBlockState
(g, ec, eweight=None, vweight=None, recs=[], rec_types=[], rec_params=[], b=None, B=None, clabel=None, pclabel=False, layers=False, deg_corr=True, overlap=False, **kwargs)[source]¶ Bases:
graph_tool.inference.overlap_blockmodel.OverlapBlockState
,graph_tool.inference.blockmodel.BlockState
The (possibly overlapping) block state of a given graph, where the edges are divided into discrete layers.
 Parameters
 g
Graph
Graph to be modelled.
 ec
EdgePropertyMap
Edge property map containing discrete edge covariates that will split the network into discrete layers.
 recs list of
EdgePropertyMap
instances (optional, default:[]
) List of real or discrete-valued edge covariates.
 rec_types list of edge covariate types (optional, default:
[]
) List of types of edge covariates. The possible types are:
"real-exponential"
, "real-normal"
, "discrete-geometric"
, "discrete-poisson"
or "discrete-binomial"
. rec_params list of
dict
(optional, default:[]
) Model hyperparameters for edge covariates. This should be a list of
dict
instances. See BlockState
for more details. eweight
EdgePropertyMap
(optional, default:None
) Edge multiplicities (for multigraphs or block graphs).
 vweight
VertexPropertyMap
(optional, default:None
) Vertex multiplicities (for block graphs).
 b
VertexPropertyMap
ornumpy.ndarray
(optional, default:None
) Initial block labels on the vertices or half-edges. If not supplied, it will be randomly sampled.
 B
int
(optional, default:None
) Number of blocks (or vertex groups). If not supplied it will be obtained from the parameter
b
. clabel
VertexPropertyMap
(optional, default:None
) Constraint labels on the vertices. If supplied, vertices with different label values will not be clustered in the same group.
 pclabel
VertexPropertyMap
(optional, default:None
) Partition constraint labels on the vertices. This has the same interpretation as
clabel
, but will be used to compute the partition description length. layers
bool
(optional, default:False
) If
layers == True
, the “independent layers” version of the model is used, instead of the “edge covariates” version. deg_corr
bool
(optional, default:True
) If
True
, the degree-corrected version of the blockmodel ensemble will be assumed, otherwise the traditional variant will be used. overlap
bool
(optional, default:False
) If
True
, the overlapping version of the model will be used.
 g

copy
(self, g=None, eweight=None, vweight=None, b=None, B=None, deg_corr=None, clabel=None, pclabel=None, bfield=None, overlap=None, layers=None, ec=None, **kwargs)[source]¶ Copies the block state. The parameters override the state properties, and have the same meaning as in the constructor.

get_block_state
(self, b=None, vweight=False, deg_corr=False, overlap=False, layers=None, **kwargs)[source]¶ Returns a
LayeredBlockState
corresponding to the block graph. The parameters have the same meaning as in the constructor.

get_edge_blocks
(self)[source]¶ Returns an edge property map which contains the block label pairs for each edge.

get_overlap_blocks
(self)[source]¶ Returns the mixed membership of each vertex.
 Returns
 bv
VertexPropertyMap
A vector-valued vertex property map containing the block memberships of each node.
 bc_in
VertexPropertyMap
The labelled in-degrees of each node, i.e. how many in-edges belong to each group, in the same order as the
bv
property above. bc_out
VertexPropertyMap
The labelled out-degrees of each node, i.e. how many out-edges belong to each group, in the same order as the
bv
property above. bc_total
VertexPropertyMap
The labelled total degrees of each node, i.e. how many incident edges belong to each group, in the same order as the
bv
property above.
 bv

get_nonoverlap_blocks
(self)[source]¶ Returns a scalar-valued vertex property map with the block mixture represented as a single number.

get_majority_blocks
(self)[source]¶ Returns a scalar-valued vertex property map with the majority block membership of each node.

entropy
(self, adjacency=True, dl=True, partition_dl=True, degree_dl=True, degree_dl_kind='distributed', edges_dl=True, dense=False, multigraph=True, deg_entropy=True, exact=True, **kwargs)[source]¶ Calculate the entropy associated with the current block partition. The meaning of the parameters is the same as in
graph_tool.inference.blockmodel.BlockState.entropy()
.

get_edges_prob
(self, missing, spurious=[], entropy_args={})[source]¶ Compute the joint logprobability of the missing and spurious edges given by
missing
and spurious
(a list of (source, target, layer)
tuples, or Edge()
instances), together with the observed edges. More precisely, the log-likelihood returned is
\[\ln \frac{P(\boldsymbol G + \delta \boldsymbol G | \boldsymbol b)}{P(\boldsymbol G | \boldsymbol b)}\]where \(\boldsymbol G + \delta \boldsymbol G\) is the modified graph (with missing edges added and spurious edges deleted).
The values in
entropy_args
are passed to graph_tool.inference.blockmodel.BlockState.entropy()
to calculate the logprobability.

mcmc_sweep
(self, bundled=False, **kwargs)[source]¶ Perform sweeps of a Metropolis-Hastings rejection sampling MCMC to sample network partitions. If
bundled == True
and the state is an overlapping one, the half-edges incident on the same node that belong to the same group are moved together. All remaining parameters are passed to graph_tool.inference.blockmodel.BlockState.mcmc_sweep()
.

shrink
(self, B, **kwargs)[source]¶ Reduces the order of the current state by progressively merging groups, until only
B
are left. All remaining keyword arguments are passed to graph_tool.inference.blockmodel.BlockState.shrink()
or graph_tool.inference.overlap_blockmodel.OverlapBlockState.shrink()
, as appropriate. This function leaves the current state untouched and returns instead a copy with the new partition.

draw
(self, **kwargs)[source]¶ Convenience function to draw the current state. All keyword arguments are passed to
graph_tool.inference.blockmodel.BlockState.draw()
or graph_tool.inference.overlap_blockmodel.OverlapBlockState.draw()
, as appropriate.

class
graph_tool.inference.nested_blockmodel.
NestedBlockState
(g, bs=None, base_type=<class 'graph_tool.inference.blockmodel.BlockState'>, state_args={}, hstate_args={}, hentropy_args={}, sampling=True, **kwargs)[source]¶ Bases:
object
The nested stochastic block model state of a given graph.
 Parameters
 g
Graph
Graph to be modeled.
 bs
list
ofVertexPropertyMap
ornumpy.ndarray
(optional, default:None
) Hierarchical node partition. If not provided it will correspond to a single-group hierarchy of length \(\lceil\log_2(N)\rceil\).
 base_type
type
(optional, default:BlockState
) State type for lowermost level (e.g.
BlockState
,OverlapBlockState
orLayeredBlockState
) hstate_args
dict
(optional, default: {}) Keyword arguments to be passed to the constructor of the higher-level states.
 hentropy_args
dict
(optional, default: {}) Keyword arguments to be passed to the
entropy()
method of the higher-level states. sampling
bool
(optional, default:True
) If
True
, the state will be properly prepared for MCMC sampling (as opposed to minimization). state_args
dict
(optional, default:{}
) Keyword arguments to be passed to base type constructor.
 **kwargskeyword arguments
Keyword arguments to be passed to base type constructor. The
state_args
parameter overrides this.
 g

copy
(self, g=None, bs=None, state_args=None, hstate_args=None, hentropy_args=None, sampling=None, **kwargs)[source]¶ Copies the block state. The parameters override the state properties, and have the same meaning as in the constructor.

get_bs
(self)[source]¶ Get hierarchy levels as a list of
numpy.ndarray
objects with the group memberships at each level.

get_levels
(self)[source]¶ Get hierarchy levels as a list of
BlockState
instances.

entropy
(self, **kwargs)[source]¶ Compute the entropy of the whole hierarchy.
The keyword arguments are passed to the
entropy()
method of the underlying state objects (e.g. graph_tool.inference.blockmodel.BlockState.entropy
, graph_tool.inference.overlap_blockmodel.OverlapBlockState.entropy
, or graph_tool.inference.layered_blockmodel.LayeredBlockState.entropy
).

remove_vertex
(self, v)[source]¶ Remove vertex
v
from its current group. This optionally accepts a list of vertices to remove.
Warning
This will leave the state in an inconsistent state before the vertex is returned to some other group, or if the same vertex is removed twice.

add_vertex
(self, v, r)[source]¶ Add vertex
v
to block r
. This optionally accepts a list of vertices and blocks to add.
Warning
This can leave the state in an inconsistent state if a vertex is added twice to the same group.

get_edges_prob
(self, missing, spurious=[], entropy_args={})[source]¶ Compute the joint logprobability of the missing and spurious edges given by
missing
and spurious
(a list of (source, target)
tuples, or Edge()
instances), together with the observed edges. More precisely, the log-likelihood returned is
\[\ln \frac{P(\boldsymbol G + \delta \boldsymbol G | \boldsymbol b)}{P(\boldsymbol G | \boldsymbol b)}\]where \(\boldsymbol G + \delta \boldsymbol G\) is the modified graph (with missing edges added and spurious edges deleted).
The values in
entropy_args
are passed to graph_tool.inference.blockmodel.BlockState.entropy()
to calculate the logprobability.

get_bstack
(self)[source]¶ Return the nested levels as individual graphs.
This returns a list of
Graph
instances representing the inferred hierarchy at each level. Each graph has two internal vertex and edge property maps named “count” which correspond to the vertex and edge counts at the lower level, respectively. Additionally, an internal vertex property map named “b” specifies the block partition.

project_level
(self, l)[source]¶ Project the partition at level
l
onto the lowest level, and return the corresponding state.

find_new_level
(self, l, bisection_args={}, B_min=None, B_max=None, b_min=None, b_max=None)[source]¶ Attempt to find a better network partition at level
l
, using bisection_minimize()
with arguments given by bisection_args
.

mcmc_sweep
(self, **kwargs)[source]¶ Perform
niter
sweeps of a Metropolis-Hastings acceptance-rejection MCMC to sample hierarchical network partitions. The arguments accepted are the same as in
graph_tool.inference.blockmodel.BlockState.mcmc_sweep()
. If the parameter
c
is a scalar, the values used at each level are c * 2 ** l
forl
in the range [0, L-1]
. Optionally, a list of values may be passed instead, which specifies the value of c[l]
to be used at each level.
Warning
This function performs
niter
sweeps at each hierarchical level once. This means that in order for the chain to equilibrate, we need to call this function several times, i.e. it is not enough to call it once with a large value of niter
.
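The per-level scaling of the parameter c described above can be written out directly (a trivial sketch of the rule, not library code):

```python
# Expand a scalar c into one value per hierarchy level: c * 2**l
# for l in the range [0, L-1].
def level_c_values(c, L):
    return [c * 2 ** l for l in range(L)]

print(level_c_values(0.5, 4))  # [0.5, 1.0, 2.0, 4.0]
```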

multiflip_mcmc_sweep
(self, **kwargs)[source]¶ Perform
niter
sweeps of a Metropolis-Hastings acceptance-rejection MCMC with multiple moves to sample hierarchical network partitions. The arguments accepted are the same as in
graph_tool.inference.blockmodel.BlockState.multiflip_mcmc_sweep()
. If the parameter
c
is a scalar, the values used at each level are c * 2 ** l
forl
in the range [0, L-1]
. Optionally, a list of values may be passed instead, which specifies the value of c[l]
to be used at each level.
Warning
This function performs
niter
sweeps at each hierarchical level once. This means that in order for the chain to equilibrate, we need to call this function several times, i.e. it is not enough to call it once with a large value of niter
.

gibbs_sweep
(self, **kwargs)[source]¶ Perform
niter
sweeps of a rejection-free Gibbs sampling MCMC to sample network partitions. The arguments accepted are the same as in
graph_tool.inference.blockmodel.BlockState.gibbs_sweep()
.
Warning
This function performs
niter
sweeps at each hierarchical level once. This means that in order for the chain to equilibrate, we need to call this function several times, i.e. it is not enough to call it once with a large value of niter
.

multicanonical_sweep
(self, **kwargs)[source]¶ Perform
niter
sweeps of a non-Markovian multicanonical sampling using the Wang-Landau algorithm. The arguments accepted are the same as in
graph_tool.inference.blockmodel.BlockState.multicanonical_sweep()
.

collect_partition_histogram
(self, h=None, update=1)[source]¶ Collect a histogram of partitions.
This should be called multiple times, e.g. after repeated runs of the
graph_tool.inference.nested_blockmodel.NestedBlockState.mcmc_sweep()
function. Parameters
 h
PartitionHist
(optional, default:None
) Partition histogram. If not provided, an empty histogram will be created.
 update float (optional, default:
1
) Each call increases the current count by the amount given by this parameter.
 h
 Returns
 h
PartitionHist
) Updated partition histogram.
 h

draw
(self, **kwargs)[source]¶ Convenience wrapper to
draw_hierarchy()
that draws the hierarchical state.

graph_tool.inference.nested_blockmodel.
hierarchy_minimize
(state, B_min=None, B_max=None, b_min=None, b_max=None, frozen_levels=None, bisection_args={}, epsilon=1e-08, verbose=False)[source]¶ Attempt to find a fit of the nested stochastic block model that minimizes the description length.
 Parameters
 state
NestedBlockState
The nested block state.
 B_min
int
(optional, default:None
) The minimum number of blocks.
 B_max
int
(optional, default:None
) The maximum number of blocks.
 b_min
VertexPropertyMap
(optional, default:None
) The partition to be used with the minimum number of blocks.
 b_max
VertexPropertyMap
(optional, default:None
) The partition to be used with the maximum number of blocks.
 frozen_levels sequence of
int
values (optional, default:None
) List of hierarchy levels that are kept constant during the minimization.
 bisection_args
dict
(optional, default:{}
) Arguments to be passed to
bisection_minimize()
. epsilon float (optional, default: 1e-8)
Only replace levels if the description length difference is above this threshold.
 verbose
bool
ortuple
(optional, default:False
) If
True
, progress information will be shown. Optionally, this accepts arguments of the type tuple
of the form (level, prefix)
where level
is a positive integer that specifies the level of detail, and prefix
is a string that is prepended to all output messages.
 state
 Returns
 min_state
NestedBlockState
Nested state with minimal description length.
 min_state
Notes
This algorithm moves along the hierarchical levels, attempting to replace, delete or insert partitions that minimize the description length, until no further progress is possible.
See [peixotohierarchical2014] for details on the algorithm.
This algorithm has a complexity of \(O(V \ln^2 V)\), where \(V\) is the number of nodes in the network.
References
 peixotohierarchical2014
Tiago P. Peixoto, “Hierarchical block structures and high-resolution model selection in large networks”, Phys. Rev. X 4, 011047 (2014), DOI: 10.1103/PhysRevX.4.011047, arXiv: 1310.4377.

graph_tool.inference.nested_blockmodel.
get_hierarchy_tree
(state, empty_branches=True)[source]¶ Obtain the nested hierarchical levels as a tree.
This transforms a
NestedBlockState
instance into a single Graph
instance containing the hierarchy tree. Parameters
 state
NestedBlockState
Nested block model state.
 empty_branches
bool
(optional, default:True
) If
empty_branches == False
, dangling branches at the upper layers will be pruned.
 state
 Returns
 tree
Graph
A directed graph, where vertices are blocks, and a directed edge points from an upper to a lower level in the hierarchy.
 label
VertexPropertyMap
A vertex property map containing the block label for each node.
 order
VertexPropertyMap
A vertex property map containing the relative ordering of each layer according to the total degree of the groups at the specific levels.
 tree

class
graph_tool.inference.uncertain_blockmodel.
UncertainBaseState
(g, nested=True, state_args={}, bstate=None, self_loops=False, init_empty=False)[source]¶ Bases:
object
Base state for uncertain network inference.

get_block_state
(self)[source]¶ Return the underlying block state, which can be either
BlockState
or NestedBlockState
.

entropy
(self, latent_edges=True, density=True, **kwargs)[source]¶ Return the entropy, i.e. negative log-likelihood.

mcmc_sweep
(self, r=0.5, multiflip=True, **kwargs)[source]¶ Perform sweeps of a Metropolis-Hastings acceptance-rejection MCMC to sample network partitions and latent edges. The parameter
r
controls the probability with which edge moves will be attempted, instead of partition moves. The remaining keyword parameters will be passed to mcmc_sweep()
or multiflip_mcmc_sweep()
, if multiflip=True
.

multiflip_mcmc_sweep
(self, **kwargs)[source]¶ Alias for
mcmc_sweep()
with multiflip=True
.

get_edge_prob
(self, u, v, entropy_args={}, epsilon=1e-08)[source]¶ Return conditional posterior log-probability of edge \((u,v)\).

get_edges_prob
(self, elist, entropy_args={}, epsilon=1e-08)[source]¶ Return conditional posterior log-probability of an edge list, with shape \((E,2)\).

collect_marginal
(self, g=None)[source]¶ Collect marginal inferred network during MCMC runs.
 Parameters
 g
Graph
(optional, default:None
) Previous marginal graph.
 g
 Returns
 g
Graph
New marginal graph, with internal edge
EdgePropertyMap
"eprob"
, containing the marginal probabilities for each edge.
 g
Notes
The posterior marginal probability of an edge \((i,j)\) is defined as
\[\pi_{ij} = \sum_{\boldsymbol A}A_{ij}P(\boldsymbol A\mid\boldsymbol D)\]where \(P(\boldsymbol A\mid\boldsymbol D)\) is the posterior probability given the data.
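Operationally, this marginal is just the fraction of MCMC samples in which the edge is present. A minimal sketch of that estimate from a list of sampled edge sets (collect_marginal() keeps the equivalent running average in the "eprob" property map):

```python
from collections import Counter

# Estimate pi_ij as the empirical frequency of each edge across samples.
def marginal_edge_probs(samples):
    """samples: list of edge sets, one per MCMC sweep."""
    counts = Counter()
    for edges in samples:
        counts.update(edges)
    n = len(samples)
    return {e: c / n for e, c in counts.items()}

samples = [{(0, 1), (1, 2)}, {(0, 1)}, {(0, 1), (0, 2)}, {(1, 2)}]
probs = marginal_edge_probs(samples)
print(probs[(0, 1)])  # 0.75
```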

collect_marginal_multigraph
(self, g=None)[source]¶ Collect marginal latent multigraph during MCMC runs.
 Parameters
 g
Graph
(optional, default:None
) Previous marginal multigraph.
 g
 Returns
 g
Graph
New marginal graph, with internal edge
EdgePropertyMap
"w"
and "wcount"
, containing the edge multiplicities and their respective counts.
 g
Notes
The mean posterior marginal multiplicity distribution of a multiedge \((i,j)\) is defined as
\[\pi_{ij}(w) = \sum_{\boldsymbol G}\delta_{w,G_{ij}}P(\boldsymbol G\mid\boldsymbol D)\]where \(P(\boldsymbol G\mid\boldsymbol D)\) is the posterior probability of a multigraph \(\boldsymbol G\) given the data.


class
graph_tool.inference.uncertain_blockmodel.
UncertainBlockState
(g, q, q_default=0.0, aE=nan, nested=True, state_args={}, bstate=None, self_loops=False, **kwargs)[source]¶ Bases:
graph_tool.inference.uncertain_blockmodel.UncertainBaseState
Inference state of an uncertain graph, using the stochastic block model as a prior.
 Parameters
 g
Graph
Measured graph.
 q
EdgePropertyMap
Edge probabilities in range \([0,1]\).
 q_default
float
(optional, default:0.
) Non-edge probability in range \([0,1]\).
 aE
float
(optional, default:NaN
) Expected total number of edges used in prior. If
NaN
, a flat prior will be used instead. nested
boolean
(optional, default:True
) If
True
, a NestedBlockState
will be used, otherwise BlockState
. state_args
dict
(optional, default:{}
) Arguments to be passed to
NestedBlockState
or BlockState
. bstate
NestedBlockState
or BlockState
(optional, default:None
) If passed, this will be used to initialize the block state directly.
 self_loops bool (optional, default:
False
) If
True
, it is assumed that the uncertain graph can contain self-loops.
 g
References
 peixotoreconstructing2018
Tiago P. Peixoto, “Reconstructing networks with unknown and heterogeneous errors”, Phys. Rev. X 8, 041011 (2018), DOI: 10.1103/PhysRevX.8.041011, arXiv: 1806.07956

class
graph_tool.inference.uncertain_blockmodel.
LatentMultigraphBlockState
(g, aE=nan, nested=True, state_args={}, bstate=None, self_loops=False, **kwargs)[source]¶ Bases:
graph_tool.inference.uncertain_blockmodel.UncertainBaseState
Inference state of an erased Poisson multigraph, using the stochastic block model as a prior.
 Parameters
 g
Graph
Measured graph.
 aE
float
(optional, default:NaN
) Expected total number of edges used in prior. If
NaN
, a flat prior will be used instead. nested
boolean
(optional, default:True
) If
True
, a NestedBlockState
will be used, otherwise BlockState
. state_args
dict
(optional, default:{}
) Arguments to be passed to
NestedBlockState
or BlockState
. bstate
NestedBlockState
or BlockState
(optional, default:None
) If passed, this will be used to initialize the block state directly.
 self_loops bool (optional, default:
False
) If
True
, it is assumed that the uncertain graph can contain self-loops.
 g
References
 peixotolatent2020
Tiago P. Peixoto, “Latent Poisson models for networks with heterogeneous density”, arXiv: 2002.07803

class
graph_tool.inference.uncertain_blockmodel.
MeasuredBlockState
(g, n, x, n_default=1, x_default=0, fn_params={'alpha': 1, 'beta': 1}, fp_params={'mu': 1, 'nu': 1}, aE=nan, nested=True, state_args={}, bstate=None, self_loops=False, **kwargs)[source]¶ Bases:
graph_tool.inference.uncertain_blockmodel.UncertainBaseState
Inference state of a measured graph, using the stochastic block model as a prior.
 Parameters
 g
Graph
Measured graph.
 n
EdgePropertyMap
Edge property map of type
int
, containing the total number of measurements for each edge. x
EdgePropertyMap
Edge property map of type
int
, containing the number of positive measurements for each edge. n_default
int
(optional, default:1
) Total number of measurements for each non-edge.
 x_default
int
(optional, default:0
) Total number of positive measurements for each non-edge.
 fn_params
dict
(optional, default:dict(alpha=1, beta=1)
) Beta distribution hyperparameters for the probability of missing edges (false negatives).
 fp_params
dict
(optional, default:dict(mu=1, nu=1)
) Beta distribution hyperparameters for the probability of spurious edges (false positives).
 aE
float
(optional, default:NaN
) Expected total number of edges used in prior. If
NaN
, a flat prior will be used instead. nested
boolean
(optional, default:True
) If
True
, a NestedBlockState
will be used, otherwise BlockState
. state_args
dict
(optional, default:{}
) Arguments to be passed to
NestedBlockState
or BlockState
. bstate
NestedBlockState
or BlockState
(optional, default:None
) If passed, this will be used to initialize the block state directly.
 self_loops bool (optional, default:
False
) If
True
, it is assumed that the uncertain graph can contain self-loops.
 g
References
 peixotoreconstructing2018
Tiago P. Peixoto, “Reconstructing networks with unknown and heterogeneous errors”, Phys. Rev. X 8, 041011 (2018), DOI: 10.1103/PhysRevX.8.041011, arXiv: 1806.07956
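The fn_params and fp_params above are hyperparameters of Beta priors, so the posterior over each error rate is again a Beta distribution (standard Beta-Binomial conjugacy). A hypothetical sketch of the resulting posterior mean, for intuition only, not graph-tool's internal computation:

```python
# Posterior mean of a Bernoulli rate under a Beta(alpha, beta) prior,
# after observing `successes` out of `trials` (conjugate update).
def beta_posterior_mean(alpha, beta, successes, trials):
    return (alpha + successes) / (alpha + beta + trials)

# An edge measured n=10 times with x=8 positive observations "misses"
# n - x = 2 times; with the default flat prior alpha = beta = 1 the
# posterior mean false-negative rate is:
p_miss = beta_posterior_mean(1, 1, 10 - 8, 10)
print(p_miss)  # 0.25
```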

class
graph_tool.inference.uncertain_blockmodel.
MixedMeasuredBlockState
(g, n, x, n_default=1, x_default=0, fn_params={'alpha': 1, 'beta': 10}, fp_params={'mu': 1, 'nu': 10}, aE=nan, nested=True, state_args={}, bstate=None, self_loops=False, **kwargs)[source]¶ Bases:
graph_tool.inference.uncertain_blockmodel.UncertainBaseState
Inference state of a measured graph with heterogeneous errors, using the stochastic block model as a prior.
 Parameters
 g
Graph
Measured graph.
 n
EdgePropertyMap
Edge property map of type
int
, containing the total number of measurements for each edge. x
EdgePropertyMap
Edge property map of type
int
, containing the number of positive measurements for each edge. n_default
int
(optional, default:1
) Total number of measurements for each non-edge.
 x_default
int
(optional, default:0
) Total number of positive measurements for each non-edge.
 fn_params
dict
(optional, default:dict(alpha=1, beta=10)
) Beta distribution hyperparameters for the probability of missing edges (false negatives).
 fp_params
dict
(optional, default:dict(mu=1, nu=10)
) Beta distribution hyperparameters for the probability of spurious edges (false positives).
 aE
float
(optional, default:NaN
) Expected total number of edges used in prior. If
NaN
, a flat prior will be used instead. nested
boolean
(optional, default:True
) If
True
, a NestedBlockState
will be used, otherwise BlockState
. state_args
dict
(optional, default:{}
) Arguments to be passed to
NestedBlockState
or BlockState
. bstate
NestedBlockState
or BlockState
(optional, default:None
) If passed, this will be used to initialize the block state directly.
 self_loops bool (optional, default:
False
) If
True
, it is assumed that the uncertain graph can contain self-loops.
 g
References
 peixotoreconstructing2018
Tiago P. Peixoto, “Reconstructing networks with unknown and heterogeneous errors”, Phys. Rev. X 8, 041011 (2018), DOI: 10.1103/PhysRevX.8.041011, arXiv: 1806.07956

mcmc_sweep
(self, r=0.5, h=0.1, hstep=1, multiflip=True, **kwargs)[source]¶ Perform sweeps of a Metropolis-Hastings acceptance-rejection MCMC to sample network partitions and latent edges. The parameter
r
controls the probability with which edge moves will be attempted, instead of partition moves. The parameter h
controls the relative probability with which hyperparameter moves will be attempted, and hstep
is the size of the step. The remaining keyword parameters will be passed to
mcmc_sweep()
ormultiflip_mcmc_sweep()
, ifmultiflip=True
.

class
graph_tool.inference.uncertain_blockmodel.
DynamicsBlockStateBase
(g, s, t, x=None, aE=nan, nested=True, state_args={}, bstate=None, self_loops=False, **kwargs)[source]¶ Bases:
graph_tool.inference.uncertain_blockmodel.UncertainBaseState
Base state for network reconstruction based on dynamical data, using the stochastic block model as a prior. This class is not meant to be instantiated directly, only indirectly via one of its subclasses.

get_edge_prob
(self, u, v, x, entropy_args={}, epsilon=1e-08)[source]¶ Return conditional posterior log-probability of edge \((u,v)\).

collect_marginal
(self, g=None)[source]¶ Collect marginal inferred network during MCMC runs.
 Parameters
 g
Graph
(optional, default:None
) Previous marginal graph.
 g
 Returns
 g
Graph
New marginal graph, with internal edge
EdgePropertyMap
"eprob"
, containing the marginal probabilities for each edge.
 g
Notes
The posterior marginal probability of an edge \((i,j)\) is defined as
\[\pi_{ij} = \sum_{\boldsymbol A}A_{ij}P(\boldsymbol A\mid\boldsymbol D)\]where \(P(\boldsymbol A\mid\boldsymbol D)\) is the posterior probability given the data.


class
graph_tool.inference.uncertain_blockmodel.
EpidemicsBlockState
(g, s, beta, r, r_v=None, global_beta=None, active=None, t=[], exposed=False, aE=nan, nested=True, state_args={}, bstate=None, self_loops=False, **kwargs)[source]¶ Bases:
graph_tool.inference.uncertain_blockmodel.DynamicsBlockStateBase
Inference state for network reconstruction based on epidemic dynamics, using the stochastic block model as a prior.
 Parameters
 g
Graph
Initial graph state.
 s
list
of VertexPropertyMap
Collection of time-series with node states over time. Each entry in this list must be a
VertexPropertyMap
with type vector<int>
containing the states of each node in each time step. A value of 1
means infected and 0
susceptible. Other values are allowed (e.g. for recovered), but their actual value is unimportant for reconstruction. If the parameter
t
below is given, each property map value for a given node should contain only the states for the same points in time given by that parameter. beta
float
orEdgePropertyMap
Initial value of the global or local transmission probability for each edge.
 r
float
Spontaneous infection probability.
 r_v
VertexPropertyMap
(optional, default:None
) If given, this will set the initial spontaneous infection probability for each node, and trigger the use of a model where this quantity is in principle different for each node.
 global_beta
float
(optional, default:None
) If provided, and
beta is None
, this will trigger the use of a model where all transmission probabilities on edges are the same, and given (initially) by this value. t
list
of VertexPropertyMap
(optional, default:[]
) If non-empty, this allows for a compressed representation of the time-series parameter
s
, corresponding only to points in time where the state of each node changes. Each entry in this list must be a VertexPropertyMap
with type vector<int>
containing the points in time where the state of each node changes. The corresponding states of the nodes at these times are given by parameter s
. active
list
of VertexPropertyMap
(optional, default:None
) If given, this specifies the points in time where each node is “active”, and prepared to change its state according to the state of its neighbors. Each entry in this list must be a
VertexPropertyMap
with type vector<int>
containing the states of each node in each time step. A value of 1
means active and 0
inactive. exposed
boolean
(optional, default:False
) If
True
, the data is supposed to come from a SEI, SEIR, etc. model, where a susceptible node (valued 0
) first transits to an exposed state (valued -1
) upon transmission, before transiting to the infective state (valued 1
). aE
float
(optional, default:NaN
) Expected total number of edges used in prior. If
NaN
, a flat prior will be used instead. nested
boolean
(optional, default:True
) If
True
, a NestedBlockState
will be used, otherwise BlockState
. state_args
dict
(optional, default:{}
) Arguments to be passed to
NestedBlockState
or BlockState
. bstate
NestedBlockState
or BlockState
(optional, default:None
) If passed, this will be used to initialize the block state directly.
 self_loops bool (optional, default:
False
) If
True
, it is assumed that the inferred graph can contain self-loops.
 g
References
 peixotonetwork2019
Tiago P. Peixoto, “Network reconstruction and community detection from dynamics”, Phys. Rev. Lett. 123, 128301 (2019), DOI: 10.1103/PhysRevLett.123.128301, arXiv: 1903.10833
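For intuition about the expected shape of the s input, here is a plain-Python sketch that generates such a time-series from a deterministic SI dynamics (transmission probability 1, for illustration only); in graph-tool the entries of s are VertexPropertyMaps of type vector<int>, not Python lists:

```python
# Generate per-node state sequences from SI dynamics on an adjacency list:
# 0 = susceptible, 1 = infected; one list of states per node.
def si_series(adj, seed, T):
    n = len(adj)
    state = [1 if v == seed else 0 for v in range(n)]
    series = [[] for _ in range(n)]
    for _ in range(T):
        for v in range(n):
            series[v].append(state[v])
        new = list(state)
        for v in range(n):
            if state[v] == 0 and any(state[u] == 1 for u in adj[v]):
                new[v] = 1  # infected by an infected neighbor
        state = new
    return series

adj = [[1], [0, 2], [1]]     # path graph 0 - 1 - 2, seed at node 0
print(si_series(adj, 0, 3))  # [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
```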

mcmc_sweep
(self, r=0.5, p=0.1, pstep=0.1, h=0.1, hstep=1, xstep=0.1, multiflip=True, **kwargs)[source]¶ Perform sweeps of a Metropolis-Hastings acceptance-rejection MCMC to sample network partitions and latent edges. The parameter
r
controls the probability with which edge moves will be attempted, instead of partition moves. The parameter h
controls the relative probability with which moves for the parameter r_v
will be attempted, and hstep
is the size of the step. The parameter p
controls the relative probability with which moves for the parameters global_beta
and r
will be attempted, and pstep
is the size of the step. The parameter xstep
determines the size of the attempted steps for the edge transmission probabilities. The remaining keyword parameters will be passed to
mcmc_sweep()
ormultiflip_mcmc_sweep()
, ifmultiflip=True
.

class
graph_tool.inference.uncertain_blockmodel.
IsingBaseBlockState
(g, s, beta, x=None, h=None, t=None, aE=nan, nested=True, state_args={}, bstate=None, self_loops=False, has_zero=False, **kwargs)[source]¶ Bases:
graph_tool.inference.uncertain_blockmodel.DynamicsBlockStateBase
Base state for network reconstruction based on the Ising model, using the stochastic block model as a prior. This class is not supposed to be instantiated directly. Instead one of its specialized subclasses must be used, which have the same signature:
IsingGlauberBlockState
,PseudoIsingBlockState
,CIsingGlauberBlockState
,PseudoCIsingBlockState
. Parameters
 g
Graph
Initial graph state.
 s
list
of VertexPropertyMap
or VertexPropertyMap
Collection of time-series with node states over time, or a single time-series. Each time-series must be a
VertexPropertyMap
with type vector<int>
containing the Ising states (-1
or +1
) of each node in each time step. If the parameter
t
below is given, each property map value for a given node should contain only the states for the same points in time given by that parameter. beta
float
Initial value of the global inverse temperature.
 x
EdgePropertyMap
(optional, default:None
) Initial value of the local coupling for each edge. If not given, a uniform value of
1
will be used. h
VertexPropertyMap
(optional, default:None
) If given, this will set the initial local fields of each node. Otherwise a value of
0
will be used. t
list
of VertexPropertyMap
(optional, default:[]
) If non-empty, this allows for a compressed representation of the time-series parameter
s
, corresponding only to points in time where the state of each node changes. Each entry in this list must be a VertexPropertyMap
with type vector<int>
containing the points in time where the state of each node changes. The corresponding states of the nodes at these times are given by parameter s
. aE
float
(optional, default:NaN
) Expected total number of edges used in prior. If
NaN
, a flat prior will be used instead. nested
boolean
(optional, default:True
) If
True
, a NestedBlockState
will be used, otherwise BlockState
. state_args
dict
(optional, default:{}
) Arguments to be passed to
NestedBlockState
or BlockState
. bstate
NestedBlockState
or BlockState
(optional, default:None
) If passed, this will be used to initialize the block state directly.
 self_loops bool (optional, default:
False
) If
True
, it is assumed that the inferred graph can contain self-loops. has_zero bool (optional, default:
False
) If
True
, the three-state “Ising” model with values {-1,0,1}
is used.
 g
References
 peixotonetwork2019
Tiago P. Peixoto, “Network reconstruction and community detection from dynamics”, Phys. Rev. Lett. 123, 128301 (2019), DOI: 10.1103/PhysRevLett.123.128301, arXiv: 1903.10833
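A plain-Python sketch of Glauber dynamics producing the kind of ±1 time-series described above (the heat-bath flip probability 1/(1+e^{-2βh}) with local field h; illustration of the data format, not graph-tool's generator):

```python
import math
import random

# One Glauber update: pick a node and resample its spin from the
# heat-bath distribution given its neighbors' current states.
def glauber_step(adj, s, beta, rng):
    v = rng.randrange(len(s))
    field = sum(s[u] for u in adj[v])                # local field
    p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
    s[v] = 1 if rng.random() < p_up else -1
    return s

rng = random.Random(42)
adj = [[1, 2], [0, 2], [0, 1]]   # triangle graph
s = [1, -1, 1]
for _ in range(100):
    glauber_step(adj, s, beta=1.0, rng=rng)
print(all(x in (-1, 1) for x in s))  # True
```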

mcmc_sweep
(self, r=0.5, p=0.1, pstep=0.1, h=0.1, hstep=1, xstep=0.1, multiflip=True, **kwargs)[source]¶ Perform sweeps of a Metropolis-Hastings acceptance-rejection MCMC to sample network partitions and latent edges. The parameter
r
controls the probability with which edge moves will be attempted, instead of partition moves. The parameter h
controls the relative probability with which moves for the parameter r_v
will be attempted, and hstep
is the size of the step. The parameter p
controls the relative probability with which moves for the parameters global_beta
and r
will be attempted, and pstep
is the size of the step. The parameter xstep
determines the size of the attempted steps for the edge coupling parameters. The remaining keyword parameters will be passed to
mcmc_sweep()
ormultiflip_mcmc_sweep()
, ifmultiflip=True
.

class
graph_tool.inference.uncertain_blockmodel.
IsingGlauberBlockState
(*args, **kwargs)[source]¶ Bases:
graph_tool.inference.uncertain_blockmodel.IsingBaseBlockState
State for network reconstruction based on the Glauber dynamics of the Ising model, using the stochastic block model as a prior.
See documentation for
IsingBaseBlockState
for details.

class
graph_tool.inference.uncertain_blockmodel.
CIsingGlauberBlockState
(*args, **kwargs)[source]¶ Bases:
graph_tool.inference.uncertain_blockmodel.IsingBaseBlockState
State for network reconstruction based on the Glauber dynamics of the continuous Ising model, using the stochastic block model as a prior.
See documentation for
IsingBaseBlockState
for details. Note that in this case the s
parameter should contain property maps of type vector<double>
, with values in the range \([-1,1]\).

class
graph_tool.inference.uncertain_blockmodel.
PseudoIsingBlockState
(*args, **kwargs)[source]¶ Bases:
graph_tool.inference.uncertain_blockmodel.IsingBaseBlockState
State for network reconstruction based on the equilibrium configurations of the Ising model, using the pseudo-likelihood approximation and the stochastic block model as a prior.
See documentation for
IsingBaseBlockState
for details. Note that in this model “time-series” should be interpreted as a set of uncorrelated samples, not a temporal sequence.

class
graph_tool.inference.uncertain_blockmodel.
PseudoCIsingBlockState
(*args, **kwargs)[source]¶ Bases:
graph_tool.inference.uncertain_blockmodel.IsingBaseBlockState
State for network reconstruction based on the equilibrium configurations of the continuous Ising model, using the pseudo-likelihood approximation and the stochastic block model as a prior.
See documentation for
IsingBaseBlockState
for details. Note that in this model “time-series” should be interpreted as a set of uncorrelated samples, not a temporal sequence. Additionally, the s
parameter should contain property maps of type vector<double>
, with values in the range \([-1,1]\).

graph_tool.inference.uncertain_blockmodel.
marginal_multigraph_entropy
(g, ecount)[source]¶ Compute the entropy of the marginal latent multigraph distribution.
 Parameters
 g
Graph
Marginal multigraph.
 ecount
EdgePropertyMap
Vector-valued edge property map containing the counts of edge multiplicities.
 g
 Returns
 eh
EdgePropertyMap
Marginal entropy of edge multiplicities.
 eh
Notes
The mean posterior marginal multiplicity distribution of a multiedge \((i,j)\) is defined as
\[\pi_{ij}(w) = \sum_{\boldsymbol G}\delta_{w,G_{ij}}P(\boldsymbol G\mid\boldsymbol D)\]where \(P(\boldsymbol G\mid\boldsymbol D)\) is the posterior probability of a multigraph \(\boldsymbol G\) given the data.
The corresponding entropy is therefore given (in nats) by
\[\mathcal{S}_{ij} = -\sum_w\pi_{ij}(w)\ln \pi_{ij}(w).\]
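The formula can be evaluated directly for a single edge, given its marginal multiplicity distribution (a sketch; marginal_multigraph_entropy() computes this for every edge from the supplied counts):

```python
import math

# Entropy (in nats) of one edge's marginal multiplicity distribution
# pi_ij(w), skipping zero-probability terms.
def multiplicity_entropy(pi):
    return -sum(p * math.log(p) for p in pi.values() if p > 0)

pi = {0: 0.5, 1: 0.25, 2: 0.25}   # example marginal distribution
S = multiplicity_entropy(pi)
print(round(S, 6))  # 1.039721  (= 1.5 * ln 2)
```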

graph_tool.inference.latent_multigraph.
latent_multigraph
(g, epsilon=1e-08, max_niter=0, verbose=False)[source]¶ Infer latent Poisson multigraph model given an “erased” simple graph.
 Parameters
 g
Graph
Graph to be used. This is expected to be a simple graph.
 epsilon
float
(optional, default:1e-8
) Convergence criterion.
 max_niter
int
(optional, default:0
) Maximum number of iterations allowed (if
0
, no maximum is assumed). verbose
boolean
(optional, default:False
) If
True
, display verbose information.
 g
 Returns
 u
Graph
Latent graph.
 w
EdgePropertyMap
Edge property map with inferred edge multiplicities.
 u
Notes
This implements the expectation-maximization algorithm described in [peixotolatent2020], which consists of iterating the following steps until convergence:
In the “expectation” step we obtain the marginal mean multiedge multiplicities via:
\[\begin{split}w_{ij} = \begin{cases} \frac{\theta_i\theta_j}{1-\mathrm{e}^{-\theta_i\theta_j}} & \text{ if } G_{ij} = 1,\\ \theta_i^2 & \text{ if } i = j,\\ 0 & \text{ otherwise.} \end{cases}\end{split}\]In the “maximization” step we use the current values of \(\boldsymbol w\) to update the values of \(\boldsymbol \theta\):
\[\theta_i = \frac{d_i}{\sqrt{\sum_jd_j}}, \quad\text{ with } d_i = \sum_jw_{ji}. \]
The equations above are adapted accordingly if the supplied graph is directed, where we have \(\theta_i\theta_j\to\theta_i^-\theta_j^+\), \(\theta_i^2\to\theta_i^-\theta_i^+\), and \(\theta_i^{\pm}=\frac{d_i^{\pm}}{\sqrt{\sum_jd_j^{\pm}}}\), with \(d^+_i = \sum_jw_{ji}\) and \(d^-_i = \sum_jw_{ij}\).
A single EM iteration takes time \(O(V + E)\). If enabled during compilation, this algorithm runs in parallel.
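To make the two steps concrete, here is a plain-Python transcription of one EM pass for an undirected graph given as an adjacency list, using the \(w_{ii}=\theta_i^2\) self-loop term exactly as stated above; a sketch of the update equations, not graph-tool's optimized implementation:

```python
import math

# One EM pass: expectation of the latent multiplicities w, then the
# theta update from the resulting expected degrees.
def em_step(adj, theta):
    n = len(theta)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        w[i][i] = theta[i] ** 2            # latent self-loop term
        for j in adj[i]:                   # observed edges G_ij = 1
            lam = theta[i] * theta[j]
            w[i][j] = lam / (1.0 - math.exp(-lam))
    d = [sum(row) for row in w]            # expected degrees d_i
    norm = math.sqrt(sum(d))
    return w, [di / norm for di in d]

adj = [[1], [0, 2], [1]]                   # path graph 0 - 1 - 2
deg = [len(a) for a in adj]
theta = [k / math.sqrt(sum(deg)) for k in deg]   # initial theta = [0.5, 1, 0.5]
w, theta = em_step(adj, theta)
print(round(w[0][1], 6))  # 1.270747
```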
References
 peixotolatent2020
Tiago P. Peixoto, “Latent Poisson models for networks with heterogeneous density”, arXiv: 2002.07803
Examples
>>> g = gt.collection.data["as-22july06"]
>>> gt.scalar_assortativity(g, "out")
(-0.198384..., 0.001338...)
>>> u, w = gt.latent_multigraph(g)
>>> gt.scalar_assortativity(u, "out", eweight=w)
(-0.048426..., 0.034526...)

graph_tool.inference.mcmc.
mcmc_equilibrate
(state, wait=1000, nbreaks=2, max_niter=inf, force_niter=None, epsilon=0, gibbs=False, multiflip=True, mcmc_args={}, entropy_args={}, history=False, callback=None, verbose=False)[source]¶ Equilibrate a MCMC with a given starting state.
 Parameters
 state Any state class (e.g.
BlockState
) Initial state. This state will be modified during the algorithm.
 wait
int
(optional, default:1000
) Number of iterations to wait for a record-breaking event.
 nbreaks
int
(optional, default:2
) Number of iteration intervals (of size
wait
) without record-breaking events necessary to stop the algorithm.
int
(optional, default:numpy.inf
) Maximum number of iterations.
 force_niter
int
(optional, default:None
) If given, will force the algorithm to run this exact number of iterations.
 epsilon
float
(optional, default:0
) Relative changes in entropy smaller than epsilon will not be considered as record-breaking.
 gibbs
bool
(optional, default:False
) If
True
, each step will call state.gibbs_sweep
instead of state.mcmc_sweep
. multiflip
bool
(optional, default:True
) If
True
, each step will call state.multiflip_mcmc_sweep
instead of state.mcmc_sweep
. mcmc_args
dict
(optional, default:{}
) Arguments to be passed to
state.mcmc_sweep
(or state.gibbs_sweep
). history
bool
(optional, default:False
) If
True
, a list of tuples of the form (nattempts, nmoves, entropy)
will be kept and returned, where entropy
is the current entropy and nmoves
is the number of vertices moved. callback
function
(optional, default:None
) If given, this function will be called after each iteration. The function must accept the current state as an argument, and its return value must be either None or a (possibly empty) list of values that will be appended to the history, if
history == True
. verbose
bool
ortuple
(optional, default:False
) If
True
, progress information will be shown. Optionally, this accepts arguments of the type tuple
of the form (level, prefix)
where level
is a positive integer that specifies the level of detail, and prefix
is a string that is prepended to all output messages.
 stateAny state class (e.g.
 Returns
 history list of tuples of the form
(nattempts, nmoves, entropy)
Summary of the MCMC run. This is returned only if
history == True
. entropy
float
Current entropy value after run. This is returned only if
history == False
. nattempts
int
Number of node move attempts.
 nmoves
int
Number of node moves.
 historylist of tuples of the form
Notes
The MCMC equilibration is attempted by keeping track of the maximum and minimum values, and waiting sufficiently long without a record-breaking event.
This function calls
state.mcmc_sweep
(or state.gibbs_sweep
) at each iteration (e.g. graph_tool.inference.blockmodel.BlockState.mcmc_sweep()
and graph_tool.inference.blockmodel.BlockState.gibbs_sweep()
), and keeps track of the value of state.entropy(**args)
with args
corresponding to mcmc_args["entropy_args"]
.
References
 peixotoefficient2014
Tiago P. Peixoto, “Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models”, Phys. Rev. E 89, 012804 (2014), DOI: 10.1103/PhysRevE.89.012804, arXiv: 1310.4378
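The record-breaking stopping criterion described in the notes can be sketched in a few lines, with a stand-in sweep function in place of state.mcmc_sweep (illustration only; mcmc_equilibrate also handles nbreaks, epsilon and callbacks):

```python
# Run sweeps until `wait` consecutive iterations pass without a
# record-breaking (new min or max) entropy value.
def equilibrate(sweep, wait):
    S_min = S_max = sweep()
    stalled = 0
    niter = 0
    while stalled < wait:
        S = sweep()
        niter += 1
        if S < S_min or S > S_max:
            S_min, S_max = min(S, S_min), max(S, S_max)
            stalled = 0          # record broken: reset the counter
        else:
            stalled += 1
    return niter

# A toy "entropy trajectory" that improves early, then plateaus:
traj = iter([10.0, 9.0, 8.0, 7.5, 7.6, 7.5, 7.6, 7.5, 7.6, 7.5, 7.6])
n = equilibrate(lambda: next(traj), wait=5)
print(n)  # 8
```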

graph_tool.inference.mcmc.mcmc_anneal(state, beta_range=(1.0, 10.0), niter=100, history=False, mcmc_equilibrate_args={}, verbose=False)[source]¶
Equilibrate an MCMC at a specified target temperature by performing simulated annealing.
Parameters
state : Any state class (e.g. BlockState)
    Initial state. This state will be modified during the algorithm.
beta_range : tuple of two floats (optional, default: (1., 10.))
    Inverse temperature range.
niter : int (optional, default: 100)
    Number of steps (in log-space) from the starting temperature to the final one.
history : bool (optional, default: False)
    If True, a list of tuples of the form (nattempts, nmoves, beta, entropy) will be kept and returned.
mcmc_equilibrate_args : dict (optional, default: {})
    Arguments to be passed to mcmc_equilibrate().
verbose : bool or tuple (optional, default: False)
    If True, progress information will be shown. Optionally, this accepts arguments of type tuple of the form (level, prefix), where level is a positive integer that specifies the level of detail, and prefix is a string that is prepended to all output messages.
Returns
history : list of tuples of the form (nattempts, nmoves, beta, entropy)
    Summary of the MCMC run. This is returned only if history == True.
entropy : float
    Current entropy value after the run. This is returned only if history == False.
nattempts : int
    Number of node move attempts.
nmoves : int
    Number of node moves.
Notes
This algorithm employs exponential cooling, where the value of beta is multiplied by a constant at each iteration, so that, starting from beta_range[0], the value beta_range[1] is reached after niter iterations.
At each iteration, the function mcmc_equilibrate() is called with the current value of beta (via the mcmc_args parameter).
References
 peixoto-efficient-2014
Tiago P. Peixoto, “Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models”, Phys. Rev. E 89, 012804 (2014), DOI: 10.1103/PhysRevE.89.012804, arXiv: 1310.4378
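The exponential cooling schedule described in the notes can be sketched as follows (a conceptual sketch; the helper name is illustrative, not part of graph-tool):

```python
# Exponential cooling: multiply beta by a constant factor at each step,
# so that beta_range[1] is reached exactly after niter iterations.
def cooling_schedule(beta_min, beta_max, niter):
    ratio = (beta_max / beta_min) ** (1.0 / niter)  # constant multiplier
    betas = [beta_min]
    for _ in range(niter):
        betas.append(betas[-1] * ratio)
    return betas
```

For instance, cooling_schedule(1.0, 10.0, 100) produces 101 values spaced evenly in log-space between the two endpoints.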

graph_tool.inference.mcmc.mcmc_multilevel(state, B, r=2, b_cache=None, anneal=False, mcmc_equilibrate_args={}, anneal_args={}, shrink_args={}, verbose=False)[source]¶
Equilibrate an MCMC from a starting state with a higher order, by performing successive agglomerative initializations and equilibrations until the desired order is reached, such that metastable states are avoided.
Parameters
state : Any state class (e.g. BlockState)
    Initial state. This state will not be modified during the algorithm.
B : int
    Desired model order (i.e. number of groups).
r : int (optional, default: 2)
    Greediness of agglomeration. At each iteration, the state order will be reduced by a factor r.
b_cache : dict (optional, default: None)
    If specified, this should be a dictionary with key-value pairs of the form (B, state) that contain pre-computed states of the specified order. This dictionary will be modified during the algorithm.
anneal : bool (optional, default: False)
    If True, the equilibration steps will use simulated annealing, by calling mcmc_anneal(), instead of mcmc_equilibrate().
mcmc_equilibrate_args : dict (optional, default: {})
    Arguments to be passed to mcmc_equilibrate().
mcmc_anneal_args : dict (optional, default: {})
    Arguments to be passed to mcmc_anneal().
shrink_args : dict (optional, default: {})
    Arguments to be passed to state.shrink (e.g. graph_tool.inference.blockmodel.BlockState.shrink()).
verbose : bool or tuple (optional, default: False)
    If True, progress information will be shown. Optionally, this accepts arguments of type tuple of the form (level, prefix), where level is a positive integer that specifies the level of detail, and prefix is a string that is prepended to all output messages.
Returns
state : The same type as parameter state
    This is the final state after the MCMC run.
Notes
This algorithm alternates between equilibrating the MCMC state and reducing the state order (via calls to state.shrink, e.g. graph_tool.inference.blockmodel.BlockState.shrink()). This greatly reduces the chances of getting trapped in metastable states if the starting point is far away from equilibrium, as discussed in [peixoto-efficient-2014].
References
 peixoto-efficient-2014
Tiago P. Peixoto, “Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models”, Phys. Rev. E 89, 012804 (2014), DOI: 10.1103/PhysRevE.89.012804, arXiv: 1310.4378
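The agglomerative schedule can be illustrated with a short sketch (conceptual only; the function name and the integer-division rule are illustrative assumptions, not graph-tool code):

```python
# Sketch of the order-reduction schedule: starting from B_start groups,
# divide the order by r at each step, never undershooting the target B.
def shrink_schedule(B_start, B, r=2):
    Bs = [B_start]
    while Bs[-1] > B:
        Bs.append(max(B, Bs[-1] // r))
    return Bs
```

For example, shrink_schedule(64, 5) yields [64, 32, 16, 8, 5]; each intermediate order is equilibrated before shrinking further, which is what avoids metastable traps.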

class graph_tool.inference.mcmc.MulticanonicalState(state, S_min, S_max, nbins=1000)[source]¶
Bases: object
The density of states of a multicanonical Monte Carlo algorithm. It is used by graph_tool.inference.mcmc.multicanonical_equilibrate().
Parameters
state : BlockState or OverlapBlockState or NestedBlockState
    Block state to be used.
S_min : float
    Minimum energy.
S_max : float
    Maximum energy.
nbins : int (optional, default: 1000)
    Number of bins.
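The role of the bins can be sketched as follows (an assumption about the bookkeeping, not graph-tool internals: entropy values are histogrammed over the energy range):

```python
# Sketch (illustrative): map an entropy value S to one of nbins histogram
# bins spanning [S_min, S_max], clamping values that fall outside.
def energy_bin(S, S_min, S_max, nbins=1000):
    i = int((S - S_min) / (S_max - S_min) * nbins)
    return min(max(i, 0), nbins - 1)
```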

graph_tool.inference.mcmc.multicanonical_equilibrate(m_state, f_range=(1.0, 1e-06), r=2, flatness=0.95, allow_gaps=True, callback=None, multicanonical_args={}, verbose=False)[source]¶
Equilibrate a multicanonical Monte Carlo sampling using the Wang-Landau algorithm.
Parameters
m_state : MulticanonicalState
    Initial multicanonical state, where the state density will be stored.
f_range : tuple of two floats (optional, default: (1., 1e-6))
    Range of density updates.
r : float (optional, default: 2.)
    Greediness of convergence. At each iteration, the density updates will be reduced by a factor r.
flatness : float (optional, default: .95)
    Sufficient histogram flatness threshold used to continue the algorithm.
allow_gaps : bool (optional, default: True)
    If True, gaps in the histogram (regions with zero count) will be ignored when computing the flatness.
callback : function (optional, default: None)
    If given, this function will be called after each iteration. The function must accept the current state and m_state as arguments.
multicanonical_args : dict (optional, default: {})
    Arguments to be passed to state.multicanonical_sweep (e.g. graph_tool.inference.blockmodel.BlockState.multicanonical_sweep()).
verbose : bool or tuple (optional, default: False)
    If True, progress information will be shown. Optionally, this accepts arguments of type tuple of the form (level, prefix), where level is a positive integer that specifies the level of detail, and prefix is a string that is prepended to all output messages.
Returns
niter : int
    Number of iterations required for convergence.
References
 wang-efficient-2001
Fugao Wang, D. P. Landau, “An efficient, multiple-range random walk algorithm to calculate the density of states”, Phys. Rev. Lett. 86, 2050 (2001), DOI: 10.1103/PhysRevLett.86.2050, arXiv: cond-mat/0011174
 belardinelli-wang-2007
R. E. Belardinelli, V. D. Pereyra, “Wang-Landau algorithm: A theoretical analysis of the saturation of the error”, J. Chem. Phys. 127, 184105 (2007), DOI: 10.1063/1.2803061, arXiv: cond-mat/0702414
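The flatness criterion can be sketched as follows (a conceptual sketch based on the standard Wang-Landau recipe [wangefficient2001]; the function and its exact form are assumptions, not graph-tool internals):

```python
# Sketch of the Wang-Landau flatness test: the histogram is "flat" when
# every bin count is at least `flatness` times the mean count; with
# allow_gaps=True, empty bins are excluded from the test.
def is_flat(hist, flatness=0.95, allow_gaps=True):
    counts = [h for h in hist if h > 0] if allow_gaps else list(hist)
    if not counts:
        return False
    mean = sum(counts) / len(counts)
    return min(counts) >= flatness * mean
```

Whenever the histogram passes this test, the density update f is reduced by the factor r and the histogram is reset, until f leaves f_range.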

class graph_tool.inference.mcmc.TemperingState(states, betas, idx=None, beta_dl=False)[source]¶
Bases: object
This class aggregates several state classes and corresponding inverse-temperature values to implement parallel tempering MCMC.
This is meant to be used with mcmc_equilibrate().
Parameters
states : list of state objects (e.g. BlockState)
    Initial parallel states.
betas : list of floats
    Inverse temperature values.

entropy
(self, **kwargs)[source]¶ Returns the sum of the entropy of the parallel states. All keyword arguments are propagated to the individual states’ entropy() method.

entropies
(self, **kwargs)[source]¶ Returns the entropies of the parallel states. All keyword arguments are propagated to the individual states’ entropy() method.

states_swap
(self, **kwargs)[source]¶ Perform a full sweep of the parallel states, where swaps are attempted. All relevant keyword arguments are propagated to the individual states’ entropy() method.

states_move
(self, sweep_algo, **kwargs)[source]¶ Perform a full sweep of the parallel states, where state moves are attempted by calling sweep_algo(state, beta=beta, **kwargs).

mcmc_sweep
(self, **kwargs)[source]¶ Perform a full MCMC sweep of the parallel states, where swaps or moves are chosen randomly. It accepts a keyword argument r (default: 0.1) specifying the relative probability with which state swaps are performed with respect to node moves. All remaining keyword arguments are propagated to the individual states’ mcmc_sweep() method.

multiflip_mcmc_sweep
(self, **kwargs)[source]¶ Perform a full MCMC sweep of the parallel states, where swaps or moves are chosen randomly. It accepts a keyword argument r (default: 0.1) specifying the relative probability with which state swaps are performed with respect to node moves. All remaining keyword arguments are propagated to the individual states’ multiflip_mcmc_sweep() method.

gibbs_sweep
(self, **kwargs)[source]¶ Perform a full Gibbs MCMC sweep of the parallel states, where swaps or moves are chosen randomly. It accepts a keyword argument r (default: 0.1) specifying the relative probability with which state swaps are performed with respect to node moves. All remaining keyword arguments are propagated to the individual states’ gibbs_sweep() method.
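The replica-swap step follows the standard parallel tempering Metropolis rule, which can be sketched as follows (conceptual only; the function and argument names are illustrative, with the entropy playing the role of the energy):

```python
import math
import random

# Sketch of the Metropolis acceptance rule for swapping two replicas with
# inverse temperatures beta_i, beta_j and entropies S_i, S_j: the swap is
# accepted with probability min(1, exp((beta_i - beta_j) * (S_i - S_j))).
def accept_swap(beta_i, S_i, beta_j, S_j, u=None):
    if u is None:
        u = random.random()  # uniform random number in [0, 1)
    a = min(1.0, math.exp((beta_i - beta_j) * (S_i - S_j)))
    return u < a
```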

graph_tool.inference.bisection.bisection_minimize(init_states, random_bisection=False, mcmc_multilevel_args={}, verbose=False)[source]¶
Find the best order (number of groups) given an initial set of states, by performing a one-dimensional minimization using a Fibonacci (or golden-section) search.
Parameters
init_states : Any state class (e.g. BlockState)
    List with two or more states that will be used to bracket the search.
random_bisection : bool (optional, default: False)
    If True, the bisection will be done randomly in the interval, instead of using the golden rule.
mcmc_multilevel_args : dict (optional, default: {})
    Arguments to be passed to mcmc_multilevel().
verbose : bool or tuple (optional, default: False)
    If True, progress information will be shown. Optionally, this accepts arguments of type tuple of the form (level, prefix), where level is a positive integer that specifies the level of detail, and prefix is a string that is prepended to all output messages.
Returns
min_state : Any state class (e.g. BlockState)
    State with minimal entropy in the interval.
Notes
This function calls mcmc_multilevel() to reduce the order of a given state, and uses the value of state.entropy(**args) for the minimization, with args obtained from mcmc_multilevel_args.
References
 golden-section-search
“Golden-section search”, https://en.wikipedia.org/wiki/Golden_section_search
 peixoto-efficient-2014
Tiago P. Peixoto, “Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models”, Phys. Rev. E 89, 012804 (2014), DOI: 10.1103/PhysRevE.89.012804, arXiv: 1310.4378
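The golden-section bracketing over the (integer) number of groups can be sketched as follows (a simplified sketch with a plain cost function standing in for the description length; not the actual implementation, which operates on state objects and caches intermediate results):

```python
# Sketch of a golden-section search over an integer interval [lo, hi],
# minimizing a unimodal cost f(B); a final linear scan over the remaining
# small bracket handles the rounding to integers.
def golden_section_min(f, lo, hi):
    phi = (5 ** 0.5 - 1) / 2  # inverse golden ratio, ~0.618
    a, b = lo, hi
    c = round(b - phi * (b - a))
    d = round(a + phi * (b - a))
    while b - a > 2 and c != d:
        if f(c) < f(d):
            b, d = d, c
            c = round(b - phi * (b - a))
        else:
            a, c = c, d
            d = round(a + phi * (b - a))
    return min(range(a, b + 1), key=f)
```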

graph_tool.inference.minimize.minimize_blockmodel_dl(g, B_min=None, B_max=None, b_min=None, b_max=None, deg_corr=True, overlap=False, nonoverlap_init=True, layers=False, state_args={}, bisection_args={}, mcmc_args={}, anneal_args={}, mcmc_equilibrate_args={}, shrink_args={}, mcmc_multilevel_args={}, verbose=False)[source]¶
Fit the stochastic block model, by minimizing its description length using an agglomerative heuristic.
Parameters
g : Graph
    The graph.
B_min : int (optional, default: None)
    The minimum number of blocks.
B_max : int (optional, default: None)
    The maximum number of blocks.
b_min : VertexPropertyMap (optional, default: None)
    The partition to be used with the minimum number of blocks.
b_max : VertexPropertyMap (optional, default: None)
    The partition to be used with the maximum number of blocks.
deg_corr : bool (optional, default: True)
    If True, the degree-corrected version of the model will be used.
overlap : bool (optional, default: False)
    If True, the overlapping version of the model will be used.
nonoverlap_init : bool (optional, default: True)
    If True, and overlap == True, a non-overlapping initial state will be used.
layers : bool (optional, default: False)
    If True, the layered version of the model will be used.
state_args : dict (optional, default: {})
    Arguments to be passed to the appropriate state constructor (e.g. BlockState, OverlapBlockState or LayeredBlockState).
bisection_args : dict (optional, default: {})
    Arguments to be passed to bisection_minimize().
mcmc_args : dict (optional, default: {})
    Arguments to be passed to graph_tool.inference.blockmodel.BlockState.mcmc_sweep(), graph_tool.inference.overlap_blockmodel.OverlapBlockState.mcmc_sweep() or graph_tool.inference.layered_blockmodel.LayeredBlockState.mcmc_sweep().
mcmc_equilibrate_args : dict (optional, default: {})
    Arguments to be passed to mcmc_equilibrate().
shrink_args : dict (optional, default: {})
    Arguments to be passed to graph_tool.inference.blockmodel.BlockState.shrink(), graph_tool.inference.overlap_blockmodel.OverlapBlockState.shrink() or graph_tool.inference.layered_blockmodel.LayeredBlockState.shrink().
mcmc_multilevel_args : dict (optional, default: {})
    Arguments to be passed to mcmc_multilevel().
verbose : bool or tuple (optional, default: False)
    If True, progress information will be shown. Optionally, this accepts arguments of type tuple of the form (level, prefix), where level is a positive integer that specifies the level of detail, and prefix is a string that is prepended to all output messages.
Returns
min_state : BlockState or OverlapBlockState or LayeredBlockState
    State with minimal description length.
Notes
This function is a convenience wrapper around bisection_minimize(). See [peixoto-efficient-2014] for details on the algorithm.
This algorithm has a complexity of \(O(V \ln^2 V)\), where \(V\) is the number of nodes in the network.
References
 peixoto-efficient-2014
Tiago P. Peixoto, “Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models”, Phys. Rev. E 89, 012804 (2014), DOI: 10.1103/PhysRevE.89.012804, arXiv: 1310.4378.
Examples
>>> g = gt.collection.data["polbooks"]
>>> state = gt.minimize_blockmodel_dl(g)
>>> state.draw(pos=g.vp["pos"], vertex_shape=state.get_blocks(),
...            output="polbooks_blocks_mdl.svg")
<...>
>>> g = gt.collection.data["polbooks"]
>>> state = gt.minimize_blockmodel_dl(g, overlap=True)
>>> state.draw(pos=g.vp["pos"], output="polbooks_overlap_blocks_mdl.svg")
<...>

graph_tool.inference.minimize.minimize_nested_blockmodel_dl(g, B_min=None, B_max=None, b_min=None, b_max=None, Bs=None, bs=None, deg_corr=True, overlap=False, nonoverlap_init=True, layers=False, hierarchy_minimize_args={}, state_args={}, bisection_args={}, mcmc_args={}, anneal_args={}, mcmc_equilibrate_args={}, shrink_args={}, mcmc_multilevel_args={}, verbose=False)[source]¶
Fit the nested stochastic block model, by minimizing its description length using an agglomerative heuristic.
Parameters
g : Graph
    The graph.
B_min : int (optional, default: None)
    The minimum number of blocks.
B_max : int (optional, default: None)
    The maximum number of blocks.
b_min : VertexPropertyMap (optional, default: None)
    The partition to be used with the minimum number of blocks.
b_max : VertexPropertyMap (optional, default: None)
    The partition to be used with the maximum number of blocks.
Bs : list of ints (optional, default: None)
    If provided, it will correspond to the sizes of the initial hierarchy.
bs : list of integer-valued numpy.ndarray objects (optional, default: None)
    If provided, it will correspond to the initial hierarchical partition.
deg_corr : bool (optional, default: True)
    If True, the degree-corrected version of the model will be used.
overlap : bool (optional, default: False)
    If True, the overlapping version of the model will be used.
nonoverlap_init : bool (optional, default: True)
    If True, and overlap == True, a non-overlapping initial state will be used.
layers : bool (optional, default: False)
    If True, the layered version of the model will be used.
hierarchy_minimize_args : dict (optional, default: {})
    Arguments to be passed to hierarchy_minimize().
state_args : dict (optional, default: {})
    Arguments to be passed to the appropriate state constructor (e.g. BlockState, OverlapBlockState or LayeredBlockState).
bisection_args : dict (optional, default: {})
    Arguments to be passed to bisection_minimize().
mcmc_args : dict (optional, default: {})
    Arguments to be passed to graph_tool.inference.blockmodel.BlockState.mcmc_sweep(), graph_tool.inference.overlap_blockmodel.OverlapBlockState.mcmc_sweep() or graph_tool.inference.layered_blockmodel.LayeredBlockState.mcmc_sweep().
mcmc_equilibrate_args : dict (optional, default: {})
    Arguments to be passed to mcmc_equilibrate().
shrink_args : dict (optional, default: {})
    Arguments to be passed to graph_tool.inference.blockmodel.BlockState.shrink(), graph_tool.inference.overlap_blockmodel.OverlapBlockState.shrink() or graph_tool.inference.layered_blockmodel.LayeredBlockState.shrink().
mcmc_multilevel_args : dict (optional, default: {})
    Arguments to be passed to mcmc_multilevel().
verbose : bool or tuple (optional, default: False)
    If True, progress information will be shown. Optionally, this accepts arguments of type tuple of the form (level, prefix), where level is a positive integer that specifies the level of detail, and prefix is a string that is prepended to all output messages.
Returns
min_state : NestedBlockState
    Nested state with minimal description length.
Notes
This function is a convenience wrapper around hierarchy_minimize(). See [peixoto-hierarchical-2014] for details on the algorithm.
This algorithm has a complexity of \(O(V \ln^2 V)\), where \(V\) is the number of nodes in the network.
References
 peixoto-hierarchical-2014
Tiago P. Peixoto, “Hierarchical block structures and high-resolution model selection in large networks”, Phys. Rev. X 4, 011047 (2014), DOI: 10.1103/PhysRevX.4.011047, arXiv: 1310.4377.
Examples
>>> g = gt.collection.data["power"]
>>> state = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)
>>> state.draw(output="power_nested_mdl.pdf")
(...)
>>> g = gt.collection.data["celegansneural"]
>>> state = gt.minimize_nested_blockmodel_dl(g, deg_corr=True, overlap=True)
>>> state.draw(output="celegans_nested_mdl_overlap.pdf")
(...)

class graph_tool.inference.partition_modes.PartitionModeState(bs, relabel=True, nested=False, converge=False, **kwargs)[source]¶
Bases: object
The random label model state for a set of labelled partitions, which attempts to align them with a common group labelling.
Parameters
bs : list of iterables
    List of partitions to be aligned. If nested=True, these should be hierarchical partitions, each composed of a list of partitions.
relabel : bool (optional, default: True)
    If True, an initial alignment of the partitions will be attempted during instantiation; otherwise they will be incorporated as they are.
nested : bool (optional, default: False)
    If True, the partitions will be assumed to be hierarchical.
converge : bool (optional, default: False)
    If True, the label alignment will be iterated until convergence upon initialization (otherwise replace_partitions() needs to be called repeatedly).
References
 peixoto-revealing-2020
Tiago P. Peixoto, “Revealing consensus and dissensus between network partitions”, arXiv: 2005.13977

copy
(self, bs=None)[source]¶ Copies the state. The parameters override the state properties, and have the same meaning as in the constructor.

add_partition
(self, b, relabel=True)[source]¶ Adds partition b to the ensemble, after relabelling it if relabel=True, and returns its index in the population.

virtual_add_partition
(self, b, relabel=True)[source]¶ Computes the entropy difference (negative log probability) if partition b were inserted into the ensemble, after relabelling it if relabel=True.

virtual_remove_partition
(self, b, relabel=True)[source]¶ Computes the entropy difference (negative log probability) if partition b were removed from the ensemble, after relabelling it if relabel=True.

replace_partitions
(self)[source]¶ Removes and re-adds every partition, after relabelling, and returns the entropy difference (negative log probability).

relabel_partition
(self, b)[source]¶ Returns a relabelled copy of partition b, according to its alignment with the ensemble.

align_mode
(self, mode)[source]¶ Relabel the entire ensemble to align with another ensemble given by mode, which should be an instance of PartitionModeState.

posterior_entropy
(self, MLE=True)[source]¶ Return the entropy of the random label model, using maximum likelihood estimates for the marginal node probabilities if MLE=True, otherwise using posterior mean estimates.

posterior_cdev
(self, MLE=True)[source]¶ Return the uncertainty of the mode in the range \([0,1]\), using maximum likelihood estimates for the marginal node probabilities if MLE=True, otherwise using posterior mean estimates.

posterior_lprob
(self, b, MLE=True)[source]¶ Return the log-probability of partition b, using maximum likelihood estimates for the marginal node probabilities if MLE=True, otherwise using posterior mean estimates.

get_coupled_state
(self)[source]¶ Return the instance of PartitionModeState representing the model at the upper hierarchical level.

get_marginal
(self, g)[source]¶ Return a VertexPropertyMap for Graph g, with vector<int> values containing the marginal group membership counts for each node.

get_max
(self, g)[source]¶ Return a VertexPropertyMap for Graph g, with int values containing the maximum marginal group membership for each node.

get_max_nested
(self)[source]¶ Return a hierarchical partition as a list of numpy.ndarray objects, containing the maximum marginal group membership for each node at every level.

class graph_tool.inference.partition_modes.ModeClusterState(bs, b=None, B=1, nested=False, relabel=True)[source]¶
Bases: object
The mixed random label model state for a set of labelled partitions, which attempts to align them inside clusters with a common group labelling.
Parameters
bs : list of iterables
    List of partitions to be aligned. If nested=True, these should be hierarchical partitions, each composed of a list of partitions.
b : iterable (optional, default: None)
    Initial cluster membership for every partition. If None, a random division into B groups will be used.
B : int (optional, default: 1)
    Number of groups for the initial division.
relabel : bool (optional, default: True)
    If True, an initial alignment of the partitions will be attempted during instantiation; otherwise they will be incorporated as they are.
nested : bool (optional, default: False)
    If True, the partitions will be assumed to be hierarchical.
References
 peixoto-revealing-2020
Tiago P. Peixoto, “Revealing consensus and dissensus between network partitions”, arXiv: 2005.13977

copy
(self, bs=None, b=None)[source]¶ Copies the state. The parameters override the state properties, and have the same meaning as in the constructor.

get_mode
(self, r)[source]¶ Return the mode in cluster r as an instance of PartitionModeState.

get_modes
(self, sort=True)[source]¶ Return the list of non-empty modes, as instances of PartitionModeState. If sort == True, the modes are returned in decreasing order with respect to their size.

get_Be
(self)[source]¶ Returns the effective number of clusters, defined as \(e^{H}\), with \(H=-\sum_r\frac{n_r}{N}\ln \frac{n_r}{N}\), where \(n_r\) is the number of partitions in cluster \(r\) and \(N\) is the total number of partitions.
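The definition of the effective number of clusters reads, in code (a short illustrative sketch, not the method's implementation):

```python
import math

# Effective number of clusters exp(H) from the cluster sizes n_r, where H
# is the entropy of the relative sizes n_r / N.
def effective_B(sizes):
    N = sum(sizes)
    H = -sum(n / N * math.log(n / N) for n in sizes)
    return math.exp(H)
```

Four equally sized clusters give an effective number of 4, while a single dominant cluster pushes the value toward 1.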

virtual_add_partition
(self, b, r, relabel=True)[source]¶ Computes the entropy difference (negative log probability) if partition b were inserted in cluster r, after relabelling it if relabel=True.

add_partition
(self, b, r, relabel=True)[source]¶ Add partition b to cluster r, after relabelling it if relabel=True.

classify_partition
(self, b, relabel=True, new_group=True, sample=False)[source]¶ Returns the cluster r to which partition b would belong, after relabelling it if relabel=True, according to the most probable assignment, or randomly sampled according to the relative probabilities if sample == True. If new_group == True, a new previously unoccupied group is also considered for the classification.

relabel
(self, epsilon=1e-06, maxiter=100)[source]¶ Attempt to align group labels between clusters via a greedy algorithm. The algorithm stops after maxiter iterations or when the entropy improvement lies below epsilon.

posterior_entropy
(self, MLE=True)[source]¶ Return the entropy of the random label model, using maximum likelihood estimates for the marginal node probabilities if MLE=True, otherwise using posterior mean estimates.

posterior_lprob
(self, r, b, MLE=True)[source]¶ Return the log-probability of partition b belonging to mode r, using maximum likelihood estimates for the marginal node probabilities if MLE=True, otherwise using posterior mean estimates.

replace_partitions
(self)[source]¶ For every cluster, removes and re-adds every partition, after relabelling, and returns the entropy difference (negative log probability).

sample_partition
(self, MLE=True)[source]¶ Samples a cluster label and partition from the inferred model, using maximum likelihood estimates for the marginal node probabilities if MLE=True, otherwise using posterior mean estimates.

sample_nested_partition
(self, MLE=True, fix_empty=True)[source]¶ Samples a cluster label and nested partition from the inferred model, using maximum likelihood estimates for the marginal node probabilities if MLE=True, otherwise using posterior mean estimates.

mcmc_sweep
(self, beta=inf, d=0.01, niter=1, allow_vacate=True, sequential=True, deterministic=False, verbose=False, **kwargs)[source]¶ Perform sweeps of a Metropolis-Hastings rejection sampling MCMC to sample network partitions. See graph_tool.inference.blockmodel.BlockState.mcmc_sweep() for the parameter documentation.

multiflip_mcmc_sweep
(self, beta=inf, psingle=None, psplit=1, pmerge=1, pmergesplit=1, d=0.01, gibbs_sweeps=10, niter=1, accept_stats=None, verbose=False, **kwargs)[source]¶ Perform sweeps of a merge-split Metropolis-Hastings rejection sampling MCMC to sample network partitions. See graph_tool.inference.blockmodel.BlockState.mcmc_sweep() for the parameter documentation.

graph_tool.inference.partition_modes.partition_overlap(x, y, norm=True)[source]¶ Returns the maximum overlap between partitions, according to an optimal label alignment.
Parameters
x : iterable of int values
    First partition.
y : iterable of int values
    Second partition.
norm : bool (optional, default: True)
    If True, the result will be normalized in the range \([0,1]\).
Returns
w : float or int
    Maximum overlap value.
Notes
The maximum overlap between partitions \(\boldsymbol x\) and \(\boldsymbol y\) is defined as
\[\omega(\boldsymbol x,\boldsymbol y) = \underset{\boldsymbol\mu}{\max}\sum_i\delta_{x_i,\mu(y_i)},\]
where \(\boldsymbol\mu\) is a bijective mapping between group labels. It corresponds to solving an instance of the maximum weighted bipartite matching problem, which is done with the Kuhn-Munkres algorithm [kuhn_hungarian_1955] [munkres_algorithms_1957].
If norm == True, the normalized value is returned:
\[\frac{\omega(\boldsymbol x,\boldsymbol y)}{N}\]
which lies in the unit interval \([0,1]\).
This algorithm runs in time \(O[N + (B_x+B_y)E_m]\), where \(N\) is the length of \(\boldsymbol x\) and \(\boldsymbol y\), \(B_x\) and \(B_y\) are the number of labels in partitions \(\boldsymbol x\) and \(\boldsymbol y\), respectively, and \(E_m \le B_xB_y\) is the number of nonzero entries in the contingency table between both partitions.
References
 peixoto-revealing-2020
Tiago P. Peixoto, “Revealing consensus and dissensus between network partitions”, arXiv: 2005.13977
 kuhn_hungarian_1955
H. W. Kuhn, “The Hungarian method for the assignment problem”, Naval Research Logistics Quarterly 2, 83–97 (1955), DOI: 10.1002/nav.3800020109
 munkres_algorithms_1957
James Munkres, “Algorithms for the Assignment and Transportation Problems”, Journal of the Society for Industrial and Applied Mathematics 5, 32–38 (1957), DOI: 10.1137/0105003
Examples
>>> x = np.random.randint(0, 10, 1000)
>>> y = np.random.randint(0, 10, 1000)
>>> gt.partition_overlap(x, y)
0.143
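For a handful of labels, the definition above can be checked by brute force over all label bijections (illustrative only; graph-tool solves the same matching with the Kuhn-Munkres algorithm, which scales to many labels):

```python
from collections import Counter
from itertools import permutations

# Maximum overlap by exhaustive search over injective label mappings,
# built on the contingency counts m[r, s]; feasible only for few labels.
def partition_overlap_bf(x, y, norm=True):
    m = Counter(zip(x, y))
    xs, ys = sorted(set(x)), sorted(set(y))
    pad = [None] * max(0, len(xs) - len(ys))  # unmatched x labels score 0
    best = max(sum(m[r, s] for r, s in zip(xs, p))
               for p in permutations(list(ys) + pad, len(xs)))
    return best / len(x) if norm else best
```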

graph_tool.inference.partition_modes.nested_partition_overlap(x, y, norm=True)[source]¶ Returns the hierarchical maximum overlap between nested partitions, according to an optimal recursive label alignment.
Parameters
x : iterable of iterables of int values
    First partition.
y : iterable of iterables of int values
    Second partition.
norm : bool (optional, default: True)
    If True, the result will be normalized in the range \([0,1]\).
Returns
w : float or int
    Maximum hierarchical overlap value.
Notes
The maximum overlap between nested partitions \(\bar{\boldsymbol x}\) and \(\bar{\boldsymbol y}\) is defined as
\[\omega(\bar{\boldsymbol x},\bar{\boldsymbol y}) = \sum_l\underset{\boldsymbol\mu_l}{\max}\sum_i\delta_{x_i^l,\mu_l(\tilde y_i^l)},\]
where \(\boldsymbol\mu_l\) is a bijective mapping between group labels at level \(l\), and \(\tilde y_i^l = y^l_{\mu_{l-1}(i)}\) are the nodes reordered according to the lower level. It corresponds to solving an instance of the maximum weighted bipartite matching problem for every hierarchical level, which is done with the Kuhn-Munkres algorithm [kuhn_hungarian_1955] [munkres_algorithms_1957].
If norm == True, the normalized value is returned:
\[1 - \frac{\left(\sum_lN_l\right) - \omega(\bar{\boldsymbol x}, \bar{\boldsymbol y})}{\sum_l\left(N_l - 1\right)}\]
which lies in the unit interval \([0,1]\), where \(N_l=\max(N_{{\boldsymbol x}^l}, N_{{\boldsymbol y}^l})\) is the number of nodes in level \(l\).
This algorithm runs in time \(O[\sum_l N_l + (B_x^l+B_y^l)E_m^l]\), where \(B_x^l\) and \(B_y^l\) are the number of labels in partitions \(\bar{\boldsymbol x}\) and \(\bar{\boldsymbol y}\) at level \(l\), respectively, and \(E_m^l \le B_x^lB_y^l\) is the number of nonzero entries in the contingency table between both partitions.
References
 peixoto-revealing-2020
Tiago P. Peixoto, “Revealing consensus and dissensus between network partitions”, arXiv: 2005.13977
 kuhn_hungarian_1955
H. W. Kuhn, “The Hungarian method for the assignment problem”, Naval Research Logistics Quarterly 2, 83–97 (1955), DOI: 10.1002/nav.3800020109
 munkres_algorithms_1957
James Munkres, “Algorithms for the Assignment and Transportation Problems”, Journal of the Society for Industrial and Applied Mathematics 5, 32–38 (1957), DOI: 10.1137/0105003
Examples
>>> x = [np.random.randint(0, 100, 1000), np.random.randint(0, 10, 100), np.random.randint(0, 3, 10)]
>>> y = [np.random.randint(0, 100, 1000), np.random.randint(0, 10, 100), np.random.randint(0, 3, 10)]
>>> gt.nested_partition_overlap(x, y)
0.150858...

graph_tool.inference.partition_modes.contingency_graph(x, y)[source]¶ Returns the contingency graph between both partitions.
Parameters
x : iterable of int values
    First partition.
y : iterable of int values
    Second partition.
Returns
g : Graph
    Contingency graph, containing an internal edge property map mrs with the weights, an internal vertex property map label with the label values, and an internal boolean vertex property map partition indicating the partition membership.
Notes
The contingency graph is a bipartite graph with the labels of \(\boldsymbol x\) and \(\boldsymbol y\) as vertices, and edge weights given by
\[m_{rs} = \sum_i\delta_{x_i,r}\delta_{y_i,s}.\]
This algorithm runs in time \(O(N)\), where \(N\) is the length of \(\boldsymbol x\) and \(\boldsymbol y\).
Examples
>>> x = np.random.randint(0, 10, 1000)
>>> y = np.random.randint(0, 10, 1000)
>>> g = gt.contingency_graph(x, y)
>>> g.ep.mrs.a
PropertyArray([ 8,  6,  8, 15, 15, 14, 11, 13,  8,  9, 16,  6,  5, 11,
                8, 15,  6,  8,  9, 12, 11,  8, 13,  6, 10, 14, 12, 14,
               15, 18, 13, 15, 10, 12, 13,  6, 12, 13, 15,  9, 11, 11,
                5,  7, 11,  6,  8, 15, 15, 14,  8,  8,  7, 13, 11, 11,
                8, 11,  9, 11,  9, 16, 13, 12,  8, 16,  6, 10, 15, 14,
                4,  4,  7, 12, 11,  8,  6, 16, 11, 13,  3,  5, 13,  9,
               11,  4,  4, 12,  7,  5,  7, 10,  6,  8,  6,  7, 10,  7,
               11,  2], dtype=int32)
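The same counts can be produced without building a graph, e.g. as a plain dictionary keyed by label pairs (an illustrative sketch of the definition, not graph-tool code):

```python
from collections import Counter

# Contingency counts m_rs = number of items with label r in x and s in y.
def contingency_counts(x, y):
    return Counter(zip(x, y))
```

Missing pairs implicitly count as zero, which mirrors the absent edges of the contingency graph.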

graph_tool.inference.partition_modes.shuffle_partition_labels(x)[source]¶
Returns a copy of partition x, with the group labels randomly shuffled.
 Parameters
  x : iterable of int values
      Partition.
 Returns
  y : numpy.ndarray
      Partition with shuffled labels.
Examples
>>> x = [0, 0, 0, 1, 1, 1, 2, 2, 2]
>>> gt.shuffle_partition_labels(x)
array([0, 0, 0, 2, 2, 2, 1, 1, 1], dtype=int32)

graph_tool.inference.partition_modes.shuffle_nested_partition_labels(x)[source]¶
Returns a copy of nested partition x, with the group labels randomly shuffled.
 Parameters
  x : iterable of iterables of int values
      Nested partition.
 Returns
  y : list of numpy.ndarray
      Nested partition with shuffled labels.
Examples
>>> x = [[0, 0, 0, 1, 1, 1, 2, 2, 2], [0, 0, 1], [1, 0]]
>>> gt.shuffle_nested_partition_labels(x)
[array([1, 1, 1, 0, 0, 0, 2, 2, 2], dtype=int32), array([0, 0, 1], dtype=int32), array([0, 1], dtype=int32)]

graph_tool.inference.partition_modes.order_partition_labels(x)[source]¶
Returns a copy of partition x, with the group labels ordered decreasingly according to group size.
 Parameters
  x : iterable of int values
      Partition.
 Returns
  y : numpy.ndarray
      Partition with ordered labels.
Examples
>>> x = [0, 2, 2, 1, 1, 1, 2, 2, 2]
>>> gt.order_partition_labels(x)
array([2, 0, 0, 1, 1, 1, 0, 0, 0], dtype=int32)

graph_tool.inference.partition_modes.order_nested_partition_labels(x)[source]¶
Returns a copy of nested partition x, with the group labels ordered decreasingly according to group size at each level.
 Parameters
  x : iterable of iterables of int values
      Nested partition.
 Returns
  y : list of numpy.ndarray
      Nested partition with ordered labels.
Examples
>>> x = [[0, 2, 2, 1, 1, 1, 2, 2, 2], [1, 1, 0], [1, 1]]
>>> gt.order_nested_partition_labels(x)
[array([2, 0, 0, 1, 1, 1, 0, 0, 0], dtype=int32), array([1, 0, 0], dtype=int32), array([0, 0], dtype=int32)]

graph_tool.inference.partition_modes.align_partition_labels(x, y)[source]¶
Returns a copy of partition x, with the group labels aligned so as to maximize the overlap with y.
 Parameters
  x : iterable of int values
      Partition to be relabelled.
  y : iterable of int values
      Reference partition.
 Returns
  y : numpy.ndarray
      Partition with aligned labels.
Notes
This algorithm runs in time \(O[N + (B_x+B_y)E_m]\) where \(N\) is the length of \(\boldsymbol x\) and \(\boldsymbol y\), \(B_x\) and \(B_y\) are the number of labels in partitions \(\boldsymbol x\) and \(\boldsymbol y\), respectively, and \(E_m \le B_xB_y\) is the number of nonzero entries in the contingency table between both partitions.
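Conceptually, the alignment is a maximum-weight bipartite matching on the contingency table, which the references below solve with the Hungarian method. The following sketch uses scipy's linear_sum_assignment as a stand-in for that step; it is an illustration of the idea, not graph-tool's actual implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_labels(x, y):
    """Relabel partition x so as to maximize overlap with y (illustrative)."""
    x, y = np.asarray(x), np.asarray(y)
    B = max(x.max(), y.max()) + 1
    m = np.zeros((B, B), dtype=int)         # contingency table m_rs
    np.add.at(m, (x, y), 1)
    rows, cols = linear_sum_assignment(-m)  # negate to maximize total overlap
    mu = np.arange(B)
    mu[rows] = cols                         # bijective label mapping
    return mu[x]

x = [0, 0, 1, 1, 2, 2]
y = [2, 2, 0, 0, 1, 1]
print(align_labels(x, y))  # → [2 2 0 0 1 1]
```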
References
 peixotorevealing2020
Tiago P. Peixoto, “Revealing consensus and dissensus between network partitions”, arXiv: 2005.13977
Examples
>>> x = [0, 2, 2, 1, 1, 1, 2, 3, 2]
>>> y = gt.shuffle_partition_labels(x)
>>> print(y)
[3 0 0 1 1 1 0 2 0]
>>> gt.align_partition_labels(y, x)
array([0, 2, 2, 1, 1, 1, 2, 3, 2], dtype=int32)

graph_tool.inference.partition_modes.align_nested_partition_labels(x, y)[source]¶
Returns a copy of nested partition x, with the group labels aligned so as to maximize the overlap with y.
 Parameters
  x : iterable of iterables of int values
      Nested partition to be relabelled.
  y : iterable of iterables of int values
      Reference nested partition.
 Returns
  y : list of numpy.ndarray
      Nested partition with aligned labels.
Notes
This algorithm runs in time \(O[\sum_l N_l + (B_x^l+B_y^l)E_m^l]\) where \(B_x^l\) and \(B_y^l\) are the number of labels in partitions \(\bar{\boldsymbol x}\) and \(\bar{\boldsymbol y}\) at level \(l\), respectively, and \(E_m^l \le B_x^lB_y^l\) is the number of nonzero entries in the contingency table between both partitions.
Examples
>>> x = [[0, 2, 2, 1, 1, 1, 2, 3, 2], [1, 0, 1, 0], [0, 0]]
>>> y = gt.shuffle_nested_partition_labels(x)
>>> print(y)
[array([1, 3, 3, 2, 2, 2, 3, 0, 3], dtype=int32), array([1, 0, 1, 0], dtype=int32), array([0, 0], dtype=int32)]
>>> gt.align_nested_partition_labels(y, x)
[array([0, 2, 2, 1, 1, 1, 2, 3, 2], dtype=int32), array([1, 0, 1, 0], dtype=int32), array([0, 0], dtype=int32)]

graph_tool.inference.partition_modes.partition_overlap_center(bs, init=None, relabel_bs=False)[source]¶
Find a partition with a maximal overlap to all items of the list of partitions given.
 Parameters
  bs : list of iterables of int values
      List of partitions.
  init : iterable of int values (optional, default: None)
      If given, it will determine the initial partition.
  relabel_bs : bool (optional, default: False)
      If True, the given list of partitions will be updated with relabelled values.
 Returns
  c : numpy.ndarray
      Partition containing the overlap consensus.
  r : float
      Uncertainty in the range \([0,1]\).
Notes
This algorithm obtains a partition \(\hat{\boldsymbol b}\) that has a maximal sum of overlaps with all partitions given in bs. It is obtained by performing the double maximization:
\[\begin{split}\begin{aligned} \hat b_i &= \underset{r}{\operatorname{argmax}}\;\sum_m \delta_{\mu_m(b^m_i), r}\\ \boldsymbol\mu_m &= \underset{\boldsymbol\mu}{\operatorname{argmax}} \sum_r m_{r,\mu(r)}^{(m)}, \end{aligned}\end{split}\]
where \(\boldsymbol\mu\) is a bijective mapping between group labels, and \(m_{rs}^{(m)}\) is the contingency table between \(\hat{\boldsymbol b}\) and \(\boldsymbol b^{(m)}\). This algorithm simply iterates the above equations until no further improvement is possible.
The uncertainty is given by:
\[r = 1 - \frac{1}{NM} \sum_i \sum_m \delta_{\mu_m(b^m_i), \hat b_i}\]
This algorithm runs in time \(O[M(N + B^3)]\) where \(M\) is the number of partitions, \(N\) is the length of the partitions and \(B\) is the number of labels used.
If enabled during compilation, this algorithm runs in parallel.
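The double maximization above can be sketched in a few lines: align each partition to the current estimate with a bipartite matching (here via scipy's linear_sum_assignment, used as an illustrative stand-in for graph-tool's internals), take a majority vote per node, and repeat until nothing changes:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def overlap_center_sketch(bs, max_iter=100):
    """Iterate alignment + majority vote to estimate the consensus partition."""
    bs = [np.asarray(b) for b in bs]
    B = max(int(b.max()) for b in bs) + 1
    N = len(bs[0])
    bhat = bs[0].copy()
    for _ in range(max_iter):
        votes = np.zeros((N, B), dtype=int)
        for b in bs:
            m = np.zeros((B, B), dtype=int)
            np.add.at(m, (bhat, b), 1)              # contingency with current center
            rows, cols = linear_sum_assignment(-m)  # mu_m: maximize overlap
            mu = np.empty(B, dtype=int)
            mu[cols] = rows                         # map b's labels onto bhat's
            np.add.at(votes, (np.arange(N), mu[b]), 1)
        new = votes.argmax(axis=1)                  # majority vote per node
        if np.array_equal(new, bhat):
            break
        bhat = new
    return bhat

bs = [[0, 0, 1, 1, 2], [2, 2, 0, 0, 1], [0, 0, 1, 1, 1]]
print(overlap_center_sketch(bs))  # → [0 0 1 1 2]
```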
References
 peixotorevealing2020
Tiago P. Peixoto, “Revealing consensus and dissensus between network partitions”, arXiv: 2005.13977
Examples
>>> x = [5, 5, 2, 0, 1, 0, 1, 0, 0, 0, 0]
>>> bs = []
>>> for m in range(100):
...     y = np.array(x)
...     y[np.random.randint(len(y))] = np.random.randint(5)
...     bs.append(y)
>>> bs[:3]
[array([5, 5, 2, 0, 1, 2, 1, 0, 0, 0, 0]), array([1, 5, 2, 0, 1, 0, 1, 0, 0, 0, 0]), array([5, 5, 2, 0, 1, 0, 1, 0, 4, 0, 0])]
>>> c, r = gt.partition_overlap_center(bs)
>>> print(c, r)
[1 1 2 0 3 0 3 0 0 0 0] 0.07454545...
>>> gt.align_partition_labels(c, x)
array([5, 5, 2, 0, 1, 0, 1, 0, 0, 0, 0], dtype=int32)

graph_tool.inference.partition_modes.nested_partition_overlap_center(bs, init=None, return_bs=False)[source]¶
Find a nested partition with a maximal overlap to all items of the list of nested partitions given.
 Parameters
  bs : list of lists of iterables of int values
      List of nested partitions.
  init : iterable of iterables of int values (optional, default: None)
      If given, it will determine the initial nested partition.
  return_bs : bool (optional, default: False)
      If True, an updated list of nested partitions will be returned with relabelled values.
 Returns
  c : list of numpy.ndarray
      Nested partition containing the overlap consensus.
  r : float
      Uncertainty in the range \([0,1]\).
  bs : list of lists of numpy.ndarray
      List of relabelled nested partitions.
Notes
This algorithm obtains a nested partition \(\hat{\bar{\boldsymbol b}}\) that has a maximal sum of overlaps with all nested partitions given in bs. It is obtained by performing the double maximization:
\[\begin{split}\begin{aligned} \hat b_i^l &= \underset{r}{\operatorname{argmax}}\;\sum_m \delta_{\mu_m^l(b^{l,m}_i), r}\\ \boldsymbol\mu_m^l &= \underset{\boldsymbol\mu}{\operatorname{argmax}} \sum_r m_{r,\mu(r)}^{(l,m)}, \end{aligned}\end{split}\]
where \(\boldsymbol\mu\) is a bijective mapping between group labels, and \(m_{rs}^{(l,m)}\) is the contingency table between \(\hat{\boldsymbol b}_l\) and \(\boldsymbol b^{(m)}_l\). This algorithm simply iterates the above equations until no further improvement is possible.
The uncertainty is given by:
\[r = 1 - \frac{1}{NL}\sum_l\frac{N_l-1}{N_l}\sum_i\frac{1}{M}\sum_m \delta_{\mu_m(b^{l,m}_i), \hat b_i^l}.\]
This algorithm runs in time \(O[M\sum_l(N_l + B_l^3)]\) where \(M\) is the number of partitions, \(N_l\) is the length of the partitions and \(B_l\) is the number of labels used, at level \(l\).
If enabled during compilation, this algorithm runs in parallel.
References
 peixotorevealing2020
Tiago P. Peixoto, “Revealing consensus and dissensus between network partitions”, arXiv: 2005.13977
Examples
>>> x = [[5, 5, 2, 0, 1, 0, 1, 0, 0, 0, 0], [0, 1, 0, 1, 1, 1]]
>>> bs = []
>>> for m in range(100):
...     y = [np.array(xl) for xl in x]
...     y[0][np.random.randint(len(y[0]))] = np.random.randint(5)
...     y[1][np.random.randint(len(y[1]))] = np.random.randint(2)
...     bs.append(y)
>>> bs[:3]
[[array([5, 5, 2, 0, 1, 0, 3, 0, 0, 0, 0]), array([0, 1, 1, 1, 1, 1])], [array([5, 5, 2, 0, 0, 0, 1, 0, 0, 0, 0]), array([0, 0, 0, 1, 1, 1])], [array([1, 5, 2, 0, 1, 0, 1, 0, 0, 0, 0]), array([0, 1, 0, 1, 1, 1])]]
>>> c, r = gt.nested_partition_overlap_center(bs)
>>> print(c, r)
[array([1, 1, 2, 0, 3, 0, 3, 0, 0, 0, 0], dtype=int32), array([0, 1, 0, 1, 1], dtype=int32)] 0.084492...
>>> gt.align_nested_partition_labels(c, x)
[array([5, 5, 2, 0, 1, 0, 1, 0, 0, 0, 0], dtype=int32), array([0, 1, 0, 1, 1, 1], dtype=int32)]

graph_tool.inference.partition_modes.nested_partition_clear_null(x)[source]¶
Returns a copy of nested partition x where the null values -1 are replaced with 0.
 Parameters
  x : iterable of iterables of int values
      Nested partition.
 Returns
  y : list of numpy.ndarray
      Nested partition with null values removed.
Notes
This is useful to pass hierarchical partitions to NestedBlockState.
Examples
>>> x = [[5, 5, 2, 0, 1, 0, 1, 0, 0, 0, 0], [0, 1, 0, 1, 1, 1]]
>>> gt.nested_partition_clear_null(x)
[array([5, 5, 2, 0, 1, 0, 1, 0, 0, 0, 0], dtype=int32), array([0, 1, 0, 0, 0, 1], dtype=int32)]

class graph_tool.inference.partition_centroid.PartitionCentroidState(bs, b=None, RMI=False)[source]¶
Bases: object
Obtain the center of a set of partitions, according to the variation of information metric or the reduced mutual information.
 Parameters
  bs : iterable of iterables of int
      List of partitions.
  b : list or numpy.ndarray (optional, default: None)
      Initial partition. If not supplied, a partition into a single group will be used.
  RMI : bool (optional, default: False)
      If True, the reduced mutual information will be used, otherwise the variation of information metric will be used instead.

copy
(self, bs=None, b=None, RMI=None)[source]¶ Copies the state. The parameters override the state properties, and have the same meaning as in the constructor.

get_Be
(self)[source]¶ Returns the effective number of blocks, defined as \(e^{H}\), with \(H=-\sum_r\frac{n_r}{N}\ln \frac{n_r}{N}\), where \(n_r\) is the number of nodes in group \(r\).

mcmc_sweep
(self, beta=1.0, d=0.01, niter=1, allow_vacate=True, sequential=True, deterministic=False, verbose=False, **kwargs)[source]¶ Perform sweeps of a Metropolis-Hastings rejection sampling MCMC to sample network partitions. See
graph_tool.inference.blockmodel.BlockState.mcmc_sweep()
for the parameter documentation.

multiflip_mcmc_sweep
(self, beta=1.0, psingle=None, psplit=1, pmerge=1, pmergesplit=1, d=0.01, gibbs_sweeps=10, niter=1, accept_stats=None, verbose=False, **kwargs)[source]¶ Perform sweeps of a merge-split Metropolis-Hastings rejection sampling MCMC to sample network partitions. See
graph_tool.inference.blockmodel.BlockState.mcmc_sweep()
for the parameter documentation.

graph_tool.inference.partition_centroid.variation_information(x, y, norm=False)[source]¶
Returns the variation of information between two partitions.
 Parameters
  x : iterable of int values
      First partition.
  y : iterable of int values
      Second partition.
  norm : bool (optional, default: False)
      If True, the result will be normalized in the range \([0,1]\).
 Returns
  VI : float
      Variation of information value.
Notes
The variation of information [meila_comparing_2003] is defined as
\[\text{VI}(\boldsymbol x,\boldsymbol y) = -\frac{1}{N}\sum_{rs}m_{rs}\left[\ln\frac{m_{rs}}{n_r} + \ln\frac{m_{rs}}{n_s'}\right],\]
with \(m_{rs}=\sum_i\delta_{x_i,r}\delta_{y_i,s}\) being the contingency table between \(\boldsymbol x\) and \(\boldsymbol y\), and \(n_r=\sum_sm_{rs}\) and \(n'_s=\sum_rm_{rs}\) the group sizes in both partitions.
If norm == True, the normalized value is returned:
\[\frac{\text{VI}(\boldsymbol x,\boldsymbol y)}{\ln N}\]
which lies in the unit interval \([0,1]\).
This algorithm runs in time \(O(N)\) where \(N\) is the length of \(\boldsymbol x\) and \(\boldsymbol y\).
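The defining formula translates directly into a few lines of numpy. The sketch below is purely illustrative and recomputes VI from the contingency table (graph-tool's variation_information is the function to use in practice):

```python
import numpy as np

def variation_of_information(x, y):
    """VI = -(1/N) sum_rs m_rs [ln(m_rs/n_r) + ln(m_rs/n'_s)]."""
    x, y = np.asarray(x), np.asarray(y)
    N = len(x)
    m = np.zeros((x.max() + 1, y.max() + 1))
    np.add.at(m, (x, y), 1)           # contingency table m_rs
    nr = m.sum(axis=1)                # n_r  (row sums)
    ns = m.sum(axis=0)                # n'_s (column sums)
    vi = 0.0
    for r, s in zip(*np.nonzero(m)):  # only m_rs > 0 contributes
        vi -= m[r, s] * (np.log(m[r, s] / nr[r]) + np.log(m[r, s] / ns[s]))
    return vi / N

# Two maximally independent halves give VI = 2 ln 2
print(variation_of_information([0, 0, 1, 1], [0, 1, 0, 1]))  # → 1.3862...
```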
References
 meila_comparing_2003
Marina Meilă, “Comparing Clusterings by the Variation of Information,” in Learning Theory and Kernel Machines, Lecture Notes in Computer Science No. 2777, edited by Bernhard Schölkopf and Manfred K. Warmuth (Springer Berlin Heidelberg, 2003), pp. 173–187. DOI: 10.1007/978-3-540-45167-9_14
Examples
>>> x = np.random.randint(0, 10, 1000)
>>> y = np.random.randint(0, 10, 1000)
>>> gt.variation_information(x, y)
4.5346824...

graph_tool.inference.partition_centroid.mutual_information(x, y, norm=False)[source]¶
Returns the mutual information between two partitions.
 Parameters
  x : iterable of int values
      First partition.
  y : iterable of int values
      Second partition.
  norm : bool (optional, default: False)
      If True, the result will be normalized in the range \([0,1]\).
 Returns
  MI : float
      Mutual information value.
Notes
The mutual information is defined as
\[\text{MI}(\boldsymbol x,\boldsymbol y) = \frac{1}{N}\sum_{rs}m_{rs}\ln\frac{N m_{rs}}{n_rn'_s},\]
with \(m_{rs}=\sum_i\delta_{x_i,r}\delta_{y_i,s}\) being the contingency table between \(\boldsymbol x\) and \(\boldsymbol y\), and \(n_r=\sum_sm_{rs}\) and \(n'_s=\sum_rm_{rs}\) the group sizes in both partitions.
If norm == True, the normalized value is returned:
\[2\frac{\text{MI}(\boldsymbol x,\boldsymbol y)}{H_x + H_y}\]
which lies in the unit interval \([0,1]\), and where \(H_x = -\frac{1}{N}\sum_rn_r\ln\frac{n_r}{N}\) and \(H_y = -\frac{1}{N}\sum_rn'_r\ln\frac{n'_r}{N}\).
This algorithm runs in time \(O(N)\) where \(N\) is the length of \(\boldsymbol x\) and \(\boldsymbol y\).
Examples
>>> x = np.random.randint(0, 10, 1000)
>>> y = np.random.randint(0, 10, 1000)
>>> gt.mutual_information(x, y)
0.050321...

graph_tool.inference.partition_centroid.reduced_mutual_information(x, y, norm=False)[source]¶
Returns the reduced mutual information between two partitions.
 Parameters
  x : iterable of int values
      First partition.
  y : iterable of int values
      Second partition.
  norm : bool (optional, default: False)
      If True, the result will be normalized in the range \([0,1]\).
 Returns
  RMI : float
      Reduced mutual information value.
Notes
The reduced mutual information [newman_improved_2020] is defined as
\[\text{RMI}(\boldsymbol x,\boldsymbol y) = \frac{1}{N}\left[\ln \frac{N!\prod_{rs}m_{rs}!}{\prod_rn_r!\prod_sn_s'!} - \ln\Omega(\boldsymbol n, \boldsymbol n')\right],\]
with \(m_{rs}=\sum_i\delta_{x_i,r}\delta_{y_i,s}\) being the contingency table between \(\boldsymbol x\) and \(\boldsymbol y\), \(n_r=\sum_sm_{rs}\) and \(n'_s=\sum_rm_{rs}\) the group sizes in both partitions, and \(\Omega(\boldsymbol n, \boldsymbol n')\) the total number of contingency tables with fixed row and column sums.
If norm == True, the normalized value is returned:
\[\frac{2\ln \frac{N!\prod_{rs}m_{rs}!}{\prod_rn_r!\prod_sn_s'!} - 2\ln\Omega(\boldsymbol n, \boldsymbol n')} {\ln\frac{N!}{\prod_rn_r!} + \ln\frac{N!}{\prod_rn'_r!} - \ln\Omega(\boldsymbol n, \boldsymbol n) - \ln\Omega(\boldsymbol n', \boldsymbol n')}\]
which can take a maximum value of one.
This algorithm runs in time \(O(N)\) where \(N\) is the length of \(\boldsymbol x\) and \(\boldsymbol y\).
References
 newman_improved_2020
M. E. J. Newman, G. T. Cantwell and J.-G. Young, “Improved mutual information measure for classification and community detection”, Phys. Rev. E 101, 042304 (2020), DOI: 10.1103/PhysRevE.101.042304, arXiv: 1907.12581
Examples
>>> x = np.random.randint(0, 10, 1000)
>>> y = np.random.randint(0, 10, 1000)
>>> gt.reduced_mutual_information(x, y)
0.065562...

class graph_tool.inference.planted_partition.PPBlockState(g, b=None)[source]¶
Bases: object
Obtain the partition of a network according to the Bayesian planted partition model.
 Parameters
  g : Graph
      Graph to be modelled.
  b : PropertyMap (optional, default: None)
      Initial partition. If not supplied, a partition into a single group will be used.
References
 lizhistatistical2020
Lizhi Zhang, Tiago P. Peixoto, “Statistical inference of assortative community structures”, arXiv: 2006.14493

copy
(self, g=None, b=None)[source]¶ Copies the state. The parameters override the state properties, and have the same meaning as in the constructor.

get_state
(self)[source]¶ Alias to
get_blocks()
.

get_Be
(self)[source]¶ Returns the effective number of blocks, defined as \(e^{H}\), with \(H=-\sum_r\frac{n_r}{N}\ln \frac{n_r}{N}\), where \(n_r\) is the number of nodes in group \(r\).

entropy
(self, uniform=False, degree_dl_kind='distributed', **kwargs)[source]¶ Return the model entropy (negative log-likelihood).
 Parameters
  uniform : bool (optional, default: False)
      If True, the uniform planted partition model is used, otherwise a non-uniform version is used.
  degree_dl_kind : str (optional, default: "distributed")
      This specifies the prior used for the degree sequence. It must be one of: "uniform" or "distributed" (default).
Notes
The “entropy” of the state is the negative log-likelihood of the microcanonical SBM, which includes the generated graph \(\boldsymbol{A}\) and the model parameters \(e_{\text{in}}\), \(e_{\text{out}}\), \(\boldsymbol{k}\) and \(\boldsymbol{b}\),
\[\begin{split}\Sigma &= -\ln P(\boldsymbol{A},e_{\text{in}},e_{\text{out}},\boldsymbol{k},\boldsymbol{b}) \\ &= -\ln P(\boldsymbol{A}\mid e_{\text{in}},e_{\text{out}},\boldsymbol{k},\boldsymbol{b}) - \ln P(e_{\text{in}},e_{\text{out}},\boldsymbol{k},\boldsymbol{b}).\end{split}\]
This value is also called the description length of the data, and it corresponds to the amount of information required to describe it (in nats).
For the uniform version of the model, the likelihood is
\[P(\boldsymbol{A}\mid\boldsymbol{k},\boldsymbol{b}) = \frac{e_{\text{in}}!e_{\text{out}}!} {\left(\frac{B}{2}\right)^{e_{\text{in}}}{B\choose 2}^{e_{\text{out}}}(E+1)^{1-\delta_{B,1}}\prod_re_r!}\times \frac{\prod_ik_i!}{\prod_{i<j}A_{ij}!\prod_i A_{ii}!!},\]
where \(e_{\text{in}}\) and \(e_{\text{out}}\) are the number of edges inside and outside communities, respectively, and \(e_r\) is the sum of degrees in group \(r\).
For the non-uniform model we have instead:
\[P(\boldsymbol{A}\mid\boldsymbol{k},\boldsymbol{b}) = \frac{e_{\text{out}}!\prod_re_{rr}!!} {{B\choose 2}^{e_{\text{out}}}(E+1)^{1-\delta_{B,1}}\prod_re_r!}\times{B + e_{\text{in}} - 1 \choose e_{\text{in}}}^{-1}\times \frac{\prod_ik_i!}{\prod_{i<j}A_{ij}!\prod_i A_{ii}!!}.\]
Here there are two options for the prior on the degrees:
degree_dl_kind == "uniform"
\[P(\boldsymbol{k}\mid\boldsymbol{e},\boldsymbol{b}) = \prod_r\left(\!\!{n_r\choose e_r}\!\!\right)^{-1}.\]
This corresponds to a non-informative prior, where the degrees are sampled from a uniform distribution.
degree_dl_kind == "distributed" (default)
\[P(\boldsymbol{k}\mid\boldsymbol{e},\boldsymbol{b}) = \prod_r\frac{\prod_k\eta_k^r!}{n_r!} \prod_r q(e_r, n_r)^{-1}\]
with \(\eta_k^r\) being the number of nodes with degree \(k\) in group \(r\), and \(q(n,m)\) the number of partitions of integer \(n\) into at most \(m\) parts.
This corresponds to a prior for the degree sequence conditioned on the degree frequencies, which are themselves sampled from a uniform hyperprior. This option should be preferred in most cases.
For the partition prior \(P(\boldsymbol{b})\) please refer to model_entropy().
References
 lizhistatistical2020
Lizhi Zhang, Tiago P. Peixoto, “Statistical inference of assortative community structures”, arXiv: 2006.14493

draw
(self, **kwargs)[source]¶ Convenience wrapper to
graph_draw()
that draws the state of the graph as colors on the vertices and edges.

mcmc_sweep
(self, beta=1.0, c=0.5, d=0.01, niter=1, entropy_args={}, allow_vacate=True, sequential=True, deterministic=False, verbose=False, **kwargs)[source]¶ Perform sweeps of a Metropolis-Hastings rejection sampling MCMC to sample network partitions. See
graph_tool.inference.blockmodel.BlockState.mcmc_sweep()
for the parameter documentation.

multiflip_mcmc_sweep
(self, beta=1.0, c=0.5, psingle=None, psplit=1, pmerge=1, pmergesplit=1, d=0.01, gibbs_sweeps=10, niter=1, entropy_args={}, accept_stats=None, verbose=False, **kwargs)[source]¶ Perform sweeps of a merge-split Metropolis-Hastings rejection sampling MCMC to sample network partitions. See
graph_tool.inference.blockmodel.BlockState.mcmc_sweep()
for the parameter documentation.

class graph_tool.inference.blockmodel_em.EMBlockState(g, B, init_state=None)[source]¶
Bases: object
The parametric, undirected stochastic block model state of a given graph.
 Parameters
  g : Graph
      Graph to be modelled.
  B : int
      Number of blocks (or vertex groups).
  init_state : BlockState (optional, default: None)
      Optional block state used for initialization.
Notes
This class is intended to be used with em_infer() to perform expectation-maximization with belief propagation. See [decelle_asymptotic_2011] for more details.
References
 decelle_asymptotic_2011
Aurelien Decelle, Florent Krzakala, Cristopher Moore, and Lenka Zdeborová, “Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications”, Phys. Rev. E 84, 066106 (2011), DOI: 10.1103/PhysRevE.84.066106, arXiv: 1109.3041

e_iter
(self, max_iter=1000, epsilon=0.001, verbose=False)[source]¶ Perform ‘expectation’ iterations, using belief propagation, where the vertex marginals and edge messages are updated, until convergence according to epsilon or the maximum number of iterations given by max_iter. If verbose == True, convergence information is displayed.
The last update delta is returned.

m_iter
(self)[source]¶ Perform a single ‘maximization’ iteration, where the group sizes and connection probability matrix are updated.
The update delta is returned.

learn
(self, epsilon=0.001)[source]¶ Perform ‘maximization’ iterations until convergence according to epsilon.
The last update delta is returned.

draw
(self, **kwargs)[source]¶ Convenience wrapper to
graph_draw()
that draws the state of the graph as colors on the vertices and edges.

graph_tool.inference.blockmodel_em.em_infer(state, max_iter=1000, max_e_iter=1, epsilon=0.001, learn_first=False, verbose=False)[source]¶
Infer the model parameters and latent variables using the expectation-maximization (EM) algorithm, with the initial state given by state.
 Parameters
  state : model state
      State object, e.g. of type graph_tool.inference.blockmodel_em.EMBlockState.
  max_iter : int (optional, default: 1000)
      Maximum number of iterations.
  max_e_iter : int (optional, default: 1)
      Maximum number of ‘expectation’ iterations inside the main loop.
  epsilon : float (optional, default: 1e-3)
      Convergence criterion.
  learn_first : bool (optional, default: False)
      If True, the maximization (a.k.a. parameter learning) is converged before the main loop is run.
  verbose : bool (optional, default: False)
      If True, convergence information is displayed.
 Returns
  delta : float
      The last update delta.
  niter : int
      The total number of iterations.
References
 wikiEM
“Expectation–maximization algorithm”, https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm
Examples
>>> g = gt.collection.data["polbooks"]
>>> state = gt.EMBlockState(g, B=3)
>>> delta, niter = gt.em_infer(state)
>>> state.draw(pos=g.vp["pos"], output="polbooks_EM_B3.svg")
<...>

class graph_tool.inference.util.DictState(d)[source]¶
Bases: dict
Dictionary with (key, value) pairs accessible via attributes.

graph_tool.inference.util.pmap(prop, value_map)[source]¶
Maps all the values of prop to the values given by value_map, which is indexed by the values of prop.
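In numpy terms, this mapping is fancy indexing. A sketch of the semantics (illustrative only; the real pmap operates in place on property maps):

```python
import numpy as np

# Each value v in prop is replaced by value_map[v].
prop = np.array([2, 0, 1, 2])
value_map = np.array([10, 20, 30])
print(value_map[prop])  # → [30 10 20 30]
```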

graph_tool.inference.util.reverse_map(prop, value_map)[source]¶
Modify value_map such that the positions indexed by the values in prop correspond to their index in prop.

graph_tool.inference.util.continuous_map(prop)[source]¶
Remap the values of prop into the continuous range \([0, N-1]\).
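A sketch of what such a remapping does, in plain numpy (illustrative only; the real function operates in place on a property map, and the exact label order it produces is an implementation detail — here labels are assigned in order of first appearance):

```python
import numpy as np

def continuous_map_sketch(prop):
    """Remap arbitrary labels into the contiguous range [0, N-1]."""
    prop = np.asarray(prop)
    _, idx = np.unique(prop, return_index=True)
    order = prop[np.sort(idx)]  # labels in order of first appearance
    mapping = {r: i for i, r in enumerate(order)}
    return np.array([mapping[r] for r in prop])

print(continuous_map_sketch([7, 7, 3, 9, 3]))  # → [0 0 1 2 1]
```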

graph_tool.inference.modularity.modularity(g, b, gamma=1.0, weight=None)[source]¶
Calculate Newman’s (generalized) modularity of a network partition.
 Parameters
  g : Graph
      Graph to be used.
  b : VertexPropertyMap
      Vertex property map with the community partition.
  gamma : float (optional, default: 1.)
      Resolution parameter.
  weight : EdgePropertyMap (optional, default: None)
      Edge property map with the optional edge weights.
 Returns
  Q : float
      Newman’s modularity.
Notes
Given a specific graph partition specified by b, Newman’s modularity [newmanmodularity2006] is defined as:
\[Q = \frac{1}{2E} \sum_r \left(e_{rr} - \frac{e_r^2}{2E}\right)\]
where \(e_{rs}\) is the number of edges which fall between vertices in communities \(s\) and \(r\), or twice that number if \(r = s\), and \(e_r = \sum_s e_{rs}\).
If weights are provided, the matrix \(e_{rs}\) corresponds to the sum of edge weights instead of number of edges, and the value of \(E\) becomes the total sum of edge weights.
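As a concrete illustration of the formula, the sketch below computes \(Q\) from an unweighted edge list in plain numpy (not using graph-tool):

```python
import numpy as np

def modularity_sketch(edges, b, gamma=1.0):
    """Q = (1/2E) * sum_r (e_rr - gamma * e_r^2 / 2E), undirected edge list."""
    b = np.asarray(b)
    B = b.max() + 1
    e = np.zeros((B, B))
    for u, v in edges:          # each undirected edge counted in both directions
        e[b[u], b[v]] += 1
        e[b[v], b[u]] += 1
    twoE = e.sum()              # 2E
    er = e.sum(axis=1)          # e_r
    return (np.trace(e) - gamma * (er ** 2).sum() / twoE) / twoE

# Two triangles joined by one edge, split into their natural communities
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
b = [0, 0, 0, 1, 1, 1]
print(modularity_sketch(edges, b))  # → 0.35714...
```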
References
 newmanmodularity2006
M. E. J. Newman, “Modularity and community structure in networks”, Proc. Natl. Acad. Sci. USA 103, 8577–8582 (2006), DOI: 10.1073/pnas.0601602103, arXiv: physics/0602124
Examples
>>> g = gt.collection.data["football"]
>>> gt.modularity(g, g.vp.value_tsevans)
0.5744393497...

class graph_tool.inference.modularity.ModularityState(g, b=None)[source]¶
Bases: object
Obtain the partition of a network according to Newman’s modularity.
Warning
Do not use this approach in the analysis of networks without understanding the consequences. This algorithm is included only for comparison purposes. In general, the inference-based approaches based on BlockState, NestedBlockState, and PPBlockState should be universally preferred.
 Parameters
  g : Graph
      Graph to be partitioned.
  b : PropertyMap (optional, default: None)
      Initial partition. If not supplied, a partition into a single group will be used.

copy
(self, g=None, b=None)[source]¶ Copies the state. The parameters override the state properties, and have the same meaning as in the constructor.

get_Be
(self)[source]¶ Returns the effective number of blocks, defined as \(e^{H}\), with \(H=-\sum_r\frac{n_r}{N}\ln \frac{n_r}{N}\), where \(n_r\) is the number of nodes in group \(r\).

entropy
(self, gamma=1.0, **kwargs)[source]¶ Return the unnormalized negative generalized modularity.
Notes
The unnormalized negative generalized modularity is defined as
\[-\sum_{ij}\left(A_{ij}-\gamma \frac{k_ik_j}{2E}\right)\delta_{b_i,b_j}\]
where \(A_{ij}\) is the adjacency matrix, \(k_i\) is the degree of node \(i\), and \(E\) is the total number of edges.

modularity
(self, gamma=1)[source]¶ Return the generalized modularity.
Notes
The generalized modularity is defined as
\[\frac{1}{2E}\sum_{ij}\left(A_{ij}-\gamma \frac{k_ik_j}{2E}\right)\delta_{b_i,b_j}\]
where \(A_{ij}\) is the adjacency matrix, \(k_i\) is the degree of node \(i\), and \(E\) is the total number of edges.

draw
(self, **kwargs)[source]¶ Convenience wrapper to
graph_draw()
that draws the state of the graph as colors on the vertices and edges.

mcmc_sweep
(self, beta=1.0, c=0.5, d=0.01, niter=1, entropy_args={}, allow_vacate=True, sequential=True, deterministic=False, verbose=False, **kwargs)[source]¶ Perform sweeps of a Metropolis-Hastings rejection sampling MCMC to sample network partitions. See
graph_tool.inference.blockmodel.BlockState.mcmc_sweep()
for the parameter documentation.

multiflip_mcmc_sweep
(self, beta=1.0, c=0.5, psingle=None, psplit=1, pmerge=1, pmergesplit=1, d=0.01, gibbs_sweeps=10, niter=1, entropy_args={}, accept_stats=None, verbose=False, **kwargs)[source]¶ Perform sweeps of a merge-split Metropolis-Hastings rejection sampling MCMC to sample network partitions. See
graph_tool.inference.blockmodel.BlockState.mcmc_sweep()
for the parameter documentation.