Uncertain network reconstruction#

An important application of generative models is to generalize from observations and make predictions that go beyond what is seen in the data. This is particularly useful when the network we observe is incomplete or contains errors, i.e. some of the edges are missing or are the outcome of measurement mistakes, or when the network is not observed directly at all. In this situation, we can use statistical inference to reconstruct the original network. Following [peixoto-reconstructing-2018], if \(\boldsymbol{\mathcal{D}}\) is the observed data, the network can be reconstructed according to the posterior distribution,

(1)#\[P(\boldsymbol A, \boldsymbol b | \boldsymbol{\mathcal{D}}) = \frac{P(\boldsymbol{\mathcal{D}} | \boldsymbol A)P(\boldsymbol A, \boldsymbol b)}{P(\boldsymbol{\mathcal{D}})}\]

where the likelihood \(P(\boldsymbol{\mathcal{D}}|\boldsymbol A)\) models the measurement process, and for the prior \(P(\boldsymbol A, \boldsymbol b)\) we use the SBM as before. This means that when performing reconstruction, we sample both the community structure \(\boldsymbol b\) and the network \(\boldsymbol A\) itself from the posterior distribution. From it, we can obtain the marginal probability of each edge,

\[\pi_{ij} = \sum_{\boldsymbol A, \boldsymbol b}A_{ij}P(\boldsymbol A, \boldsymbol b | \boldsymbol{\mathcal{D}}).\]

Based on the marginal posterior probabilities, the best estimate for the whole underlying network \(\boldsymbol{\hat{A}}\) is given by the maximum of this distribution,

\[\begin{split}\hat A_{ij} = \begin{cases} 1 & \text{ if } \pi_{ij} > \frac{1}{2},\\ 0 & \text{ if } \pi_{ij} < \frac{1}{2}.\\ \end{cases}\end{split}\]

We can also make estimates \(\hat y\) of arbitrary scalar network properties \(y(\boldsymbol A)\) via posterior averages,

\[\begin{split}\begin{align} \hat y &= \sum_{\boldsymbol A, \boldsymbol b}y(\boldsymbol A)P(\boldsymbol A, \boldsymbol b | \boldsymbol{\mathcal{D}}),\\ \sigma^2_y &= \sum_{\boldsymbol A, \boldsymbol b}(y(\boldsymbol A)-\hat y)^2P(\boldsymbol A, \boldsymbol b | \boldsymbol{\mathcal{D}}) \end{align}\end{split}\]

with uncertainty given by \(\sigma_y\). This gives us a complete probabilistic reconstruction framework that fully reflects both the information and the uncertainty in the measurement data. Furthermore, the use of the SBM means that the reconstruction can take advantage of the correlations observed in the data, which generally leads to substantial improvements [peixoto-reconstructing-2018] [peixoto-network-2019].
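Schematically, once samples from the posterior have been collected (as done in the examples below), these estimators reduce to elementary operations on the samples. The following is a minimal sketch with hypothetical placeholder arrays (`pi` for the edge marginals, `ys` for sampled values of \(y\); neither is part of the graph-tool API):

import numpy as np

# hypothetical placeholders standing in for quantities obtained via MCMC below
pi = np.random.random((5, 5))         # marginal posterior edge probabilities
ys = np.random.random(100)            # posterior samples of y(A)

A_hat = (pi > .5).astype(int)         # maximum marginal posterior estimate
y_hat, sigma_y = ys.mean(), ys.std()  # posterior mean and uncertainty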

In graph-tool there is support for reconstruction with the above framework for three measurement processes, which we describe in the following:

1. Repeated measurements with uniform errors (via MeasuredBlockState);

2. Repeated measurements with heterogeneous errors (via MixedMeasuredBlockState);

3. Extraneously obtained edge probabilities (via UncertainBlockState).

In addition, it is also possible to reconstruct networks from observed dynamics and behavioral observation, as described in Network reconstruction from dynamics and behavior.

Measured networks#

This model assumes that the node pairs \((i,j)\) were measured \(n_{ij}\) times, and an edge has been recorded \(x_{ij}\) times, where a missing edge occurs with probability \(p\) and a spurious edge occurs with probability \(q\), uniformly for all node pairs, yielding a likelihood

\[P(\boldsymbol x | \boldsymbol n, \boldsymbol A, p, q) = \prod_{i<j}{n_{ij}\choose x_{ij}}\left[(1-p)^{x_{ij}}p^{n_{ij}-x_{ij}}\right]^{A_{ij}} \left[q^{x_{ij}}(1-q)^{n_{ij}-x_{ij}}\right]^{1-A_{ij}}.\]

In general, \(p\) and \(q\) are not precisely known a priori, so we consider the integrated likelihood

\[P(\boldsymbol x | \boldsymbol n, \boldsymbol A, \alpha,\beta,\mu,\nu) = \int P(\boldsymbol x | \boldsymbol n, \boldsymbol A, p, q) P(p|\alpha,\beta) P(q|\mu,\nu)\;\mathrm{d}p\,\mathrm{d}q\]

where \(P(p|\alpha,\beta)\) and \(P(q|\mu,\nu)\) are Beta distributions, which specify the amount of prior knowledge we have on the noise parameters. An important special case, which is the default unless otherwise specified, is when we are completely agnostic a priori about the noise magnitudes, and all hyperparameters are unity,

\[P(\boldsymbol x | \boldsymbol n, \boldsymbol A) \equiv P(\boldsymbol x | \boldsymbol n, \boldsymbol A, \alpha=1,\beta=1,\mu=1,\nu=1).\]

In this situation the priors \(P(p|\alpha=1,\beta=1)\) and \(P(q|\mu=1,\nu=1)\) are uniform distributions in the interval \([0,1]\).
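This integrated likelihood has a closed form, since the Beta priors are conjugate to the binomial terms, and \(p\) and \(q\) are shared by all node pairs. Below is a minimal sketch of its direct evaluation; the helper name log_P_uniform and the flat-array representation of the node pairs are illustrative assumptions, not part of the graph-tool API:

import numpy as np
from scipy.special import betaln, gammaln

def log_P_uniform(x, n, A, alpha=1, beta=1, mu=1, nu=1):
    """Integrated log-likelihood log P(x | n, A), with the global noise
    rates p and q marginalized over their Beta priors; x, n, A are flat
    arrays indexed by node pair, with A[k] in {0, 1}."""
    x, n, A = (np.asarray(v) for v in (x, n, A))
    lbinom = (gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)).sum()
    E = A == 1           # node pairs with a true edge
    # p ~ Beta(alpha, beta): missing-edge rate, integrated over true edges
    lp = betaln((n - x)[E].sum() + alpha, x[E].sum() + beta) - betaln(alpha, beta)
    # q ~ Beta(mu, nu): spurious-edge rate, integrated over non-edges
    lq = betaln(x[~E].sum() + mu, (n - x)[~E].sum() + nu) - betaln(mu, nu)
    return lbinom + lp + lq

For example, log_P_uniform(x=[1, 1], n=[2, 2], A=[1, 0]) corresponds to one true edge and one non-edge, each measured twice and recorded as an edge once, analogous to the ambiguous entries in the example below.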

Note

Since this approach also makes use of the correlations between edges to inform the reconstruction, as described by the inferred SBM, it can also be used when only single measurements have been performed, \(n_{ij}=1\), and the error magnitudes \(p\) and \(q\) are unknown. Because any adjacency matrix can be cast in this setting, this method can be used to reconstruct networks for which no error assessments of any kind have been provided.
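For instance, a minimal sketch of this use case, in which only a simple graph with no error assessments is available, could look as follows; equilibration and collection of marginals then proceed exactly as in the example below:

import graph_tool.all as gt

g = gt.collection.data["lesmis"]
n = g.new_ep("int", 1)   # every observed edge was measured once...
x = g.new_ep("int", 1)   # ...and recorded as present once
# non-edges: measured once, never recorded, via the default values
state = gt.MeasuredBlockState(g, n=n, x=x, n_default=1, x_default=0)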

Below, we illustrate how the reconstruction can be performed with a simple example, using MeasuredBlockState:

import graph_tool.all as gt
import numpy as np

g = gt.collection.data["lesmis"].copy()

# pretend we have measured and observed each edge twice

n = g.new_ep("int", 2)   # number of measurements
x = g.new_ep("int", 2)   # number of observations

e = g.edge(11, 36)
x[e] = 1                 # pretend we have observed edge (11, 36) only once

e = g.add_edge(15, 73)
n[e] = 2                 # pretend we have measured non-edge (15, 73) twice,
x[e] = 1                 # but observed it as an edge once.

# We initialize MeasuredBlockState, assuming that each non-edge has
# been measured only once (as opposed to twice for the observed
# edges), as specified by the 'n_default' and 'x_default' parameters.

state = gt.MeasuredBlockState(g, n=n, n_default=1, x=x, x_default=0)

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=100, mcmc_args=dict(niter=10))

# Now we collect the marginals for exactly 100,000 sweeps, at
# intervals of 10 sweeps:

u = None              # marginal posterior edge probabilities
bs = []               # partitions
cs = []               # average local clustering coefficient

def collect_marginals(s):
    global u, bs, cs
    u = s.collect_marginal(u)
    bstate = s.get_block_state()
    bs.append(bstate.levels[0].b.a.copy())
    cs.append(gt.local_clustering(s.get_graph()).fa.mean())

gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_marginals)

eprob = u.ep.eprob
print("Posterior probability of edge (11, 36):", eprob[u.edge(11, 36)])
print("Posterior probability of non-edge (15, 73):", eprob[u.edge(15, 73)] if u.edge(15, 73) is not None else 0.)
print("Estimated average local clustering: %g ± %g" % (np.mean(cs), np.std(cs)))

Which yields the following output:

Posterior probability of edge (11, 36): 0.810481...
Posterior probability of non-edge (15, 73): 0.135613...
Estimated average local clustering: 0.574558 ± 0.006835...

We have a successful reconstruction, where both ambiguous adjacency matrix entries are correctly recovered. The value for the average clustering coefficient is also correctly estimated, and is compatible with the true value \(0.57313675\), within the estimated error.

Below we visualize the maximum marginal posterior estimate of the reconstructed network:

# The maximum marginal posterior estimator can be obtained by
# filtering the edges with probability larger than .5

u = gt.GraphView(u, efilt=u.ep.eprob.fa > .5)

# Mark the recovered true edges as red, and the removed spurious edges as green
ecolor = u.new_ep("vector<double>", val=[0, 0, 0, .6])
for e in u.edges():
    if g.edge(e.source(), e.target()) is None or (e.source(), e.target()) == (11, 36):
        ecolor[e] = [1, 0, 0, .6]
for e in g.edges():
    if u.edge(e.source(), e.target()) is None:
        ne = u.add_edge(e.source(), e.target())
        ecolor[ne] = [0, 1, 0, .6]

# Duplicate the internal block state with the reconstructed network
# u, for visualization purposes.

bstate = state.get_block_state()
bstate = bstate.levels[0].copy(g=u)

# Disambiguate partitions and obtain marginals
pmode = gt.PartitionModeState(bs, converge=True)
pv = pmode.get_marginal(u)

edash = u.new_ep("vector<double>")
edash[u.edge(15, 73)] = [.1, .1, 0]
bstate.draw(pos=u.own_property(g.vp.pos), vertex_shape="pie", vertex_pie_fractions=pv,
            edge_color=ecolor, edge_dash_style=edash, edge_gradient=None,
            output="lesmis-reconstruction-marginals.svg")
[Figure: lesmis-reconstruction-marginals.svg]

Reconstructed network of characters in the novel Les Misérables, assuming that each edge has been measured and recorded twice, and each non-edge has been measured only once, with the exception of edge (11, 36), shown in red, and non-edge (15, 73), shown in green, which have both been measured twice and recorded as an edge once. Despite the ambiguity, both errors are successfully corrected by the reconstruction. The pie fractions on the nodes correspond to the probability of being in the group associated with the respective color.#

Heterogeneous errors#

In a more general scenario the measurement errors can be different for each node pair, i.e. \(p_{ij}\) and \(q_{ij}\) are the missing and spurious edge probabilities for node pair \((i,j)\). The measurement likelihood then becomes

\[P(\boldsymbol x | \boldsymbol n, \boldsymbol A, \boldsymbol p, \boldsymbol q) = \prod_{i<j}{n_{ij}\choose x_{ij}}\left[(1-p_{ij})^{x_{ij}}p_{ij}^{n_{ij}-x_{ij}}\right]^{A_{ij}} \left[q_{ij}^{x_{ij}}(1-q_{ij})^{n_{ij}-x_{ij}}\right]^{1-A_{ij}}.\]

Since the noise magnitudes are a priori unknown, we consider the integrated likelihood

\[P(\boldsymbol x | \boldsymbol n, \boldsymbol A, \alpha,\beta,\mu,\nu) = \prod_{i<j}\int P(x_{ij} | n_{ij}, A_{ij}, p_{ij}, q_{ij}) P(p_{ij}|\alpha,\beta) P(q_{ij}|\mu,\nu)\;\mathrm{d}p_{ij}\,\mathrm{d}q_{ij}\]

where \(P(p_{ij}|\alpha,\beta)\) and \(P(q_{ij}|\mu,\nu)\) are Beta prior distributions, as before. Instead of pre-specifying the hyperparameters, we include them in the posterior distribution

\[P(\boldsymbol A, \boldsymbol b, \alpha,\beta,\mu,\nu | \boldsymbol x, \boldsymbol n) = \frac{P(\boldsymbol x | \boldsymbol n, \boldsymbol A, \alpha,\beta,\mu,\nu)P(\boldsymbol A, \boldsymbol b)P(\alpha,\beta,\mu,\nu)}{P(\boldsymbol x| \boldsymbol n)},\]

where \(P(\alpha,\beta,\mu,\nu)\propto 1\) is a uniform hyperprior.
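Each factor in the integrated likelihood above then has a closed form via the same Beta-binomial identity as in the uniform case, here applied per node pair, e.g. for \(A_{ij}=1\),

\[P(x_{ij} | n_{ij}, A_{ij}=1, \alpha, \beta) = {n_{ij}\choose x_{ij}}\frac{B(n_{ij}-x_{ij}+\alpha,\; x_{ij}+\beta)}{B(\alpha,\beta)},\]

with the analogous expression in \((\mu,\nu)\) for \(A_{ij}=0\), where \(B\) is the Beta function.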

Operationally, the inference with this model works similarly to the one with uniform error rates, as we see with the same example:

state = gt.MixedMeasuredBlockState(g, n=n, n_default=1, x=x, x_default=0)

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=200, mcmc_args=dict(niter=10))

# Now we collect the marginals for exactly 100,000 sweeps, at
# intervals of 10 sweeps:

u = None              # marginal posterior edge probabilities
bs = []               # partitions
cs = []               # average local clustering coefficient

gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_marginals)

eprob = u.ep.eprob
print("Posterior probability of edge (11, 36):", eprob[u.edge(11, 36)])
print("Posterior probability of non-edge (15, 73):", eprob[u.edge(15, 73)] if u.edge(15, 73) is not None else 0.)
print("Estimated average local clustering: %g ± %g" % (np.mean(cs), np.std(cs)))

Which yields:

Posterior probability of edge (11, 36): 0.654665...
Posterior probability of non-edge (15, 73): 0.063406...
Estimated average local clustering: 0.570908 ± 0.007573...

The results are very similar to the ones obtained with the uniform model in this case, but can be quite different in situations where a large number of measurements have been performed (see [peixoto-reconstructing-2018] for details).

Extraneous error estimates#

In some situations the edge uncertainties are estimated by means other than repeated measurements, using domain-specific models. Here we consider the general case where the error estimates are extraneously provided as independent edge probabilities \(\boldsymbol Q\),

\[P_Q(\boldsymbol A | \boldsymbol Q) = \prod_{i<j}Q_{ij}^{A_{ij}}(1-Q_{ij})^{1-A_{ij}},\]

where \(Q_{ij}\) is the estimated probability of edge \((i,j)\). Although in principle we could reconstruct networks directly from the above distribution, we can also combine it with SBM inference to take advantage of large-scale structures present in the data. We do so by employing Bayes’ rule to extract the noise model from the provided values [martin-structural-2015] [peixoto-reconstructing-2018],

\[\begin{split}\begin{align} P_Q(\boldsymbol Q | \boldsymbol A) &= \frac{P_Q(\boldsymbol A | \boldsymbol Q)P_Q(\boldsymbol Q)}{P_Q(\boldsymbol A)},\\ & = P_Q(\boldsymbol Q) \prod_{i<j} \left(\frac{Q_{ij}}{\bar Q}\right)^{A_{ij}}\left(\frac{1-Q_{ij}}{1-\bar Q}\right)^{1-A_{ij}}, \end{align}\end{split}\]

where \(\bar Q = \sum_{i<j}Q_{ij}/{N\choose 2}\) is the estimated network density, and \(P_Q(\boldsymbol Q)\) is an unknown prior for \(\boldsymbol Q\), which can remain unspecified as it has no effect on the posterior distribution. With the above, we can reconstruct the network based on the posterior distribution,

\[P(\boldsymbol A, \boldsymbol b | \boldsymbol Q) = \frac{P_Q(\boldsymbol Q | \boldsymbol A)P(\boldsymbol A, \boldsymbol b)}{P(\boldsymbol Q)}\]

where \(P(\boldsymbol A, \boldsymbol b)\) is the joint SBM distribution used before. Note that this reconstruction will be different from the one obtained directly from the original estimation, i.e.

\[P(\boldsymbol A | \boldsymbol Q) = \sum_{\boldsymbol b}P(\boldsymbol A, \boldsymbol b | \boldsymbol Q) \neq P_Q(\boldsymbol A | \boldsymbol Q).\]

This is because the posterior \(P(\boldsymbol A | \boldsymbol Q)\) will take into consideration the correlations found in the data, as captured by the inferred SBM structure, as further evidence for the existence and non-existence of edges. We illustrate this with an example similar to the one considered previously, where two adjacency matrix entries with the same ambiguous edge probability \(Q_{ij}=1/2\) are correctly reconstructed as edge and non-edge, due to the joint SBM inference:

g = gt.collection.data["lesmis"].copy()

N = g.num_vertices()
E = g.num_edges()

q = g.new_ep("double", .98)   # edge uncertainties

e = g.edge(11, 36)
q[e] = .5                     # ambiguous true edge

e = g.add_edge(15, 73)
q[e] = .5                     # ambiguous spurious edge

# We initialize UncertainBlockState, assuming that each non-edge
# has an uncertainty of q_default, chosen to preserve the expected
# density of the original network:

q_default = (E - q.a.sum()) / ((N * (N - 1))/2 - E)

state = gt.UncertainBlockState(g, q=q, q_default=q_default)

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=100, mcmc_args=dict(niter=10))

# Now we collect the marginals for exactly 100,000 sweeps, at
# intervals of 10 sweeps:

u = None              # marginal posterior edge probabilities
bs = []               # partitions
cs = []               # average local clustering coefficient

def collect_marginals(s):
    global bs, u, cs
    u = s.collect_marginal(u)
    bstate = s.get_block_state()
    bs.append(bstate.levels[0].b.a.copy())
    cs.append(gt.local_clustering(s.get_graph()).fa.mean())

gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_marginals)

eprob = u.ep.eprob
print("Posterior probability of edge (11, 36):", eprob[u.edge(11, 36)])
print("Posterior probability of non-edge (15, 73):", eprob[u.edge(15, 73)] if u.edge(15, 73) is not None else 0.)
print("Estimated average local clustering: %g ± %g" % (np.mean(cs), np.std(cs)))

The above yields the output:

Posterior probability of edge (11, 36): 0.945194...
Posterior probability of non-edge (15, 73): 0.054805...
Estimated average local clustering: 0.556156 ± 0.021553...

The reconstruction is accurate, despite the two ambiguous entries having the same measurement probability. The reconstructed network is visualized below.

# The maximum marginal posterior estimator can be obtained by
# filtering the edges with probability larger than .5

u = gt.GraphView(u, efilt=u.ep.eprob.fa > .5)

# Mark the recovered true edges as red, and the removed spurious edges as green
ecolor = u.new_ep("vector<double>", val=[0, 0, 0, .6])
edash = u.new_ep("vector<double>")
for e in u.edges():
    if g.edge(e.source(), e.target()) is None or (e.source(), e.target()) == (11, 36):
        ecolor[e] = [1, 0, 0, .6]

for e in g.edges():
    if u.edge(e.source(), e.target()) is None:
        ne = u.add_edge(e.source(), e.target())
        ecolor[ne] = [0, 1, 0, .6]
        if (e.source(), e.target()) == (15, 73):
            edash[ne] = [.1, .1, 0]

bstate = state.get_block_state()
bstate = bstate.levels[0].copy(g=u)

# Disambiguate partitions and obtain marginals
pmode = gt.PartitionModeState(bs, converge=True)
pv = pmode.get_marginal(u)

bstate.draw(pos=u.own_property(g.vp.pos), vertex_shape="pie", vertex_pie_fractions=pv,
            edge_color=ecolor, edge_dash_style=edash, edge_gradient=None,
            output="lesmis-uncertain-reconstruction-marginals.svg")
[Figure: lesmis-uncertain-reconstruction-marginals.svg]

Reconstructed network of characters in the novel Les Misérables, assuming that each edge has a measurement probability of \(.98\). Edge (11, 36), shown in red, and non-edge (15, 73), shown in green, both have probability \(0.5\). Despite the ambiguity, both errors are successfully corrected by the reconstruction. The pie fractions on the nodes correspond to the probability of being in the group associated with the respective color.#

Latent Poisson multigraphs#

Even in situations where measurement errors can be neglected, it can still be useful to assume a given network is the outcome of a “hidden” multigraph model, i.e. more than one edge between nodes is allowed, but then its multiedges are “erased” by transforming them into simple edges. In this way, it is possible to construct generative models that can better handle situations where the underlying network possesses heterogeneous density, such as strong community structure and broad degree distributions [peixoto-latent-2020]. This can be incorporated into the scheme of Eq. (1) by considering the data to be the observed simple graph, \(\boldsymbol{\mathcal{D}} = \boldsymbol G\). We proceed in the same way as in the previous reconstruction scenarios, but using instead LatentMultigraphBlockState.
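Schematically, if the latent multiedge counts are Poisson distributed with means \(\lambda_{ij}\) determined by the SBM, the erasure step implies that a simple edge is observed with probability \(P(A_{ij}=1) = 1 - \mathrm{e}^{-\lambda_{ij}}\); see [peixoto-latent-2020] for the complete model.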

For example, in the following we will obtain the community structure and latent multiedges of a network of political books:

g = gt.collection.data["polbooks"]

state = gt.LatentMultigraphBlockState(g)

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=100, mcmc_args=dict(niter=10))

# Now we collect the marginals for exactly 100,000 sweeps, at
# intervals of 10 sweeps:

u = None              # marginal posterior multigraph
bs = []               # partitions

def collect_marginals(s):
    global bs, u
    u = s.collect_marginal_multigraph(u)
    bstate = s.get_block_state()
    bs.append(bstate.levels[0].b.a.copy())

gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_marginals)

# compute average multiplicities

ew = u.new_ep("double")
w = u.ep.w
wcount = u.ep.wcount
for e in u.edges():
    ew[e] = (wcount[e].a * w[e].a).sum() / wcount[e].a.sum()

bstate = state.get_block_state()
bstate = bstate.levels[0].copy(g=u)

# Disambiguate partitions and obtain marginals
pmode = gt.PartitionModeState(bs, converge=True)
pv = pmode.get_marginal(u)

bstate.draw(pos=u.own_property(g.vp.pos), vertex_shape="pie", vertex_pie_fractions=pv,
            edge_pen_width=gt.prop_to_size(ew, .1, 8, power=1), edge_gradient=None,
            output="polbooks-erased-poisson.svg")
[Figure: polbooks-erased-poisson.svg]

Reconstructed latent Poisson degree-corrected SBM for a network of political books, showing the marginal mean edge multiplicities as line thickness. The pie fractions on the nodes correspond to the probability of being in the group associated with the respective color.#

Latent triadic closures#

Another useful reconstruction scenario is when we assume our observed network is the outcome of a mixture of different edge placement mechanisms. One example is the combination of triadic closure with community structure [peixoto-disentangling-2022]. This approach can be used to separate the effects of triangle formation from node homophily, which are typically conflated. We proceed in the same way as in the previous reconstruction scenarios, but using instead LatentClosureBlockState.

For example, in the following we will obtain the community structure and latent closure edges of a network of political books:

g = gt.collection.data["polbooks"]

state = gt.LatentClosureBlockState(g, L=10)   # L: maximum number of closure generations

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=100, mcmc_args=dict(niter=10))

# Now we collect the marginals for exactly 100,000 sweeps, at
# intervals of 10 sweeps:

us = None             # marginal posterior graphs
bs = []               # partitions

def collect_marginals(s):
    global bs, us
    us = s.collect_marginal(us)
    bstate = s.bstate
    bs.append(bstate.levels[0].b.a.copy())

gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_marginals)

u = us[0]             # marginal seminal edges

# Disambiguate partitions and obtain marginals
pmode = gt.PartitionModeState(bs, converge=True)
pv = pmode.get_marginal(u)

bstate = state.bstate.levels[0].copy(g=u)

# edge width: distance from maximal uncertainty (eprob = 1/2)
ew = u.ep.eprob.copy()
ew.a = abs(ew.a - .5)

# get a color map
import matplotlib.colors
clrs = [(1, 0, 0, 1.0),
        (0, 0, 0, 1.0)]
red_cm = matplotlib.colors.LinearSegmentedColormap.from_list("Set3", clrs)

# draw red edges (with low eprob) last
eorder = u.ep.eprob.copy()
eorder.a *= -1

bstate.draw(pos=u.own_property(g.vp.pos), vertex_shape="pie", vertex_pie_fractions=pv,
            edge_pen_width=gt.prop_to_size(ew, .1, 4, power=1),
            edge_gradient=None, edge_color=u.ep.eprob, ecmap=red_cm,
            eorder=eorder, output="polbooks-closure.svg")
[Figure: polbooks-closure.svg]

Reconstructed degree-corrected SBM with latent closure edges for a network of political books, showing the marginal probability of an edge being due to triadic closure as the color red. The pie fractions on the nodes correspond to the probability of being in the group associated with the respective color.#

Triadic closure can also be used to perform uncertain network reconstruction, using MeasuredClosureBlockState, in a manner analogous to what was done in Measured networks:

g = gt.collection.data["lesmis"].copy()

# pretend we have measured and observed each edge twice

n = g.new_ep("int", 2)   # number of measurements
x = g.new_ep("int", 2)   # number of observations

e = g.edge(11, 36)
x[e] = 1                 # pretend we have observed edge (11, 36) only once

e = g.add_edge(15, 73)
n[e] = 2                 # pretend we have measured non-edge (15, 73) twice,
x[e] = 1                 # but observed it as an edge once.

# We initialize MeasuredClosureBlockState, assuming that each non-edge has
# been measured only once (as opposed to twice for the observed
# edges), as specified by the 'n_default' and 'x_default' parameters.

state = gt.MeasuredClosureBlockState(g, n=n, n_default=1, x=x, x_default=0, L=10)

# We will first equilibrate the Markov chain
gt.mcmc_equilibrate(state, wait=100, mcmc_args=dict(niter=10))

# Now we collect the marginals for exactly 100,000 sweeps, at
# intervals of 10 sweeps:

us = None             # marginal posterior graphs
bs = []               # partitions
cs = []               # average local clustering coefficient

def collect_marginals(s):
    global us, bs, cs
    us = s.collect_marginal(us)
    bstate = s.get_block_state()
    bs.append(bstate.levels[0].b.a.copy())
    cs.append(gt.local_clustering(s.get_graph()).fa.mean())

gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_marginals)

u = us[-1]            # final generation: the full reconstructed network
eprob = u.ep.eprob
print("Posterior probability of edge (11, 36):", eprob[u.edge(11, 36)])
print("Posterior probability of non-edge (15, 73):", eprob[u.edge(15, 73)] if u.edge(15, 73) is not None else 0.)
print("Estimated average local clustering: %g ± %g" % (np.mean(cs), np.std(cs)))

Which yields the following output:

Posterior probability of edge (11, 36): 0.996199...
Posterior probability of non-edge (15, 73): 0.018201...
Estimated average local clustering: 0.573772 ± 0.003389

Edge prediction as binary classification#

A more traditional approach to the prediction of missing and spurious edges formulates it as a supervised binary classification task, where edge/non-edge scores are obtained by fitting a generative model to the observed data and computing the probabilities of individual entries under that model [clauset-hierarchical-2008] [guimera-missing-2009]. In this setting, one typically omits any explicit model of the measurement process (hence intrinsically assuming it to be uniform), and as a consequence only relative probabilities between individual missing and spurious edges can be produced, instead of the full posterior distribution considered in the previous sections. Since this limits the overall network reconstruction, and does not yield confidence intervals, it is a less powerful approach. Nevertheless, it is a popular procedure, which can also be performed with graph-tool, as we describe in the following.

We set up the classification task by dividing the edges/non-edges into two sets \(\boldsymbol A\) and \(\delta \boldsymbol A\), where the former corresponds to the observed network and the latter either to the missing or spurious edges. We may compute the posterior of \(\delta \boldsymbol A\) as [valles-catala-consistencies-2018]

(2)#\[P(\delta \boldsymbol A | \boldsymbol A) \propto \sum_{\boldsymbol b}\frac{P(\boldsymbol A \cup \delta\boldsymbol A| \boldsymbol b)}{P(\boldsymbol A| \boldsymbol b)}P(\boldsymbol b | \boldsymbol A)\]

up to a normalization constant. [1] Although the normalization constant is difficult to obtain in general (since we need to perform a sum over all possible spurious/missing edges), the numerator of Eq. (2) can be computed by sampling partitions from the posterior, and then inserting or deleting edges from the graph and computing the new likelihood. This means that we can easily compare alternative predictive hypotheses \(\{\delta \boldsymbol A_i\}\) via their likelihood ratios

\[\lambda_i = \frac{P(\delta \boldsymbol A_i | \boldsymbol A)}{\sum_j P(\delta \boldsymbol A_j | \boldsymbol A)}\]

which do not depend on the normalization constant.

The values \(P(\delta \boldsymbol A | \boldsymbol A, \boldsymbol b)\) can be computed with get_edges_prob(). Hence, we can compute spurious/missing edge probabilities just as if we were collecting marginal distributions when doing model averaging.

Below is an example of predicting the following two edges in the football network, using the nested model (for which we need to replace \(\boldsymbol b\) by \(\{\boldsymbol b_l\}\) in the equations above).

[Figure: football_missing.svg]

Two non-existing edges in the football network (in red): \((101,102)\) in the middle, and \((17,56)\) in the upper right region of the figure.#

g = gt.collection.data["football"]

missing_edges = [(101, 102), (17, 56)]

state = gt.minimize_nested_blockmodel_dl(g)

probs = ([], [])

def collect_edge_probs(s):
    p1 = s.get_edges_prob([missing_edges[0]], entropy_args=dict(partition_dl=False))
    p2 = s.get_edges_prob([missing_edges[1]], entropy_args=dict(partition_dl=False))
    probs[0].append(p1)
    probs[1].append(p2)

# Now we collect the probabilities for exactly 100,000 sweeps
gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                    callback=collect_edge_probs)


def get_avg(p):
    # posterior average of the probabilities, computed in a numerically
    # stable fashion via the log-mean-exp trick
    p = np.array(p)
    pmax = p.max()
    p -= pmax
    return pmax + np.log(np.exp(p).mean())

p1 = get_avg(probs[0])
p2 = get_avg(probs[1])

# log of the sum of both (unnormalized) probabilities
p_sum = get_avg([p1, p2]) + np.log(2)

l1 = p1 - p_sum
l2 = p2 - p_sum

print("likelihood-ratio for %s: %g" % (missing_edges[0], np.exp(l1)))
print("likelihood-ratio for %s: %g" % (missing_edges[1], np.exp(l2)))

Which yields:

likelihood-ratio for (101, 102): 0.0201805
likelihood-ratio for (17, 56): 0.979819

From which we can conclude that edge \((17, 56)\) is more likely than \((101, 102)\) to be a missing edge.

The prediction using the non-nested model can be performed in an entirely analogous fashion.

References#

[peixoto-reconstructing-2018]

Tiago P. Peixoto, “Reconstructing networks with unknown and heterogeneous errors”, Phys. Rev. X 8, 041011 (2018). DOI: 10.1103/PhysRevX.8.041011, arXiv: 1806.07956
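
[peixoto-network-2019]

Tiago P. Peixoto, “Network reconstruction and community detection from dynamics”, Phys. Rev. Lett. 123, 128301 (2019). DOI: 10.1103/PhysRevLett.123.128301, arXiv: 1903.10833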

[peixoto-disentangling-2022]

Tiago P. Peixoto, “Disentangling homophily, community structure and triadic closure in networks”, Phys. Rev. X 12, 011004 (2022). DOI: 10.1103/PhysRevX.12.011004, arXiv: 2101.02510

[peixoto-latent-2020]

Tiago P. Peixoto, “Latent Poisson models for networks with heterogeneous density”, Phys. Rev. E 102, 012309 (2020). DOI: 10.1103/PhysRevE.102.012309, arXiv: 2002.07803

[martin-structural-2015]

Travis Martin, Brian Ball, M. E. J. Newman, “Structural inference for uncertain networks”, Phys. Rev. E 93, 012306 (2016). DOI: 10.1103/PhysRevE.93.012306, arXiv: 1506.05490

[clauset-hierarchical-2008]

Aaron Clauset, Cristopher Moore, M. E. J. Newman, “Hierarchical structure and the prediction of missing links in networks”, Nature 453, 98-101 (2008). DOI: 10.1038/nature06830

[guimera-missing-2009]

Roger Guimerà, Marta Sales-Pardo, “Missing and spurious interactions and the reconstruction of complex networks”, PNAS 106(52) (2009). DOI: 10.1073/pnas.0908366106

[valles-catala-consistencies-2018]

Toni Vallès-Català, Tiago P. Peixoto, Roger Guimerà, Marta Sales-Pardo, “Consistencies and inconsistencies between model selection and link prediction in networks”, Phys. Rev. E 97, 062316 (2018). DOI: 10.1103/PhysRevE.97.062316, arXiv: 1705.07967


Footnotes

[1] Note that the posterior of Eq. (2) cannot be used to sample the reconstruction \(\delta \boldsymbol A\), as it is not informative of the overall network density (i.e. the absolute number of missing and spurious edges). It can, however, be used to compare different reconstructions with the same density.