NormalBPState#

class graph_tool.dynamics.NormalBPState(g, x=1, mu=0, theta=1, em_m=None, em_s=None, vm_m=None, vm_s=None, marginal_init=False, frozen=None, converge=True)[source]#

Bases: BPBaseState

Belief-propagation equations for the multivariate Normal distribution.

Parameters:
g : Graph

Graph to be used for the dynamics.

x : float or EdgePropertyMap (optional, default: 1.)

Inverse covariance couplings. If an EdgePropertyMap is given, it needs to be of type double. If a scalar is given, it will determine the value for every edge.

mu : float or VertexPropertyMap (optional, default: 0.)

Node means. If a VertexPropertyMap is given, it needs to be of type double. If a scalar is given, it will determine the value for every vertex.

theta : float or iterable or VertexPropertyMap (optional, default: 1.)

Diagonal of the inverse covariance matrix. If a VertexPropertyMap is given, it needs to be of type double. If a scalar is given, it will determine the value for every vertex.

em_m : EdgePropertyMap (optional, default: None)

If provided, it should be an EdgePropertyMap of type vector<double>, containing the edge messages for the means.

em_s : EdgePropertyMap (optional, default: None)

If provided, it should be an EdgePropertyMap of type vector<double>, containing the edge messages for the variances.

vm_m : VertexPropertyMap (optional, default: None)

If provided, it should be a VertexPropertyMap of type vector<double>, containing the node marginal means.

vm_s : VertexPropertyMap (optional, default: None)

If provided, it should be a VertexPropertyMap of type vector<double>, containing the node marginal variances.

marginal_init : boolean (optional, default: False)

If True, the messages will be initialized from the node marginals.

frozen : VertexPropertyMap (optional, default: None)

If provided, it should be a VertexPropertyMap of type bool, where a value of True means that a vertex is not a variable, but a fixed field.

converge : boolean (optional, default: True)

If True, the method converge() will be called just after construction.

Notes

This implements the BP equations [mezard_information_2009] for the multivariate Normal distribution given by

\[P(\boldsymbol s | \boldsymbol A, \boldsymbol x, \boldsymbol \mu, \boldsymbol\theta) = \frac{\exp\left(-\frac{1}{2}(\boldsymbol s-\boldsymbol\mu)^{\intercal} \boldsymbol X (\boldsymbol s - \boldsymbol\mu)\right)} {Z(\boldsymbol X)}\]

where \(X_{ij}=A_{ij}x_{ij}\) for \(i\neq j\), \(X_{ii}=\theta_i\), and \(Z(\boldsymbol X) = (2\pi)^{N/2}\left|\boldsymbol X\right|^{-1/2}\).
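As a quick sanity check of this normalization constant, the closed form for \(Z(\boldsymbol X)\) can be compared against a brute-force grid integration for a small \(N=2\) case (the precision matrix and mean below are arbitrary illustrative values, not taken from the library):

```python
import numpy as np

# An arbitrary 2x2 positive-definite precision matrix X and mean vector mu
X = np.array([[2.0, 0.5],
              [0.5, 2.0]])
mu = np.array([1.0, -1.0])

# Closed form: log Z = (N/2) log(2 pi) - (1/2) log|X|, with N = 2
log_Z = np.log(2 * np.pi) - 0.5 * np.linalg.slogdet(X)[1]

# Brute force: integrate exp(-(s - mu)^T X (s - mu) / 2) over a grid
s = np.linspace(-8.0, 8.0, 1601)
ds = s[1] - s[0]
S1, S2 = np.meshgrid(s - mu[0], s - mu[1], indexing="ij")
E = 0.5 * (X[0, 0] * S1**2 + 2 * X[0, 1] * S1 * S2 + X[1, 1] * S2**2)
log_Z_num = np.log(np.exp(-E).sum() * ds * ds)

assert abs(log_Z - log_Z_num) < 1e-6
```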

The BP equations amount to the Bethe approximation

\[\log Z(\boldsymbol X) = \sum_i\log Z_i - \sum_{i<j}A_{ij}\log Z_{ij}\]

with \(Z_{ij}=Z_j/Z_{j\to i}=Z_i/Z_{i\to j}\), obtained from the message-passing equations

\[\begin{split}\begin{aligned} m_{i\to j} &= \frac{\sum_{k\in \partial i\setminus j}A_{ik}x_{ik}m_{k\to i} - \mu_i} {\theta_i - \sum_{k\in \partial i\setminus j}A_{ik}x_{ik}^2\sigma_{k\to i}^2},\\ \sigma_{i\to j}^2 &= \frac{1}{\theta_i - \sum_{k\in \partial i\setminus j}A_{ik}x_{ik}^2\sigma_{k\to i}^2}, \end{aligned}\end{split}\]

with

\[\begin{split}\begin{aligned} \log Z_{i\to j} &= \frac{\beta_{i\to j}^2}{4\alpha_{i\to j}} - \frac{1}{2}\log\alpha_{i\to j} + \frac{1}{2}\log\pi\\ \log Z_{i} &= \frac{\beta_{i}^2}{4\alpha_{i}} - \frac{1}{2}\log\alpha_{i} + \frac{1}{2}\log\pi \end{aligned}\end{split}\]

where

\[\begin{split}\begin{aligned} \alpha_{i\to j} &= \frac{\theta_i - \sum_{k\in \partial i\setminus j}A_{ik}x_{ik}^2\sigma_{k\to i}^2}{2}\\ \beta_{i\to j} &= \sum_{k\in \partial i\setminus j}A_{ik}x_{ik}m_{k\to i} - \mu_i\\ \alpha_{i} &= \frac{\theta_i - \sum_{j\in \partial i}A_{ij}x_{ij}^2\sigma_{j\to i}^2}{2}\\ \beta_{i} &= \sum_{j\in \partial i}A_{ij}x_{ij}m_{j\to i} - \mu_i. \end{aligned}\end{split}\]

From these equations, the marginal node probability densities are normal distributions with mean and variance given by

\[\begin{split}\begin{aligned} m_i &= \frac{\sum_{j}A_{ij}x_{ij}m_{j\to i} - \mu_i} {\theta_i - \sum_{j}A_{ij}x_{ij}^2\sigma_{j\to i}^2},\\ \sigma_i^2 &= \frac{1}{\theta_i - \sum_{j}A_{ij}x_{ij}^2\sigma_{j\to i}^2}. \end{aligned}\end{split}\]
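The message updates and marginals above can be sketched in plain NumPy on a small path graph (a tree, where BP is exact), iterating until the messages change by less than an epsilon, in the spirit of converge(). The field parametrization \(h = X\mu\) and the sign conventions below follow the textbook Gaussian BP derivation and are assumptions here, not necessarily graph-tool's internal message conventions:

```python
import numpy as np

# Path graph 0-1-2 with uniform coupling x and diagonal theta
edges = [(0, 1), (1, 2)]
theta = np.array([2.0, 2.0, 2.0])      # diagonal of X
x = 0.5                                # coupling x_ij on every edge
mu = np.array([1.0, -1.0, 0.5])

N = len(theta)
X = np.diag(theta)
for i, j in edges:
    X[i, j] = X[j, i] = x
h = X @ mu                             # field parametrization of the mean

nbrs = {i: [] for i in range(N)}
for i, j in edges:
    nbrs[i] += [j]
    nbrs[j] += [i]

m = {(i, j): 0.0 for i in nbrs for j in nbrs[i]}    # message means
s2 = {(i, j): 1.0 for i in nbrs for j in nbrs[i]}   # message variances

epsilon, delta = 1e-8, np.inf
while delta > epsilon:
    delta = 0.0
    for i in range(N):
        for j in nbrs[i]:
            prec = theta[i] - sum(x**2 * s2[k, i] for k in nbrs[i] if k != j)
            field = h[i] - sum(x * m[k, i] for k in nbrs[i] if k != j)
            delta = max(delta, abs(m[i, j] - field / prec))
            m[i, j], s2[i, j] = field / prec, 1.0 / prec

# Node marginals: mean m_i and variance sigma_i^2
prec = np.array([theta[i] - sum(x**2 * s2[k, i] for k in nbrs[i]) for i in range(N)])
mean = np.array([h[i] - sum(x * m[k, i] for k in nbrs[i]) for i in range(N)]) / prec

# On a tree these are exact: the means are mu, the variances diag(X^{-1})
C = np.linalg.inv(X)
assert np.allclose(mean, mu)
assert np.allclose(1 / prec, np.diag(C))
```

The epsilon-based stopping criterion mirrors what converge() does with its delta threshold; on a loopy graph the same loop would only yield an approximation.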

References

[mezard_information_2009]

Marc Mézard and Andrea Montanari, “Information, physics, and computation”, Oxford University Press, 2009. https://web.stanford.edu/~montanar/RESEARCH/book.html

Examples

>>> g = gt.GraphView(gt.collection.data["polblogs"].copy(), directed=False)
>>> gt.remove_parallel_edges(g)
>>> g = gt.extract_largest_component(g, prune=True)
>>> state = gt.NormalBPState(g, x=1/200, mu=g.vp.value.t(lambda x: arctanh((2*x-1)*.9)))
>>> s = state.sample()
>>> gt.graph_draw(g, g.vp.pos, vertex_fill_color=s,
...               output="bp-normal.svg")
<...>

Marginal sample of a multivariate normal distribution.#

Methods

converge([epsilon, max_niter, update_marginals])

Calls iterate() until delta falls below epsilon or the number of iterations exceeds max_niter.

copy()

Return a copy of the state.

energy(s)

Obtains the energy (Hamiltonian) of state s (a VertexPropertyMap).

iterate([niter, parallel, update_marginals])

Updates messages synchronously (or asynchronously, if parallel=False), niter number of times.

log_Z()

Obtains the log-partition function from the current messages.

log_prob(s)

Obtains the log-probability of state s (a VertexPropertyMap).

marginal_log_prob(s)

Obtains the marginal log-probability of state s (a VertexPropertyMap).

sample([update_marginals])

Samples a state from the marginal distribution.

update_marginals()

Update the node marginals from the current messages.

converge(epsilon=1e-08, max_niter=1000, update_marginals=True, **kwargs)#

Calls iterate() until delta falls below epsilon or the number of iterations exceeds max_niter.

If update_marginals=True, this function calls update_marginals() at the end.

The remaining keyword arguments are passed to iterate().

copy()#

Return a copy of the state.

energy(s)#

Obtains the energy (Hamiltonian) of state s (a VertexPropertyMap).

If s is vector valued, it’s assumed to correspond to multiple states, and the total energy sum is returned.
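From the density above, a natural reading is that the energy of a state is \(H(\boldsymbol s) = \frac{1}{2}(\boldsymbol s-\boldsymbol\mu)^{\intercal}\boldsymbol X(\boldsymbol s-\boldsymbol\mu)\), so that \(\log P(\boldsymbol s) = -H(\boldsymbol s) - \log Z(\boldsymbol X)\). Whether energy() includes the factor \(\frac{1}{2}\) is an assumption here; the identity itself can be checked directly against the standard multivariate-normal log-density with covariance \(\boldsymbol C = \boldsymbol X^{-1}\):

```python
import numpy as np

# Arbitrary illustrative values (not from the library)
X = np.array([[2.0, 0.5], [0.5, 2.0]])
mu = np.array([1.0, -1.0])
s = np.array([0.3, -0.7])

# Energy H(s) = (s - mu)^T X (s - mu) / 2 and log Z = log(2 pi) - log|X| / 2
H = 0.5 * (s - mu) @ X @ (s - mu)
log_Z = np.log(2 * np.pi) - 0.5 * np.linalg.slogdet(X)[1]
log_prob = -H - log_Z

# Cross-check against the multivariate-normal logpdf with covariance C = X^{-1}
C = np.linalg.inv(X)
d = s - mu
ref = -0.5 * (d @ np.linalg.inv(C) @ d) - 0.5 * np.linalg.slogdet(2 * np.pi * C)[1]
assert np.isclose(log_prob, ref)
```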

iterate(niter=1, parallel=True, update_marginals=True)#

Updates messages synchronously (or asynchronously, if parallel=False), niter number of times. This function returns the iteration delta of the last iteration.

If update_marginals=True, this function calls update_marginals() at the end.

If enabled during compilation, this algorithm runs in parallel (i.e. using more than one thread).

log_Z()#

Obtains the log-partition function from the current messages.
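The Bethe sum from the Notes can be evaluated by hand on a single-edge graph, where BP is exact. The sketch below uses the field parametrization \(h = X\mu\) (an assumption, with textbook Gaussian BP sign conventions); with fields included inside the node integrals, the Bethe sum reproduces \(\log\int\exp(-\frac{1}{2}\boldsymbol s^{\intercal}\boldsymbol X\boldsymbol s + \boldsymbol h^{\intercal}\boldsymbol s)\,d\boldsymbol s\), which exceeds the \(\log Z(\boldsymbol X)\) of the mean-parametrized density by the constant \(\frac{1}{2}\boldsymbol\mu^{\intercal}\boldsymbol X\boldsymbol\mu\):

```python
import numpy as np

theta = np.array([2.0, 2.0])   # diagonal of X
x = 0.5                        # coupling on the single edge (0, 1)
mu = np.array([1.0, -1.0])
X = np.array([[theta[0], x], [x, theta[1]]])
h = X @ mu                     # field parametrization (an assumption)

def log_Zab(alpha, beta):
    # log of the Gaussian integral: int exp(-alpha s^2 + beta s) ds
    return beta**2 / (4 * alpha) - 0.5 * np.log(alpha) + 0.5 * np.log(np.pi)

# On a leaf, the cavity messages use the bare node parameters:
s2_01, m_01 = 1 / theta[0], h[0] / theta[0]   # message 0 -> 1
s2_10, m_10 = 1 / theta[1], h[1] / theta[1]   # message 1 -> 0

# log Z_{0->1} and the full log Z_0, log Z_1, per the alpha/beta definitions
lZ_0to1 = log_Zab(theta[0] / 2, h[0])
lZ_0 = log_Zab((theta[0] - x**2 * s2_10) / 2, h[0] - x * m_10)
lZ_1 = log_Zab((theta[1] - x**2 * s2_01) / 2, h[1] - x * m_01)

# Bethe sum: sum_i log Z_i - sum_{edges} log Z_ij, with Z_ij = Z_i / Z_{i->j}
bethe = lZ_0 + lZ_1 - (lZ_0 - lZ_0to1)

# Exact value in the field parametrization:
# log(2 pi) - log|X|/2 + h^T X^{-1} h / 2, with X^{-1} h = mu
exact = np.log(2 * np.pi) - 0.5 * np.linalg.slogdet(X)[1] + 0.5 * h @ mu
assert abs(bethe - exact) < 1e-10
```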

log_prob(s)#

Obtains the log-probability of state s (a VertexPropertyMap).

If s is vector valued, it’s assumed to correspond to multiple states, and the total log-probability sum is returned.

marginal_log_prob(s)#

Obtains the marginal log-probability of state s (a VertexPropertyMap).

If s is vector valued, it’s assumed to correspond to multiple states, and the total marginal log-probability sum is returned.

sample(update_marginals=True)[source]#

Samples a state from the marginal distribution. This function returns a VertexPropertyMap of type given by val_type.

If update_marginals=True, this function calls update_marginals() before sampling.
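As described, sampling from the marginal distribution draws each vertex independently from its normal marginal. A minimal NumPy sketch of that step (the marginal means and variances below are made-up placeholders, not values computed by the library):

```python
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([1.0, -1.0, 0.5])     # hypothetical node marginal means m_i
var = np.array([0.54, 0.57, 0.54])    # hypothetical marginal variances sigma_i^2

s = rng.normal(mean, np.sqrt(var))    # one independent draw per vertex

# Repeated draws recover the marginal means and variances:
samples = rng.normal(mean, np.sqrt(var), size=(200000, 3))
assert np.allclose(samples.mean(axis=0), mean, atol=0.01)
assert np.allclose(samples.var(axis=0), var, atol=0.01)
```

Note that such marginal samples ignore correlations between vertices, which is why they differ from a joint sample of the multivariate distribution.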

update_marginals()#

Update the node marginals from the current messages.