GenPottsBPState#

class graph_tool.dynamics.GenPottsBPState(g, f, x=1, theta=0, em=None, vm=None, marginal_init=False, frozen=None, converge=True)[source]#

Bases: BPBaseState

Belief-propagation equations for a generalized Potts model.

Parameters:
g : Graph

Graph to be used for the dynamics.

f : ndarray or list of list

\(q\times q\) symmetric 2D array with the interaction energies between the \(q\) spin values.

x : float or EdgePropertyMap (optional, default: 1)

Edge coupling weights. If an EdgePropertyMap is given, it needs to be of type double. If a scalar is given, it will determine the value for every edge.

theta : float or iterable or VertexPropertyMap (optional, default: 0.)

Vertex fields. If a VertexPropertyMap is given, it needs to be of type vector<double>, containing \(q\) field values for every node. If an iterable is given, it should contain \(q\) field values, which are the same for every node. If a scalar is given, it will determine the value for every field and vertex.

em : EdgePropertyMap (optional, default: None)

If provided, it should be an EdgePropertyMap of type vector<double>, containing the edge messages.

vm : VertexPropertyMap (optional, default: None)

If provided, it should be a VertexPropertyMap of type vector<double>, containing the node marginals.

marginal_init : boolean (optional, default: False)

If True, the messages will be initialized from the node marginals.

frozen : VertexPropertyMap (optional, default: None)

If provided, it should be a VertexPropertyMap of type bool, where a value of True means that a vertex is not a variable, but a fixed field.

converge : boolean (optional, default: True)

If True, the function GenPottsBPState.converge() will be called just after construction.

Notes

This implements the BP equations [mezard_information_2009] for a generalized Potts model given by

\[P(\boldsymbol s | \boldsymbol A, \boldsymbol x, \boldsymbol\theta) = \frac{\exp\left(\sum_{i<j}A_{ij}x_{ij}f_{s_i,s_j} + \sum_i\theta_{i,s_i}\right)} {Z(\boldsymbol A, \boldsymbol x, \boldsymbol\theta)}\]

where \(Z(\boldsymbol A, \boldsymbol x, \boldsymbol\theta)\) is the partition function.

The BP equations consist of the Bethe approximation

\[\log Z(\boldsymbol A, \boldsymbol x, \boldsymbol\theta) = \sum_i\log Z_i - \sum_{i<j}A_{ij}\log Z_{ij}\]

with \(Z_{ij}=Z_j/Z_{j\to i}=Z_i/Z_{i\to j}\), obtained from the message-passing equations

\[P_{i\to j}(s_i) = \frac{e^{\theta_{i,s_i}}}{Z_{i\to j}} \prod_{k\in \partial i\setminus j}\sum_{s_k=1}^{q}P_{k\to i}(s_k)e^{x_{ik}f_{s_i,s_k}},\]

where \(Z_{i\to j}\) is a normalization constant. From these equations, the marginal node probabilities are similarly obtained:

\[P_i(s_i) = \frac{e^{\theta_{i,s_i}}}{Z_i} \prod_{j\in \partial i}\sum_{s_j=1}^{q}P_{j\to i}(s_j)e^{x_{ij}f_{s_i,s_j}}.\]
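
The message-passing and marginal equations above can be illustrated with a small self-contained sketch in plain NumPy, independent of graph-tool. The path graph, couplings, and fields below are made up for the example:

```python
import numpy as np

# Toy instance (made up for illustration): a path graph on 4 nodes, q = 2.
edges = [(0, 1), (1, 2), (2, 3)]
q = 2
f = np.array([[-1.0,  1.0],
              [ 1.0, -1.0]])       # interaction energies f_{s_i, s_j}
x = {e: 1.0 for e in edges}        # edge couplings x_{ij}
theta = np.zeros((4, q))           # vertex fields theta_{i, s}

# Directed messages P_{i->j}(s_i), initialized uniformly.
msgs = {(i, j): np.full(q, 1.0 / q)
        for (a, b) in edges for (i, j) in ((a, b), (b, a))}

def neighbors(i):
    return [b if a == i else a for (a, b) in edges if i in (a, b)]

def coupling(i, k):
    return x[(i, k)] if (i, k) in x else x[(k, i)]

def update_message(i, j):
    # P_{i->j}(s_i) prop. to e^{theta_{i,s_i}} *
    #   prod_{k in boundary(i) minus j} sum_{s_k} P_{k->i}(s_k) e^{x_{ik} f_{s_i,s_k}}
    m = np.exp(theta[i])
    for k in neighbors(i):
        if k != j:
            # (q x q) matrix times message vector: sums over s_k for each s_i
            m = m * (np.exp(coupling(i, k) * f) @ msgs[(k, i)])
    return m / m.sum()             # normalization by Z_{i->j}

# One synchronous sweep over all directed edges.
msgs = {(i, j): update_message(i, j) for (i, j) in msgs}

def marginal(i):
    # P_i(s_i) prop. to e^{theta_{i,s_i}} *
    #   prod_{j in boundary(i)} sum_{s_j} P_{j->i}(s_j) e^{x_{ij} f_{s_i,s_j}}
    p = np.exp(theta[i])
    for k in neighbors(i):
        p = p * (np.exp(coupling(i, k) * f) @ msgs[(k, i)])
    return p / p.sum()
```

With zero fields and a spin-flip-symmetric f, the marginals here stay uniform, as the symmetry of the model requires.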

References

[mezard_information_2009]

Marc Mézard, and Andrea Montanari, “Information, physics, and computation”, Oxford University Press, 2009. https://web.stanford.edu/~montanar/RESEARCH/book.html

Examples

>>> g = gt.GraphView(gt.collection.data["polblogs"].copy(), directed=False)
>>> gt.remove_parallel_edges(g)
>>> g = gt.extract_largest_component(g, prune=True)
>>> state = gt.GenPottsBPState(g, f=array([[-1,  0,  1],
...                                        [ 0, -1,  1],
...                                        [ 1,  1, -1.25]])/20)
>>> s = state.sample()
>>> gt.graph_draw(g, g.vp.pos, vertex_fill_color=s,
...               output="bp-potts.svg")
<...>
[figure: bp-potts.svg]

Marginal sample of a 3-state Potts model.#

Methods

converge([epsilon, max_niter, update_marginals])

Calls iterate() until delta falls below epsilon or the number of iterations exceeds max_niter.

copy()

Return a copy of the state.

energy(s)

Obtains the energy (Hamiltonian) of state s (a VertexPropertyMap).

iterate([niter, parallel, update_marginals])

Updates messages synchronously (or asynchronously if parallel=False), niter times.

log_Z()

Obtains the log-partition function from the current messages.

log_prob(s)

Obtains the log-probability of state s (a VertexPropertyMap).

marginal_log_prob(s)

Obtains the marginal log-probability of state s (a VertexPropertyMap).

sample([update_marginals, val_type])

Samples a state from the marginal distribution.

update_marginals()

Update the node marginals from the current messages.

converge(epsilon=1e-08, max_niter=1000, update_marginals=True, **kwargs)#

Calls iterate() until delta falls below epsilon or the number of iterations exceeds max_niter.

If update_marginals=True, this function calls update_marginals() at the end.

The remaining keyword arguments are passed to iterate().

copy()#

Return a copy of the state.

energy(s)#

Obtains the energy (Hamiltonian) of state s (a VertexPropertyMap).

If s is vector valued, it’s assumed to correspond to multiple states, and the total energy sum is returned.
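
The energy can also be evaluated directly from the model definition in the Notes. The sketch below (a toy instance made up for illustration, in plain NumPy rather than graph-tool) assumes the energy is the negative of the exponent in \(P(\boldsymbol s | \boldsymbol A, \boldsymbol x, \boldsymbol\theta)\), i.e. \(H(\boldsymbol s) = -\sum_{i<j}A_{ij}x_{ij}f_{s_i,s_j} - \sum_i\theta_{i,s_i}\); that sign convention is an assumption, not taken from this page:

```python
import numpy as np

# Made-up toy instance: a triangle graph, q = 2.
edges = [(0, 1), (1, 2), (0, 2)]
f = np.array([[-1.0,  1.0],
              [ 1.0, -1.0]])       # interaction energies f_{s_i, s_j}
x = 1.0                            # uniform edge coupling
theta = np.zeros((3, 2))           # zero vertex fields

def energy(s):
    """H(s) = -(sum_{i<j} A_ij x_ij f_{s_i,s_j} + sum_i theta_{i,s_i});
    the overall sign is an assumed convention."""
    e = sum(x * f[s[i], s[j]] for (i, j) in edges)
    e += sum(theta[i, s[i]] for i in range(len(s)))
    return -e

# Aligned spins on the triangle: three f[0,0] = -1 terms, so H = +3.
energy([0, 0, 0])
```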

iterate(niter=1, parallel=True, update_marginals=True)#

Updates messages synchronously (or asynchronously if parallel=False), niter times. This function returns the iteration delta of the last iteration.

If update_marginals=True, this function calls update_marginals() at the end.

If enabled during compilation, this algorithm runs in parallel (i.e., using more than one thread).

log_Z()#

Obtains the log-partition function from the current messages.

log_prob(s)#

Obtains the log-probability of state s (a VertexPropertyMap).

If s is vector valued, it’s assumed to correspond to multiple states, and the total log-probability sum is returned.

marginal_log_prob(s)#

Obtains the marginal log-probability of state s (a VertexPropertyMap).

If s is vector valued, it’s assumed to correspond to multiple states, and the total marginal log-probability sum is returned.

sample(update_marginals=True, val_type='int')#

Samples a state from the marginal distribution. This function returns a VertexPropertyMap of type given by val_type.

If update_marginals=True, this function calls update_marginals() before sampling.
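
Since the node marginals factorize, such a sample can be drawn independently for each vertex. A minimal NumPy sketch, with marginals made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up marginals P_i(s) for 4 nodes and q = 3 states (rows sum to 1).
marginals = np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.8, 0.1],
                      [0.3, 0.3, 0.4],
                      [1.0, 0.0, 0.0]])

def sample_state(marginals, rng):
    """Draw s_i independently from each node's marginal distribution."""
    q = marginals.shape[1]
    return np.array([rng.choice(q, p=p) for p in marginals])

s = sample_state(marginals, rng)   # one state vector, length 4
```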

update_marginals()#

Update the node marginals from the current messages.