GenPottsBPState#
- class graph_tool.dynamics.GenPottsBPState(g, f, x=1, theta=0, em=None, vm=None, marginal_init=False, frozen=None, converge=True)[source]#
Bases:
BPBaseState
Belief-propagation equations for a generalized Potts model.
- Parameters:
- g : Graph
Graph to be used for the dynamics.
- f : ndarray or list of list
\(q\times q\) 2D symmetric array with interaction energies between the \(q\) spin values.
- x : float or EdgePropertyMap (optional, default: 1)
Edge coupling weights. If an EdgePropertyMap is given, it needs to be of type double. If a scalar is given, it will determine the value for every edge.
- theta : float or iterable or VertexPropertyMap (optional, default: 0.)
Vertex fields. If a VertexPropertyMap is given, it needs to be of type vector<double>, containing \(q\) field values for every node. If it is an iterable, it should contain \(q\) field values, which are the same for every node. If a scalar is given, it will determine the value for every field and vertex.
- em : EdgePropertyMap (optional, default: None)
If provided, it should be an EdgePropertyMap of type vector<double>, containing the edge messages.
- vm : VertexPropertyMap (optional, default: None)
If provided, it should be a VertexPropertyMap of type vector<double>, containing the node marginals.
- marginal_init : boolean (optional, default: False)
If True, the messages will be initialized from the node marginals.
- frozen : VertexPropertyMap (optional, default: None)
If provided, it should be a VertexPropertyMap of type bool, where a value of True means that a vertex is not a variable, but a fixed field.
- converge : boolean (optional, default: True)
If True, the function GenPottsBPState.converge() will be called just after construction.
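To illustrate how these parameters fit together, here is a minimal sketch (not part of the documented example below): the toy graph, the ferromagnetic coupling matrix f, the edge weights x, and the field values theta are all assumptions chosen only for demonstration.
>>> import numpy as np
>>> g = gt.random_graph(100, lambda: 3, directed=False)        # assumed toy graph, degree 3
>>> q = 2                                                       # number of spin values
>>> f = np.identity(q) * 0.5                                    # symmetric q x q couplings; favors equal spins
>>> x = g.new_ep("double", vals=np.random.uniform(0.5, 1.5,
...                                               g.num_edges()))  # per-edge coupling weights
>>> theta = [0.1, -0.1]                                         # q field values, shared by every node
>>> state = gt.GenPottsBPState(g, f=f, x=x, theta=theta)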
Notes
This implements the BP equations [mezard_information_2009] for a generalized Potts model given by
\[P(\boldsymbol s | \boldsymbol A, \boldsymbol x, \boldsymbol\theta) = \frac{\exp\left(\sum_{i<j}A_{ij}x_{ij}f_{s_i,s_j} + \sum_i\theta_{i,s_i}\right)} {Z(\boldsymbol A, \boldsymbol x, \boldsymbol\theta)}\]where \(Z(\boldsymbol A, \boldsymbol x, \boldsymbol\theta)\) is the partition function.
The BP equations consist in the Bethe approximation
\[\log Z(\boldsymbol A, \boldsymbol x, \boldsymbol\theta) = \sum_i\log Z_i - \sum_{i<j}A_{ij}\log Z_{ij}\]with \(Z_{ij}=Z_j/Z_{j\to i}=Z_i/Z_{i\to j}\), obtained from the message-passing equations
\[P_{i\to j}(s_i) = \frac{e^{\theta_{i,s_i}}}{Z_{i\to j}} \prod_{k\in \partial i\setminus j}\sum_{s_k=1}^{q}P_{k\to i}(s_k)e^{x_{ik}f_{s_i,s_k}},\]where \(Z_{i\to j}\) is a normalization constant. From these equations, the marginal node probabilities are similarly obtained:
\[P_i(s_i) = \frac{e^{\theta_{i,s_i}}}{Z_i} \prod_{j\in \partial i}\sum_{s_j=1}^{q}P_{j\to i}(s_j)e^{x_{ij}f_{s_i,s_j}}.\]
References
[mezard_information_2009] Marc Mézard and Andrea Montanari, “Information, physics, and computation”, Oxford University Press, 2009. https://web.stanford.edu/~montanar/RESEARCH/book.html
Examples
>>> g = gt.GraphView(gt.collection.data["polblogs"].copy(), directed=False)
>>> gt.remove_parallel_edges(g)
>>> g = gt.extract_largest_component(g, prune=True)
>>> state = gt.GenPottsBPState(g, f=array([[-1,  0,  1],
...                                        [ 0, -1,  1],
...                                        [ 1,  1, -1.25]])/20)
>>> s = state.sample()
>>> gt.graph_draw(g, g.vp.pos, vertex_fill_color=s,
...               output="bp-potts.svg")
<...>
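As a follow-up sketch (reusing the state and the sampled configuration s from the example above, and only the methods documented on this page), the Bethe log-partition function and the marginal probabilities discussed in the Notes can be queried as follows; the variable names on the left are illustrative.
>>> lZ = state.log_Z()                   # Bethe approximation of log Z(A, x, theta)
>>> lp = state.log_prob(s)               # joint log-probability of the sampled configuration
>>> mlp = state.marginal_log_prob(s)     # marginal log-probability of s, per marginal_log_prob() below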
Methods

converge([epsilon, max_niter, update_marginals])
Calls iterate() until delta falls below epsilon or the number of iterations exceeds max_niter.

copy()
Return a copy of the state.

energy(s)
Obtains the energy (Hamiltonian) of state s (a VertexPropertyMap).

iterate([niter, parallel, update_marginals])
Updates messages synchronously (or asynchronously if parallel=False), niter number of times.

log_Z()
Obtains the log-partition function from the current messages.

log_prob(s)
Obtains the log-probability of state s (a VertexPropertyMap).

marginal_log_prob(s)
Obtains the marginal log-probability of state s (a VertexPropertyMap).

sample([update_marginals, val_type])
Samples a state from the marginal distribution.

update_marginals()
Update the node marginals from the current messages.
- converge(epsilon=1e-08, max_niter=1000, update_marginals=True, **kwargs)#
Calls iterate() until delta falls below epsilon or the number of iterations exceeds max_niter.
If update_marginals=True, this function calls update_marginals() at the end.
The remaining keyword arguments are passed to iterate().
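A hedged usage sketch (assuming a state constructed as in the examples above); the assignment of the return value is illustrative, since this page documents what converge() does, not what it returns.
>>> delta = state.converge(epsilon=1e-10, max_niter=10000)   # tighter tolerance than the default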
- copy()#
Return a copy of the state.
- energy(s)#
Obtains the energy (Hamiltonian) of state s (a VertexPropertyMap).
If s is vector valued, it is assumed to correspond to multiple states, and the total energy sum is returned.
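A minimal sketch, assuming the state from the examples above, evaluating the Hamiltonian of a sampled configuration:
>>> s = state.sample()          # draw a configuration from the node marginals
>>> H = state.energy(s)         # Hamiltonian of that single configuration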
- iterate(niter=1, parallel=True, update_marginals=True)#
Updates messages synchronously (or asynchronously if parallel=False), niter number of times. This function returns the iteration delta of the last iteration.
If update_marginals=True, this function calls update_marginals() at the end.
Parallel implementation.
If enabled during compilation, this algorithm will run in parallel using OpenMP. See the parallel algorithms section for information about how to control several aspects of parallelization.
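The following sketch (again assuming the state from the examples above) runs the message updates by hand instead of calling converge(), stopping once the returned delta drops below a chosen threshold:
>>> delta, niter = 1.0, 0
>>> while delta > 1e-8 and niter < 1000:      # manual convergence loop
...     delta = state.iterate(niter=10)       # delta of the last of these 10 sweeps
...     niter += 10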
- log_Z()#
Obtains the log-partition function from the current messages.
- log_prob(s)#
Obtains the log-probability of state s (a VertexPropertyMap).
If s is vector valued, it is assumed to correspond to multiple states, and the total log-probability sum is returned.
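A short comparison sketch (assuming state, g, and numpy as np from the polblogs example above; s_rand is an assumed helper configuration): a configuration drawn from the marginals versus a uniformly random one.
>>> s = state.sample()
>>> lp_sampled = state.log_prob(s)                       # configuration drawn from the marginals
>>> s_rand = g.new_vp("int", vals=np.random.randint(0, 3,
...                                                 g.num_vertices()))
>>> lp_random = state.log_prob(s_rand)                   # uniformly random spins, for comparison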
- marginal_log_prob(s)#
Obtains the marginal log-probability of state s (a VertexPropertyMap).
If s is vector valued, it is assumed to correspond to multiple states, and the total marginal log-probability sum is returned.
- sample(update_marginals=True, val_type='int')#
Samples a state from the marginal distribution. This function returns a VertexPropertyMap of type given by val_type.
If update_marginals=True, this function calls update_marginals() before sampling.
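A sketch of the val_type parameter (assuming the state from the examples above): the same kind of draw can be stored as integers or as doubles, e.g. when the result is to be used directly as a continuous drawing parameter.
>>> s_int = state.sample()                    # default: integer-valued VertexPropertyMap
>>> s_dbl = state.sample(val_type="double")   # a new draw, stored as doubles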
- update_marginals()#
Update the node marginals from the current messages.