Entropy

kl_div_mvn

pyvbmc.stats.kl_div_mvn(mu1, sigma1, mu2, sigma2)[source]

Compute the analytical Kullback-Leibler divergence between two multivariate normal pdfs.

Parameters:
mu1 : np.ndarray

The k-dimensional mean vector of the first multivariate normal pdf.

sigma1 : np.ndarray

The covariance matrix of the first multivariate normal pdf.

mu2 : np.ndarray

The k-dimensional mean vector of the second multivariate normal pdf.

sigma2 : np.ndarray

The covariance matrix of the second multivariate normal pdf.

Returns:
kl_div : np.ndarray

The computed Kullback-Leibler divergence.
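The closed-form divergence for k-dimensional Gaussians \(\mathcal{N}_1(\mu_1, \Sigma_1)\) and \(\mathcal{N}_2(\mu_2, \Sigma_2)\) is \(D_{\mathrm{KL}}(\mathcal{N}_1 \,\|\, \mathcal{N}_2) = \frac{1}{2}\left[\operatorname{tr}(\Sigma_2^{-1}\Sigma_1) + (\mu_2-\mu_1)^{T}\Sigma_2^{-1}(\mu_2-\mu_1) - k + \ln\frac{\det\Sigma_2}{\det\Sigma_1}\right]\). The sketch below evaluates this one-directional expression with NumPy; the helper name is hypothetical, and the return shape of pyvbmc.stats.kl_div_mvn itself may differ.

```python
import numpy as np

def kl_mvn_sketch(mu1, sigma1, mu2, sigma2):
    """Closed-form KL(N(mu1, sigma1) || N(mu2, sigma2)).

    Illustrative sketch only, not pyvbmc's implementation.
    """
    k = mu1.size
    diff = mu2 - mu1
    sigma2_inv = np.linalg.inv(sigma2)
    # slogdet avoids overflow/underflow in the determinants for larger k.
    _, logdet1 = np.linalg.slogdet(sigma1)
    _, logdet2 = np.linalg.slogdet(sigma2)
    # 0.5 * [tr(S2^-1 S1) + d^T S2^-1 d - k + ln(det S2 / det S1)]
    return 0.5 * (
        np.trace(sigma2_inv @ sigma1)
        + diff @ sigma2_inv @ diff
        - k
        + logdet2
        - logdet1
    )
```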

entlb_vbmc

pyvbmc.entropy.entlb_vbmc(vp: VariationalPosterior, grad_flags: tuple = (True, True, True, True), jacobian_flag: bool = True)[source]

Entropy lower bound for the variational posterior, by Jensen's inequality.

Parameters:
vp : VariationalPosterior

An instance of the VariationalPosterior class.

grad_flags : tuple of bool, len(grad_flags)=4, optional

Whether to compute the gradients for [mu, sigma, lambda, w].

jacobian_flag : bool, optional

Whether variational parameters are transformed. The variational parameters and corresponding transformations are: sigma (log), lambda (log), w (softmax).

Returns:
H : float

Entropy lower bound of vp [1].

dH : np.ndarray

Gradient of entropy lower bound.

Raises:
NotImplementedError

Raised if the number of mixture components K exceeds BigK.

References

[1] Gershman, S. J., Hoffman, M. D., & Blei, D. M. (2012). Nonparametric variational inference. Proceedings of the 29th International Conference on Machine Learning, 235–242.
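For a mixture \(q(x) = \sum_{k} w_k \mathcal{N}(x; \mu_k, \Sigma_k)\), the Jensen bound of [1] reads \(H[q] \geq -\sum_{k} w_k \ln \sum_{j} w_j \mathcal{N}(\mu_k; \mu_j, \Sigma_k + \Sigma_j)\), using the fact that the convolution of two Gaussians is again Gaussian. The sketch below computes this bound (without gradients) from raw mixture parameters; the interface is hypothetical, since entlb_vbmc itself takes a VariationalPosterior and also returns dH.

```python
import numpy as np
from scipy.stats import multivariate_normal

def entropy_lower_bound_sketch(mu, sigma, w):
    """Jensen's-inequality entropy lower bound for a Gaussian mixture.

    mu : (K, D) means; sigma : (K, D, D) covariances; w : (K,) weights
    summing to 1. Illustrative sketch only, not pyvbmc's entlb_vbmc.
    """
    K = len(w)
    H = 0.0
    for k in range(K):
        # E_{N_k}[q] = sum_j w_j N(mu_k; mu_j, Sigma_k + Sigma_j)
        inner = sum(
            w[j] * multivariate_normal.pdf(
                mu[k], mean=mu[j], cov=sigma[k] + sigma[j]
            )
            for j in range(K)
        )
        H -= w[k] * np.log(inner)
    return H
```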

entmc_vbmc

pyvbmc.entropy.entmc_vbmc(vp: VariationalPosterior, Ns: int, grad_flags: tuple = (True, True, True, True), jacobian_flag: bool = True)[source]

Monte Carlo estimate of the entropy of the variational posterior.

Parameters:
vp : VariationalPosterior

An instance of the VariationalPosterior class.

Ns : int

Number of samples to draw. Ns > 0.

grad_flags : tuple of bool, len(grad_flags)=4, default=tuple([True] * 4)

Whether to compute the gradients for [mu, sigma, lambda, w].

jacobian_flag : bool, optional

Whether variational parameters are transformed. The variational parameters and corresponding transformations are: sigma (log), lambda (log), w (softmax).

Returns:
H : float

Estimated entropy of vp by the Monte Carlo method.

dH : np.ndarray

Estimated gradient of the entropy by the Monte Carlo method, \(dH = \left[\nabla_{\mu_1}^{T} H, \ldots, \nabla_{\mu_K}^{T} H, \nabla_{\sigma}^{T} H, \nabla_{\lambda}^{T} H, \nabla_{\omega}^{T} H\right]\).
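The basic estimator is \(H[q] \approx -\frac{1}{N_s}\sum_{s=1}^{N_s} \ln q(x_s)\) with \(x_s \sim q\). A minimal sketch for a plain Gaussian mixture follows; entmc_vbmc itself works on the VariationalPosterior parameterization and also returns the gradient dH, so the helper below is only an illustration under those simplifying assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def entropy_mc_sketch(mu, sigma, w, Ns, seed=None):
    """Monte Carlo entropy estimate -(1/Ns) * sum_s log q(x_s), x_s ~ q.

    mu : (K, D) means; sigma : (K, D, D) covariances; w : (K,) weights.
    Illustrative sketch only, not pyvbmc's entmc_vbmc.
    """
    rng = np.random.default_rng(seed)
    K = len(w)
    # Sample mixture components by weight, then draw from each component.
    ks = rng.choice(K, size=Ns, p=w)
    xs = np.array([rng.multivariate_normal(mu[k], sigma[k]) for k in ks])
    # Evaluate the mixture density at the sampled points.
    q = np.zeros(Ns)
    for j in range(K):
        q += w[j] * multivariate_normal.pdf(xs, mean=mu[j], cov=sigma[j])
    return -np.mean(np.log(q))
```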