## Tuesday, 18 April 2017

### How to calculate WBIC with Stan

In a previous post, I showed how to use Stan to calculate the Bayes marginal likelihood with the path sampling method. For large models, this method can be computationally expensive, and therefore approximations of the marginal likelihood have been developed. The most famous one is the Bayesian Information Criterion (BIC), an estimate of the Bayes free energy. Although the name suggests otherwise, the BIC is not a "true Bayesian" estimate, since it depends only on a point estimate, and not on the entire posterior distribution: ${\sf BIC} = -2 L(\hat{\theta}) + k \cdot \log(n)\,,$ where $$L$$ is the log-likelihood function, $$\hat{\theta}$$ is the maximum likelihood estimate of the model's parameters, $$k$$ is the dimension of $$\theta$$, and $$n$$ is the number of data points.
More importantly, the BIC does not work for singular models. As Watanabe points out, if a statistical model contains hierarchical layers or hidden variables then it is singular. Watanabe, therefore, proposes a generalization of the BIC, the WBIC, that works for singular models as well, and is truly Bayesian. This is not unlike AIC versus WAIC.
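To make the formula concrete, here is a minimal computation of the BIC for the biased-coin model treated below (the data $$N=50$$, $$K=34$$ are the ones used in the example further on; the model has $$k=1$$ free parameter):

```python
import numpy as np

## biased-coin model: N = 50 tosses, K = 34 heads (the data used below)
N, K = 50, 34

p_hat = K / N  # maximum likelihood estimate of the probability of heads
log_lik = K*np.log(p_hat) + (N - K)*np.log(1 - p_hat)
BIC = -2*log_lik + 1*np.log(N)  # one free parameter, so k = 1

print("BIC =", BIC)
```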

Suppose that you have some sort of MCMC algorithm to sample from the posterior distribution of your model, given your data. When you want to compute the WAIC, you will just have to store the likelihoods of each observation, given each sampled set of parameters. This can be done during an actual run of the MCMC algorithm. For the WBIC, however, you will need to do a separate run at a different "temperature". In fact, Watanabe gives the following formula: ${\sf WBIC} = -2\mathbb{E}_{\theta}^{\beta}[L(\theta)]\,,$ where $$\beta = 1/\log(n)$$, and the notation $$\mathbb{E}_{\theta}^{\beta}[f(\theta)]$$ expresses that the expectation of $$f$$ is taken with respect to the distribution with PDF $\theta \mapsto \frac{\exp(\beta L(\theta)) \pi(\theta)}{\int\exp(\beta L(\theta)) \pi(\theta)d\theta}$ which equals the posterior distribution when $$\beta = 1$$ and the prior distribution $$\pi$$ when $$\beta = 0$$.

Something very similar happens in the path sampling example, with the exception that the "path" is replaced by a single point. In a recent paper about another alternative to the BIC, the Singular BIC (sBIC), Drton and Plummer point out the similarity to the mean value theorem for integrals. Using the path sampling method, the log-marginal-likelihood is computed from the following integral $\log(P(D|M)) = \int_0^1 \mathbb{E}_{\theta}^{\beta}[L(\theta)]\, d\beta\,.$ Here, we write $$P(D|M)$$ for the marginal likelihood of model $$M$$, and data $$D$$. According to the mean value theorem $\exists \beta^{\ast} \in (0,1): \log(P(D|M)) = \mathbb{E}_{\theta}^{\beta^{\ast}}[L(\theta)]\,,$ and Watanabe argues that choosing $$1/\log(n)$$ for $$\beta^{\ast}$$ gives a good approximation. Notice that the mean value theorem does not provide a recipe to compute $$\beta^{\ast}$$, and that Watanabe uses rather advanced algebraic geometry to prove that his choice for $$\beta^{\ast}$$ is good; the mean value theorem only gives a minimal justification for Watanabe's result.
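For the conjugate coin model used below, both sides of Watanabe's approximation can be evaluated numerically, which makes a nice sanity check (a sketch, assuming the data $$N=50$$, $$K=34$$ from the example below, and using scipy's quadrature instead of MCMC):

```python
import numpy as np
from scipy import integrate, special

N, K = 50, 34  # tosses and heads, as in the example below

def L(p):
    ## log-likelihood of the biased-coin model
    return K*np.log(p) + (N - K)*np.log(1 - p)

beta = 1/np.log(N)  # Watanabe's choice for beta*

## E^beta[L] with a flat prior, computed by numerical integration
num = integrate.quad(lambda p: L(p)*np.exp(beta*L(p)), 1e-12, 1 - 1e-12)[0]
den = integrate.quad(lambda p: np.exp(beta*L(p)), 1e-12, 1 - 1e-12)[0]
wbic = -2*num/den

## the exact -2 log-marginal-likelihood (conjugate Beta computation)
exact = -2*special.betaln(K + 1, N - K + 1)

print("WBIC =", wbic, " exact =", exact)
```

The two numbers come out close to each other, which is exactly Watanabe's point: a single tempered expectation approximates the integral over the whole path.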

## Implementing WBIC in Stan

Let us start with a very simple example: the biased coin model. Suppose that we have $$N$$ coin flips, with outcomes $$x \in \{0,1\}^N$$ (heads = 1 and tails = 0), and let $$K$$ denote the number of heads. We compare the null model (the coin is fair) with the alternative model (the coin is biased) using $$\Delta {\sf WBIC}$$ and see if we get the "right" answer. Notice that for this simple example, we can compute the Bayes factor exactly.

In order to have a Stan model that can be used both for regular sampling and for estimating the WBIC, a sampling mode is passed to Stan together with the data, determining the desired behavior. In the transformed data block, a parameter watanabe_beta is then defined, which sets the sampling "temperature".
The actual model is defined in the transformed parameters block, so that log_likes can be used in both the model block and the generated quantities block. Instead of a sampling statement (x[n] ~ bernoulli(p)), we have to use the bernoulli_lpmf function, which allows us to scale the log-likelihood with watanabe_beta (i.e. target += watanabe_beta * sum(log_likes)).
// coin_model.stan

data {
    int<lower=3> N; // number of coin tosses
    int<lower=0, upper=1> x[N]; // outcomes (heads = 1, tails = 0)
    int<lower=0, upper=1> mode; // 0 = normal sampling, 1 = WBIC sampling
}
transformed data {
    real<lower=0, upper=1> watanabe_beta; // WBIC parameter
    if ( mode == 0 ) {
        watanabe_beta = 1.0;
    }
    else { // mode == 1
        watanabe_beta = 1.0/log(N);
    }
}
parameters {
    real<lower=0, upper=1> p; // probability of heads
}
transformed parameters {
    vector[N] log_likes;
    for ( n in 1:N ) {
        log_likes[n] = bernoulli_lpmf(x[n] | p);
    }
}
model {
    p ~ beta(1, 1);
    target += watanabe_beta * sum(log_likes);
}
generated quantities {
    real log_like;
    log_like = sum(log_likes);
}
Using the pystan module for Python (3), one could estimate $$p$$ from data $$x$$ as follows:
import pystan
import numpy as np
import scipy.stats as sts

## compile the model
sm = pystan.StanModel(file="coin_model.stan")

## create random data
N = 50
p_real = 0.75
x = sts.bernoulli.rvs(p_real, size=N)

## prepare data for Stan
data = {'N' : N, 'x' : x, 'mode' : 0}
pars = ['p', 'log_like']

## run the Stan model
fit = sm.sampling(data=data)
chain = fit.extract(permuted=True, pars=pars)
ps = chain['p']
print("p =", np.mean(ps), "+/-", np.std(ps))
which should give output similar to
p = 0.67187824104 +/- 0.0650602081466
In order to calculate the WBIC, the mode has to be set to 1:
## import modules, compile the Stan model, and create data as before...

## prepare data for Stan
data = {'N' : N, 'x' : x, 'mode' : 1} ## notice the mode
pars = ['p', 'log_like']

## run the Stan model
fit = sm.sampling(data=data)
chain = fit.extract(permuted=True, pars=pars)
WBIC = -2*np.mean(chain["log_like"])

print("WBIC =", WBIC)
which should result in something similar to
WBIC = 66.036854275
In order to test if this result is any good, we first compute the WBIC of the null model. This is easy since the null model does not have any parameters ${\sf WBIC} = -2\cdot L(x) = -2 \cdot \log(\tfrac12^{K}(1-\tfrac12)^{N-K}) = 2N\cdot\log(2)$ and hence, when $$N=50$$, we get $${\sf WBIC} \approx 69.3$$. Hence, we find positive evidence against the null model, since $$\Delta{\sf WBIC} \approx 3.3$$.
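The null-model value and the resulting evidence are quickly verified (a small check; the sampled WBIC of 66.04 is the value from the Stan run above):

```python
import numpy as np

N = 50
wbic_null = 2*N*np.log(2)  # the null model has no parameters
wbic_alt = 66.04           # the sampled WBIC from the Stan run above
delta = wbic_null - wbic_alt

print("WBIC null =", wbic_null, " delta =", delta)
```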

For the coin-toss model, we know the marginal likelihood explicitly, namely $P(x|M) = \int_0^1 p^K (1-p)^{N-K} dp = B(K+1, N-K+1)\,,$ where $$M$$ denotes the alternative (i.e. biased) model, and $$B$$ denotes the Beta function. In the above example, we had $$N = 50$$ and $$K = 34$$, and hence $$-2\cdot\log(P(x|M)) \approx 66.31$$, which is remarkably close to the $${\sf WBIC}$$ of the alternative model. Similar results hold for several values of $$p$$, as demonstrated by the following figure

## A non-trivial example

As the example above is non-singular, and WBIC is supposed to work also for singular models, I plan to present an example with a singular model here.

## Tuesday, 22 November 2016

### Using the "ones trick" to handle unbalanced missing data with JAGS

The so-called "ones trick" for JAGS and BUGS models allows the user to sample from distributions that are not in the standard list. Here I show another application for "unbalanced" missing data. More examples using the ones (or zeros) trick can be found in The BUGS Book, and in particular in Chapter 9.

Suppose that we want to find an association between a trait $$x$$ (possibly continuous) and another categorical trait $$k \in \{1, 2,\dots, K\}$$. We have $$N$$ observations $$x_i$$, however, some of the $$k_i$$ are missing. Whenever there is no information on the missing $$k_i$$ whatsoever, we can simply write the following JAGS model:
data {
    /* alpha is the parameter for the Dirichlet prior of p,
     * with p the estimated frequency of k
     */
    for ( j in 1:K ) {
        alpha[j] <- 1
    }
}
model {
    for ( i in 1:N ) {
        /* Some model for x, given k */
        x[i] ~ dnorm(xhat[k[i]], tau_x)
        /* Sample missing values of k, and inform
         * the distribution p with the known k's
         */
        k[i] ~ dcat(p)
    }
    /* Model the distribution p_k of the trait k */
    p ~ ddirch(alpha)

    /* The precision of the likelihood for x */
    tau_x ~ dgamma(0.01, 0.01)

    /* Give the xhat's a shared prior to regularize them */
    for ( j in 1:K ) {
        xhat[j] ~ dnorm(mu_xhat, tau_xhat)
    }
    mu_xhat ~ dnorm(0, 0.01)
    tau_xhat ~ dgamma(0.01, 0.01)
}
The data file must contain the vector $$k$$, with NA in the place of the missing values.

Suppose now that the missing $$k_i$$s are not completely unknown. Instead, suppose that we know that some of the $$k_i$$ must belong to a group $$g_i$$. The group $$g_i$$ is encoded as a binary vector, where $$g_{i,j} = 1$$ indicates that the trait value $$j$$ is in the group, and $$0$$ that it is not. In particular, when $$k_i$$ is known, then $g_{i,j} = \left\{ \begin{array}{ll} 1 & {\rm if\ } j = k_i \\ 0 & {\rm otherwise} \end{array} \right.$ If $$k_i$$ is completely unknown, then we just let $$g_{i,j} = 1$$ for each $$j$$.
data {
    for ( j in 1:K ) {
        alpha[j] <- 1
    }
    /* for the "ones trick",
     * we need a 1 for every observation
     */
    for ( i in 1:N ) {
        ones[i] <- 1
    }
}
model {
    for ( i in 1:N ) {
        x[i] ~ dnorm(xhat[k[i]], tau_x)
        /* sample missing k's using the group-vector g */
        k[i] ~ dcat(g[i,])
        /* in order to inform p correctly,
         * we need to multiply the posterior with p[k[i]],
         * which can be done by observing a 1,
         * assumed to be Bernoulli(p[k[i]]) distributed
         */
        ones[i] ~ dbern(p[k[i]])
    }
    p ~ ddirch(alpha)

    tau_x ~ dgamma(0.01, 0.01)
    for ( j in 1:K ) {
        xhat[j] ~ dnorm(mu_xhat, tau_xhat)
    }
    mu_xhat ~ dnorm(0, 0.01)
    tau_xhat ~ dgamma(0.01, 0.01)
}
Instead of using the "ones trick", it would have been clearer if we could simply write
k[i] ~ dcat(g[i,]) /* sampling statement */
k[i] ~ dcat(p) /* statement to inform p */
but then JAGS cannot tell that it should sample $$k_i$$ from $${\rm Categorical}(g_i)$$, and not from $${\rm Categorical}(p)$$.
Similarly, it might have been tempting to write
k[i] ~ dcat(g[i,] * p) /* point-wise vector multiplication */
This would correctly prefer the common $$k\,$$s in group $$g_i$$ over the rare $$k\,$$s, but the fact that $$k_i$$ must come from the group $$g_i$$ is not used to inform $$p$$.

As an example, I generated some random $$k_i$$s sampled from $${\rm Categorical}(p)$$ (I did not bother to sample any $$x_i$$s). I have taken $$K = 15$$, $$N = 2000$$, and $$3$$ randomly chosen groups. For $$1000$$ of the observations, I "forget" the actual $$k_i$$, and only know the group $$g_i$$. The goal is to recover $$p$$ and the missing $$k_i$$s.
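The data generation can be sketched as follows (a hypothetical reconstruction in numpy; the random seed, the Dirichlet-sampled true frequencies, and the equal-sized random groups are my choices, not necessarily those of the original experiment):

```python
import numpy as np

rng = np.random.default_rng(1)
K, N, G = 15, 2000, 3

p_true = rng.dirichlet(np.ones(K))       # true trait frequencies
k = rng.choice(K, size=N, p=p_true) + 1  # traits (1-based, as in JAGS)

## split the K trait values into G random groups
groups = np.array_split(rng.permutation(K) + 1, G)

## group indicator vectors g[i,]: a known k gives a single 1
g = np.zeros((N, K), dtype=int)
g[np.arange(N), k - 1] = 1

## "forget" k for 1000 random observations: the whole group becomes possible
missing = rng.choice(N, size=1000, replace=False)
for i in missing:
    grp = next(gr for gr in groups if k[i] in gr)
    g[i, grp - 1] = 1

## data for JAGS: k with NA (np.nan) in place of the missing values
k_obs = k.astype(float)
k_obs[missing] = np.nan
```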
Using a chain of length $$20000$$ (using the first $$10000$$ as a burn-in and $$1/10$$-th thinning) we get the following result:

## Thursday, 5 May 2016

### Computing Bayes factors with Stan and a path-sampling method

Stan is a great program for MCMC (or HMC, really). Vehtari et al. explain here how to use Stan to compute WAIC. For the Bayes factor, however, I have not found a method yet, and therefore I would like to demonstrate a possible method here. This will obviously not work well for every model; this is merely an experiment.
Recently, I was really intrigued by a paper by Gelman and Meng, where several methods for computing Bayes factors, or normalizing constants, are explained and connected (even the really bad ones). Here, I will use the path sampling method.
Let us implement a simple model in Stan, for which we can explicitly compute the marginal likelihood. Then we can try to estimate this marginal likelihood with the path-sampling method, and compare it with the exact value.
A very simple model is the 'fair coin' example (taken directly from Wikipedia). The Bayes factor between a null-model $$M_0$$ and a model that incorporates a bias, $$M_1$$, can be computed directly as the quotient of the marginal likelihoods, and moreover, the null model does not have any parameters.
Let $$n$$ denote the number of coin tosses, and $$k$$ the number of 'heads'. Hence, the data $$D$$ is given by the pair $$(n,k)$$. Given a prior $$\theta \sim {\rm Beta}(\alpha, \beta)$$ on the probability of throwing heads, we get the posterior $$\theta \sim {\rm Beta}(\alpha + k, \beta +n-k)$$, and we can compute the marginal likelihood exactly:
$p(D|M_1) = \int_0^1 p(D|\theta)\pi(\theta) d\theta$ $= {n \choose k}\int_0^1 \theta^{k+\alpha-1}(1-\theta)^{n-k+\beta-1} d\theta = {n \choose k} B(k+\alpha, n-k+\beta)\,,$
where $$B$$ denotes the Beta function. Meanwhile,
$p(D | M_0) = {n \choose k} \left(\tfrac12\right)^k\left(1-\tfrac12\right)^{n-k}\,.$
In this instance of path sampling (and closely following Gelman and Meng), we consider a family of (un-normalized) distributions $$Q_T$$, indexed by a parameter $$T \in [0,1]$$, such that $$Q_0(\theta) = \pi(\theta)$$ and $$Q_1(\theta) = p(D|\theta)\pi(\theta)$$. The normalizing constants are denoted by $$z(T)$$. Notice that $$z(0) = 1$$ and $$z(1) = p(D|M_1)$$.
Let $$\Theta = [0,1]$$ denote the support of $$\theta$$. Since
$\frac{d}{dT} \log z(T) = \frac{1}{z(T)} \frac{d}{dT} z(T) = \frac{1}{z(T)} \frac{d}{dT} \int_{\Theta} Q_T(\theta) d\theta\,,$
we get that
$\frac{d}{dT} \log z(T) = \int_{\Theta} \frac{1}{z(T)} \frac{d}{dT} Q_T(\theta) d\theta\,,$
and hence
$\frac{d}{dT} \log z(T) = \int_{\Theta} \frac{Q_T(\theta)}{z(T)} \frac{d}{dT} \log(Q_T(\theta)) d\theta\,.$
When we denote by $$\mathbb{E}_T$$ the expectation under $$P_T$$, we get that
$\frac{d}{dT} \log z(T) = \int_{\Theta} P_T(\theta) \frac{d}{dT} \log(Q_T(\theta)) d\theta = \mathbb{E}_T\left[ \frac{d}{dT} \log(Q_T(\theta)) \right]\,.$
We can think of $$U(\theta, T) := \frac{d}{dT} \log(Q_T(\theta))$$ as 'potential energy', and we get
$\int_0^1 \mathbb{E}_T\left[U(\theta, T)\right] dT = \int_0^1 \frac{d}{dT} \log(z(T)) dT$ $= \log(z(1)) - \log(z(0)) = \log(z(1)/z(0)) =: \lambda\,.$
Notice that in our case $$\lambda = \log(P(D|M_1))$$. We can interpret $$\int_0^1 \mathbb{E}_T\left[U(\theta, T)\right] dT$$ as the expectation of $$U$$ over the joint probability density of $$T$$ (with a uniform prior) and $$\theta$$:
$\lambda = \mathbb{E}\left[ U(\theta, T)\right]\,.$
This suggests an estimator $$\hat{\lambda}$$ for $$\lambda$$:
$\hat{\lambda} = \frac{1}{N} \sum_{i=1}^N U(\theta_i, T_i)\,,$
where $$(\theta_i, T_i)_i$$ is a sample from the joint distribution of $$\theta$$ and $$T$$. A way of creating such a sample, is first sampling $$T_i$$ from its marginal (uniform) distribution, and then sampling $$\theta_i$$ from $$P_{T_i}$$. This last step might require some Monte Carlo sampling.
First, we need to choose a one-parameter family of distributions. A simple choice is the geometric path:
$Q_T(\theta) = \pi(\theta)^{1-T} (\pi(\theta)p(D|\theta))^{T} = \pi(\theta) p(D|\theta)^T\,.$
In this case, the potential energy simply equals $$\frac{d}{dT}\log(Q_T(\theta)) = \log p(D|\theta)$$.
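Before involving Stan: for this conjugate model, the tempered distribution $$P_T$$ is just $${\rm Beta}(\alpha + Tk,\, \beta + T(n-k))$$, so the path sampling estimator $$\hat{\lambda}$$ can be tested in pure Python (a sketch; the seed and the data $$k=75$$ are arbitrary choices of mine):

```python
import numpy as np
from scipy import stats, special

rng = np.random.default_rng(42)
n, k, alpha, beta = 100, 75, 1.0, 1.0
N = 20000  # number of (T, theta) samples

## sample T uniformly, then theta from P_T = Beta(alpha + T k, beta + T(n-k))
Ts = rng.uniform(size=N)
thetas = rng.beta(alpha + Ts*k, beta + Ts*(n - k))

## potential energy U = log p(D|theta), including the binomial coefficient
U = stats.binom.logpmf(k, n, thetas)
lam_hat = U.mean()

## exact answer: log( C(n,k) B(k+alpha, n-k+beta) ), i.e. log(1/(n+1)) here
lam_exact = np.log(special.binom(n, k)) + special.betaln(k + alpha, n - k + beta)

print("estimated lambda =", lam_hat, " exact lambda =", lam_exact)
```

The estimator is unbiased but has a heavy left tail (samples with small $$T$$ can have very negative $$U$$), which is why the standard error shrinks slowly with $$N$$.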

## The Stan model

Using the pystan interface, we can implement the model as follows. The most important parts are the parameter $$T$$ (declared in the data section), and the "generated quantity" $$U$$.
## import some modules
import pystan
import scipy.stats as sts
import scipy.special as spl
import numpy as np
import multiprocessing

## define a Stan model
model = """
data {
    int<lower=0> n;
    int<lower=0, upper=n> k;
    real<lower=0> alpha;
    real<lower=0> beta;
    real<lower=0, upper=1> T; // parameter for path sampling
}
parameters {
    real<lower=0, upper=1> theta;
}
model {
    theta ~ beta(alpha, beta);
    increment_log_prob(T*binomial_log(k, n, theta));
    // replaces sampling statement "k ~ binomial(n, theta)"
}
generated quantities {
    real U;
    U <- binomial_log(k, n, theta);
}
"""

## let Stan translate this into C++, and compile...
sm = pystan.StanModel(model_code=model)

## A parallel method

We need to generate samples $$T_i$$ from $${\rm Uniform}(0,1)$$, and then, given $$T_i$$, we generate a sample $$\theta_i$$ from $$P_{T_i}$$. The simplest way is just to make a partition $$T_i = \tfrac{i}{N}$$ of $$[0,1]$$ and then, for each $$i=0,\dots,N$$, use the Stan model with $$T = T_i$$. Notice that for each $$i$$, we will generate multiple ($$K$$, say) samples from $$P_{T_i}$$. This method lends itself well to multiprocessing, as all $$N+1$$ Stan sessions can run in parallel.
## choose some parameters
n = 100 ## coin tosses
k = 75 ## number of heads (missing from the original listing; any value works)
alp = 1 ## determines prior for theta
bet = 1 ## determines prior for theta
K = 100 ## length of each chain
N = 1000 ## number of Ts

## a function that prepares a data dictionary,
## and then runs the Stan model
def runStanModel(T):
    coin_data = {
        'n' : n,
        'k' : k,
        'alpha' : alp,
        'beta' : bet,
        'T' : T
    }
    fit = sm.sampling(data=coin_data, iter=2*K,
                      warmup=K, chains=1)
    la = fit.extract(permuted=True)
    return la['U'] ## U is a "generated quantity"

## make a partition of [0,1]
Ts = np.linspace(0, 1, N+1)
## start a worker pool
pool = multiprocessing.Pool(4) ## 4 threads
## for each T in Ts, run the Stan model
Us = np.array(pool.map(runStanModel, Ts))
Let's have a look at the result. Notice that for $$\alpha=\beta=1$$, the marginal likelihood does not depend on $$k$$ as $P(D|M_1) = {n \choose k} \frac{\Gamma(k+1)\Gamma(n-k+1)}{\Gamma(n+2)} = {n \choose k} \frac{k! (n-k)!}{(n+1)!} = \frac{1}{n+1}$ We could take for $$\hat{\lambda}$$ the average of all $$(N+1)\cdot K$$ samples, but in my experience, the standard error is more realistic when I only take one sample per $$T_i$$.
## take one sample for each T
lamhat = np.mean(Us[:,-1])
## we can also compute a standard error!!
se_lamhat = sts.sem(Us[:,-1])

print("estimated lambda = %f +/- %f"%(lamhat, se_lamhat))
print("estimated p(D|M_1) = %f"%np.exp(lamhat))

exactMargLike = spl.beta(k+alp, n-k+bet) * spl.binom(n,k)
exactMargLoglike = np.log(exactMargLike)

print("exact lambda = %f"%exactMargLoglike)
print("exact p(D|M_1) = %f"%exactMargLike)
In my case, the result is
estimated lambda = -4.724850 +/- 0.340359
estimated p(D|M_1) = 0.008872
exact lambda = -4.615121
exact p(D|M_1) = 0.009901

## A serial method

Another method does not use parallel processing, but uses the fact that the distributions $$P_{T_i}$$ and $$P_{T_{i+1}}$$ are very similar when $$T_{i}-T_{i+1} = \frac{1}{N}$$ is small. When we have a sample from $$P_{T_i}$$, we can use it as the initial condition for the Stan run with $$T = T_{i+1}$$. We then only need very little burn-in (warm-up) time before we are actually sampling from $$P_{T_{i+1}}$$. We can specify the number of independent chains that Stan computes, and also separate initial parameters for each of the chains. Hence, we can take multiple samples $$P_{T_i}$$ as initial choices for the next chain. For this very simple model, this "serial" method is much slower than the parallel method, but my guess is that it could be a lot faster for more complicated models. I hope to prove this claim in a future post.
## choose some parameters
n = 100 ## coin tosses
k = 75 ## number of heads (missing from the original listing; any value works)
alp = 1 ## determines prior for theta
bet = 1 ## determines prior for theta
K = 100 ## size of the initial chain
N = 200 ## number of Ts

## initially, do a longer run with T=0
coin_data = {
    'n' : n,
    'k' : k,
    'alpha' : alp,
    'beta' : bet,
    'T' : 0
}
fit = sm.sampling(data=coin_data, iter=2*K,
                  warmup=K, chains=1)
la = fit.extract(permuted=True)

## instead of length K,
## now use a much shorter chain (of length L)
L = 10
chains = 4
Us = np.zeros(shape=(N+1,L*chains))
Ts = np.linspace(0, 1, N+1)

## now run the 'chain of chains'
for i, Ti in enumerate(Ts):
    coin_data['T'] = Ti ## take another T
    ## take some thetas from the previous sample
    thetas = np.random.choice(la["theta"], chains)
    initial_guesses = [{'theta' : theta} for theta in thetas]
    fit = sm.sampling(data=coin_data, iter=2*L, warmup=L,
                      chains=chains, init=initial_guesses)
    la = fit.extract(permuted=True)
    Us[i,:] = la['U']
Ok, let us have another look at the result. For this, I used the same code as above:
estimated lambda = -5.277354 +/- 1.120274
estimated p(D|M_1) = 0.005106
exact lambda = -4.615121
exact p(D|M_1) = 0.009901
As you can see, the estimate is less precise, but this is due to the fact that $$N=200$$ instead of $$1000$$.
I wrote and ran the above code fragments in a Jupyter notebook. As compiling and sampling can take a lot of time, such an interface can be very convenient. Please let me know if this was somehow useful for you, or if you have any questions, and also please tell me if I did something stupid...

## Monday, 2 May 2016

### A Sunset Colormap for Python

Room Z533 in the Kruyt building in Utrecht can have a wonderful view, especially at sunset. Not too long ago, I took the following photograph.

Having a love-hate relationship with colormaps, and in particular with choosing a good one, I instantly noticed the beauty of the gradient of the sky, and hence its potential as a colormap.

In Python, it is easy to load an image as a matrix, and then extract the RGB values along a line. To make things easier, I first cropped out the part of the sky I wanted:
Then, I had a look at this website.
Let's import the "slice of sky" in Python and have a look:
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import LinearSegmentedColormap
from PIL import Image

im = Image.open("/path/to/sunset.jpg")
pix = im.load() ## gives pixel-level access to the image

xs = range(im.size[1])
y = im.size[0]//2
rs = [pix[y,x][0] for x in xs]
gs = [pix[y,x][1] for x in xs]
bs = [pix[y,x][2] for x in xs]

fig = plt.figure(figsize=(5,2))
ax = fig.add_subplot(111)

ax.plot(xs, rs, color='red')
ax.plot(xs, gs, color='green')
ax.plot(xs, bs, color='blue')
ax.set_xlabel("pixel")
ax.set_ylabel('r/g/b value')

plt.savefig('rgb-plot.png', dpi=300, bbox_inches='tight')

This results in the following figure:

Now let's make the actual colormap.
Dp = 1 ## determines the number of pixels between "nodes"
xs = np.linspace(0, 1, im.size[1]//Dp + 1) ## points between 0 and 1
idxs = range(0, im.size[1], Dp) ## indices in the original picture matrix

redlist = [rs[idx]/255. for idx in idxs] + [rs[-1]/255., 0]
greenlist = [gs[idx]/255. for idx in idxs] + [gs[-1]/255., 0]
bluelist = [bs[idx]/255. for idx in idxs] + [bs[-1]/255., 0]

## LinearSegmentedColormap wants these weird triples,
## where some end points are ignored...

redtuples = [(x, redlist[i], redlist[i+1]) for i, x in enumerate(xs)]
greentuples = [(x, greenlist[i], greenlist[i+1]) for i, x in enumerate(xs)]
bluetuples = [(x, bluelist[i], bluelist[i+1]) for i, x in enumerate(xs)]

cdict = {'red' : redtuples, 'green' : greentuples, 'blue' : bluetuples}

cmap = LinearSegmentedColormap('tbz533', cdict) ## choose a name
plt.register_cmap(cmap=cmap)
## now we can use our new colormap!

OK. Let's make some 70s wallpaper to celebrate the new colormap:
xs = np.linspace(-5,5,1000)
ys = np.linspace(-5,5,1000)
zs = np.array([[np.sin(x)*np.cos(y) for x in xs] for y in ys])

fig = plt.figure(figsize=(5.5,5))
ax1 = fig.add_subplot(111)
C = ax1.pcolormesh(xs, ys, zs, cmap='tbz533')
fig.colorbar(C)

fig.savefig('my70swallpaper.png', dpi=200, bbox_inches='tight')

This results in the following figure:

## Friday, 6 November 2015

### ETA for C++

Simulations can take some time, and I'd like to know how long. This is easy, right? Yes, it is. I've done it lots of times, but every time I do, I curse myself for not reusing an old piece of code.
Most likely there is some standard, best way of doing this, but I haven't found it. Most recently, I did the following: I made a simple object, "EtaEstimator", that can be updated after every (costly) time step and asked for an estimated time of "arrival" at any point. Here's the header:
// eta.hpp
#include <ctime>
#include <cmath> // floor
#include <iostream>

class EtaEstimator {
public:
    EtaEstimator(int );
    // construction starts the clock. Pass the total number of steps
    void update();
    void print(std::ostream & ) const;
private:
    double ct, etl; // cumulative time, estimated time left
    int n, N; // steps taken, total number of steps
    clock_t tick; // time after update ((c) matlab)
    // statics...
    static const int secperday = 86400;
    static const int secperhour = 3600;
    static const int secperminute = 60;
};

std::ostream & operator<<(std::ostream & , const EtaEstimator & );
The implementation of the members is straightforward too:
// eta.cpp
#include "eta.hpp"

EtaEstimator::EtaEstimator(int N) :
        ct(0.0), etl(0.0), n(0), N(N) {
    tick = clock();
}

void EtaEstimator::update() {
    clock_t dt = clock() - tick;
    tick += dt;
    ct += (double(dt)/CLOCKS_PER_SEC); // prevent integer division
    // CLOCKS_PER_SEC is defined in ctime
    ++n;
    etl = (ct/n) * (N-n);
}

void EtaEstimator::print(std::ostream & os) const {
    double etlprime = etl;
    int days = floor(etlprime / secperday);
    etlprime -= days * secperday;
    int hours = floor(etlprime / secperhour);
    etlprime -= hours * secperhour;
    int minutes = floor(etlprime / secperminute);
    etlprime -= minutes * secperminute;
    int seconds = floor(etlprime);
    os << (days > 0 ? std::to_string(days) + " " : "")
       << hours << ":"
       << (minutes < 10 ? "0" : "") << minutes << ":"
       << (seconds < 10 ? "0" : "") << seconds;
}

std::ostream & operator<<(std::ostream & os,
        const EtaEstimator & eta) {
    eta.print(os);
    return os;
}
Typical usage of EtaEstimator would be the following:
#include <iostream>
#include "eta.hpp"

// about to do lots of work...
int N = 1000;
EtaEstimator eta(N);
for ( int n = 0; n < N; ++n ) {
    // do something very expensive
    eta.update();
    std::cout << "\rETA: " << eta << std::flush;
}
// ...
PS: std::to_string is a C++11 feature, and can be ignored by using something like
if ( days > 0 ) os << days << " "; // else nothing at all

## Monday, 26 October 2015

### Carnes's life span distribution

In a paper by Carnes et al., a simple parametric but realistic life span distribution is given, and here I show how you can sample from it. In addition, assuming a demographic equilibrium, the age of individuals will have a particular distribution. I will show what this distribution is, and again how to sample from it. Sampling ages instead of life spans might be useful for initializing simulations. I model epidemics, and I want my disease-free (a.k.a. virgin) population to have the 'right' age distribution.
The life span distribution has hazard $\lambda(a) = e^{u_1 a + v_1} + e^{u_2 a + v_2}\,.$ Typical parameters are given by $$u_1 = 0.1$$, $$v_1 = -10.5$$, $$u_2 = -0.4$$, and $$v_2 = -8$$, so that infants have a slightly increased hazard of dying, and after the age of 60, the hazard rapidly starts to grow, until it becomes exceedingly large around $$a = 100$$.
The survival function $$S(a) = {\rm Pr}(A > a)$$, where $$A$$ is the random variable denoting 'age at death' is given by $$S(a) = e^{-\Lambda(a)}$$, with $$\Lambda(a) := \int_0^a \lambda(a')da'$$ denoting the cumulative hazard. The cumulative hazard $$\Lambda$$ is easily calculated: $\Lambda(a) = \frac{e^{v_1}}{u_1}(e^{u_1 a}-1) + \frac{e^{v_2}}{u_2}(e^{u_2 a}-1)\,,$ but its inverse, or the inverse of the survival function is more difficult to calculate.
We need the inverse of $$S$$, because sampling random deviates typically involves uniformly sampling a number $$u\in [0,1)$$. The number $$S^{-1}(u)$$ is then the desired deviate.

In a future post, I will show how to use the GNU Scientific Library (GSL) to sample deviates from $$A$$.

Suppose that the birth rate $$b$$ in our population is constant. A PDE describing the population is given by $\frac{\partial N(a,t)}{\partial t} + \frac{\partial N(a,t)}{\partial a} = -\lambda(a)N(a,t)\,,$ where $$N(a,t)$$ is the number (density) of individuals of age $$a$$, alive at time $$t$$. The boundary condition (describing birth) is given by $N(0,t) = b\,.$ When we assume that the population is in a demographic equilibrium, the time derivative with respect to $$t$$ vanishes, and we get an ODE for the age distribution: $\frac{\partial N(a)}{\partial a} = -\lambda(a) N(a)\,,\quad N(0) = b\,,$ where we omitted the variable $$t$$. This equation can be solved: $\frac{1}{N}\frac{\partial N}{\partial a} = \frac{\partial \log(N)}{\partial a} = -\lambda(a) \implies N(a) = c \cdot e^{-\Lambda(a)}$ for some constant $$c$$. Since $$b = N(0) = c \cdot e^{-\Lambda(0)} = c$$, we have $N(a) = b\cdot e^{-\Lambda(a)}\,.$ Hence, we now know the PDF of the age distribution (up to a constant). Unfortunately, we can't get a closed-form formula for the CDF, let alone invert it. Therefore, when we want to sample, we need another trick. I've used a method from Numerical Recipes in C. It involves finding a dominating function of the PDF, with an easy, and easily invertible, primitive.
Let's just assume that $$b = 1$$, so that the objective PDF is $$N(a) = e^{-\Lambda(a)}$$. Please notice that $$N$$ is not a proper PDF, since, in general, it does not integrate to $$1$$. We need to find a simple, dominating function for $$N$$. A stepwise defined function might be a good choice, since the hazard is practically zero when the age is below $$50$$, and then increases rapidly. We first find a comparison cumulative hazard $$\tilde{\Lambda}$$ that is dominated by the actual cumulative hazard $$\Lambda$$. Many choices are possible, but one can take for instance $\tilde{\Lambda}(a) = \left\{ \begin{array}{ll} 0 & \mbox{if } a < a_0 \\ \lambda(a^{\ast})\cdot (a-a_0) & \mbox{otherwise} \end{array}\right.$ where $a_0 = a^{\ast} - \frac{\Lambda(a^{\ast})}{\lambda(a^{\ast})}\,.$ The constant $$a_0$$ is chosen such that the cumulative hazards $$\Lambda$$ and $$\tilde{\Lambda}$$ are tangent at $$a^{\ast}$$.
Since $$\Lambda$$ dominates $$\tilde{\Lambda}$$, the survival function $$\tilde{S}$$ defined by $$\tilde{S}(a) = e^{-\tilde{\Lambda}(a)}$$ dominates $$S$$. It is easy to find the inverse of $$a\mapsto\int_0^a \tilde{S}(a')da'$$, and hence we can easily sample random deviates from the age distribution corresponding to $$\tilde{\Lambda}$$. In order to sample from the desired age distribution, one can use a rejection method: (i) sample an age $$a$$ from the easy age distribution. (ii) compute the ratio $$r = S(a)/\tilde{S}(a) \leq 1$$. (iii) sample a deviate $$u \sim \mbox{Uniform}(0,1)$$. (iv) accept the age $$a$$ when $$u \leq r$$, and reject $$a$$ otherwise. Repeat these steps until an age $$a$$ was accepted.

The only thing we still need to do, is to find a good value for $$a^{\ast}$$. To make the sampler as efficient as possible, we want to minimize the probability that we have to reject the initially sampled age $$a$$ from $$S$$. This boils down to minimizing $\int_0^{\infty} \tilde{S}(a)da = a_0 + \frac{1}{\lambda(a^{\ast})} = a^{\ast} + \frac{1 - \Lambda(a^{\ast})}{\lambda(a^{\ast})}\,.$ The derivative of $$a^{\ast} \mapsto \int_0^{\infty} \tilde{S}(a)da$$ equals $\frac{(\Lambda(a^{\ast}) - 1)\lambda'(a^{\ast})}{\lambda(a^{\ast})^2}$ and thus, we find an extreme value for $$\int_0^{\infty} \tilde{S}(a)da$$ when $$\Lambda(a^{\ast}) = 1$$ or when $$\lambda'(a^{\ast}) = \frac{d\lambda}{da^{\ast}} = 0$$. The second condition can only correspond to a very small $$a^{\ast}$$, and therefore will not minimize $$\int_0^{\infty} \tilde{S}(a)da$$. Hence, we have to solve $$\Lambda(a^{\ast}) = 1$$. When we ignore the second term of $$\Lambda$$, we find that: $a^{\ast} = \log(1 + u_1 \exp(-v_1))/u_1$
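Putting the pieces together, the whole rejection sampler fits in a few lines of Python (a sketch using the parameter values quoted above; the dominating distribution is sampled by inverting the primitive of $$\tilde{S}$$, which is uniform up to $$a_0$$ and has an exponential tail beyond it):

```python
import numpy as np

## parameters of the Carnes hazard quoted above
u1, v1, u2, v2 = 0.1, -10.5, -0.4, -8.0

def hazard(a):
    return np.exp(u1*a + v1) + np.exp(u2*a + v2)

def cum_hazard(a):
    return (np.exp(v1)/u1)*(np.exp(u1*a) - 1) + (np.exp(v2)/u2)*(np.exp(u2*a) - 1)

## a* solves Lambda(a*) = 1, ignoring the second (negligible) term
astar = np.log(1 + u1*np.exp(-v1))/u1
lstar = hazard(astar)
a0 = astar - cum_hazard(astar)/lstar  # tangent-point construction

rng = np.random.default_rng(0)

def sample_age():
    """Rejection sampler for the equilibrium age density N(a) = exp(-Lambda(a))."""
    total_mass = a0 + 1/lstar  # integral of the dominating S-tilde
    while True:
        u = rng.uniform(0, total_mass)
        ## invert the primitive of S-tilde: uniform part, then exponential tail
        a = u if u < a0 else a0 - np.log(1 - lstar*(u - a0))/lstar
        tilde = 0.0 if a < a0 else lstar*(a - a0)  # Lambda-tilde(a)
        if rng.uniform() <= np.exp(tilde - cum_hazard(a)):  # ratio S(a)/S-tilde(a)
            return a

ages = np.array([sample_age() for _ in range(2000)])
print("mean age =", ages.mean())
```

Since $$\tilde{\Lambda} \leq \Lambda$$, the acceptance ratio $$e^{\tilde{\Lambda}(a)-\Lambda(a)}$$ is at most one, and because $$\Lambda$$ is tiny below $$a_0 \approx 72$$, the vast majority of proposals are accepted.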

## Friday, 18 September 2015

### Basic statistics using higher-order functions in C++

I do a lot of individual-based simulations, and often I want to calculate some population statistics 'on the fly'. I found that it can be helpful to use C/C++'s basic functional capabilities.

A higher-order function is a function that takes other functions as arguments (or returns a function as a result). In C/C++, functions can be passed to other functions, but the notation can be a bit awkward, as one needs to pass a pointer to the function. If the function is a method of some class, then the notation gets even more involved. You can make your life easier by using a typedef.

The following code snippet shows the header file of a simple example. The goal is to calculate some statistics on a population of "Critters". These Critters have multiple "traits", and the traits are accessed by methods of the class Critter of signature "double Critter::trait() const". Suppose that we want to calculate the mean of the trait "happiness". It's trivial to write a function that does this, but then we might also get interested in the average "wealth". The function that calculates the average wealth is identical to the function that calculates average happiness, except that happiness is replaced by wealth. We can get rid of this redundancy by defining the typedef Trait as a method of Critter, and writing a function that takes the average of an arbitrary Trait.

Let us now look at the source file. The most important things to notice are...
(1) whenever you pass a member "trait" (e.g. wealth) of Critter to a function, you should pass it as "&Critter::trait" (i.e. pass a pointer to the member).
(2) when you want to evaluate Trait x of Critter alice, you'll need to de-reference x, and call the resulting function: "(alice.*x)()"

If you want to play with this example, put the header in a file called "main.hpp", and the source in a file called "main.cpp", and compile main.cpp by typing "g++ main.cpp -o critters" in your terminal (I assume that you are using Linux and have the gcc compiler installed).