predict.reglogit {reglogit}    R Documentation
Sampling from the posterior predictive distribution of a regularized (multinomial) logistic regression fit, including entropy information for variability assessment
Usage

## S3 method for class 'reglogit'
predict(object, XX, burnin = round(0.1 * nrow(object$beta)), ...)

## S3 method for class 'regmlogit'
predict(object, XX, burnin = round(0.1 * dim(object$beta)[1]), ...)
Arguments

object: a "reglogit"- or "regmlogit"-class object containing the MCMC samples of the regression coefficients

XX: a matrix of predictive locations with the same number of columns as the design matrix used to fit object

burnin: a scalar positive integer giving the number of initial MCMC samples of object$beta to discard as burn-in; the default discards the first 10%

...: for compatibility with the generic predict method; currently unused
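For a sense of the calling pattern, here is a minimal, hedged sketch of the binary-response method; the simulated data, sample sizes, and settings are illustrative assumptions, not part of the package documentation.

## illustrative sketch only: the simulated data and settings are assumptions
library(reglogit)

X  <- matrix(rnorm(200*2), ncol=2)                         ## training inputs
y  <- as.numeric(X[,1] + X[,2] + rnorm(200, sd=0.25) > 0)  ## 0/1 class labels
XX <- matrix(rnorm(50*2), ncol=2)                          ## predictive locations

T   <- 1000                                  ## number of MCMC rounds
fit <- reglogit(T, y, X, normalize=TRUE)     ## regularized logistic regression fit

pred <- predict(fit, XX)   ## burnin defaults to 10% of the samples
str(pred)                  ## list with components p, mp, pc, ent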
Details

Applies the logit transformation (reglogit) or the multinomial logit transformation (regmlogit) to convert samples of the linear predictor at XX into samples from the posterior predictive probability distribution. The raw probabilities, their averages (posterior means), entropies, and posterior mean classes (arg max of the average probabilities) are returned.

The output is a list with components explained below. For predict.regmlogit everything (except the entropy) is expanded by one dimension into an array or matrix as appropriate.
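As a schematic illustration of the transformation just described (this is not the package's internal code), the binary case amounts to the following; the entropy line assumes one reading of the ent output, namely the entropy of the averaged probabilities.

## schematic sketch of the Details above, not the package internals;
## 'eta' stands in for samples of the linear predictor at 5 locations
eta <- matrix(rnorm(5*100), nrow=5)        ## 5 locations x 100 MCMC samples
p   <- 1/(1 + exp(-eta))                   ## inverse-logit transformation
mp  <- rowMeans(p)                         ## posterior mean probabilities
pc  <- round(mp)                           ## class labels by rounding
ent <- -(mp*log(mp) + (1-mp)*log(1-mp))    ## binary entropy of the mean probabilities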
Value

p: a matrix (an array for regmlogit) of samples from the posterior predictive probability distribution, one row for each row of XX

mp: a vector (a matrix for regmlogit) of average probabilities calculated over the rows of p (the posterior means)

pc: class labels formed by rounding (or arg max for regmlogit) the average probabilities in mp

ent: the posterior mean entropy given the probabilities in mp
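For the regmlogit method the components gain an extra (class) dimension, so a hedged way to inspect them, assuming pred holds the output of predict on a regmlogit fit like the one in the example below, is:

## assumes 'pred' comes from predict() on a regmlogit fit, as in the
## example below; the exact dimension ordering is not asserted here
dim(pred$p)        ## probability samples, expanded by a class dimension
dim(pred$mp)       ## posterior mean probabilities, one set per class
table(pred$pc)     ## arg-max class labels, one per row of XX
summary(pred$ent)  ## per-location entropy; larger values flag ambiguous inputs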
Author(s)

Robert B. Gramacy rbg@vt.edu
References

R.B. Gramacy and N.G. Polson (2012). "Simulation-based regularized logistic regression". Bayesian Analysis, 7(3), pp. 567-590; arXiv:1005.3430; http://arxiv.org/abs/1005.3430

C. Holmes and L. Held (2006). "Bayesian Auxiliary Variable Models for Binary and Multinomial Regression". Bayesian Analysis, 1(1), pp. 145-168.
Examples

## see reglogit for a full example of binary classification complete with
## sampling from the posterior predictive distribution; the example here
## is for polychotomous classification and prediction

## Not run: 
library(plgp)
x <- seq(-2, 2, length=40)
X <- expand.grid(x, x)
C <- exp2d.C(X)
xx <- seq(-2, 2, length=100)
XX <- expand.grid(xx, xx)
CC <- exp2d.C(XX)

## build cubically-expanded design matrix (with interactions)
Xe <- cbind(X, X[,1]^2, X[,2]^2, X[,1]*X[,2],
            X[,1]^3, X[,2]^3, X[,1]^2*X[,2], X[,2]^2*X[,1],
            (X[,1]*X[,2])^2)

## perform MCMC
T <- 1000
out <- regmlogit(T, C, Xe, nu=6, normalize=TRUE)

## create predictive (cubically-expanded) design matrix
XX <- as.matrix(XX)
XXe <- cbind(XX, XX[,1]^2, XX[,2]^2, XX[,1]*XX[,2],
             XX[,1]^3, XX[,2]^3, XX[,1]^2*XX[,2], XX[,2]^2*XX[,1],
             (XX[,1]*XX[,2])^2)

## predict class labels
p <- predict(out, XXe)

## make an image of the predictive surface
cols <- c(gray(0.85), gray(0.625), gray(0.4))
par(mfrow=c(1,3))
image(xx, xx, matrix(CC, ncol=length(xx)), col=cols, main="truth")
image(xx, xx, matrix(p$pc, ncol=length(xx)), col=cols, main="predicted")
image(xx, xx, matrix(p$ent, ncol=length(xx)), col=heat.colors(128), main="entropy")

## End(Not run)
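A natural follow-up, not part of the package example, is to score the predicted labels against the true ones on the test grid; this assumes the objects created in the example above and that exp2d.C and pc use the same label coding.

## hedged follow-up to the example above: out-of-sample error rate,
## assuming matching label codings between exp2d.C(XX) and p$pc
mean(p$pc != CC)   ## proportion of mis-predicted locations on the test grid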