A practitioner's guide to Bayesian estimation of discrete choice dynamic programming models

Ching, A., Imai, S., Ishihara, M. & Jain, N. (2012). A practitioner's guide to Bayesian estimation of discrete choice dynamic programming models. Quantitative Marketing and Economics, 10(2), pp. 151-196. doi: 10.1007/s11129-012-9119-6


Abstract

This paper provides a step-by-step guide to estimating infinite horizon discrete choice dynamic programming (DDP) models using a new Bayesian estimation algorithm (Imai et al., Econometrica 77:1865–1899, 2009a) (IJC). In the conventional nested fixed point algorithm, most of the information obtained in the past iterations remains unused in the current iteration. In contrast, the IJC algorithm extensively uses the computational results obtained from the past iterations to help solve the DDP model at the current iterated parameter values. Consequently, it has the potential to significantly alleviate the computational burden of estimating DDP models. To illustrate this new estimation method, we use a simple dynamic store choice model where stores offer “frequent-buyer” type rewards programs. Our Monte Carlo results demonstrate that the IJC method is able to recover the true parameter values of this model quite precisely. We also show that the IJC method could reduce the estimation time significantly when estimating DDP models with unobserved heterogeneity, especially when the discount factor is close to 1.

Item Type: Article
Additional Information: The final publication is available at Springer via http://dx.doi.org/10.1007/s11129-012-9119-6
Uncontrolled Keywords: Bayesian estimation, Dynamic programming, Discrete choice models, Rewards programs
Subjects: H Social Sciences > HB Economic Theory
Divisions: School of Social Sciences > Department of Economics
URI: http://openaccess.city.ac.uk/id/eprint/14216

