Introduction to Statistical Decision Theory
The Bayesian revolution in statistics—where statistics is integrated with decision making in areas such as management, public policy, engineering, and clinical medicine—is here to stay. Introduction to Statistical Decision Theory states the case and, in a self-contained, comprehensive way, shows how the approach is operational and relevant for real-world decision making under uncertainty.
Starting with an extensive account of the foundations of decision theory, the authors develop the intertwining concepts of subjective probability and utility. They then systematically and comprehensively examine the Bernoulli, Poisson, and Normal (univariate and multivariate) data-generating processes. For each process they consider how prior judgments about the uncertain parameters of the process are modified given the results of statistical sampling, and they investigate typical decision problems in which the main sources of uncertainty are the population parameters. They also discuss the value of sampling information and optimal sample sizes given sampling costs and the economics of the terminal decision problems.
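The prior-to-posterior revision described above can be sketched for the simplest case, the Bernoulli process with a conjugate Beta prior, as follows. This is an illustrative sketch only; the function names are invented for this example and are not taken from the book:

```python
# Illustrative sketch: conjugate Bayesian updating for a Bernoulli process.
# A prior judgment about the unknown success probability p is encoded as a
# Beta(alpha, beta) distribution; observing s successes in n trials yields
# the posterior Beta(alpha + s, beta + n - s).

def update_beta_prior(alpha, beta, successes, trials):
    """Return posterior Beta parameters after observing a Bernoulli sample."""
    return alpha + successes, beta + (trials - successes)

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution, the revised estimate of p."""
    return alpha / (alpha + beta)

# A vague prior (Beta(1, 1) is uniform on [0, 1]) revised by a sample
# of 7 successes in 10 trials.
a, b = update_beta_prior(1, 1, 7, 10)
print(posterior_mean(a, b))  # 8/12 ≈ 0.667
```

The same pattern—prior parameters plus sufficient statistics of the sample giving posterior parameters—carries over to the Poisson and Normal processes treated in the book, with their respective conjugate families.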
Unlike most introductory texts in statistics, Introduction to Statistical Decision Theory integrates statistical inference with decision making and discusses real-world actions involving economic payoffs and risks. After developing the rationale and demonstrating the power and relevance of the subjective decision approach, the text also examines and critiques the limitations of the objective, classical approach.