• DocumentCode
    985314
  • Title
    Information conversion, effective samples, and parameter size
  • Author
    Lin, Xiaodong; Pittman, Jennifer; Clarke, Bertrand
  • Author_Institution
    Dept. of Mathematical Sciences, University of Cincinnati, Cincinnati, OH
  • Volume
    53
  • Issue
    12
  • fYear
    2007
  • Firstpage
    4438
  • Lastpage
    4456
  • Abstract
    Consider the relative entropy between a posterior density for a parameter given a sample and a second posterior density for the same parameter, based on a different model and a different data set. Then the relative entropy can be minimized over the second sample to get a virtual sample that would make the second posterior as close as possible to the first in an informational sense. If the first posterior is based on a dependent dataset and the second posterior uses an independence model, the effective inferential power of the dependent sample is transferred into the independent sample by the optimization. Examples of this optimization are presented for models with nuisance parameters, finite mixture models, and models for correlated data. Our approach is also used to choose the effective parameter size in a Bayesian hierarchical model.
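    As a concrete illustration of the optimization the abstract describes, the sketch below minimizes the relative entropy between a posterior built from a dependent (equicorrelated normal) sample and a posterior built from a virtual i.i.d. sample. This is a minimal toy version only, not the paper's construction: the flat prior, the equicorrelation structure, the constants n, m, rho, and sigma2, and all function names are illustrative assumptions.

```python
# Minimal sketch: transfer the information in a dependent (equicorrelated
# normal) sample into a virtual i.i.d. sample by minimizing the relative
# entropy between the two posteriors for the common mean theta.
# Flat prior; all constants below are illustrative, not from the paper.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Observed dependent sample x ~ N(theta * 1, Sigma), equicorrelation rho.
n, m, rho, sigma2, theta = 20, 10, 0.5, 1.0, 2.0
Sigma = sigma2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))
x = rng.multivariate_normal(theta * np.ones(n), Sigma)

# Posterior 1 (flat prior, dependent model): N(mu1, v1), GLS-type moments.
Si = np.linalg.inv(Sigma)
one = np.ones(n)
v1 = 1.0 / (one @ Si @ one)
mu1 = float(v1 * (one @ Si @ x))

def kl_to_independence_posterior(y):
    """KL( N(mu1, v1) || N(mean(y), sigma2/m) ), where posterior 2 treats
    y as an i.i.d. N(theta, sigma2) sample of size m under the same flat
    prior; closed form for two univariate normals."""
    mu2, v2 = y.mean(), sigma2 / m
    return 0.5 * (v1 / v2 + (mu2 - mu1) ** 2 / v2 - 1.0 + np.log(v2 / v1))

# Optimize over the virtual sample y. Only its mean matters in this toy
# setup, so the optimizer drives mean(y) toward mu1, and the variance
# mismatch remains as the residual cost of assuming independence.
res = minimize(kl_to_independence_posterior, x0=np.zeros(m))
print("virtual sample mean:", res.x.mean(), "vs posterior mean mu1:", mu1)
print("residual relative entropy:", res.fun)
```

    In this toy setup the residual divergence vanishes exactly when sigma2/m equals v1, so sweeping m and keeping the value with the smallest residual gives something in the spirit of the paper's effective sample size for the dependent data.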
  • Keywords
    Bayes methods; data handling; entropy; Bayesian hierarchical model; dependent dataset; effective inferential power; effective samples; finite mixture models; independence model; information conversion; nuisance parameters; parameter size; posterior density; relative entropy; virtual sample; Bayesian methods; Bioinformatics; Conducting materials; Entropy; Pattern recognition; Random variables; Seminars; Statistical learning; Statistics; Asymptotic relative efficiency; number of parameters; sample size
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Information Theory
  • Publisher
    IEEE
  • ISSN
    0018-9448
  • Type
    jour
  • DOI
    10.1109/TIT.2007.909168
  • Filename
    4385775