Prior distributions for covariance matrices are a well-studied topic in Bayesian modeling. The most popular priors, such as the inverse Wishart, assume a completely unrestricted covariance matrix, an assumption that is violated in some structural equation models. In these models, the covariance matrices are “structured”: certain covariances in the matrix are fixed to 0 or constrained to be equal. While some prior distributions exist for this situation, their parameterizations are relatively complicated, making it difficult for researchers to meaningfully specify their priors. For example, previous approaches include transforming the covariance matrix to spherical coordinates or placing prior distributions on partial correlations.
In this project, we explore the “naive” way of placing priors on structured covariance matrices: specifying prior distributions for standard deviations separately from correlations, with a univariate distribution placed on each free parameter. While these priors provide interpretable parameterizations and allow for varying degrees of informativeness, they are deceptive because they permit covariance matrices that are not positive definite. This means that if we restrict these priors to positive definite covariance matrices, the resulting priors are more informative than we might naively expect. We discuss a method for generating only positive definite covariance matrices from these naive prior distributions, study the amount of information actually implied by the priors, and provide results on the calibration of the resulting MCMC algorithms. We seek to answer the question, “What exactly do you get when placing naive priors on covariance matrices?”
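The basic difficulty can be illustrated with a small rejection-sampling sketch. The snippet below draws standard deviations and free correlations from univariate priors, fixes the remaining correlations to 0, and keeps only positive definite draws; the specific prior choices (half-normal standard deviations, uniform(-1, 1) correlations) and the function name are illustrative assumptions, not the parameterization studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_structured_cov(free_corr_idx, dim, max_tries=10_000):
    """Rejection sampler for a 'naive' prior on a structured covariance
    matrix.  Priors here are illustrative: half-normal(0, 1) standard
    deviations and uniform(-1, 1) on each free correlation; all other
    off-diagonal correlations are fixed to 0."""
    for _ in range(max_tries):
        sd = np.abs(rng.normal(0.0, 1.0, size=dim))     # half-normal SDs
        R = np.eye(dim)
        for i, j in free_corr_idx:                      # free correlations
            R[i, j] = R[j, i] = rng.uniform(-1.0, 1.0)
        Sigma = np.outer(sd, sd) * R                    # SDs + correlations
        # Keep the draw only if it is positive definite
        # (all eigenvalues strictly positive).
        if np.linalg.eigvalsh(Sigma).min() > 0:
            return Sigma
    raise RuntimeError("no positive definite draw found")

# 4x4 covariance matrix with three free correlations; the rest fixed to 0.
Sigma = draw_structured_cov([(0, 1), (1, 2), (2, 3)], dim=4)
```

The fraction of naive draws rejected by the positive definiteness check is one way to see that the effective (truncated) prior differs from the univariate marginals a researcher intended to specify.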