Peer review is an integral part of the scientific enterprise. Grant peer review, for example, allocates billions of dollars of research funding, makes decisions that can make or break a scientific career, and consumes substantial resources from applicants, reviewers, and funding agencies. With so much at stake, it is natural to ask about the quality of the peer review assessments themselves. Can peer review quality be summarized in a single measure such as inter-rater reliability? Does a rubric with scored criteria guarantee that the review procedure is fair to all? How far are members of a review panel from consensus, and how much peer pressure do they experience from other panelists? This talk will describe several specific examples that illustrate these open questions about the characteristics and limits of peer review. We consider peer review settings in which reviewers assess the quality of proposed research by assigning numeric scores to applications. Understanding the characteristics and limits of peer review assessments can help scientific communities and funding agencies evaluate whether and how peer review should be used to make funding decisions.