Annual Meeting

Proposal Review Criteria

Review panels whose members have experience in the relevant topic areas will review individual submissions, coordinated sessions, and organized discussions. Submitting authors will have an opportunity to suggest additional keywords as appropriate; these keywords are neither mutually exclusive nor exhaustive, but they improve the likelihood that each proposal is reviewed by raters with expertise in its topic area.

Individual Presentations, Coordinated Sessions, and Organized Discussions

For all proposals except demonstrations, review ratings will be based on the degree to which:

  • The research offers a novel and well-articulated contribution to measurement theory and/or practice.
  • There is a sound conceptual basis articulating the measurement challenge to be addressed.
  • The choice of research methods supports the desired conclusions.
  • There is evidence that the work is well-defined in scope and will be completed by April 2021.

Demonstrations

Demonstrations at the NCME Annual Conference are distinct from traditional research studies and will be evaluated according to similar, but not identical, criteria. First, it is important to clarify what constitutes a sufficiently original product.

Minimum Standards for Originality

The innovation addresses the stated problem in a unique and novel way; it may build on prior research, but it should not simply be a newer version of an existing tool. A demonstration author need not have invented the subject of the presentation (it may be, for example, a website with resources for teaching). However, for such a resource to count as “something new,” it should be familiar to few measurement professionals, and for the author to take any credit, the author must be expert in its use. This is what separates a demonstration from a recommendation.

Review Criteria

The dimensions below are intended to be compensatory, promoting inclusivity through multiple paths rather than exclusivity through multiple checkpoints. For example, an innovation that resolves a major, growing concern (e.g., baked-in bias in human and automated scoring) would probably be complex, but high utility ratings would more than make up for low simplicity ratings (a worked illustration follows the list below).

  • Utility. The combined depth and breadth of the problem the innovation addresses. High-utility innovations solve seemingly intractable problems that carry severe consequences and with which nearly everyone struggles.
  • Inventiveness and artistry. We anticipate most proposed demonstrations will at least consider both of these dimensions but will not pursue both with equal vigor. Ratings will reflect the intended emphasis of the innovation (e.g., a new tool for choosing cut scores probably wouldn’t be evaluated on its artistry, but for a new score reporting technique, artistry probably matters).
    • Inventiveness: The innovation addresses the stated problem with such ingenuity that it provides not just an answer or a workaround, but a new and productive way of conceptualizing the problem. The innovation introduces new technology to the profession, either with unfamiliar methods and equipment or by using familiar technology in unexpected, nonroutine ways.
    • Artistry: The innovation achieves form as well as function, making priorities of style, craftsmanship, wit, and aesthetic appeal.
  • Simplicity. The innovation provides an elegant solution to the stated problem. Each component of the innovation takes the shortest path to the objective, accomplishing everything it has to and nothing it does not.
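
As a purely illustrative sketch of what “compensatory” means here (the panels’ actual weighting scheme is not specified, so the equal weights and the rating scale below are assumptions), an overall rating might be computed as an average of the dimension ratings:

  S = (r_utility + r_inventiveness/artistry + r_simplicity) / 3

On a 1–5 scale, ratings of 5, 2, and 2 yield S = 3.0: the high utility rating offsets the low simplicity rating, whereas a conjunctive (checkpoint) rule requiring at least 3 on every dimension would screen out the same proposal.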