Individual Paper Presentations, Coordinated Paper Sessions, and Organized Discussions
For all proposals, review ratings will be based on the degree to which:
- The research offers a novel and well-articulated contribution to measurement theory and/or practice.
- The research methods are well articulated and appropriate.
- There is evidence that the work is well-defined in scope and will be completed by March 2023.
- The proposal addresses a topic likely to be of interest to NCME members.
Submitting authors will be asked to identify the topic areas most relevant to their proposed work by answering the following three questions. They will have an opportunity to select recommended keywords where applicable. These keywords are neither mutually exclusive nor exhaustive; they serve to improve the likelihood that each paper is matched with reviewers who have relevant expertise.
- To which assessment or testing setting does your research apply?
- Higher Education
- PreK-12 Education
- Credentialing/Licensure
- College Admissions
- Language Proficiency Testing
- Assessment of Students with Disabilities
- International Assessment
- Other
- What connections does your research have to policies, practices, and current events?
- Assessment Design
- Validity and/or Validation
- Reliability
- Fairness and Equity
- Score Comparability
- Educational Accountability
- Assessment Delivery/Administration
- Assessing Noncognitive Skills
- Test Security
- Score Reporting
- Classroom Assessment
- Impact of COVID-19
- Impact of Test Optional/Test Blind Policies
- Remote Testing
- Other
- Which of the following topics or methodologies are central to your research?
- Classical Test Theory
- Generalizability Theory
- Item Response Theory
- Cognitive Diagnostic Models/Diagnostic Classification Models
- Regression Modeling (Application of General or Generalized Linear Models)
- Structural Equation Modeling
- Mixed Models (e.g., Multilevel Models)
- Bayesian Techniques
- Applying Artificial Neural Networks (e.g., AI Scoring)
- Growth Models/Longitudinal Analysis
- Natural Language Processing
- Data Mining Techniques
- Qualitative Analysis (Interviews, Case Studies, Ethnography)
- Design Innovation
- Conceptual/Historical/Philosophical Issues Related to Educational Measurement
- Mixed Methods Evaluations
- Computer Adaptive Testing
- Analysis of Process Data
- Performance Levels and Standard Setting
- Alignment Studies
- Scaling, Linking, and Equating
- Testing Invariance and Differential Item Functioning
- Other
Innovation Demonstrations
Innovation Demonstrations at the NCME Annual Meeting are distinct from traditional research studies and will be evaluated according to criteria that are similar to, but not identical to, those used for research proposals.
The innovation should address the stated problem in a novel way; it may build on prior research, but it should not simply be a newer version of an existing tool with minor fixes and updates. A demonstration author need not have invented the subject of the presentation (it may, for example, be a website with resources for teaching). However, for such a resource to count as “something new,” it should be familiar to few measurement professionals, and the author must be an expert in its use. This expertise is what separates a demonstration from a recommendation.
The following criteria will be used for the evaluation of innovation demonstration proposals.
- The proposed demonstration offers a novel contribution to the measurement community.
- The proposed demonstration offers an elegant/creative/appealing solution to a well-stated problem.
- The proposed demonstration offers a product or solution that participants can readily use.
- The proposed demonstration is expected to be of reasonably high interest to NCME members.