American Board of Internal Medicine
Summer 2021 Psychometric Internship Announcement
The ABIM is pleased to announce the return of its summer internship in psychometrics.
The ABIM’s psychometric internship is an eight-week summer program running from Monday, June 14th to Friday, August 6th in Philadelphia, PA*. During the program, the intern will take primary ownership of an applied psychometric research project under the guidance of one of the ABIM’s measurement scientists. The intern will also have opportunities to assist psychometric staff on other research projects and to learn about operational processes (e.g., item analysis, IRT calibration, equating) within the context of medical certification.
Qualifications:
- Doctoral student in an educational measurement (or related field) program with at least two years of coursework completed by the start of the internship
- Excellent communication skills
- Interest in certification testing
- Eligible to be legally employed in the United States
- Preference will be given to applicants who have experience with item response theory

Compensation: The ABIM provides a total of $10,000 for the eight-week internship program, comprising an $8,000 stipend and a $2,000 housing allowance†.

* If necessary for health and safety reasons, the ABIM summer internship will be conducted remotely. The ABIM will communicate any such decision to candidates in the spring.
† If the internship is conducted remotely, interns will not receive the housing allowance.
For their primary research project, the intern should expect to perform all stages of the research process, from literature review to discussion and dissemination of results. At the conclusion of the program, the intern will be expected to share their results in a brief presentation to an audience of psychometric staff. The intern will also be encouraged to submit their summer project(s) for presentation at a professional conference and/or for publication. The intern will work with their mentor to select a project appropriate to their experience level and interests. Examples of previously completed internship projects appear below.
Please submit your curriculum vitae and a letter of interest to Michele Johnson, Research Program Manager (email@example.com), by Monday, February 1st, 2021.
Examples of Previously Completed Internship Projects
Please note that these are examples of previous ABIM interns’ projects. The summer 2021 internship project will be determined jointly by the mentor and the summer intern.
- Anchor Item Replacement in the Presence of Consequential Item Parameter Drift. The purpose of this project was to develop evidence-based recommendations for replacing anchor items when a significant number of them are flagged for exhibiting item parameter drift. The intern conducted a simulation study that investigated different item replacement strategies and examined the effects of each strategy on outcomes such as pass/fail classification accuracy and the RMSD of theta estimates.
Chang, K., & Rewley, K. Currently under review.
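The RMSD outcome mentioned in the first project can be illustrated with a minimal Python sketch. The data below are synthetic stand-ins, not the study's actual simulation design:

```python
import numpy as np

# Illustrative sketch only: compute the RMSD between simulated "true" examinee
# abilities (thetas) and noisy estimates of them, one of the outcome measures
# named above. Values are synthetic, chosen just to show the computation.
rng = np.random.default_rng(0)
true_theta = rng.normal(0.0, 1.0, size=1000)              # generating abilities
est_theta = true_theta + rng.normal(0.0, 0.3, size=1000)  # estimates with error

rmsd = np.sqrt(np.mean((est_theta - true_theta) ** 2))
print(f"RMSD of thetas: {rmsd:.3f}")
```

In a replacement-strategy simulation, this statistic would be computed once per condition and compared across strategies.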
- Evaluating Use of an Online Open-Book Resource in a High-Stakes Credentialing Exam. This project examined item and examinee characteristics associated with the use of an open-book resource during a high-stakes medical certification exam. Using exam process data and a generalized estimating equations modeling framework, the intern examined use of the open-book resource and how it might affect examinees’ test-taking experience and performance.
Myers, A., & Bashkov, B. Paper presented at NCME 2020.
- Investigating the Impact of Parameter Instability on IRT Proficiency Estimation. This project examined how poorly estimated item parameters affect different proficiency estimators. The intern conducted a simulation study comparing how different levels of parameter instability affect Bayesian versus non-Bayesian estimators, as well as pattern-scoring versus summed-score estimators.
McGrath, K., & Smiley, W. Paper presented at NCME 2019.
- Using Data Visualization to Explore Test Speededness in Certification Exams. This project examined different ways to determine whether a test is speeded. The intern conducted a thorough literature review of methods used to detect and quantify test speededness, then used data visualization, a nonparametric approach that imposes no distributional assumptions, to examine operational test data for speededness. This proved to be a viable approach to assessing the impact of examination timing.
Sullivan, M., & Bashkov, B. Paper presented at TIME 2017.
- Automatic Flagging of Items for Key Validation. This project developed an automatic method for determining whether there is a problem with an item’s key. The intern collected data from psychometricians regarding which items required key validation and used logistic regression to mimic that professional judgment, automatically flagging problematic items.
Sahin, F., & Clauser, J. Paper presented at NCME 2016.
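The flagging approach in the last project can be sketched very roughly as a logistic regression over classical item statistics. The features, thresholds, and synthetic "needs review" labels below are hypothetical stand-ins; the actual project trained on judgments collected from ABIM psychometricians:

```python
import numpy as np

# Hypothetical sketch of logistic-regression item flagging. The features
# (p-value, point-biserial) and the synthetic labels are invented for
# illustration only.
rng = np.random.default_rng(1)
n_items = 500
p_value = rng.uniform(0.2, 0.95, n_items)      # proportion answering correctly
pt_biserial = rng.uniform(-0.2, 0.6, n_items)  # item-total correlation
# Stand-in rule: poorly discriminating items tend to need key review.
needs_review = (pt_biserial < 0.1).astype(float)

X = np.column_stack([np.ones(n_items), p_value, pt_biserial])
w = np.zeros(3)
for _ in range(2000):                          # gradient descent on log loss
    prob = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (prob - needs_review) / n_items

flagged = 1.0 / (1.0 + np.exp(-X @ w)) > 0.5   # items to send for key review
print(f"{int(flagged.sum())} of {n_items} items flagged")
```

A fitted model like this would route only high-probability items to human review, rather than requiring psychometricians to inspect every item.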