This event is being organized by the NCME Artificial Intelligence in Measurement and Education (AIME) SIG.
Generative AI and off-the-shelf large language models (LLMs) offer transformative possibilities for enhancing test security. This presentation explores the potential of AI to prevent cheating by enabling the creation of randomly parallel tests. We will share initial explorations at Caveon, highlighting research focused on leveraging AI for test security purposes. Specifically, we will discuss methods for generating a large volume of diverse items and domain templates, which help randomize tests and mitigate the risk of pre-knowledge. A key case study will showcase the development of approximately 1,500 items for a client through a semi-automated process. Additionally, we will explore the use of LLMs to facilitate the programming of SmartItems and examine prompt engineering techniques for binary response items. This presentation provides a comprehensive overview of our research, advocating for the use of AI not just to expedite existing processes, but to enable more profound innovations in test development practices.
National Council on Measurement in Education