Securing Tests with AI: Initial Explorations in Item Generation

This event is organized by the NCME Artificial Intelligence in Measurement and Education (AIME) SIG.

Generative AI and off-the-shelf large language models (LLMs) offer transformative possibilities for enhancing test security. This presentation explores the potential of AI to prevent cheating by enabling the creation of randomly parallel tests. We will share initial explorations at Caveon, highlighting research focused on leveraging AI for test security. Specifically, we will discuss methods for generating a large volume of diverse items and domain templates, which help randomize tests and mitigate the risk of pre-knowledge. A key case study will showcase the development of approximately 1,500 items for a client through a semi-automated process. We will also explore the use of LLMs to facilitate the programming of SmartItems and examine prompt engineering techniques for binary response items. This presentation provides a comprehensive overview of our research, advocating for the use of AI not just to expedite existing processes, but to enable more profound innovations in test development practices.

Presenter:
  • Sergio Araneda, Caveon
When: Aug 14, 2024, 04:00 PM to 05:00 PM (ET)

Location

Online Instructions:
URL: http://us02web.zoom.us/j/81728443502?pwd=aGtPb0x0NGhTYVFPYzl2L2xaZEdGUT09
Meeting ID: 817 2844 3502
Passcode: 649132