
Physics Assessment Generation Through Pattern Matching and Large Language Models

EasyChair Preprint 14739, version 1

Versions: 12
6 pages · Date: September 6, 2024

Abstract

Question generation has long been an active area of research in Natural Language Processing (NLP), particularly for educational applications. The need has become even more pressing in an evolving educational landscape where online assessments are increasingly common. Our research focuses on generating physics assessments because of the unique challenge they pose: producing both textual and numerical content. This paper presents an approach to automated physics assessment generation that integrates pattern-matching techniques with large language models (LLMs), namely Pegasus, T5, ChatGPT-3.5 Turbo, and Mistral 7B. The proposed method involves two main processes: generating variable values through pattern matching with regular expressions, and paraphrasing the generated assessment questions with LLMs to ensure syntactic and semantic diversity. The generated paraphrases are then evaluated using automatic metrics (BLEU, METEOR, ROUGE, and ParaScore) and human assessment. The results indicate that the larger-parameter LLMs in this study, ChatGPT-3.5 Turbo and Mistral 7B, excel at generating high-quality paraphrases that are both syntactically correct and contextually meaningful. Both models achieved perfect human evaluation scores (3.000), compared with Pegasus (1.705) and T5 (1.529), and they also received higher ParaScore values, with ChatGPT-3.5 Turbo scoring 0.803 and Mistral 7B scoring 0.788, outperforming Pegasus (0.768) and T5 (0.760). The results further highlight the limitations of traditional n-gram-based evaluation metrics and the potential of ParaScore as a more representative measure. This research contributes to the development of more reliable and varied question banks, helping educators create personalized and cheat-resistant assessments.

Keyphrases: Automated Question Generation, aided exam question generation, anti-cheating software tool, large language models, paraphrase generation, paraphrasing, pattern matching, physics assessment generation, quality of paraphrases, question generation, regular expressions

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:14739,
  author    = {Marchotridyo and Fariska Zakhralativa Ruskanda},
  title     = {Physics Assessment Generation Through Pattern Matching and Large Language Models},
  howpublished = {EasyChair Preprint 14739},
  year      = {EasyChair, 2024}}