UR-116 Enhancing Engineering Education Through LLM-Driven Adaptive Quiz Generation

Location

https://www.kennesaw.edu/ccse/events/computing-showcase/sp24-cday-program.php

Document Type

Event

Start Date

April 25, 2024, 4:00 PM

Description

This study aims to develop an Artificial Intelligence (AI) quiz generation system for engineering students to enhance personalized learning. In the rapidly evolving field of educational technology, the emergence of AI, and more specifically of Large Language Models (LLMs) such as GPT-4, Llama, Claude, and Gemini, has marked a significant advancement. Our literature review employs a systematic approach, analyzing peer-reviewed articles, conference papers, and authoritative reports to uncover trends and challenges in AI-driven quiz generation. The notable gap identified is the lack of LLM-based quiz generation methods tailored to engineering education that incorporate interactive and adaptive learning features to enhance student engagement and comprehension. This study examines the application of an OpenAI LLM with a Retrieval-Augmented Generation (RAG) system to create personalized quiz questions for engineering education, focusing on a novel methodology that enhances learning experiences through dynamic, adaptive quizzes and tutorials, particularly targeting the development of mathematical reasoning skills in visual contexts. The proposed methodology leverages the MathVista dataset, comprising 6,141 examples, to enhance the capabilities of the OpenAI LLM. The RAG system, populated with this dataset, serves as a reference context for generating more relevant and accurate quiz questions. Prompt engineering techniques guide the LLM in creating detailed multiple-choice questions (MCQs) focused on visual-mathematical reasoning challenges. The quizzes adapt to varying levels of student performance, incorporating feedback loops that customize future quizzes based on student responses. We evaluated the pipeline's effectiveness using metrics such as accuracy, relevance, and adaptability; the results indicated strong performance in generating accurate questions with minimal hallucinations.
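As a rough illustration of the pipeline the abstract describes, the Python sketch below wires MathVista retrieval into prompt-engineered MCQ generation and a simple difficulty feedback loop. It is a minimal sketch under stated assumptions, not the authors' implementation: the dataset identifier, model choices, score thresholds, and every function name here (retrieve, generate_mcq, adjust_difficulty) are illustrative.

# Hypothetical sketch of a RAG-based adaptive quiz pipeline.
# Assumes the OpenAI Python SDK (v1) and the Hugging Face release of
# MathVista ("AI4Math/MathVista"); all names and thresholds are
# illustrative choices, not the study's actual code.
import numpy as np
from openai import OpenAI
from datasets import load_dataset

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Populate the retrieval store with MathVista question text.
corpus = [ex["question"]
          for ex in load_dataset("AI4Math/MathVista", split="testmini")]

def embed(texts):
    """Embed a batch of strings with an OpenAI embedding model."""
    resp = client.embeddings.create(model="text-embedding-3-small",
                                    input=texts)
    return np.array([d.embedding for d in resp.data])

corpus_vecs = embed(corpus)

def retrieve(topic, k=3):
    """Return the k corpus questions most similar to the topic."""
    q = embed([topic])[0]
    sims = (corpus_vecs @ q) / (np.linalg.norm(corpus_vecs, axis=1)
                                * np.linalg.norm(q))
    return [corpus[i] for i in np.argsort(sims)[-k:]]

def generate_mcq(topic, difficulty="medium"):
    """Prompt the LLM for one MCQ, grounded in retrieved examples."""
    context = "\n".join(retrieve(topic))
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You write multiple-choice questions on "
                        "visual-mathematical reasoning. Use the reference "
                        "questions as style and difficulty anchors."},
            {"role": "user",
             "content": f"Reference questions:\n{context}\n\n"
                        f"Write one {difficulty} MCQ on {topic} with four "
                        "options and mark the correct answer."},
        ],
    )
    return resp.choices[0].message.content

# 2. Feedback loop: adapt the next quiz to the student's last score.
def adjust_difficulty(score, current="medium"):
    """Step up a level on scores >= 80%, down a level below 50%."""
    levels = ["easy", "medium", "hard"]
    i = levels.index(current)
    if score >= 0.8:
        return levels[min(i + 1, len(levels) - 1)]
    if score < 0.5:
        return levels[max(i - 1, 0)]
    return current

In this sketch, retrieved MathVista items serve only as in-prompt reference context, mirroring the abstract's use of the RAG store to anchor question relevance, while the three-level difficulty ladder stands in for whatever adaptation policy the actual system uses.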
