Location
https://www.kennesaw.edu/ccse/events/computing-showcase/sp25-cday-program.php
Document Type
Event
Start Date
April 15, 2025, 4:00 PM
Description
This research presents a personalized, agentic AI-powered system for multiple-choice question (MCQ) generation tailored to college-level tutoring in the machine learning and software engineering domains. The primary objective is to enhance adaptive learning through reliable, context-aware quiz generation using long-context large language models (LLMs) and modular agent workflows. Our methodology is based on an eight-stage agentic architecture that separates tasks into two main phases: vector indexing and personalized quiz generation. In the indexing phase, academic PDFs are parsed, chunked with LangChain’s RecursiveCharacterTextSplitter, embedded via Google’s text-embedding-005, and indexed with FAISS. A verification agent ensures topic alignment and the integrity of the vector database. Upon receiving a user query, a retriever agent performs a vector search, followed by a selector agent that filters for high-quality chunks. A processor agent curates the final prompt, and a response agent generates the MCQ using Gemini 1.5 Pro. Finally, an evaluator agent assesses generated questions against ground truth using metrics such as ExactMatch, Faithfulness, and BERTScore. Experimental results over 150 MCQs show that Gemini’s accuracy improves from 78.00% (raw) to 93.33% when enhanced with context vectors, a 100k-token cache, and a 1M-token long-context window, a +15.33% overall gain. Gemini also excels in Non-Hallucination (0.9150), Certainty (0.8883), and Answer Correctness (0.9260), indicating safe and reliable generation. These results highlight Gemini’s effectiveness, when supplemented with context vectors and a training cache, as a reliable and context-sensitive model for personalized, agentic quiz generation in educational settings, with strong potential for scalable and adaptive AI tutoring systems.
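The two-phase pipeline described above can be sketched in miniature. This is an illustrative sketch only, not the authors' implementation: the actual system uses LangChain's RecursiveCharacterTextSplitter, Google's text-embedding-005, and FAISS, for which this example substitutes a hypothetical fixed-size splitter, a toy hashing embedder, and brute-force cosine search so that it is self-contained.

```python
# Minimal sketch of the indexing and retrieval phases described in the
# abstract. The splitter, embedder, and search below are simplified
# stand-ins for RecursiveCharacterTextSplitter, text-embedding-005,
# and FAISS, respectively.
import math
import zlib

def split_text(text, chunk_size=80, overlap=20):
    """Fixed-size character chunking with overlap (stand-in for
    LangChain's RecursiveCharacterTextSplitter)."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def embed(text, dim=256):
    """Toy deterministic hashing embedding, L2-normalized
    (stand-in for a real embedding model)."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[zlib.crc32(token.encode()) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def retrieve(query, index, top_k=2):
    """Brute-force nearest-neighbour search over the chunk index
    (the role FAISS plays in the real system)."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]

# Indexing phase: chunk the source text and embed each chunk.
corpus = ("Gradient descent minimizes a loss function by iterative updates. "
          "Unit testing verifies individual software components in isolation.")
index = [(c, embed(c)) for c in split_text(corpus)]

# Quiz-generation phase: the retriever agent fetches relevant chunks,
# which the selector, processor, and response agents would then turn
# into an MCQ prompt for the LLM.
top_chunks = retrieve("How does gradient descent work?", index)
```

In the full eight-stage architecture, the retrieved chunks would pass through the selector and processor agents before the response agent generates an MCQ; this sketch covers only the retrieval-side mechanics.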
Included in
UC-100 Agentic AI Quiz Generation: Personalized Tutoring through Intelligent Retrieval and Adaptive Learning
https://www.kennesaw.edu/ccse/events/computing-showcase/sp25-cday-program.php