Publication Date
10-13-2025
Abstract
This study explores the application of prompt engineering in cybersecurity education by evaluating how different generative artificial intelligence (GenAI) platforms perform tasks aligned with the NICE Framework work role PR-IR-001, Network Defense Incident Responder. The study employed structured prompts designed for a medical technology environment compliant with HIPAA and NIST SP 800-53, tasking three GenAI models (GPT-4, Gemini, and DeepSeek) with generating incident response scenarios. Their outputs were evaluated along four dimensions: accuracy, relevance, clarity, and completeness.
The results show that the three models differed in depth and consistency, but all performed well in identifying risks and proposing security enhancements. GPT-4 offered the most structured and complete responses, Gemini provided concise but insufficiently detailed output, and DeepSeek performed evenly across criteria. These findings highlight both the promise and the current limitations of GenAI in cybersecurity education and compliance analysis.
This paper ultimately provides a repeatable method for prompt design and proposes an evaluation framework for using AI-generated scenarios in cybersecurity teaching. It emphasizes the pedagogical value of comparing multiple GenAI models when training students in risk analysis, incident response, and compliance assessment.
Included in
Information Security Commons, Management Information Systems Commons, Technology and Innovation Commons