Publication Date

11-3-2025

Abstract

The increasing volume of cyber threats, combined with a critical shortage of skilled professionals and rising burnout among practitioners, highlights the urgent need for innovative solutions in cybersecurity operations. Generative Artificial Intelligence (GenAI) offers promising potential to augment human analysts in cybersecurity, but its integration requires rigorous validation of the fundamental competencies that enable effective human-GenAI teaming. This study employed a mixed-methods design to evaluate human-GenAI teams, emphasizing the role of expert consensus in shaping the experimental assessment of the Fundamental Cybersecurity Competency Index (FCCI) in a commercial cyber range. We engaged a panel of 20 Subject Matter Experts (SMEs) to validate the fundamental cybersecurity Knowledge, Skills, and Tasks (KSTs), which serve as the measures of competency. The expert panel refined the cybersecurity scenarios and experimental procedures used in hands-on simulations, ensuring alignment with current standards outlined in the United States (U.S.) Department of Defense (DoD) Cyber Workforce Framework (DCWF). Our findings indicated that the SMEs validated 46 of 47 fundamental cybersecurity KSTs as essential components of the FCCI. The validated scenarios and experiments pave the way for future research on assessing cybersecurity competencies in a commercial cyber range with and without GenAI support (e.g., large language models such as ChatGPT). By establishing a baseline for competency assessment, the SMEs’ feedback contributed to advancing cybersecurity workforce development and provided critical insights for integrating GenAI into collaborative human-GenAI cybersecurity operations.
