SUBJECT MATTER EXPERTS’ FEEDBACK ON EXPERIMENTAL PROCEDURES TO MEASURE USER’S JUDGMENT ERRORS IN SOCIAL ENGINEERING ATTACKS
Distracted users can fail to distinguish between legitimate and malicious emails or search engine results. Mobile phone users can have an even harder time identifying malicious content due to smaller screen sizes and the limited security features of mobile applications. Thus, the main goal of this research study was to design, develop, and validate a set of field experiments to assess users’ judgment when exposed to two types of simulated social engineering attacks, phishing and Potentially Malicious Search Engine Results (PMSER), based on the interaction of environment (distracting vs. non-distracting) and type of device used (mobile vs. computer). In this paper, we report the results of the Delphi methodology research we conducted with an expert panel of 28 cybersecurity Subject Matter Experts (SMEs), out of 60 cybersecurity experts invited. Half of the SMEs had over 10 years of experience in cybersecurity; the rest had around five years. The SMEs were asked to validate two sets of experimental tasks (phishing and PMSER), as specified in RQ1. The SMEs were then asked to identify physical and Audio/Visual (A/V) environmental factors for distracting and non-distracting environments. About 50% of the SMEs found an airport to be the most distracting environment for mobile phone and computer users. About 35.7% of the SMEs found a home environment to be the least distracting, with an office setting a close second. About 67.9% of the SMEs chose “all” for the most distracting A/V level, which included continuous background noise, visual distractions, and distracting/loud music. About 46.4% of the SMEs chose “all” for the least distracting A/V level, which included a quiet environment, relaxing background music, and no visual distractions. The SMEs were then asked to evaluate a randomization table.
This was important for RQ2, to set up the eight experimental protocols and maintain the validity of the proposed experiment. About 89.3% indicated a strong consensus that we should keep the randomization as is. The SMEs were also asked whether we should keep, revise, or replace the number of questions in each mini-IQ test, set at three questions each. About 75% of the SMEs responded that we should keep the number of mini-IQ questions at three. Finally, the SMEs were asked to evaluate the proposed procedures for the pilot testing and experimental research phases to be conducted in the future. About 96.4% of the SMEs chose to keep the first pilot testing procedure. For the second and third pilot testing procedures, the SMEs responded with an 89.3% strong consensus to keep the procedures. For the first experimental procedure, a strong consensus of 92.9% of the SMEs recommended keeping the procedure. Finally, for the third experimental procedure, there was an 85.7% majority to keep the procedure. The expert panel was thus used to validate the proposed experimental procedures and to recommend adjustments. The conclusions, study limitations, and recommendations for future research are discussed.
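To illustrate the randomized design described above, the following minimal sketch enumerates eight protocols from a full factorial crossing and assigns participants to them in balanced random order. The factor labels and the two-task ordering factor are assumptions inferred from the abstract (environment × device × task order), not the study's actual randomization table.

```python
import itertools
import random

# Hypothetical factor labels, inferred from the abstract's design.
ENVIRONMENTS = ["distracting", "non-distracting"]
DEVICES = ["mobile", "computer"]
TASK_ORDERS = [("phishing", "PMSER"), ("PMSER", "phishing")]

# Full factorial crossing: 2 x 2 x 2 = eight experimental protocols.
protocols = list(itertools.product(ENVIRONMENTS, DEVICES, TASK_ORDERS))
assert len(protocols) == 8

def assign_protocols(participant_ids, seed=42):
    """Randomly assign participants to the eight protocols,
    balancing counts by cycling through a shuffled protocol list."""
    rng = random.Random(seed)  # fixed seed for a reproducible table
    shuffled = protocols[:]
    rng.shuffle(shuffled)
    return {pid: shuffled[i % len(shuffled)]
            for i, pid in enumerate(participant_ids)}
```

With 16 participants, for example, each of the eight protocols is assigned exactly twice, which is the balance a randomization table of this kind is meant to preserve.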