Security professionals often find themselves tasked with performing different social engineering exercises. Whether overseeing these projects for our organization or executing them for another, many considerations go into every exercise. What is the goal? Who will be targeted? What are our tactics? However, the most critical question is how to perform these exercises ethically, causing the least harm while creating the most value for everyone involved.

“Realism” in Phishing

We often see effective phishing attacks performed by real threat actors that make significant, empty promises: charitable donation requests, the promise of money or benefits, and even promotions. We also often see outright threats, such as blackmail or warnings of employment termination.

As security professionals, our goal is to help our organization or clients measure their security posture and provide actionable improvements. Social engineering is certainly a very real, very effective threat constantly leveled at organizations and individuals. Many experts consider social engineering an area that requires testing and education, and rightly so. Far too often, however, we strive for an absurd level of realism, undermining and sometimes outright working against the end goal: making the organization more secure.

“Attackers send these sorts of emails, so why can’t we?”

When we conduct phishing exercises that make dangerous promises we cannot or will not make good on, we do more harm than good. We create a gap between the security team and the staff being targeted, fostering distrust or even outright hostility. Our goal is to improve the security posture, educate users, and identify and fill the gaps in defenses. Promising staff additional money, free COVID tests, or work-at-home opportunities may be realistic; it shows whether employees are susceptible to those lures and may identify training needs. However, it also signals that security does not care about the well-being of the targeted staff.

We’re far better off using tamer scenarios that illustrate the technical and procedural “tells” of a realistic phishing attack without using emotional tools that can cause unnecessary damage. For example, consider the following general pretexts.

  • Employees can “acknowledge policy documents” instead of “accept benefits” by clicking on a link.
  • Employees can click to review account information for an internal system, suggesting that an upcoming change will render the account inaccessible if the information is incorrect.
  • Users can click a link to verify that accrued vacation time accumulates appropriately following a change to the system.

Users will want to click these links (or feel they need to) but will not feel particularly betrayed should they click on them and discover it’s a phishing exercise. Furthermore, if the training material points out all the tells in that phishing email immediately (or as soon as is feasible for the exercise), they’ll feel better equipped for future attacks.

The Importance of Trust

When a phishing attack bypasses email-focused security systems, such as filters or sandboxes, and lands in an employee’s inbox, the window for alerting is limited. The target either recognizes it as a phish and raises an alert, or they click the link, open the attachment, or send a response. Maybe another system flags those actions and generates an alert, or maybe not. The only remaining chance for a quick response is for that employee, that person, to warn the organization.

The most successful response to a well-crafted, well-executed phishing attack stems from an employee noticing the problem and manually raising a red flag. If employees do not trust the security department, or if there is not a strong working relationship, there is no motivation to sound the alarm. Things can be further complicated if employees fall for the phish. Fears of punishment or employer retaliation may lead employees to avoid alerting and continue with their work as if nothing ever happened. Without the support of the employees, reliance shifts to another system further down the line.

As security professionals, it is imperative that we foster that trust and working relationship. Users are not the enemy of a security team; they are one of its greatest assets. They know the work they must do and what they need to do it efficiently; they will notice things that intrusion detection systems and SOC staff may overlook. They know about “Shadow IT” and the workarounds staff use to do their jobs despite security measures. They may have suggestions on how to balance efficiency and security effectively.

Better Security

As a CISO, security department staff member, or manager, your primary responsibility is typically to ensure the security of your organization, its systems, its data, and its people. When performing these phishing exercises, internally or with a third-party consultant or service, make sure you’re reading in the appropriate members of your team. Consider bringing in legal and HR teams to review potential scenarios, and always review scenarios before letting your team or third-party vendor begin the exercise. If your organization has an internal “security council” composed of staff members, consider reading them in as well; they may point out issues and concerns other parties would have missed.

Furthermore, consider the current shifts and needs of your organization. If it’s the annual time for employees to sign updated policies, and the proposed scenario involves tricking employees into clicking a link to “sign the updated employee handbook,” the exercise may make users wary of following through on the actual, necessary signing of documents. Reviewing scenarios against real business processes in this way has an additional benefit: it reveals legitimate processes that resemble phishing attacks, providing an opportunity to change those processes and make social engineering attacks more difficult.

Finally, remember that it is the job of the security team and the systems in place to try to prevent successful attacks from reaching users. Staff education on identifying social engineering attacks and responding appropriately should continue, but design your systems on the assumption that some social engineering attacks will succeed. When users fall victim to a social engineering attack, provide education in place of punishment. That user now has hard-won experience, and they can be an invaluable asset in the future.

Better Consulting

As a security consultant, fortifying the client is my top priority. I work with some clients annually or even quarterly. I take time to learn their systems, their concerns, and their business; obtaining crucial information helps me be a better consultant to them, leveraging my expertise and their knowledge of their organization to have the most positive impact possible. The more I work with the client, the more valuable I become to them. I’d rather have a simple social engineering engagement every year for five years than one engagement where I “win” with a realistic but unethical phishing email that ends with the client not wanting to work with me again.

When designing phishing scenarios, consider the potential consequences. Have another member of your team review the scenario and provide feedback, have the client sign off on the scenario, and have them bring in an executive, HR, and perhaps even legal to give their sign-off. If they have an internal “security council” composed of end users, have them share their opinions. Avoiding a potential disaster is worth reading three or four more people into the exercise.

Will the target of the phishing scenario feel betrayed? Will they feel like you’ve broken a promise? Will they be inclined to share the phish on social media? Will your phish have long-term negative impacts on the organization’s operational effectiveness? Consider all these factors when designing and executing a phishing scenario.

TL;DR

Since the beginning of the COVID-19 pandemic, reports of organizations conducting tone-deaf or outright unethical pandemic-themed phishing exercises have surfaced regularly. The results are severe negative publicity, potential lost profits, and more. Targeted staff may seek “greener pastures” at other organizations that won’t play with their emotions. Those who stay won’t trust the security team or the organization the way they did before, and that may result in a crucial loss of time in responding to genuine social engineering attacks.

I have had clients tell me that the choices made by previous phishing vendors led them to drop those vendors. Once the damage is done, it’s rather hard to undo. When designing social engineering scenarios for a phishing exercise, consider the long-term impact; remember that a more secure organization is the goal, not realism.

Written by Ray Ancel, Senior Security Consultant