Application security is most often associated with its strictly technical side. Yet threats related to the human element (the user, the administrator, or even the tester) are becoming an increasingly significant challenge. A human as the target of an attack? Yes, because the human is usually the weakest link. How do you “hack” a person? Through so-called social engineering: a “strategy” used by individuals or groups to manipulate or mislead people into revealing confidential information or performing actions that lower the security level. Such attacks rest on psychology rather than technical knowledge (though that is also needed, of course). The late Kevin Mitnick, who passed away two years ago, is a perfect example; I’ve mentioned him in several of my articles on phishing and security policy.

We must understand that attackers don’t always have to break through complicated security measures; it is often enough to get a potential victim to click a link, provide a password, or download an attachment that turns out to be a trap. Classic examples include phishing and spear phishing. In the context of applications, this means that even the best-secured system can be compromised if a user is manipulated into revealing access data. With remote work now ubiquitous, such attacks are becoming more and more common. Thinking about application security therefore cannot be limited to code analysis alone; it must also cover processes, people, and the way testing is done.

What Makes Social Engineering So Dangerous?

Modern organizations invest enormous resources in penetration testing, code audits, and automated vulnerability scanners. It’s actually great that security is so well covered from a technical perspective. Unfortunately, an increasing number of attackers realize that the human remains the weakest link.

According to industry reports, such as the CERT 2024 report:

“The threat landscape in the Polish cyberspace is evolving. On the one hand, we observe phishing campaigns well-known from recent years, aimed at scamming logins and passwords for popular email services or social media, or pages with fake advertisements imitating services such as OLX or Allegro. On the other hand, new campaigns, interesting from a cybersecurity perspective, are emerging.”

No matter how we look at it, we must realize that most successful cyberattacks start with user manipulation. Once a user has been manipulated, firewalls, encryption, and authorization mechanisms lose much of their significance, which is why social engineering threats must be treated just as seriously as code bugs.

All right, we know what social engineering is and why it’s so dangerous. Let’s now focus on the group closest to my heart: software testers. You might ask: why them? Wouldn’t it be better to attack someone important, like an administrator or a CEO? Not exactly. Testers are, unfortunately, an often marginalized and undervalued group; even in otherwise mature organizations they tend to be overlooked and their training is skimped on. Yet if we look closely at what a software tester does, they often hold key permissions and work with data that can be very valuable from an attacker’s point of view. Let’s break this down into the main factors to see why underestimating them as a target is a mistake.

  • Access to Environments

    Testers use test, staging, and sometimes even production environments. They have access to:

    CI/CD systems,
    application servers,
    test databases,
    APIs with a reduced security level.
    Through this access alone, an attacker can obtain login credentials or use the test environment as a gateway into the production infrastructure, especially when environment separation is incomplete.

  • Access to Test Accounts

    Test accounts often have privileged roles (e.g., an “admin” role enabling the verification of all functionalities). Passwords for these accounts are sometimes shared within the team and kept in plain text files or spreadsheets (see the sketch after this list for a safer alternative).

  • Access to Data

    In many organizations, real production data is used in testing (e.g., fragments of customer databases, financial data, medical data). Even if this data is partially anonymized, it can still contain valuable information (e.g., email addresses).

    As a result, testers handle sensitive data, and their workstations and accounts become an attractive target for phishing or malware.
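
As a quick aside on the shared-password problem mentioned above: below is a minimal Python sketch of reading a test-account credential from the operating system’s credential store via the third-party keyring package, instead of a plain text file. The service and account names are made up for illustration.

```python
# A minimal sketch (assumed setup): reading a test-account password from the
# OS credential store with the third-party `keyring` package instead of a
# shared .txt file or spreadsheet.  "staging-app" and "qa-admin" are made up.
import sys

import keyring

SERVICE = "staging-app"   # hypothetical test environment name
ACCOUNT = "qa-admin"      # hypothetical privileged test account


def get_test_password() -> str:
    """Fetch the test-account password at runtime.

    The credential is stored once, e.g. with
    keyring.set_password(SERVICE, ACCOUNT, "..."), and is never committed
    to files that the whole team (or an attacker) can read.
    """
    password = keyring.get_password(SERVICE, ACCOUNT)
    if password is None:
        sys.exit(f"No credential stored for {ACCOUNT}@{SERVICE}; add it first.")
    return password
```

Test scripts and automation then call get_test_password() instead of hard-coding the value, so rotating the password does not require touching the test code at all.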

Now we know why testers are a valuable target for a potential attacker. Unfortunately, that’s not the end of the problem. In practice, organizations often save money on testing and testers, which leads to several issues.

  • Usually (not always), the main security training is directed at developers and administrators; testers are often omitted.
  • Some organizations don’t pay attention to protecting test environments, treating them as “less important.”
  • Testers often reach for third-party tools meant to facilitate or accelerate testing, and these tools are not always vetted from a security standpoint.
Potential Consequences of a Successful Attack on a Tester
  • Credential Harvesting – taking over test accounts and, consequently, access to staging and CI/CD systems.
  • Test Data Exfiltration – the possibility of acquiring personal or business data.
  • Lateral Attack – using the test environment to penetrate the internal network and production.
  • QA Process Sabotage – manipulating test data or test results, which can let security flaws slip into production.

The whole thing sounds like a serious problem. Fortunately, there are several basic practices, both for testers and management, that can save us.

  • Selective Test Data – use data generators rather than real production databases (a sketch of this approach follows this list).
  • Environment Separation – no network connection to production, separate accounts and roles.
  • Secure Password Storage – a password manager instead of .txt files or spreadsheets; don’t use the same password for multiple systems.
  • Periodic Security Training – phishing, safe tool usage, account hygiene.
  • Activity Monitoring – access logs to test environments and data should be analyzed just like production logs.
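
To make the “Selective Test Data” point concrete, here is a minimal sketch, assuming the third-party Faker library, that generates synthetic customer records for a test database; the field names are illustrative only and not taken from any real schema.

```python
# A minimal sketch (assumed setup): synthetic test data generated with the
# third-party Faker library, so no production rows ever reach the test
# environment.  The record fields below are illustrative only.
from faker import Faker

fake = Faker()


def make_test_customer() -> dict:
    """Build one synthetic customer record for the test database."""
    return {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
        "iban": fake.iban(),
    }


if __name__ == "__main__":
    # Generate a small batch; in practice these records would be loaded
    # into the test database instead of a production dump.
    for _ in range(5):
        print(make_test_customer())
```

Even if such a data set leaks from a compromised tester workstation, no real customer is affected.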

A sensible solution would be to utilize the testing team to a greater extent. The traditional, basic, and most common use of testers focuses on functional, performance, or usability testing. What if we used testers’ skills more broadly to:

  • Include social engineering attack scenarios – e.g., check whether the application responds appropriately to suspicious login attempts or warns the user about a suspicious link (a test sketch follows this list).
  • Educate the project and business teams – point out areas where a user might be manipulated (e.g., due to a lack of clear error messages, weak data validation, or misleading interfaces).
  • Cooperate with the security department – combine knowledge of software quality with a security perspective to detect not only technical bugs but also risks related to application usage.
  • Promote “security by design” – an approach where security and resilience to manipulation are considered from the first stages of the project, not just after deployment.
Thanks to this, Quality Assurance teams would become a key link in protecting the organization against attacks that rely not only on technology but also on human vulnerability. Periodic training is, in my opinion, very important for a simple reason: security is not a one-time effort; it is a continuous process that is never truly finished.
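
As an illustration of the first point in the list above, here is a hedged, pytest-style sketch that checks whether repeated failed logins lock (or at least throttle) a test account. The staging URL, the /api/login endpoint, the account name, and the expected status codes are all assumptions made for the example, not a description of any real application.

```python
# A minimal sketch (assumed setup): a pytest-style check that an account is
# locked or throttled after repeated failed logins, limiting credential
# stuffing against stolen tester accounts.  URL, endpoint, account name and
# status codes are illustrative assumptions.
import requests

BASE_URL = "https://staging.example.test"  # hypothetical staging address


def login(username: str, password: str) -> requests.Response:
    """Call the (assumed) login endpoint of the application under test."""
    return requests.post(
        f"{BASE_URL}/api/login",
        json={"username": username, "password": password},
        timeout=5,
    )


def test_account_locks_after_repeated_failures():
    # Five deliberately wrong passwords for the same test account.
    for _ in range(5):
        response = login("qa-user", "definitely-wrong-password")
        assert response.status_code in (401, 403, 423)

    # The next attempt, even with the correct password, should be rejected
    # until the lockout expires - adjust to the application's actual policy.
    response = login("qa-user", "correct-password")
    assert response.status_code in (403, 423)
```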
Author
  • Krzysztof Nancka
  • Senior software tester
  • A tester who has been in the industry for almost 5 years. During that time, he has delivered projects in the e-commerce sector. Always eager for new projects, as he combines work with passion. A security enthusiast who, in his free time, is involved in Viking historical reenactment and traditional archery.