Automated cyber attacks: no system remains untouched

Automated attacks

Regardless of the size of a company or enterprise, everyone has to expect becoming the target of cyber attacks. Many attacks are not aimed at a specific victim, but happen randomly and automatically. Upon deploying a new server for the provisioning of our own vulnerability database, we noticed that almost 800 requests were logged on the webserver within the first 20 hours of online time. In this article we dissect where these requests originate and illustrate that attackers target far more than well-known systems and companies these days. In addition, we give practical advice on how to protect your own systems against these attacks.

Legitimate requests to the vulnerability database (37%)

In a first step, we filter all requests from our log file that constitute valid queries to our vulnerability database (the majority of which were executed in test cases). We do this by filtering all known source IP addresses, as well as regular requests to known API endpoints. The vulnerability database provides the following API endpoints for the retrieval of vulnerability data:

  • /api/status
  • /api/import
  • /api/query_cve
  • /api/query_cpe
  • /api/index_management
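This filtering step can be sketched in a few lines of Python, assuming a standard combined-format access log and the endpoint list above (the source-IP allowlist is a hypothetical placeholder):

```python
import re

# Known API endpoints of the vulnerability database (see list above)
API_ENDPOINTS = ("/api/status", "/api/import", "/api/query_cve",
                 "/api/query_cpe", "/api/index_management")

# Hypothetical allowlist of our own test systems
KNOWN_SOURCE_IPS = {"203.0.113.10"}

def is_legitimate(log_line):
    """Classify one access-log line as a legitimate database query."""
    match = re.match(r'(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)', log_line)
    if not match:
        return False
    ip, path = match.groups()
    # Legitimate if it comes from a known source or targets a known endpoint
    return ip in KNOWN_SOURCE_IPS or path.split("?")[0].startswith(API_ENDPOINTS)

line = '198.51.100.7 - - [01/Sep/2021:10:00:00 +0200] "GET /api/query_cve?id=CVE-2021-0001 HTTP/1.1" 200 512'
print(is_legitimate(line))  # True
```

Running this classifier over the full log file yields the split between legitimate queries and the remaining suspicious traffic discussed below.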

After a first evaluation, we observed that 269 of 724 requests were legitimate requests to the vulnerability database:

Figure 1: Sample of legitimate requests to the webserver

But where do the remaining 455 requests come from?

Directory enumeration of administrative database backends (14%)

A single IP address was particularly persistent: with 101 requests, an attacker attempted to enumerate various backends for database administration:

Figure 2: Directory scanning to find database backends

Vulnerability scans from unknown sources (14%)

Furthermore, we identified 102 requests for which our attempts to associate the source IPs with domains or specific organisations (e.g., using nslookup or the user agent) were unsuccessful. The 102 requests originate from 5 different IP addresses or subnets, i.e. around 20 requests per scan.


Figure 3: Various vulnerability scans with unknown origin

Enumerated components were:

  • boaform Admin Interface (8 requests)
  • /api/jsonws/invoke: Liferay CMS Remote Code Execution and other exploits

Requests to / (11.5%)

Overall, we identified 83 requests for the index file of the webserver. Such requests allow an attacker to determine whether a webserver is online and to observe which service is returned by default.


Figure 4: Index-requests of various sources

We could identify various providers and tools that checked our webserver for its availability.

Vulnerability scans from (9%)

During our evaluation of the log file, we identified a further 65 requests originating from two IP addresses, using a user agent of “”:


Figure 5: Vulnerability scan of

The page itself explains that the service scans the entire Internet randomly for known vulnerabilities:


Figure 6: – About

HAFNIUM Exchange Exploits (2.8%)

Furthermore, we identified 20 requests that attempted to detect or exploit parts of the HAFNIUM Exchange vulnerabilities. Common IOCs include:

  • autodiscover.xml: Attempt to obtain the administrator account ID of the Exchange server
  • \owa\auth\: Folder that shells are uploaded into post-compromise to establish a backdoor to the system


Figure 7: Attempted exploitation of HAFNIUM/Proxylogon Exchange vulnerabilities

NGINX .env Sensitive Information Disclosure of Server Variables (1.5%)

11 requests attempted to read a .env file in the root directory of the webserver. Should this file exist and be accessible, it is likely to contain sensitive environment variables (such as passwords).
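On Nginx, such requests can be blocked generically by denying access to all dotfiles; a minimal sketch for the server block:

```nginx
# Deny access to hidden files such as .env, .git or .htaccess
location ~ /\. {
    deny all;
    return 404;   # answer with 404 instead of 403 to avoid confirming the file exists
}
```

Better still, sensitive files like .env should never be placed inside the web root in the first place.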


Figure 8: Attempts to read a .env file

Remaining Requests (10.2%)

A further 58 requests were not part of larger scanning activities and enumerated single vulnerabilities:

  • Server-Side Request Forgery attempts: 12 requests
  • CVE-2020-25078: D-Link IP Camera Admin Password Exploit: 9 requests
  • Hexcoded Exploits/Payloads: 5 requests
  • Spring Boot: Actuator Endpoint for reading (sensitive) server information: 3 requests
  • Netgear Router DGN1000/DGN2200: Remote Code Execution Exploit: 2 requests
  • Open Proxy CONNECT: 1 request
  • Various single exploits or vulnerability checks: 27 requests

Furthermore, the following harmless resources were requested:

  • favicon.ico – Bookmark graphic: 7 requests
  • robots.txt – file for search engine indexing: 9 requests


Using tools like zmap, attackers are able to scan the entire IPv4 Internet in less than 5 minutes. The statistics listed above show that IT systems become an immediate target of automated attacks and vulnerability scans as soon as they are reachable on the Internet. The size of a company or its degree of familiarity is irrelevant, since attackers can scan the entire Internet for vulnerable hosts and oftentimes cover the complete IPv4 address range.

Even applications hidden behind common infrastructure components like reverse proxies or load balancers, and thus reachable only under specific hostnames, can be targeted. A secret or special hostname is not hidden, as is oftentimes assumed, and does not protect from unauthorized access. Already with the retrieval of SSL certificates for your services and applications, hostnames are logged in so-called SSL transparency logs, which are publicly available. This likewise allows automated tools to conduct attacks, since hostnames can be queried via public services. Further information regarding this topic can be found in our article “Subdomains under the hood: SSL Transparency Logs”.

The implementation of access controls and hardening measures thus has to be done before your services and applications are exposed to the Internet. As soon as an IT system is reachable on the Internet, you have to expect active attacks that may succeed in the worst case.


Expose only required network services publicly

When you publish IT systems on the public Internet, you should only expose services that are required for the business purpose. When running a web application or a service based on the HTTP(S) protocol, this usually means that only port 443 TCP is required.
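With ufw as an example firewall front end, such a minimal exposure might be configured as follows (a sketch; the management address range is a placeholder):

```shell
# Default policy: drop all incoming traffic, allow outgoing
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Expose only the web application port
sudo ufw allow 443/tcp

# Optionally restrict SSH management access to a trusted range (example range)
sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp

sudo ufw enable
```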

Refrain from exposing the entire host (all available network services) on the Internet.

Network separation

Implement a demilitarized zone (DMZ) using firewalls to achieve an additional layer of network separation between the public Internet and your internal IT infrastructure. Place all infrastructure components that you want to expose on the Internet in the designated DMZ. Further information can be found in the IT baseline of the BSI.

Patch-Management and Inventory Creation

Keep all your software components up to date and implement a patch management process. Create an inventory of all IT infrastructure components, listing all used software versions, virtual hostnames, SSL certificate expiration dates, configuration settings, etc.

Further information can be found under:

Hardening measures

Harden all exposed network services and IT systems according to the best practices of the vendor or the hardening measures of the Center for Internet Security (CIS). Change all default passwords or simple login credentials that may still exist from the development period and configure your systems for productive use. This includes the deactivation of debug features or testing endpoints. Implement all recommended HTTP response headers and harden the configuration of your webservers. Ensure that sensitive cookies have the Secure, HttpOnly and SameSite flags set.

Transport encryption

Offer your network services via an encrypted communication channel. This ensures the confidentiality and integrity of your data and allows clients to verify the authenticity of the server. Refrain from using outdated algorithms like RC4, DES, 3DES, MD2, MD4, MD5 or SHA1. Employ SSL certificates that are issued by a trusted certificate authority, e.g., Let’s Encrypt. Keep these certificates up to date and renew them in time. Use a single, unique SSL certificate per application (service) and set the correct domain name in the Common Name field of the certificate. SSL wildcard certificates are only necessary in rare cases and not recommended.

Access controls and additional security solutions

Limit access to your network services if they do not have to be publicly available on the Internet. It may make sense to implement IP whitelisting, which limits connections to a trusted pool of static IPv4 addresses. Configure this behavior either in your firewall solution or, if possible, directly within the deployed network service. Alternatively, you can also use SSL client certificates or Basic Authentication.
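In Nginx, for example, such an IP whitelist can be sketched as follows (the address range is a placeholder for your trusted corporate range):

```nginx
location / {
    allow 203.0.113.0/24;   # trusted static corporate range (example)
    deny  all;              # reject everyone else
}
```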

Implement additional security solutions for your network services, like Intrusion Prevention Systems (IPS) or a Web Application Firewall (WAF), to gain advanced protection against potential attacks. As an IPS we can recommend the open-source solution Fail2ban. As a WAF, ModSecurity with the well-known OWASP Core Rule Set can be set up.

Fail2ban is an IPS written in Python, which identifies suspicious activity based on log entries and regex filters and allows setting up automatic defense actions. It is, for instance, possible to recognize automated vulnerability scans, brute-force attacks or bot-based requests and block attackers using iptables. Fail2ban is open source and can be used freely.

  • Installation of Fail2ban
    • Fail2ban can usually be installed using the native package manager of your Linux distribution. The following command is usually sufficient:
sudo apt update && sudo apt install fail2ban
    • Afterwards, the Fail2ban service should have started automatically. Verify successful startup using the following command:
sudo systemctl status fail2ban
  • Configuration of Fail2ban
    • After the installation of Fail2ban, a new directory /etc/fail2ban/ is available, which holds all relevant configuration files. By default, two configuration files are provided: /etc/fail2ban/jail.conf and /etc/fail2ban/jail.d/defaults-debian.conf. They should, however, not be edited, since they may be overridden with the next package update.
    • Instead, you should create specific configuration files with the .local file extension. Configuration files with this extension override directives from the .conf files. The easiest configuration method for most users is copying the supplied jail.conf to jail.local and then editing the .local file for the desired changes. The .local file only needs to hold entries that shall override the default configuration.
  • Fail2ban for SSH
    • After the installation of Fail2ban, a default jail is active for the SSH service on TCP port 22. Should you use a different port for your SSH service, you have to adapt the configuration setting port in your jail.local file. Here you can also adapt important directives like findtime, bantime and maxretry, should you require a more specific configuration. Should you not require this protection, you can disable it by setting the directive enabled to false. Further information can be found under:
  • Fail2ban for web services
    • Furthermore, Fail2ban can be set up to protect against automated web attacks. You may, for instance, recognize attacks that try to enumerate web directories (Forceful Browsing) or known requests associated with vulnerability scans and block them.
    • The community provides dedicated configuration files, which can be used freely:
    • Store these exemplary filter configurations in the directory /etc/fail2ban/filter.d/ and configure a new jail in your jail.local file. In the following we provide an example.
  • Blocking search requests from bots
    • Automated bots and vulnerability scanners continuously crawl the entire Internet to identify vulnerable hosts and execute exploits. Oftentimes, known tools are used whose signature can be identified in the User-Agent HTTP header. Using this header, many simple bot attacks can be detected and blocked. Attackers may, however, change this header, which leaves more advanced attacks undetected. The Fail2ban filters *badbots.conf are mainly based on the User-Agent header.
    • Alternatively, it is also possible to block all requests that follow a typical attack pattern. This includes automated requests, which continuously attempt to identify files or directories on the web server. Since this type of attack requests several file and directory names at random, the probability of many requests resulting in a 404 Not Found error message is relatively high. Analysing these error messages and the associated log files, Fail2ban is able to recognize attacks and ban attacker systems early on.
    • Example: Nginx web server:

1. Store the following file under /etc/fail2ban/filter.d/nginx-botsearch.conf

2. Add configuration settings to your /etc/fail2ban/jail.local:

[nginx-botsearch]
ignoreip =
enabled = true
port = http,https
filter = nginx-botsearch
logpath = /var/log/nginx/access.log
bantime = 604800 # ban for 1 week
maxretry = 10 # ban after 10 error responses
findtime = 60 # reset maxretry after 1 minute

3. If necessary, include further trustworthy IP addresses of your company in the ignoreip field, which shall not be blocked by Fail2ban. Adapt other directives according to your needs and verify the specified port number of the web server, as well as correct read permissions for the /var/log/nginx/access.log log file.

4. Restart the Fail2ban service

sudo systemctl restart fail2ban

Automated enumeration requests will now be banned once they generate more than ten 404 error messages within one minute. The IP address of the attacking system is blocked for one week using iptables and unblocked afterwards. If desired, you can also be informed about IP bans via e-mail using additional configuration settings. A push notification to your smartphone via a Telegram messenger bot is also possible. Overall, Fail2ban is very flexible and allows arbitrary ban actions, like custom shell scripts, in case a filter matches.
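The ban logic configured above (maxretry = 10 within findtime = 60) can be illustrated with a small sketch; this is a deliberate simplification of what Fail2ban does internally:

```python
from collections import defaultdict

FINDTIME = 60   # seconds in which failures are counted
MAXRETRY = 10   # failures tolerated before a ban

failures = defaultdict(list)  # ip -> timestamps of 404 responses
banned = set()

def observe(ip, timestamp, status):
    """Record one log entry and ban the IP once it exceeds the threshold."""
    if status != 404 or ip in banned:
        return
    # keep only failures inside the findtime window
    window = [t for t in failures[ip] if timestamp - t < FINDTIME]
    window.append(timestamp)
    failures[ip] = window
    if len(window) > MAXRETRY:
        banned.add(ip)  # Fail2ban would now execute the ban action (iptables)

for i in range(11):                  # 11 rapid 404s from one scanner
    observe("192.0.2.50", i, 404)

print("192.0.2.50" in banned)  # True
```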

To view already banned IP addresses, the following commands can be used:

  • View available jails
sudo fail2ban-client status
  • View banned IP addresses in a jail
sudo fail2ban-client status <jail-name>

Fail2ban offers several further ways to protect your services even better. Inform yourself about additional filters and start using them if desired. Alternatively, you can create your own filters using regular expressions and test them against log entries.

Premade Fail2ban filter lists can be found here:  

CVE Quick Search: Implementing our own vulnerability database

Not only for penetration testing is it interesting to know which vulnerabilities exist for a certain software product; from the perspective of an IT team, too, it can be useful to quickly obtain information about a deployed product version. Various public vulnerability databases exist for such queries.

However, during the last years, we could identify several issues with these databases:

  • Many databases only index vulnerabilities for certain product groups (e.g., Snyk: Web Technologies)
  • Many databases search for keywords in full-text descriptions; searching for specific product versions is not precise.
  • Many databases are outdated or list incorrect information

Figure: Incorrect vulnerability results for Windows 10

Figure: Keyword search returns a different product than the originally searched for product

This is why we decided to implement our own solution. We considered the following key points:

  • Products and version numbers can be searched using unique identifiers. This allows a more precise search query.
  • The system performs a daily import of the latest vulnerability data from the National Institute of Standards and Technology (NIST). Vulnerabilities are thus kept up to date and have a verified CVE entry.
  • The system is based on Elastic Stack to query and visualize data in real time.

Technical Implementation: NIST NVD & Elastic Stack

Upon finding vulnerabilities in products, security researchers commonly register a CVE entry per vulnerability. These CVE entries are given a unique identifier, detailed vulnerability information, as well as a general description.

CVE entries can be registered centrally and are indexed in the National Vulnerability Database (NVD) in real time. NIST publishes these data sets, which contain all registered vulnerabilities, publicly and freely. We use this data stream as the basis for our own database.

The technical details of the data import and subsequent provisioning are illustrated as follows:

Figure: Overview of the technical components of the vulnerability database

1. Daily import of vulnerability data from the NIST NVD

The data sets are organized by year numbers and refreshed daily by NIST. Every night we download the latest files onto our file server.

2. Pre-Processing of vulnerability data

Afterwards, the files are pre-processed to make them compatible with the Elastic Stack parser. One step here is the expansion of all JSON files: the downloaded files contain JSON objects, but they are often nested, which makes it harder for the parser to identify individual objects. We read the JSON and write all object separators onto separate lines. This way we can use a regex ( ‘^{‘ ) to precisely determine when a new object begins.


Furthermore, we strip the file of all unneeded metadata (e.g., author, version information, etc.), which leaves only the CVE entries in the file as sequential JSON objects.
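The pre-processing can be sketched as follows: we drop the surrounding metadata and emit each CVE entry as its own pretty-printed object, so that every new object starts with ‘{‘ at the beginning of a line (the input here is a tiny stand-in for a real NVD feed file):

```python
import json

# Tiny stand-in for a downloaded NVD feed file (heavily simplified)
raw = """{
  "CVE_data_type": "CVE",
  "CVE_data_version": "4.0",
  "CVE_Items": [
    {"cve": {"CVE_data_meta": {"ID": "CVE-2021-0001"}}},
    {"cve": {"CVE_data_meta": {"ID": "CVE-2021-0002"}}}
  ]
}"""

feed = json.loads(raw)

# Keep only the CVE entries, drop the surrounding metadata
objects = [json.dumps(item, indent=2) for item in feed["CVE_Items"]]
output = "\n".join(objects)

# Each object now begins with '{' on its own line, matching the regex ^{
print(sum(1 for line in output.splitlines() if line.startswith("{")))  # 2
```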

3. Reading in the pre-processed vulnerability data using Logstash

After the pre-processing, our Logstash parser reads the individual lines of the files using the Multiline Codec. Every time a complete JSON object has been read, Logstash forwards this CVE object to our Elasticsearch instance.
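The multiline read can be sketched as a minimal Logstash pipeline (file paths and the Elasticsearch address are assumptions):

```
input {
  file {
    path => "/data/nvd/nvdcve-*.json"   # pre-processed feed files (assumed path)
    codec => multiline {
      pattern => "^{"                   # a new CVE object starts here
      negate  => true
      what    => "previous"             # all other lines belong to the previous object
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]         # assumed Elasticsearch instance
    index => "cve"
  }
}
```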

The CVE Quick Search – Data formats and vulnerability queries

After all CVE entries have been read and stored in the Elasticsearch database, we have to understand which format these entries have and how we can search them for specific products and product vulnerabilities. Our final result is illustrated in the following screenshot: using unique identifiers, we can return exact vulnerability reports for the queried product version.

Figure: Preview of our vulnerability query frontend

1. Format of product versions

The general format of product versions is specified in the NIST CPE specification; Section 5.3.3 gives a short overview:



  • part: either ‘a’ (application), ‘o’ (operating system) or ‘h’ (hardware)
  • vendor: unique identifier of the product vendor
  • product_name: a unique name identifier of the product
  • version: the version number of the product
  • update: the update or service-pack level of the product
  • edition: deprecated
  • language: supported language
  • sw_edition: identifies different market editions of the software
  • target_sw: software environment the product is used with/in
  • target_hw: hardware environment the product is used with/in
  • other: other annotations

A colon is used as a separating character. Asterisk (*) is used as a wildcard symbol.

From the identifier in our screenshot, “cpe:2.3:o:juniper:junos:17.4r3:*:*:*:*:*:*:*”, we can determine that the operating system JunOS of the vendor Juniper is affected by a vulnerability in version 17.4r3.
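Since the colon separates the fields, such an identifier can be decomposed with a simple split (a sketch; it ignores escaped colons that can occur in real CPE strings):

```python
# Attribute order of a CPE 2.3 formatted string
CPE_ATTRS = ["part", "vendor", "product_name", "version", "update", "edition",
             "language", "sw_edition", "target_sw", "target_hw", "other"]

def parse_cpe(cpe):
    """Decompose a CPE 2.3 formatted string into its attributes."""
    fields = cpe.split(":")
    assert fields[0] == "cpe" and fields[1] == "2.3"
    return dict(zip(CPE_ATTRS, fields[2:]))

info = parse_cpe("cpe:2.3:o:juniper:junos:17.4r3:*:*:*:*:*:*:*")
print(info["vendor"], info["product_name"], info["version"])  # juniper junos 17.4r3
```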

Looking at the JSON files, it becomes apparent that two formats are used to store the affected version numbers of a vulnerability.

  • Format 1: Using the attributes “versionStartIncluding/versionStartExcluding” and “versionEndIncluding/versionEndExcluding” a range of vulnerable versions is defined.
  • Format 2: A single vulnerable software version is stored in “cpe23Uri”.
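Matching a concrete version against Format 1 can be sketched like this (simple dotted numeric versions only; real CPE version strings such as 17.4r3 need more careful parsing):

```python
def vtuple(v):
    # naive numeric comparison key, e.g. "2.4.10" -> (2, 4, 10)
    return tuple(int(p) for p in v.split("."))

def in_range(version, start_incl=None, end_excl=None):
    """Check versionStartIncluding <= version < versionEndExcluding."""
    v = vtuple(version)
    if start_incl is not None and v < vtuple(start_incl):
        return False
    if end_excl is not None and v >= vtuple(end_excl):
        return False
    return True

# e.g. a CVE affecting versions 2.4.0 (inclusive) up to 2.4.49 (exclusive)
print(in_range("2.4.10", "2.4.0", "2.4.49"))  # True
print(in_range("2.4.49", "2.4.0", "2.4.49"))  # False
```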

2. Querying the database

To query the database for specific products, an easy interface for finding correct product identifiers is required. We decided to implement this component using a JavaScript autocomplete that displays products and associated CPE identifiers dynamically:

Figure: Autocomplete mechanism of the query frontend

After a choice was made, the vulnerabilities matching the specific product identifier can be queried.

Outlook: Kibana – Visualising vulnerabilities and trends

A big advantage of storing vulnerability data in an Elasticsearch database is its direct connection to Kibana. Kibana autonomously queries Elasticsearch to generate visualizations. In the following, we illustrate a selection of visualizations of vulnerability data:

Figure: Amount of registered vulnerabilities per year

Figure: Fractions of the respective risk severity groups per year

We see great potential in using this data for real time statistics on our homepage to provide vulnerability trends which are updated on a daily basis.

Outlook – Threat Intelligence and automation

Another item on our CVE database roadmap is the implementation of a system that automatically notifies customers of new vulnerabilities, once they are released for a certain CPE identifier. Elasticsearch offers an extensive REST API that allows us to realize this task with the already implemented ELK stack.

Currently we are working on implementing live statistics for our homepage. As soon as this milestone is complete, we will continue with the topic of “Threat Intelligence”. As you can see, we not only focus on the field of penetration testing here at Pentest Factory GmbH, but also have great interest in researching cybersecurity topics and extending our understanding, as well as our service line.

Why we crack >80% of your employees’ passwords


During our technical password audits, we were able to analyse more than 40,000 password hashes and crack more than three quarters of them. This is mostly due to short passwords, outdated password policies, as well as frequent password reuse. Furthermore, it happens that administrative accounts are not bound to the corporate password policy, which allows weak passwords to be set. Issues in the onboarding process of employees may also be abused by attackers to crack additional passwords. Oftentimes a single password of a privileged user is enough to fully compromise the corporate IT infrastructure.

Task us with an Active Directory password audit to increase the resilience of your company against attackers in the internal network and to verify the effectiveness of your password policies. We gladly support you in identifying and remediating issues related to the handling of passwords and the respective processes.


The Corona pandemic caused a sudden change towards home office working spaces. However, IT infrastructure components, like VPNs and remote access, were oftentimes not readily available during this shift. Many companies had to upgrade their existing solutions and retrofit their IT infrastructure.

Besides the newly acquired components, new accounts were also created for accessing company resources over the Internet. Where technically possible, companies implemented Single Sign-On (SSO) authentication, which requires a user to log in only once with their domain credentials before access is granted to various company resources.

According to Professor Christoph Meinel, director of the Hasso Plattner Institute, the Corona pandemic has greatly increased the attack surface for cyber attacks and created complex challenges for IT departments. [1] Due to the increase in home office work, higher global Internet usage since the start of the pandemic and the extension of IT infrastructures, threat actors have gained new, attractive targets for hacking and phishing attacks. At the DE-CIX Internet exchange point in Frankfurt alone, where traffic of various ISPs is accumulated, a new high of 9.1 Terabits per second was registered. This value equals a data volume of 1,800 downloaded HD movies per second, a new record compared to the prior peak of 8.3 Terabits. [2]

Assume Breach

The effects of the pandemic have thus continuously increased the attack surface of companies and their employees regarding cyber attacks. Terms like “supply chain attacks” or “0-day vulnerabilities” are frequently brought up in the media, which shows that many enterprises are actively attacked and compromised. Oftentimes the compromise of a single employee or IT system is enough to obtain access to the internal network of a company. A multitude of attack vectors can lead there, like phishing or the exploitation of specific vulnerabilities in public IT systems.

Microsoft operates according to the “Assume Breach” principle and expects that attackers have already gained access, rather than assuming that complete security of systems can be achieved. But what happens once an attacker is able to access a corporate network? How is it possible that the compromise of a regular employee account causes the entire network to fall? Sensitive resources and server systems are regularly updated and not openly accessible. Only a limited number of people have access to critical systems. Furthermore, a company-wide password policy ensures that attackers cannot simply guess the passwords of administrators or other employees. So where is the problem?

IT-Security and Passwords

The annual statistics of the Hasso Plattner Institute from 2020 [3] illustrate that the most popular passwords amongst Germans are “123456”, “123456789” or “passwort”, just like in the statistics from prior years. This does not constitute sufficient password complexity, not to mention the reuse of passwords.

Most companies are aware of this issue and implement technical policies to prevent the use of weak passwords. Usually, group policies are applied for all employees via the Microsoft Active Directory service. Users are then forced to set passwords with a sufficient length as well as certain complexity requirements. Everyone knows phrases like “Your password has to contain at least 8 characters”. Does this imply that weak passwords are a thing of the past? Unfortunately not: passwords like “Winter21!” are still very weak and guessable, even though they comply with the company-wide password policy.
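That a policy-compliant password is not automatically a strong one can be demonstrated quickly; “Winter21!” satisfies a typical complexity rule (a hypothetical example policy), yet would fall to any season-based wordlist:

```python
import re

def meets_policy(pw):
    """Typical corporate policy: >= 8 chars, upper, lower, digit, special."""
    return (len(pw) >= 8
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[0-9]", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)

print(meets_policy("Winter21!"))  # True - compliant, but trivially guessable
```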

Online Attacks vs. Offline Attacks

For online services like Outlook Web Access (OWA) or VPN portals, where a user logs on with their username and password, the likelihood of a successful attack is greatly reduced. An attacker would have to identify a valid username and subsequently guess the respective password. Furthermore, mechanisms like account lockouts after multiple invalid login attempts, rate limiting or two-factor authentication (2FA) are used. These components reduce the success rate of attackers considerably, since the number of guessing attempts is limited.

But even if such defensive mechanisms are not present, the attack is still executed online: a combination of username and password is chosen and sent to the underlying web server. Only after the login request has been processed by the server does the attacker receive a response with a successful login or an error message. This client-server round trip limits the performance of the attack. Even a simple 6-character password consisting of lowercase letters and umlauts (26 letters plus the additional German characters ä, ö, ü and ß give 30 symbols; 30^6 = 729 million combinations) would require up to 729 million requests to try all possible combinations. Additionally, the attacker would already need to know the username of the victim, or guess it as well. With a company-wide password policy and the above defensive mechanisms in place, the probability of a successful online brute-force attack is virtually zero.

However, for offline attacks, where an attacker has typically captured or obtained a password hash, brute-forcing can be executed with significantly higher performance. But where do these password hashes come from, and why are they more prone to guessing attacks?

Password Hashes

Let us go through the following scenario: As a great car enthusiast Max M. is always looking for new offers on the car market. Thanks to the digital change, not only local car dealerships but also the Internet is available with a great variety of cars. Max gladly uses these online platforms to look out for rare deals. For the use of these services, he generally needs a user account to save his favorites and place bids. A registration via e-mail and the password “Muster1234” is quickly done. But how does a subsequent login with our new user work? As a layman, you would quickly come to the conclusion that the username and password are simply stored by the online service and compared upon logging in.

This is correct on an abstract level; technically, however, a few details are missing. After registration, the login credentials are stored in a database. The database, however, no longer contains the clear-text password of the user, but a so-called password hash. The password hash is derived from the user’s password by a mathematical one-way function. Instead of our password „Muster1234“ the database now contains a string like „VEhWkDYumFA7orwT4SJSam62k+l+q1ZQCJjuL2oSgog=“, and the mathematical function ensures that the computation is only possible in one direction. It is thus effectively impossible to reconstruct the clear-text password from the hash. This method ensures that the web hoster or page owner cannot access their customers’ passwords in clear-text.

During login, the clear-text password from the login form is sent to the application server, which applies the same mathematical function to the entered password and compares the result to the hash stored in the database. If both values are equal, the correct password was entered and the user is logged in. If the values differ, an incorrect password was submitted and the login results in an error. There are further technical features implemented in modern applications, such as replicated databases or the use of “salted” hashing. These are, however, not relevant for our exemplary scenario.
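The stored-hash comparison can be sketched as follows (simplified: an unsalted SHA-256, base64-encoded; as noted above, real systems add salting and slow hash functions):

```python
import base64
import hashlib

def password_hash(password):
    # one-way function: SHA-256 digest, base64-encoded
    return base64.b64encode(hashlib.sha256(password.encode()).digest()).decode()

stored_hash = password_hash("Muster1234")   # what the database stores

def login(attempt):
    # hash the submitted password and compare it with the stored hash
    return password_hash(attempt) == stored_hash

print(login("Muster1234"), login("muster1234"))  # True False
```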

An attacker that tries to compromise this user account faces the same difficulties in an online attack. The provider of the car platform may, for example, allow only three failed logins before the user account is disabled for 5 minutes. An automated online attack to guess the password is thus not feasible.

Should the attacker, however, gain access to the underlying database (e.g., via an SQL injection vulnerability), the outlook is different. The attacker then has access to the password hash and is able to conduct offline attacks. The mathematical one-way function is publicly known and can be used to compute hashes. An attacker may thus proceed as follows:

  1. Choose any input string, which represents a guessing attempt of the password.
  2. Input the chosen string into the mathematical one-way function and compute its hash.
  3. Compare the computed hash with the password hash extracted from the application’s database. Results:
    1. If they are equal, the clear-text password has been successfully guessed.
    2. If they are unequal, choose a new input string and try again.
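The guessing procedure above can be sketched as a loop (again with a simplified unsalted SHA-256; the wordlist is a tiny stand-in for real attacker dictionaries with billions of candidates):

```python
import hashlib

# hash extracted from the compromised database (here: SHA-256 of the real password)
leaked_hash = hashlib.sha256(b"Muster1234").hexdigest()

# stand-in for a real attacker wordlist
wordlist = ["123456", "passwort", "Winter21!", "Muster1234"]

def crack(target_hash, candidates):
    for candidate in candidates:                                  # step 1: choose an input
        digest = hashlib.sha256(candidate.encode()).hexdigest()   # step 2: compute its hash
        if digest == target_hash:                                 # step 3: compare
            return candidate
    return None

print(crack(leaked_hash, wordlist))  # Muster1234
```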

This attack is significantly more performant than an online attack, as no network communication takes place and no server-side security mechanisms become active. However, modern and secure hash functions are designed so that hash computation becomes expensive for an attacker. The cost of a single hash computation is increased by a factor n, which is negligible for the single computation and comparison performed during login. For an attacker, who needs a multitude of hash computations to break a hash, the total expense grows by the same factor n, so that successful guessing may require years of processing time. By using modern hash functions like Argon2 or PBKDF2, offline attacks become similarly constrained as online attacks and rather complex to realize in a timely manner.

LM- and NT-Hashes

Our scenario can be translated to many other applications, like the logon to a Windows operating system. Similarly to the account of the online car dealership, Windows allows creating users that can log on to the operating system. Whether a login requires a password can be configured individually for every user account. The password is again stored as a hash and not in clear-text format. Microsoft uses two algorithms to compute the hash of a user password. The older of the two is called the LM hash and is based on the DES algorithm. For security reasons, this hash type was deactivated starting with Windows Vista and Windows Server 2008. As an alternative, the so-called NT hash was introduced, which is based on the MD4 algorithm.

The password hashes are stored locally in the so-called SAM database on the hard drive of the operating system. Similarly to our previous scenario, the entered password (after generating its hash) is compared with the password hash stored in the SAM database. If both values are identical, the correct password was entered and the user is logged on to the system.

In corporate environments, and especially in Microsoft Active Directory networks, these hashes are not only stored locally in the SAM database, but also on a dedicated server, the domain controller, in the NTDS database file. This allows for uniform authentication against databases, file servers and further corporate resources using the Kerberos protocol. It also reduces complexity within the network, since IT assets and user accounts can be managed centrally via the Active Directory controllers. Using group policies, companies can furthermore ensure that employees must set a logon password and that the password adheres to a strict password policy. Passwords may need to be renewed on a regular basis. On the basis of the account password it is also possible to implement Single Sign-On (SSO) for a variety of company resources, since the NT hash is stored centrally on the domain controllers. Besides the local SAM database on every machine and the domain controllers of an on-premise Active Directory solution, it is also possible to synchronize NT hashes with a cloud-based domain controller (e.g., Azure). This extends SSO logins to cloud assets, like Office 365. The password hashes of a user are thus used in several places, which increases the likelihood that they may be compromised.

Access to NT hashes

Several techniques exist to obtain access to NT hashes as an attacker. For brevity, we only mention a selection of well-known methods in this article:

  1. Compromising a single workstation (e.g., using a phishing e-mail) and dumping the local SAM database of the Windows operating system (e.g., using the tool “Mimikatz”).
  2. Compromising a domain controller in an Active Directory environment (e.g., using the PrintNightmare vulnerability) and dumping the NTDS database (e.g., via Mimikatz).
  3. Compromising a privileged domain user account with DCSync permissions (e.g., a domain admin or enterprise admin). Extracting all NT hashes from the domain controller in an Active Directory domain.
  4. Compromising a privileged Azure user account with the permissions to execute an Azure AD Connect synchronization. Extracting all NT hashes from the domain controller of an Active Directory domain.
  5. and many more attacks…

Password cracking

After the NT hashes of a company have been compromised, they can either be used in internal “relaying” attacks or targeted in password cracking attempts to recover the clear-text password of an employee.

This is possible because NT hashes in Active Directory environments are based on an outdated algorithm called MD4. This hash function was published in 1990 by Ronald Rivest and was considered insecure relatively quickly. A main problem of the hash function is its missing collision resistance, which allows different input values to produce the same output hash. This undermines a core property of a cryptographic hash function.

Furthermore, MD4 is highly performant and does not slow down cracking attempts, as opposed to modern hash functions like Argon2. This allows attackers to execute effective offline attacks against NT hashes. A modern gaming computer with a recent graphics card is able to compute 50-80 billion hashes per second. Cracking short or weak passwords thus becomes trivial.

To illustrate the implications of this cracking speed, we want to analyse all possible combinations of 8-character passwords. To simplify this analysis, we assume that the password consists only of lowercase letters and digits. The German alphabet contains 26 base letters, the three umlauts ä, ö, ü and the special letter ß. Digits contribute 10 different values – 0 through 9. This results in 40 possible values for every position of our 8-character password, or about 6550 billion possible combinations, using the following mathematical formula:

40^8 = 6.553.600.000.000 ≈ 6550 billion combinations
A gaming computer generating 50 billion hashes per second would thus require only 131 seconds to test all 6550 billion possibilities in our 8-character password space – such a password would be cracked in a bit more than two minutes. Real threat actors employ dedicated password cracking rigs, which compute roughly 500 – 700 billion hashes per second. These systems cost around 10.000€ to set up.
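The arithmetic behind these numbers can be checked in a few lines of Python:

```python
# 26 base letters + ä, ö, ü, ß + 10 digits = 40 values per character position
alphabet_size = 26 + 4 + 10
keyspace = alphabet_size ** 8            # 40^8 = 6,553,600,000,000 combinations

gaming_rig = 50_000_000_000              # 50 billion hashes per second
seconds = keyspace / gaming_rig          # ≈ 131 seconds for the full keyspace
```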

Furthermore, there is a variety of cracking methods that do not brute-force the entire keyspace (all possible passwords). These allow cracking passwords of more than 12 characters, for which a full keyspace brute-force would require several years.

Such techniques are:

  • Dictionary lists (e.g., a German dictionary)
  • Password lists (e.g., from public leaks or breaches)
  • Rule based lists, combination attacks, keywalks, etc.

Pentest Factory password audit

When you task us with an Active Directory password audit, we execute exactly these attack scenarios. In a first step, we extract all NT hashes of your employees from a domain controller, without associating them to user accounts. Our procedure is coordinated with your works council and is based on a proven process.

Afterwards we execute a real cracking attack on the extracted hashes to compute clear-text passwords. We employ several modern and realistic attack techniques and execute our attacks using a cracking rig with a hash power of 100 billion hashes per second.

After finalising the cracking phase, we create a final report that details the results of the audit. The report contains metrics regarding compromised employee passwords and lists extensive recommendations on how password security can be improved in your company in the long term. Oftentimes we are able to reveal procedural problems, e.g., in the onboarding of new employees, or misconfigurations in group and password policies. Furthermore, we offer to conduct a password audit with user correlation: we have established a process that allows you as a customer to associate compromised passwords with the respective employee accounts (without our knowledge). This process is also coordinated with your works council and adheres to common data privacy regulations.

Statistics from our past password audits

Since the founding of Pentest Factory GmbH in 2019, we have conducted several password audits for our clients and helped improve their handling of passwords. Besides technical misconfigurations, such as a password policy that was not applied uniformly, we found procedural problems on several occasions. Especially the onboarding of new employees often involves insecure processes that lead to weak passwords being chosen. It also happens that administrative users can choose passwords independently of established policies. Since these users are highly privileged, weak passwords significantly increase the risk of a breach: should an attacker guess the password of an administrative user, the entire company IT infrastructure could be compromised.

To give you an insight into our work, we want to present statistics from our previously executed password audits.

Combining all performed audits, we evaluated 32.493 unique password hashes, or 40.288 hashes when counting reused passwords. This means 7795 password hashes were duplicates, i.e., passwords used by several user accounts at the same time. These are oftentimes passwords like „Winter2021“ or passwords that were handed out during initial onboarding and never changed. The highest reuse we detected was a single password shared by around 450 accounts – an initialization password that had not been changed by the respective users.

Of the overall 32.493 unique password hashes, we were able to crack 26.927 and compute their clear-text passwords – a share of over 82%. This means we broke more than four out of five employee passwords during our password audits. An alarming statistic.

Figure: Share of cracked vs. uncracked password hashes

This is mainly because passwords with a length of less than 12 characters were used. The following figure highlights this insight.

Note: The figure does not include all cracked password lengths. Exceptions like password lengths over 20 characters or very short or even empty passwords were omitted.


Furthermore, our statistics show the effects of an overly weak password policy, as well as issues with enforcing a password policy company-wide.

Note: The below figure does not contain the password masks of all cracked passwords but only a selection.

Figure: Most common masks among cracked passwords

A multitude of employee passwords were guessable because they followed a well-known password mask. Over 12.000 cracked passwords consisted of an initial string ending in numerical values – including especially weak passwords like „Summer2019“ and „password1“.

These passwords are usually already part of publicly available password lists. One of the best-known password lists is called “rockyou”. It contains more than 14 million unique passwords from a breach of the company RockYou in 2009: the company fell victim to a hacker attack and had stored all customer passwords in clear text in their database. The attackers were able to access this data and published the records afterwards.

On the basis of these leaks it is possible to generate statistics about the structure of user passwords. These statistics, patterns and rules for password creation can subsequently be used to break another myriad of password hashes. The use of a password manager, which creates cryptographically random and complex passwords, can prevent these rule-based attacks and make it harder for patterns to occur.
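A minimal sketch of such a rule-based attack, generating candidates from the "word + digits" mask observed above (the word list and year range here are illustrative assumptions, not our actual audit rules):

```python
from itertools import product

def mask_candidates(words, years):
    """Yield guesses following the common mask: base word followed by digits."""
    for word, year in product(words, years):
        yield f"{word}{year}"               # e.g. "winter2021"
        yield f"{word.capitalize()}{year}"  # e.g. "Winter2021"

words = ["summer", "winter", "password"]    # seasons, company names, products, ...
years = [str(y) for y in range(2018, 2023)]
guesses = set(mask_candidates(words, years))
```

Real cracking tools express the same idea as mangling rules applied to large leaked word lists.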

Recommendations regarding password security

Our statistics have shown that a strict and modern password policy can reduce the success rate of a cracking or guessing attack drastically. Nevertheless, password security is based on multiple factors, which we want to illustrate as follows.

Password length

Move away from outdated password policies that only enforce a password length of 8 characters. The cost of modern and powerful hardware is continuously decreasing, which allows even attackers with a low budget to execute password cracking attacks effectively. The continuous growth of cost-effective cloud services furthermore enables attackers to run attacks on a fixed budget, without having to buy hardware or set up systems.

Already a password length of 10 characters can increase the effort needed to crack a password significantly – even considering modern cracking systems. For companies that employ Microsoft Active Directory we still recommend using a minimum password length of 12 characters.

Password complexity
Ensure that passwords have sufficient complexity by implementing the following minimum requirements:

  • The password contains at least one lowercase letter
  • The password contains at least one uppercase letter
  • The password contains at least one digit
  • The password contains at least one special character
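These requirements can be sketched as a simple policy check (the exact policy engine depends on your environment; this merely mirrors the four rules above plus the 12-character minimum recommended earlier):

```python
import re

def meets_policy(password: str, min_length: int = 12) -> bool:
    """Return True if the password satisfies the minimum requirements above."""
    return (
        len(password) >= min_length
        and re.search(r"[a-z]", password) is not None          # lowercase letter
        and re.search(r"[A-Z]", password) is not None          # uppercase letter
        and re.search(r"[0-9]", password) is not None          # digit
        and re.search(r"[^a-zA-Z0-9]", password) is not None   # special character
    )
```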

Regular password changes

Regular changes of passwords are not recommended by the BSI anymore, as long as the password is only accessible by authorized persons. [4]

Should the password have been compromised, which implies that it is known to an unauthorized person, it has to be ensured that the password is changed immediately. Furthermore it is recommended to regularly check public databases regarding new password leaks of your company. We gladly support you in this matter in our Cyber Security Check.

Password history

Ensure that users cannot choose passwords that they have previously used. Implement a password history that contains the last 24 used password hashes and prevents their reuse.

Employment of blacklists

Implement additional checks that prevent the use of known blacklisted words. This includes the own company name, seasons of the year, the name of clients, service owners, products or single words like “password”. Ensure that these blacklisted words are not only forbidden organisationally, but also on a technical level.

Automatic account lockout

Configure an automatic account lockout for multiple invalid logins to actively prevent online attacks. A proven guideline is locking a user account after 5 failed login attempts for 5-10 minutes. Locked accounts should be unlocked automatically after a set timespan, so regular usage continues and help desk overloads are prevented.
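The lockout logic can be sketched as a small in-memory tracker (threshold and window mirror the guideline above; in practice this lives in the identity provider, not in application code):

```python
from collections import defaultdict

LOCKOUT_THRESHOLD = 5      # failed logins before the account locks
LOCKOUT_WINDOW = 600       # automatic unlock after 10 minutes (seconds)

class LockoutTracker:
    def __init__(self):
        self._failures = defaultdict(list)  # user -> timestamps of failed logins

    def record_failure(self, user: str, now: float) -> None:
        self._failures[user].append(now)

    def is_locked(self, user: str, now: float) -> bool:
        # Only failures inside the window count; older ones expire automatically,
        # so regular usage resumes without help desk intervention.
        recent = [t for t in self._failures[user] if now - t < LOCKOUT_WINDOW]
        self._failures[user] = recent
        return len(recent) >= LOCKOUT_THRESHOLD
```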

Employee awareness
The sensitization of all employees including the management is essential to increase the security posture company-wide. Regular sensitization measures should become a part of the company’s culture so correct handling of sensitive access data is internalized.

A deliberate change in behavior is necessary in security-relevant situations, e.g.:

  • lock your machine, even if you leave your desk only briefly;
  • lock away confidential documents;
  • never share your password with anyone;
  • use secure (strong) passwords;
  • do not reuse passwords across accounts.

Despite a technically strict password policy, users might still choose weak passwords that can be guessed with ease. Only the execution of regular password audits and a sensitization of employees can prevent damage in the long term.

Use of two-factor authentication (2FA)

Configure additional security features such as two-factor authentication. This ensures that even if a password guessing attempt succeeds, the attacker cannot gain access to the user account (and company resources) without a secondary token.

Regular password audits

Execute regular password audits to identify user accounts with weak or reused passwords and protect them from future attacks. Combined with a continuous re-evaluation of your company-wide password policy and further awareness seminars, these audits provide technical metrics that allow you to continuously measure and improve password security in your company.

Differentiated password policies

Introduce multiple password policies based on the protection level of the respective target group. Low privileged user accounts can thus be required to choose a password with a minimum length of 12 characters including complexity requirements, while administrative user accounts have to follow a stricter policy with at least 14 characters.

Additional security features

We gladly advise you regarding additional security features in your Active Directory environment to improve password security. This includes:

  • Local Administrator Password Solution (LAPS)
  • Custom .DLL Password Filters
  • Logging and Monitoring of Active Directory Events


Should we have sparked your interest in a password audit, we are looking forward to hearing from you. We gladly support you in evaluating the password security of your company, as well as making long-term improvements.

You can also use our online configurator to commission an audit.

More information regarding our password audit can be found under:


[4] – Section ORP.4.A8

Vulnerabilities in NEX Forms < 7.8.8

To protect our infrastructure against attacks, internal penetration tests are an integral part of our strategy. We place an additional focus on systems that process sensitive client data. During a penetration test of our homepage before the initial go-live, we were able to identify two vulnerabilities in the popular WordPress plugin NEX Forms.

Both vulnerabilities were fixed in the subsequent release and cannot be exploited in current software versions anymore. More details can be found in this article.


NEX Forms is a popular WordPress plugin for creating forms and managing submitted form data. It has been sold more than 12.500 times and is used on numerous WordPress websites. The plugin offers functionality to create form reports, which can then be exported to PDF or Excel formats. In this component we were able to identify two vulnerabilities.

CVE-2021-34675: NEX Forms Authentication Bypass for PDF Reports

The “Reporting” section of the NEX Forms backend allows users to aggregate form submissions and export them into PDF files. As soon as a selection is exported into PDF, the server stores the resulting file under the following path:



Figure 1: Reporting section with Excel and PDF export functions

During our testing, we were able to identify that this exported file is not access protected. An attacker is thus able to download the file without authentication:


Figure 2: Proof-of-Concept: Unauthenticated access to the PDF report

CVE-2021-43676: NEX Forms Authentication Bypass for Excel Reports

Similar to the previously mentioned finding, another vulnerability exists for Excel exports. Here, the Excel file is not stored on the file system of the webserver, but returned directly as a server response.

To abuse this vulnerability, a form report has to have been exported to the Excel format at least once. The server then returns the latest Excel file whenever the GET parameter “export_csv” is passed to the backend with a value of “true”. This URL handler does not verify authentication, which allows an attacker to access the contents without prior authentication:


Figure 3: Proof-of-Concept: Unauthenticated access to the Excel report

Possible Impact

An attacker that abuses these authentication vulnerabilities may cause the following damage:

  • Access to confidential files that have been submitted via any NEX Forms form.
  • Access to PII, such as name, e-mail, IP address or phone number

This could lead to a significant loss of the confidentiality of the data processed by the NEX Forms plugin.

Vulnerability Fix

Both vulnerabilities were fixed in the subsequent release of the vendor. More information can be found under:

We thank the Envato Security Team for patch coordination with the developers and the fast remediation of the identified vulnerabilities.

Subdomains under the hood: SSL Transparency Logs

Since the certification authority Let’s Encrypt was founded in 2014 and went live at the end of 2015, more than 182 million active certificates and 77 million active domains have been registered to date (as of 05/2021). [1]

To make the certification processes more transparent, all certificate registrations are logged publicly. Below, we take a look at how this information can be used from an attacker’s perspective to enumerate subdomains and what measures organizations can take to protect them.

Let’s Encrypt

Since the introduction of Let’s Encrypt, the way of handling SSL certificates has been revolutionized. Anyone who owns a domain name these days is able to obtain a free SSL certificate through Let’s Encrypt. Using open source tools such as Certbot, the application and configuration of SSL certificates can take place intuitively, securely and, above all, automatically. Certificates are renewed automatically and associated web server services such as Apache or Nginx are restarted fully automatically afterwards. The age of expensive SSL certificates and complex, manual configuration settings is almost over.

Figure: Growth of Let’s Encrypt

Certificate Transparency (CT) Logs

Furthermore, Let’s Encrypt contributes to transparency. All issued Let’s Encrypt certificates are sent to public “CT logs” and are additionally logged by Let’s Encrypt itself in a standalone logging system based on Google Trillian in the AWS Cloud. [2]

The abbreviation CT stands for Certificate Transparency and is explained as follows:

“Certificate Transparency (CT) is a system for logging and monitoring the issuance of a TLS certificate. CT allows anyone to audit and monitor certificate issuances […].” [2]

Certificate Transparency was a response to the attacks on DigiNotar [3] and other Certificate Authorities in 2011. These attacks showed that the lack of transparency in the way CAs operated posed a significant risk. [4]

CT therefore makes it possible for anyone with access to the Internet to publicly view and verify issued certificates.

Problem definition

Requesting and setting up Let’s Encrypt SSL certificates thus proves to be extremely simple. This is also shown by the high number of certificates issued daily by Let’s Encrypt. More than 2 million certificates are issued per day and their issuance is transparently logged (as of 05/2021) [5].

Figure: Let’s Encrypt certificates issued per day

Certificates are issued for all kinds of systems or projects. Be it productive systems, test environments or temporary projects. Users or companies are able to get free certificates for their domains and subdomains. Wildcard certificates have also been available since 2018. Everything transparently logged and publicly viewable.

Transparency is great, isn’t it?

Due to the fact that all issued certificates are transparently logged, this information can be viewed by any person. This information includes, for example, the common name of a certificate, which reveals the domain name or subdomain of a service. An attacker or pentester is thus able to identify the hostname and potentially sensitive systems in Certificate Transparency Logs.

At first glance, this does not pose a problem, provided that the systems or services offered behind the domain names are intentionally publicly accessible, use up-to-date software versions and are protected from unauthorized access by requiring authentication, if possible.

However, our experience from penetration tests and security analyses shows that systems are often unintentionally exposed on the Internet – either by mistake or under the assumption that an attacker would need further information, such as the hostname, to gain access at all. Furthermore, many companies no longer have an overview of their existing and active IT assets due to organically grown structures. Disabling the indexing of website content (e.g. by Google crawlers) is implemented as additional supposed protection, especially for test environments: an attacker would supposedly have no knowledge of the system at all, so some sort of security is assumed. Developers and IT admins are also usually unaware that SSL certificate requests are logged and that this allows domain names to be enumerated publicly.

Readout of CT logs

A variety of methods now exist to access public CT log information. These methods often take place in so-called Open Source Intelligence (OSINT) operations to identify interesting information and attack vectors of a company. We at Pentest Factory also use these methods and web services to identify interesting systems of our customers during our passive reconnaissance.

A well-known web service is crt.sh:

Figure: Sample excerpt from public CT logs of a domain (crt.sh)

Furthermore, a variety of automated scripts exist on the Internet (e.g., GitHub) to extract the information automatically as well.

The myth of wildcard certificates

After realizing that CT logs can be enumerated and recognizing the resulting potential problem, companies often come up with a supposedly brilliant idea: instead of requesting individual SSL certificates for different subdomains and services, one general wildcard certificate is generated and deployed across all systems and services.

This means that the subdomains are no longer published in Transparency Logs, since the certificate’s common name is simply a wildcard entry such as *.domain.tld. External attackers are thus no longer able to enumerate the various subdomains and services of a company. Problem solved, right?

Only partially. It is correct that the hostnames or subdomains are no longer published in Transparency Logs. However, there are still many ways for an attacker to passively gain information about interesting services or subdomains of a company. The underlying problem – that systems may be unintentionally exposed to the Internet, run outdated software with publicly known vulnerabilities, or fail to implement access controls – still exists. Using a wildcard certificate to provide more security by simply hiding information is Security through Obscurity. In reality, reusing a single wildcard certificate across multiple services and servers reduces an organization’s security.

For example, an attacker can perform a DNS brute force attack using a large list of frequently used domain names. Public DNS servers such as Google (8.8.8.8) or Cloudflare (1.1.1.1) provide feedback on whether a domain name can be successfully resolved or not. This again gives an attacker the opportunity to identify services and subdomains of interest.

gobuster 2
Example DNS brute force attack on the domain to enumerate subdomains
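The brute-force approach above can be sketched in a few lines (the resolver is injectable so the function can be exercised without network access; tools like gobuster implement the same idea at scale):

```python
import socket

def enumerate_subdomains(domain, wordlist, resolves=None):
    """Return candidate subdomains from `wordlist` that resolve via DNS."""
    if resolves is None:
        def resolves(name):
            try:
                socket.gethostbyname(name)   # ask the configured DNS server
                return True
            except socket.gaierror:
                return False                 # NXDOMAIN or resolution failure
    return [f"{word}.{domain}" for word in wordlist if resolves(f"{word}.{domain}")]
```

In practice the word list would contain thousands of frequently used subdomain names.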

The dangers of wildcard certificates

Reuse of a wildcard certificate across multiple subdomains and different servers is strongly discouraged. The problem here lies in the reuse of the certificate.

Should an attacker succeed in compromising a web server that uses a wildcard certificate, the certificate must be considered fully compromised. This compromises the confidentiality and integrity of traffic to any service of the organization that uses the same wildcard certificate. An attacker in possession of the wildcard certificate would be able to decrypt, read or even modify traffic of all services reusing the certificate. However, the attacker must be in a Man-in-the-Middle (MitM) position between client and server to intercept the traffic. Accordingly, this attack is not trivial, but it is practically feasible for skilled attackers.

If unique SSL certificates were used for each domain and service instead of a single wildcard certificate, an attacker could not compromise all services at once: other domains and corporate services would not be affected, since the attacker would have to compromise each individual SSL certificate as well. The company would then only have to revoke and reissue a single certificate instead of a wildcard certificate reused across multiple servers and services. Unique certificates also make the extent of the damage measurable in case of a successful attack: companies know exactly which certificate for which domain or service has been compromised and where attacks may already have taken place. With wildcard certificates, a successful attack potentially compromises all domains and services, and the extent of the damage is opaque and difficult to assess.

More information about wildcard certificates:


Always be aware that attackers can gain a lot of information about your company. Be it public sources or active tests to obtain valuable information. The security and resilience of a company stands and falls with the weakest link in the IT infrastructure. In general, refrain from Security through Obscurity practices and always keep your systems up to date (patch management).

Rather, make sure that all your publicly accessible systems are intentionally exposed to the Internet and implement access control if necessary. Development environments or test instances should always remain hidden from the public and only be made available to the company itself and its developers. This can be achieved by whitelisting your company-wide IP addresses on the firewall or by implementing a simple authentication wall (e.g. using basic authentication for web services). Use a complex password with a sufficiently large length (> 12 characters).

SSL certificates should define the exact (sub)domain name in the certificate’s common name and should be issued by a trusted certificate authority (CA). Continue to ensure that all certificates are valid and renewed early, before expiration. Furthermore, it is recommended to use only strong algorithms for signing SSL certificates, such as SHA-256. The outdated SHA-1 hashing algorithm should be avoided, as it is vulnerable to practical collision attacks [6].

Professional support

Are you unsure what information about your organization is circulating on the Internet and which systems are publicly accessible? Order a passive reconnaissance via our pentest configurator and we will be happy to gather this information for you.

You are interested in the resilience of your public IT infrastructure against external attackers? Want to identify the weakest link in your IT assets and have your SSL configuration technically verified? Then order a penetration test of your public IT infrastructure via our pentest configurator.



Code-Assisted Pentests

Many companies use regular penetration tests to check the security of their applications and IT infrastructure. Oftentimes these tests are conducted from the viewpoint of an external attacker (black-box approach), without any information about the application or infrastructure itself. This constitutes a tradeoff between testing depth and available time: a commissioned pentest aims to find as many vulnerabilities as possible in a given period.

With code-assisted pentesting, the source code or a relevant subset of it is shared with the pentester during the assessment. This method has essential advantages regarding the effectiveness and depth of the test.

In the following, we take a look at the advantages from a company perspective and show how code assistance can resolve common issues during pentests.

Code-assisted pentesting is efficient

Many penetration tests assume that the test should imitate an external attacker as closely as possible. The indirect question asked is often: Can an external attacker compromise our application?

This is why common pentests are conducted externally using a black-box approach: the pentester has no information about the application, just like a real attacker. However, a pentester only has limited time for the assessment – a constraint that does not apply to a real attacker. The pentester therefore needs to test very efficiently within the given time frame.

With a code-assisted pentest, the focus can be put on the identification of vulnerabilities instead of time-consuming enumeration tasks. The following examples illustrate common issues with black-box testing.

Issue 1: Enumeration of directories and endpoints

At the beginning of every penetration test it is necessary to get to know the structure of the application and to identify as many endpoints as possible. The endpoints interact with the user and could potentially be vulnerable.

Since the tester accesses the application server externally, he cannot inspect the directory or route structure on the server itself. He thus needs to use a word list of possible endpoint names and request every single entry from the application server to find out which endpoints exist. The following figure shows this process:

cap 1
Figure 1: Enumeration of application endpoints using wordlists

Such word lists commonly contain more than 100,000 entries. Oftentimes repeated scans are required to adapt the scanning configuration to the response behaviour of the server.

If the word list does not contain the name of an endpoint, that endpoint often remains undiscovered and untested. The time constraints of a pentest in particular limit the tester's options for finding endpoints or routes. Should you include the product name in the word list to find potential endpoints? Should you download the client's homepage and extract all words into a word list for subsequent scans? These are decisions with unknown consequences: they cost additional time without predictably increasing the endpoint detection rate.
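The discovery loop described above can be sketched as follows. The `probe` callback is a hypothetical stand-in for an actual HTTP request (in a real scan it would issue a GET via a tool like ffuf or a library like urllib), which keeps the scanning logic illustrative and testable without a live server:

```python
from typing import Callable, Iterable, List

def enumerate_endpoints(wordlist: Iterable[str],
                        probe: Callable[[str], int]) -> List[str]:
    """Request every candidate path and keep those that do not return 404.

    `probe` takes a path and returns an HTTP status code.
    """
    found = []
    for word in wordlist:
        path = "/" + word.strip()
        if probe(path) != 404:  # anything but "not found" is interesting
            found.append(path)
    return found

# Simulated server: only two endpoints actually exist.
existing = {"/login": 200, "/admin": 403}
fake_probe = lambda path: existing.get(path, 404)

hits = enumerate_endpoints(["login", "admin", "backup", "debug"], fake_probe)
print(hits)  # ['/login', '/admin']
```

Note that an endpoint whose name is missing from the word list is never probed at all, which is exactly the blind spot discussed above.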

Without a code-assisted pentest, endpoints and vulnerabilities could remain undiscovered.

Issue 2: Input validation

After an overview of the endpoints has been created, the tester needs to identify which interfaces process user input and evaluate them for common vulnerability classes such as cross-site scripting (XSS), command injection or SQL injection. Here the issue frequently arises that the application server employs input filtering, but the tester in a black-box scenario does not know how this filter is implemented.

The following code sample (download.php) implements the download of system backups in a web application.

$file = str_replace('../', '', $_GET['file']); // removes "../" in a single pass
readfile('backups/' . $file);                  // returns the selected file as a download

The application reads the file parameter from the user's request and returns the selected file as a download. To prevent a path traversal attack, the directory change sequence "../" is filtered out with a call to str_replace(). An attacker can thus not simply submit a request like GET /download.php?file=../../../../../etc/passwd to download the password file of the system. For an external tester, this server-side input validation logic likewise remains unknown.

Due to the time limitation of the penetration test, the tester must decide how many malicious inputs to try before making a well-founded decision on whether the tested component is secure. In our example he could try appending another "../" to his request: maybe he did not use enough traversal sequences to reach the server's root directory?

After a few more tests he might conclude that the component does not show exploitable behaviour. With insight into the code, the vulnerability would have been apparent: the str_replace() function does not remove malicious sequences recursively, but only in a single pass. An attacker could use the following request to obtain the password file of the server:

GET /download.php?file=....//....//....//....//....//etc/passwd

The str_replace() function removes the "../" sequence from each "....//" block exactly once. What remains is the parameter value ../../../../../etc/passwd: a traversal sequence that breaks out of the backup directory!
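The flaw can be reproduced outside PHP as well. The sketch below mimics the single-pass behaviour of str_replace() in Python and contrasts it with a filter that repeats the removal until nothing changes:

```python
def filter_once(path: str) -> str:
    """Single-pass removal, analogous to PHP's str_replace('../', '', $path)."""
    return path.replace("../", "")

def filter_recursive(path: str) -> str:
    """Repeat the removal until the string stops changing."""
    while "../" in path:
        path = path.replace("../", "")
    return path

payload = "....//....//....//etc/passwd"
print(filter_once(payload))       # '../../../etc/passwd' -- traversal survives
print(filter_recursive(payload))  # 'etc/passwd'          -- sequence fully removed
```

Even the recursive variant is only a sketch; a robust fix resolves the path to its canonical form (e.g. realpath()) and verifies it stays inside the backup directory.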

With a black-box approach it is practically impossible to test all input variants. A code-assisted pentest could identify this vulnerability quickly and efficiently.

Code-assisted pentesting as a solution

As previously discussed, these issues arise in several phases of a penetration test. The following questions are examples of further topics that cannot be covered sufficiently in a black-box test:

  • How are passwords stored in the application? Can customer passwords be directly extracted in a compromise, or are they saved as salted hashes?
  • Are random tokens (e.g., for a user’s password reset) truly random? Does enough entropy exist to prevent brute-force attacks?
  • Was an endpoint forgotten in the implementation of access control checks?
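The first two questions can be made concrete with a short sketch: salted password hashing and cryptographically random reset tokens, here using Python's standard hashlib and secrets modules. This is a minimal illustration of the properties a code review would look for, not a substitute for a vetted library such as bcrypt or Argon2:

```python
import hashlib
import secrets
from typing import Optional, Tuple

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    """Derive a salted hash with PBKDF2-HMAC-SHA256; the salt is stored with the hash."""
    salt = salt if salt is not None else secrets.token_bytes(16)
    # 100,000 iterations keeps this example fast; production setups use more.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def make_reset_token() -> str:
    """URL-safe reset token with 256 bits of entropy, infeasible to brute-force."""
    return secrets.token_urlsafe(32)

salt, digest = hash_password("hunter2")
# The same password re-hashed with a fresh salt yields a different digest:
assert hash_password("hunter2")[1] != digest
# Re-hashing with the stored salt reproduces the digest for verification:
assert hash_password("hunter2", salt)[1] == digest
```

In a code-assisted test, both properties (per-user salt, token source) are visible in minutes; in a black-box test they can at best be inferred statistically.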

To answer these questions and test critical applications with adequate depth, a code-assisted pentest makes sense. Together with the pentester, critical application components are identified and the relevant source code is selected for analysis. Only the necessary parts of the code are shared with the tester, not necessarily the entire source repository.

The expense of a code-assisted pentest is only marginally higher than that of a black-box test, and it is compensated by increased effectiveness and testing depth. The result is a pentest that covers more vulnerabilities and delivers more accurate results in a given testing period.

Vulnerabilities in FTAPI 4.0 – 4.11

To protect Pentest Factory’s own IT infrastructure against attacks, internal penetration tests are an essential part of our strategy. We put an additional focus on systems that process sensitive client data. In a penetration test of our file platform FTAPI, we identified two vulnerabilities that we forwarded to the vendor for a patch. Both vulnerabilities were fixed in the subsequent FTAPI release and can no longer be exploited in current software versions.

We thank the FTAPI team for a quick and easy disclosure, as well as remediation process.

The details of each vulnerability are described below.

CVE-2021-25277: FTAPI Stored XSS (via File Upload)

The FTAPI web application is vulnerable to stored cross-site scripting (XSS). FTAPI offers so-called submit boxes, via which external users can submit a message, including a file attachment, without requiring a user account. We at Pentest Factory use these submit boxes to offer our customers a secure and simple platform for submitting credentials, documentation or other sensitive files. The files are transmitted in encrypted form and are then retrieved by our penetration testers.

The file upload of the submit-box interface allows users to upload files with a malicious name. When hovering over the file name field, an alternative text element is displayed (see the following screenshot) which shows the file name. This dynamically displayed element does not filter the file name for malicious characters, which creates an XSS vulnerability.

25277 1

Figure 1: Vulnerable alt-text field of the file name box

Proof-of-Concept (PoC)

When uploading a file with the following name, a JavaScript alert box is executed as an example to verify the vulnerability:

25277 2

Figure 2: Proof-of-Concept: malicious file name with alert() execution

For a successful upload, the file must not be empty. A proof-of-concept file can be created with the following Linux command:

echo "test" >> "<iframe onload=alert('XSS')>"

The file name field is displayed not only during the upload for the file submitter himself, but also for the recipient when the submission is viewed. This allows JavaScript code to be executed on behalf of the owner of a submit box as soon as they retrieve the file. The attacker’s payload is executed as soon as the mouse hovers over the green file field with the malicious file name in the FTAPI web interface. Submit boxes are usually public, so if the recipient’s submit box URL is known, arbitrary messages, including malicious files, can be submitted.
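The root cause is that the file name is inserted into the markup without encoding. A minimal defensive sketch (hypothetical server-side code, not FTAPI's actual implementation) HTML-escapes the name before rendering it into the label:

```python
import html

def render_file_label(filename: str) -> str:
    """Build the file-label markup with the file name HTML-escaped.

    html.escape() with quote=True also encodes quotes, so the value is
    safe inside attribute context as well as element content.
    """
    safe = html.escape(filename, quote=True)
    return f'<span class="file-label" title="{safe}">{safe}</span>'

malicious = "<iframe onload=alert('XSS')>"
label = render_file_label(malicious)
print(label)  # angle brackets and quotes are rendered inert (&lt;, &gt;, &#x27;)
```

With context-aware output encoding like this, the uploaded name is displayed verbatim to the recipient instead of being interpreted as HTML.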

25277 3

Figure 3: PoC with JS-Alert-Box triggering in the inbox of an FTAPI user

CVE-2021-25278: FTAPI Stored XSS (via Submit Box Template)

Furthermore, we identified a second cross-site scripting (XSS) vulnerability in the application. Administrative users are able to change the overall template of submit boxes, which includes a function for changing background images. Uploaded images are not filtered for malicious content, which allows an attacker to upload SVG files with embedded JavaScript. This again allows the execution of JavaScript and introduces an XSS vulnerability into the application. The vulnerability can only be exploited by administrative users, which reduces the likelihood of real-world exploitation.

25278 1

Figure 4: Vulnerable background image upload in the layout editor for submit boxes.

Proof-of-Concept (PoC)

To exploit the vulnerability, an SVG file with the following content can be uploaded as a background image:

<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" baseProfile="full" xmlns="http://www.w3.org/2000/svg">
   <polygon id="triangle" points="0,0 0,50 50,0" fill="#009900" stroke="#004400"/>
   <script type="text/javascript">
      alert('Pentest Factory XSS');
   </script>
</svg>

The uploaded file is stored under the /api/2/staticfile/ path and triggers the XSS once it is opened:
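Uploads that are used purely as images can be checked server-side before being served. The sketch below (an assumption about how such a filter might look, not the vendor's actual fix) parses an SVG and rejects it if it contains script elements or event-handler attributes:

```python
import xml.etree.ElementTree as ET

def is_safe_svg(data: str) -> bool:
    """Reject SVGs containing <script> elements or on* event attributes.

    A coarse allow/deny heuristic for illustration; hardened deployments
    would additionally rasterize the image or serve it with a strict
    Content-Security-Policy.
    """
    try:
        root = ET.fromstring(data)
    except ET.ParseError:
        return False  # unparseable input is rejected outright
    for element in root.iter():
        tag = element.tag.rsplit("}", 1)[-1].lower()  # strip XML namespace
        if tag == "script":
            return False
        if any(attr.lower().startswith("on") for attr in element.attrib):
            return False
    return True

poc = """<svg xmlns="http://www.w3.org/2000/svg">
  <script type="text/javascript">alert('XSS');</script>
</svg>"""
print(is_safe_svg(poc))  # False
```

A file like the PoC above would thus be refused at upload time instead of being stored and later executed in the viewer's browser.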

25278 2

Figure 5: Stored XSS when opening the malicious SVG file

Possible Impact

An attacker that exploits one of the Cross-site Scripting (XSS) vulnerabilities could conduct the following attacks:

  • Session hijacking with access to confidential data and session identifiers
  • Manipulation of the website (e.g., phishing)
  • Insertion of malicious contents
  • Redirection of users to malicious pages
  • Malware infection

This could lead to a loss of confidentiality, integrity and availability of the data processed by FTAPI.

Vulnerability Fix

Both vulnerabilities were fixed in the vendor’s subsequent release. We have no evidence that the vulnerabilities were actively exploited on our systems beforehand.
More information can be found under

Thank you to the FTAPI team for the quick and easy communication, as well as the remediation of the identified findings!