Cloud Recon

These days, many organizations have migrated at least some of their IT services to a cloud environment. Cloud adoption can be as basic as running Microsoft Office 365 on some workstations, or much more comprehensive, such as a fully integrated Azure or Amazon AWS infrastructure. With this increased importance comes an increased level of risk, which needs to be taken into account when allocating resources to security tasks. This is especially true when it comes to regular penetration testing and vulnerability scanning of cloud services.

Reconnaissance and enumeration: When it comes to penetration testing and vulnerability scanning, knowledge is everything. The more information an attacker has about a targeted organization, the easier and more deeply its systems can be compromised. From a defensive perspective, the more information an organization has about its own network, the better it can protect and monitor it. There are many ways to gather this information, both passively (reconnaissance) and actively (enumeration).

DNS Records: The first step in (public) cloud reconnaissance is to identify whether the target is using any cloud services and, if so, which ones. The best way to do this is to query specific DNS records. DNS MX records direct email to a company’s mail servers for processing, which means they hold important information. If the records point to, for instance, outlook.com, the target is likely using Office 365 for e-mail services. Many other service providers require the same type of DNS-based domain verification. If there is a DNS TXT record named amazonses, for instance, the target is likely using Amazon Simple Email Service. More information is available via CNAME, SPF and DFS records as well. There are many tools that can easily extract the required DNS information; Nmap, DNSEnum, and dig come pre-installed with Kali Linux.
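
As a minimal sketch of this step, the following Python snippet uses the dnspython library to pull a domain's MX and TXT records and match them against a few illustrative provider fingerprints; the mapping and the example domain are assumptions, not an exhaustive list.

    # Passive DNS lookups with dnspython; provider fingerprints are illustrative only.
    import dns.resolver

    MX_HINTS = {
        "outlook.com": "Microsoft Office 365 / Exchange Online",
        "google.com": "Google Workspace",
    }

    def mx_provider_hints(domain):
        """Guess e-mail providers from a domain's MX records."""
        hints = []
        for rdata in dns.resolver.resolve(domain, "MX"):
            exchange = str(rdata.exchange).rstrip(".").lower()
            for suffix, provider in MX_HINTS.items():
                if exchange.endswith(suffix):
                    hints.append(f"{exchange} -> {provider}")
        return hints

    def txt_records(domain):
        """Return raw TXT records for the domain (SPF and verification tokens)."""
        return [rdata.to_text() for rdata in dns.resolver.resolve(domain, "TXT")]

    if __name__ == "__main__":
        target = "example.com"  # replace with an in-scope domain
        print(mx_provider_hints(target))
        print(txt_records(target))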

Network and Application Scanning: Traditional tools such as Nmap and Kismet can scan the cloud perimeter without any issues. What is new, however, is that a cloud target is located within a shared network owned by the Cloud Service Provider (CSP). To avoid any impact on other customers, and any defensive or legal action from the CSP, always obtain written approval before starting scans, both to and from a cloud instance.
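
Assuming that written approval is in place and that Nmap is installed, a scan of a single authorized cloud host could be wrapped in Python along these lines; the host address and port count are placeholders, not values from any real engagement.

    # Light TCP scan of one explicitly authorized host; do not run without written approval.
    import subprocess

    def scan_authorized_host(host, top_ports=100):
        cmd = ["nmap", "-Pn", "--top-ports", str(top_ports), host]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        print(scan_authorized_host("203.0.113.10"))  # documentation-range address as a stand-in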

Cloud Specialized Tools: Development of new and adapted reconnaissance, enumeration and exploitation tools specialized in targeting public cloud providers has been limited. There are, however, a few useful cloud-specific reconnaissance tools. For instance, Azurite is a reconnaissance and visualization tool that gives a good understanding of which Azure services are in use and how they are connected. An interesting development on the offensive side is the use of bots that search sites like GitHub for uploaded code that accidentally contains cloud account access (API) keys.
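
To illustrate that last idea on a much smaller scale, the sketch below sweeps local files for strings shaped like AWS access key IDs (which typically begin with AKIA followed by 16 uppercase letters and digits); real bots apply the same kind of pattern matching to public repositories at scale.

    # Sweep files under a directory for strings shaped like AWS access key IDs.
    import pathlib
    import re

    AWS_KEY_ID = re.compile(r"\b(AKIA[0-9A-Z]{16})\b")

    def find_candidate_keys(root="."):
        hits = []
        for path in pathlib.Path(root).rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for key_id in AWS_KEY_ID.findall(text):
                hits.append((str(path), key_id))
        return hits

    if __name__ == "__main__":
        for location, key_id in find_candidate_keys():
            print(f"possible AWS access key ID in {location}: {key_id}")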

Vulnerability scanning: Finally, the most comprehensive but also the noisiest method of network reconnaissance is the use of a vulnerability scanner. Such a scanner simply runs through a standard or customized profile of passive and active scans and lists the detected vulnerabilities, sometimes alongside remediation actions. The scanner can be placed inside the cloud instance, such as the Qualys Virtual Scanner Appliance for Amazon AWS. Another option is to use the security services of the Cloud Service Provider, for instance in the form of Amazon Inspector. As with network scanning, prior written authorization from the Cloud Service Provider is required.
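
As a rough sketch of the second option, assuming boto3 credentials with read access and Amazon Inspector (v2) already enabled in the account, findings could be pulled programmatically along these lines; the field handling is kept deliberately minimal and is illustrative rather than a full integration.

    # Pull a handful of Amazon Inspector v2 findings; assumes Inspector is enabled.
    import boto3

    def list_recent_findings(max_results=10):
        client = boto3.client("inspector2")
        response = client.list_findings(maxResults=max_results)
        for finding in response.get("findings", []):
            print(f"[{finding.get('severity')}] {finding.get('title')}")

    if __name__ == "__main__":
        list_recent_findings()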

It is increasingly important for any company to know what network and security information is publicly accessible via the Internet. After proactively gathering this information, actions can be taken to limit the exposure and, with that, the security risks. Regular scans of the perimeter, analysis and clean-up of DNS records, taking obsolete services and cloud instances offline: there is much an organization can do to be proactive from a security perspective. In the end, it is critical to know what company data is out there so it can be best protected from malicious entities.


Are You Ready for Your Pen Test?

It is day three of a five-day penetration test engagement and we still don’t have all the information we need to proceed with the test. This particular test was scoped to focus on internal applications, and we were to gain access to those applications through the client’s VPN solution. Instead, we find ourselves waiting on the process of getting VPN credentials. This probably means we have some late nights ahead to catch up.

This and many similar scenarios are unfortunately all too familiar to third-party penetration testing teams. Below are a few things companies should consider when engaging a third party for penetration testing or other security testing. All of these tips assume gray-box testing, where the security testers are provided with some information in order to expedite the test and make better use of time and money:

User Accounts. For applications, the rule of thumb is a minimum of two accounts per major role. In the simplest applications this usually means two user accounts and two administrative accounts; in more complex applications, six, eight, or even ten accounts may be needed. Without these, tests for vertical and horizontal privilege escalation cannot be performed.

Code Stability. Ensure that by the time your security testers are looking at it, the code is more than half-baked. Much of the security testing you do too early will become invalid if your developers are still in the middle of building out features. If the code can’t pass your QA team, then it isn’t ready for third-party security testing.

Whitelist. Be clear about the primary goal of the test. It is often a waste of time and money to spend half the penetration test circumventing a WAF or firewall rules. If the emphasis is specifically on host or application security on your network, then it is much more efficient to whitelist these controls. Additional tests with the WAF in place can be performed towards the end of the testing window.

Testing Assets. For mobile applications, security testers may need access to the binaries, so if the test is on a pre-production build they will need access accordingly. If you have implemented certificate pinning in your mobile application, it is most efficient to provide security testers with a version of the app minus the certificate pinning. For web services, your goal is most likely to determine whether your developers are implementing their code in a secure fashion. This is best supported by providing sample HTTP requests for valid web service calls, or valid requests saved in tool projects (such as SoapUI or Postman), as in the example below.
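
For instance, a sample valid request handed to the testers could be as simple as the following Python script; the endpoint, token, and payload are hypothetical placeholders rather than values from any real application.

    # Hypothetical sample request a client might hand to testers alongside test accounts.
    import requests

    BASE_URL = "https://api.example.com/v1"          # placeholder endpoint
    TOKEN = "replace-with-a-test-account-token"      # placeholder credential

    def create_order_sample():
        response = requests.post(
            f"{BASE_URL}/orders",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"sku": "TEST-001", "quantity": 1},
            timeout=10,
        )
        print(response.status_code, response.text)

    if __name__ == "__main__":
        create_order_sample()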

Other Administrative Headaches. A number of additional things can go wrong when preparing for security testing. Perhaps you need your third party to connect in through a VPN. Perhaps you have to open a change window to start the test. Perhaps your dev team runs an automatic deployment in the environment being tested at 10am every day, taking the server offline for an hour and creating a moving target for the testing team.

It is ultimately in the client’s best interest to start on schedule. If a client’s delay at the beginning of the week results in testers having to work extra hours late into the night or over a weekend, their fatigue could impact the quality of the test and report. And that is assuming the testers will work the necessary hours to complete the test. In many cases the statement of work (SoW) includes a clause that places the responsibility for these types of delays on the client and does not require the testing team to work past the end of the scheduled test, even if testing tasks are incomplete.