2-2: Web App Recon

Not every machine has a web server we can access, but for the majority that do, there are quite a few specific steps to take to learn about the system. Of the recon categories, this one is by far the most involved.

Manual Enumeration Methods

Before getting into details here: as mentioned in the pre-lab setup section, either ZAP or Burp should already be running, and the browser should have the localhost proxy enabled.

The first step is to load the website in the browser and visually inspect it, taking note in particular of any links, buttons, and form fields that are present. On first page load I also open the Wappalyzer extension and screenshot what it detects.

Now we can follow the various links around the site to get a feel for the overall structure of the visible application. If the app supports logins and you can register an account, doing so may be necessary for further enumeration.

Following from the idea of registering a user, try out any forms and other interactions on the page as if you were a regular user. This partly tests whether they work at all (on some boxes they're window dressing and don't connect to anything), but ideally each interaction also stores a request in our proxy app for later review.

While we're manually crawling pages it also never hurts to check /robots.txt and /sitemap.xml to see if those reveal any paths we didn't catch while walking the application.
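As a quick sketch of that check (target.htb is a placeholder hostname, and the sample robots.txt below stands in for a live response so the snippet runs standalone):

```shell
# On a live target the file would come from the server, e.g.:
#   curl -s http://target.htb/robots.txt -o robots.txt   # target.htb is a placeholder
# Sample data stands in here so the snippet runs without a target.
cat > robots.txt <<'EOF'
User-agent: *
Disallow: /admin/
Disallow: /backup/
Allow: /public/
EOF

# Print just the Disallow'd paths -- candidates we may have missed while browsing
grep -i '^Disallow:' robots.txt | awk '{print $2}'
```

The Disallow entries are exactly the paths the site owner didn't want indexed, which makes them worth visiting manually first.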

DevTools

There are several categories of info we can use the browser dev tools to help enumerate:

  1. HTML/CSS source code (Inspector tab)
  2. JavaScript code (Debugger tab)
  3. Cookies and stored application data (Application tab)
  4. Network traffic (Network tab): though the bulk of this information will also be captured by ZAP/Burp, so it isn't essential to check here.

For all of these, one of the primary things to keep an eye out for is any form of information leak or sensitive data such as credentials, whether that's left in comments (HTML/CSS/JS), within cookies/app state, or somewhere in the HTTP requests.

The HTML and JavaScript can also reveal specific paths to check, or particular JavaScript function calls worth examining more closely for potential vulnerabilities. Actions on the page that trigger server requests to fetch data (via GET, POST, etc.) are especially important to look at.

For cookies and app state, look at both the value and the name of each cookie. The cookie name can reveal the underlying framework in use; e.g., it's common for PHP applications to have a cookie called PHPSESSID.
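A minimal sketch of that fingerprinting idea; the Set-Cookie header here is sample data, and the name-to-framework mapping covers only a few well-known defaults:

```shell
# On a live target the header would come from the response, e.g.:
#   curl -sI http://target.htb/ | grep -i '^set-cookie'   # target.htb is a placeholder
header='Set-Cookie: PHPSESSID=abc123; HttpOnly; Path=/'

# Pull out the cookie name and match it against common framework defaults
name=$(printf '%s' "$header" | sed 's/^Set-Cookie: //' | cut -d= -f1)
case "$name" in
  PHPSESSID)         echo "$name -> PHP" ;;
  JSESSIONID)        echo "$name -> Java (servlet container)" ;;
  ASP.NET_SessionId) echo "$name -> ASP.NET" ;;
  connect.sid)       echo "$name -> Node.js (Express)" ;;
  *)                 echo "$name -> no obvious framework hint" ;;
esac
# → PHPSESSID -> PHP
```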

Honorable mention goes to the DevTools integrations for specific front-end frameworks (React, Vue, Angular, etc.). I haven't used these during a lab yet, but in theory, if they are enabled, they give you a ton of insight into the behavior of the client-side application, and you might see something the developer didn't intend for you to see.

HTTP Requests

With ZAP/Burp logging all the requests as we walked the application, the next step after clicking through everything is to review the HTTP requests themselves.

Some info may be redundant with what earlier enumeration has already uncovered, but it's always good to get confirmation. With that said, look for a few particular points when reading through requests:

  • Any cookies passed in the request
  • Presence or absence of various security headers (SameSite cookie settings, CSRF tokens, CORS settings, etc.)
  • Server / User-Agent information
  • Referer and similar headers like X-Forwarded-For which may enable spoofing of the request's source
  • In GET requests any query parameters passed through
  • For POST requests the content of the POST body and also the Content-Type header. Look closely at the variable names used in POST requests as well, as they can also be a hint at the underlying tech.
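The review above can be sketched with standard tools; the request below is sample data standing in for an entry exported from the proxy history:

```shell
# Sample captured POST request (stands in for a ZAP/Burp history entry)
cat > request.txt <<'EOF'
POST /login.php HTTP/1.1
Host: target.htb
User-Agent: Mozilla/5.0
Content-Type: application/x-www-form-urlencoded
Cookie: PHPSESSID=abc123
Content-Length: 29

username=admin&password=guest
EOF

# Surface the headers from the checklist...
grep -iE '^(Cookie|Content-Type|User-Agent|Referer|X-Forwarded-For):' request.txt

# ...and the POST body parameter names, which can hint at the underlying tech
tail -n 1 request.txt | tr '&' '\n' | cut -d= -f1
```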

Automated Web Enumeration

As a supplement to the manual walk of the website above, ZAP's Spider/AJAX Spider and Active Scan modes give a quick pass to see if anything obvious was missed. As a bonus, all of this work is now saved in ZAP's site map representation, so we can get a folder-by-folder look at the site structure, including which requests each endpoint supports.

Directory fuzzing

The spider's weakness is that it does not attempt to fuzz directories at the same time. To look for hidden content, the next phase is to use a directory fuzzer. Gobuster is a good alternative, but for directory fuzzing specifically I nearly always use Feroxbuster with this basic syntax:

feroxbuster -u http[s]://[targetIP_or_Domain] -t 10 -L 10 -w /path/to/directory/wordlist.txt -o ferox.txt

Note: since I write results out to a text file, I have not generally replayed these through the proxy, but that's possible by passing the --burp flag as well.

Tweak the -t (threads) and -L (concurrent scans) as needed if the host can't handle the settings at 10.

Another optional setting is -x, which appends extensions to the wordlist. This is useful when you're looking at specific types of sites like PHP (.php), ASP (.aspx), etc., or when trying to find other document types like PDF, TXT, and MS Office files.

Results that return 200-level responses, any of the 300-level redirects, and the specific error types that suggest access issues (401, 403, 405) are the ones to spend more time examining, either by accessing them manually in the browser or with cURL, or through the HTTP request history in ZAP/Burp.
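That triage can be scripted against the saved output. The format below is a simplified status-plus-URL listing (feroxbuster's real columns vary by version and flags), so the filtering idea is the point, not the exact parsing:

```shell
# Simplified sample of scan results (a real ferox.txt has more columns)
cat > ferox.txt <<'EOF'
200 http://target.htb/index.php
301 http://target.htb/uploads
403 http://target.htb/admin
404 http://target.htb/nothing
500 http://target.htb/broken
401 http://target.htb/secret
EOF

# Keep 2xx, 3xx, and the access-control errors (401/403/405) for follow-up
awk '$1 ~ /^(2[0-9][0-9]|3[0-9][0-9]|401|403|405)$/' ferox.txt
```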

Virtual Host fuzzing

Another place for content to hide is on subdomains of our target. Unlike directory fuzzing, I haven't settled on a single preferred tool here; I try three:

  • ffuf: a very flexible fuzzing tool, for virtual hosts use:
    • ffuf -c -u http[s]://[domain] -H "Host: FUZZ.[domain]" -w /path/to/DNS/subdomainsWordlist.txt
    • The initial response will be extremely noisy. Identify a common response size (integer value) and run it again with the -fs [sizeInt] flag to filter out those responses.
  • gobuster: in vhost mode, --exclude-length [sizeInt] can be used on a second run, similar to ffuf's -fs
    • gobuster vhost -u http[s]://[domain] -w /path/to/subdomain/wordlist
  • wfuzz: I use this one less frequently now, but worth a mention:
    • sudo wfuzz -c -f [outputFile] -w /path/to/subdomain.txt -u 'http[s]://[domain]' -H 'Host: FUZZ.[domain]'
    • wfuzz has similar filtering options, but they're a little more complex; refer to the docs
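Finding the common size to pass to -fs can be sketched like this; the size/hostname pairs are sample data standing in for values pulled out of ffuf's noisy first run:

```shell
# Sample (response size, candidate vhost) pairs from a noisy first run
cat > ffuf_sizes.txt <<'EOF'
4242 www
4242 mail
4242 dev
1337 admin
4242 test
EOF

# The most frequent size is the baseline "miss" response -- filter it with -fs
awk '{count[$1]++} END {for (s in count) print count[s], s}' ffuf_sizes.txt |
  sort -rn | head -1 | awk '{print $2}'
# → 4242
```

Here the outlier (1337 for admin) is the interesting hit; rerunning with -fs 4242 would leave only it visible.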

If any virtual host is found, I add that subdomain into the /etc/hosts file for the same IP as the original box and repeat all the earlier enumeration steps for that domain.
