80,443 - Pentesting Web Methodology


Basic Info

The web service is the most common and extensive service, and many different types of vulnerabilities exist in it.

Default ports: 80 (HTTP), 443 (HTTPS)

PORT    STATE SERVICE
80/tcp  open  http
443/tcp open  ssl/https
nc -v domain.com 80 # GET / HTTP/1.0
openssl s_client -connect domain.com:443 # GET / HTTP/1.0

Methodology summary

In this methodology we are going to suppose that you are attacking a domain (or subdomain) and only that. So, you should apply this methodology to each discovered domain, subdomain or IP with an undetermined web server inside the scope.

  • Start by identifying the technologies used by the web server. Look for tricks to keep in mind during the rest of the test if you can successfully identify the tech.
    • Any known vulnerability of the version of the technology?
    • Using any well known tech? Any useful trick to extract more information?
    • Any specialised scanner to run (like wpscan)?
    • Any vulnerable cookie? JWT?
  • Check for vulnerable proxies being used (test this on every new tech discovered in the webapp):
    • hop-by-hop headers
    • Request Smuggling
    • Cache Poisoning/Cache Deception
  • Launch general purpose scanners. You never know if they are going to find something, or if they are going to find some interesting information.
  • Start with the initial checks: robots, sitemap, 404 error and SSL/TLS scan (if HTTPS).
  • Start spidering the web page: It’s time to find all the possible files, folders and parameters being used. Also, check for special findings.
    • Note that anytime a new directory is discovered during brute-forcing or spidering, it should be spidered.
  • Directory Brute-Forcing: Try to brute force all the discovered folders searching for new files and directories.
    • Note that anytime a new directory is discovered during brute-forcing or spidering, it should be Brute-Forced.
  • Backups checking: Test if you can find backups of discovered files by appending common backup extensions.
  • Brute-Force parameters: Try to find hidden parameters.
  • Once you have identified all the possible endpoints accepting user input, check for all kinds of vulnerabilities related to them.
    • This is by far the most complex part of web pentesting, and depending on the vulnerability, the pentester should know how to discover it. In this book you can find explanations of a lot of web vulnerabilities related to user input.

Server Version (Vulnerable?)

Identify

Check if there are known vulnerabilities for the server version that is running.
The HTTP headers and cookies of the response can be very useful to identify the technologies and/or version being used. An Nmap scan can identify the server version, but the tools whatweb, webtech or https://builtwith.com/ can also be useful:

whatweb -a 1 <URL> #Stealthy
whatweb -a 3 <URL> #Aggressive
webtech -u <URL>

Search for vulnerabilities of the web application version

Check if any WAF is in place
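For example, wafw00f can fingerprint many common WAFs:

wafw00f https://domain.com
wafw00f -a https://domain.com # -a: report all matching WAFs instead of stopping at the first match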

Cookies

As mentioned, cookies can be very useful to identify the technology in use (if it's well known), but custom cookies could be vulnerable. So, if you find a custom sensitive cookie, you should check it for vulnerabilities.
The flags of the cookies (Secure, HttpOnly, SameSite) can also be interesting from a security point of view.
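A quick way to inspect the cookies and their flags without a browser (domain.com is a placeholder):

curl -s -D - -o /dev/null https://domain.com | grep -i '^set-cookie' # -D - dumps the response headers; look for Secure, HttpOnly, SameSite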

Web tech tricks

Some tricks for finding vulnerabilities in different well known technologies being used:

If the source code of the application is available on GitHub, apart from performing a white-box test of the application yourself (no guide available yet in HackTricks), there is some information that could be useful for the current black-box testing:

  • Is there a Changelog, Readme, version file or anything else with version info accessible via web?
  • How and where are the credentials saved? Is there any (accessible?) file with credentials (usernames or passwords)?
  • Are passwords in plain text or encrypted, and which hashing algorithm is used?
  • Is it using any master key to encrypt something? Which algorithm is used?
  • Can you access any of these files by exploiting some vulnerability?
  • Is there any interesting information in the GitHub issues (solved and unsolved)? Or in the commit history (maybe a password introduced inside an old commit; see the quick sweep below)?
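For that last check, a minimal sketch assuming you can clone the repo locally (the repo URL and the grep pattern are just examples):

git clone https://github.com/victim/app.git && cd app # placeholder repo URL
git log -p --all | grep -iE 'password|passwd|secret|api[_-]?key' # grep every committed diff for credential-looking strings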

Take into account that the same domain can be using different technologies on different ports, folders and subdomains.
If the web application is using any well known tech/platform listed before or any other, don't forget to search the Internet for new tricks (and let me know!).

Proxies/Load balancers vulnerabilities

You should look for these kinds of vulnerabilities every time you find a path where a different technology is running. For example, if you find a Java webapp and a WordPress instance running in /wordpress.
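A quick way to spot a different backend behind a path is to compare response headers (the paths are placeholders):

curl -s -I https://domain.com/ | grep -iE '^(server|x-powered-by)'
curl -s -I https://domain.com/wordpress/ | grep -iE '^(server|x-powered-by)' # a different Server/X-Powered-By suggests another backend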

Automatic scanners

General purpose automatic scanners

nikto -h <URL>
whatweb -a 4 <URL>
wapiti -u <URL>
W3af

CMS scanners

If a CMS is used, don't forget to run a scanner; maybe something juicy will be found:

  • Clusterd: JBoss, ColdFusion, WebLogic, Tomcat, Railo, Axis2, Glassfish
  • CMSScan: WordPress, Drupal, Joomla, vBulletin websites for security issues (GUI)
  • VulnX: Joomla, WordPress, Drupal, PrestaShop, OpenCart
  • CMSMap: (W)ordpress, (J)oomla, (D)rupal or (M)oodle
  • droopescan: Drupal, Joomla, Moodle, Silverstripe, WordPress

cmsmap [-f W] -F -d <URL>
wpscan --force update -e --url <URL>
joomscan --ec -u <URL>
joomlavs.rb #https://github.com/rastating/joomlavs

At this point you should already have some information about the web server being used by the client (if any data was given) and some tricks to keep in mind during the test. If you are lucky, you have even found a CMS and run some scanner.

Step-by-step Web Application testing

From this point we are going to start interacting with the web application.

Initial checks

Default pages with interesting info:

  • /robots.txt
  • /sitemap.xml
  • A 404 error page - interesting data could be presented here.
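A quick manual check for all three (domain.com is a placeholder):

curl -s https://domain.com/robots.txt
curl -s https://domain.com/sitemap.xml
curl -s https://domain.com/this_page_should_not_exist_1337 # force a 404 and inspect the error page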

Check if you can upload files (PUT verb, WebDav)

If you find that WebDav is enabled but you don't have enough permissions to upload files to the root folder, try to:

  • Brute Force credentials
  • Upload files via WebDav to the rest of the folders found inside the web page. You may have permissions to upload files in other folders.
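A minimal sketch to test the PUT verb and WebDav by hand (the path and payload are placeholders; davtest is included in Kali):

curl -s -i -X OPTIONS https://domain.com/ | grep -i '^allow' # does the server advertise PUT?
curl -s -i -X PUT https://domain.com/poc.txt -d 'poc' # try uploading a harmless file
davtest -url https://domain.com/ # tests WebDav upload/execution for many extensions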

SSL/TLS vulnerabilities

Use testssl.sh to check for vulnerabilities (in bug bounty programs these kinds of vulnerabilities probably won't be accepted) and use a2sv to recheck them:

./testssl.sh [--htmlfile] 10.10.10.10:443
#Use the --htmlfile to save the output inside an htmlfile also

## You can also use other tools, but testssl.sh is, at this moment, the best one (I think)
sslscan <host:port>
sslyze --regular <ip:port>

Information about SSL/TLS vulnerabilities:

Spidering

Launch some kind of spider against the web. The goal of the spider is to:

  • Find all files and folders (gospider, dirhunt, envie). Run a broken-link checker (let's see if you can take over something). You can also find links using urlgrab, which supports JS rendering.
  • Find all the possible parameters of each executable file. ParamSpider can help you with this.
  • Read the next section “Special Findings” to search for more information on each file found.
  • hakrawler can also be interesting
  • Another interesting tool to extract links from a page: https://github.com/dwisiswant0/galer
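Minimal invocations of two of the spiders mentioned above (flags may vary between versions; check each tool's help):

gospider -s https://domain.com -d 3 -c 10 -o spider_out # -d: depth, -c: concurrency, -o: output dir
echo https://domain.com | hakrawler # hakrawler reads target URLs from stdin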

Note that anytime a new directory is discovered during brute-forcing or spidering, it should be spidered.

Brute Force directories and files

Start brute-forcing from the root folder and be sure to brute-force all the directories found using this method and all the directories discovered by the spidering (you can do this brute-forcing recursively, prepending the names of the found directories to the used wordlist).
Tools:

  • Dirb / Dirbuster - Included in Kali; old (and slow) but functional. Allows self-signed certificates and recursive search.
  • Dirsearch - Fast; it doesn't allow self-signed certificates but allows recursive search.
  • Gobuster - Fast; needs Go; allows self-signed certificates; it doesn't have recursive search.
  • Feroxbuster - Fast, supports recursive search.
  • wfuzz: wfuzz -w /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt https://domain.com/api/FUZZ
  • ffuf - Fast: ffuf -c -w /usr/share/wordlists/dirb/big.txt -u http://10.10.10.10/FUZZ
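Example invocations for Gobuster and Feroxbuster (-k accepts self-signed certificates; the wordlist paths are the usual Kali/SecLists locations):

gobuster dir -u https://10.10.10.10 -w /usr/share/wordlists/dirb/big.txt -k
feroxbuster -u https://10.10.10.10 -w /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt -k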

Recommended dictionaries:

Note that anytime a new directory is discovered during brute-forcing or spidering, it should be Brute-Forced.

File backups

Once you have found all the files, look for backups of all the executable files (".php", ".aspx"...). Common variations for naming a backup are: file.ext~, #file.ext#, ~file.ext, file.ext.bak, file.ext.tmp, file.ext.old, file.bak, file.tmp and file.old
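A minimal sketch that probes those variations for one known file (the file name and domain are placeholders):

f="index.php" # placeholder: one of the discovered executable files
for cand in "$f~" "%23$f%23" "~$f" "$f.bak" "$f.tmp" "$f.old" "index.bak" "index.tmp" "index.old"; do # %23 is the URL-encoded '#'
  curl -s -o /dev/null -w "%{http_code}  $cand\n" "https://domain.com/$cand"
done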

Discover parameters using brute-force

You can use tools like Arjun and Parameth to discover hidden parameters. If you can, you could try to search for hidden parameters in each executable web file.
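For example, with Arjun (the URL is a placeholder):

arjun -u https://domain.com/endpoint.php # tries thousands of common parameter names against the endpoint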

Special findings

While performing the spidering and brute-forcing you may find interesting things that you should take note of.

Interesting files

Interesting info inside the file

  • Comments: Check the comments of all the files, you can find credentials or hidden functionality.
    • If you are playing a CTF, a “common” trick is to hide information inside comments to the right of the page (using hundreds of spaces so you don’t see the data if you open the source code with the browser). Another possibility is to use several new lines and hide information in a comment at the bottom of the web page.
  • API keys: If you find any API key, there are guides that indicate how to use the API keys of different platforms: https://github.com/streaak/keyhacks, https://github.com/xyele/zile.git
  • S3 Buckets: While spidering, look if any subdomain or any link is related to some S3 bucket. In that case, check the permissions of the bucket.

JS code

The JS code of a web application can be really interesting: it could contain API keys, credentials or other endpoints, and by understanding it you could be able to bypass security measures.
It can also be very useful to parse the JS files in order to search for other endpoints: LinkFinder, JSScanner (a wrapper of LinkFinder), JSParser, relative-url-extractor.
Another interesting approach could be monitoring the JS files with a tool like JSMon that checks for changes.
You should also check if the application is using any outdated and vulnerable JavaScript library with RetireJS.
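For example, extracting endpoints from a single JS file with LinkFinder (the file URL is a placeholder):

python3 linkfinder.py -i https://domain.com/static/app.js -o cli # -o cli prints the discovered endpoints to the terminal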

If the JavaScript code is obfuscated, these tools could be useful:

On several occasions you will need to understand the regular expressions used; this will be useful: https://regex101.com/

403 Forbidden/Basic Authentication/401 Unauthorized (bypass)

  • Try using different verbs to access the file: GET, POST, INVENTED (any made-up verb)

  • If /path is blocked, try using /%2e/path (if the access is blocked by a proxy, this could bypass the protection). Also try /%252e/path (double URL encode)

  • Try a Unicode bypass: /%ef%bc%8fpath (those URL-encoded bytes are a full-width look-alike of "/"), so when decoded back it becomes //path and maybe you will have already bypassed the /path name check

  • Try to stress the server by sending common GET requests (it worked for this guy with Facebook).

  • Change the protocol: from http to https, or from https to http

  • Change Host header to some arbitrary value (that worked here)

  • Other path bypasses:

    • site.com/secret –> HTTP 403 Forbidden
    • site.com/SECRET –> HTTP 200 OK
    • site.com/secret/ –> HTTP 200 OK
    • site.com/secret/. –> HTTP 200 OK
    • site.com//secret// –> HTTP 200 OK
    • site.com/./secret/.. –> HTTP 200 OK
    • site.com/secret.json –> HTTP 200 OK (ruby)
  • Other bypasses:

    • /v3/users_data/1234 –> 403 Forbidden
    • /v1/users_data/1234 –> 200 OK
    • {"id":111} –> 401 Unauthorized
    • {"id":[111]} –> 200 OK
    • {"id":111} –> 401 Unauthorized
    • {"id":{"id":111}} –> 200 OK
    • {"user_id":"<legit_id>","user_id":"<victims_id>"} (JSON Parameter Pollution)
    • user_id=ATTACKER_ID&user_id=VICTIM_ID (Parameter Pollution)
  • Go to https://archive.org/web/ and check if that file was publicly accessible in the past.

  • Fuzz the page: Try using HTTP proxy headers, HTTP Basic Authentication and NTLM brute-force (with a few combinations only) and other techniques (see the curl sketch after this list). To do all of this I have created the tool fuzzhttpbypass.

    • X-Originating-IP: 127.0.0.1
    • X-Forwarded-For: 127.0.0.1
    • X-Remote-IP: 127.0.0.1
    • X-Remote-Addr: 127.0.0.1
    • X-ProxyUser-Ip: 127.0.0.1
    • X-Original-URL: 127.0.0.1
    • If the path is protected you can try to bypass the path protection using these other headers:
      • X-Original-URL: /admin/console
      • X-Rewrite-URL: /admin/console
  • Guess the password: Test the following common credentials. Do you know something about the victim? Or the CTF challenge name?

  • Brute force

    {% code title=“Common creds” %}

    admin    admin
    admin    password
    admin    1234
    admin    admin1234
    admin    123456
    root     toor
    test     test
    guest    guest
    

    {% endcode %}
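A minimal curl sketch for the verb and header bypasses above (https://domain.com/admin/console is a placeholder for the blocked path you found):

url="https://domain.com/admin/console" # placeholder: the 403/401 path you found
for verb in GET POST PUT PATCH FOOBAR; do # FOOBAR is an invented verb
  curl -s -o /dev/null -w "%{http_code}  $verb\n" -X "$verb" "$url"
done
for h in "X-Originating-IP: 127.0.0.1" "X-Forwarded-For: 127.0.0.1" "X-Original-URL: /admin/console" "X-Rewrite-URL: /admin/console"; do
  curl -s -o /dev/null -w "%{http_code}  $h\n" -H "$h" "$url"
done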

502 Proxy Error

If any page responds with that code, it's probably a badly configured proxy. If you send an HTTP request like GET https://google.com HTTP/1.1 (with the Host header and other common headers), the proxy will try to access google.com and you will have found an SSRF.
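A raw test request, assuming the suspected proxy listens on port 80 of domain.com:

printf 'GET https://google.com/ HTTP/1.1\r\nHost: google.com\r\nConnection: close\r\n\r\n' | nc domain.com 80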

NTLM Authentication - Info disclosure

If the server asking for authentication is running Windows, or you find a login asking for your credentials (and asking for a domain name), you can provoke an information disclosure.
Send the header “Authorization: NTLM TlRMTVNTUAABAAAAB4IIAAAAAAAAAAAAAAAAAAAAAAA=” and, due to how NTLM authentication works, the server will respond with internal info (IIS version, Windows version…) inside the “WWW-Authenticate” header.
You can automate this using the nmap plugin “http-ntlm-info.nse”.
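Both checks as commands (domain.com is a placeholder):

curl -s -I -H "Authorization: NTLM TlRMTVNTUAABAAAAB4IIAAAAAAAAAAAAAAAAAAAAAAA=" https://domain.com/ | grep -i '^www-authenticate'
nmap -p 80,443 --script http-ntlm-info domain.com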

HTTP Redirect (CTF)

It is possible to put content inside a redirection. This content won't be shown to the user (as the browser will execute the redirection), but something could be hidden in there.
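Fetch the redirect without following it and inspect the body (the path is a placeholder):

curl -s -i https://domain.com/some_redirect # no -L: show the 30x response body instead of following it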

Bypass regular login (POST or GET method)

If you find a login page, here you can find some techniques to try to bypass it:

  • Check for comments inside the page (scroll down and to the right?)
  • Check if you can directly access the restricted pages
  • Try not sending the parameters (do not send any, or send only one)
  • Test manually very common passwords.
  • Check for default credentials
  • Check for common combinations (root, admin, password, name of the tech, default user with one of these passwords)
  • Check for the PHP comparison error: user[]=a&pwd=b, user=a&pwd[]=b, user[]=a&pwd[]=b
  • Create a dictionary using Cewl, add the default username and password (if any) and try to brute-force it using all the words as usernames and passwords (see the sketch after this list)
  • Try to brute-force using a bigger dictionary (Brute force)
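A minimal sketch of that Cewl + brute-force flow (the login form path, field names and failure string are assumptions you must adapt to the real form):

cewl -m 5 -w words.txt https://domain.com # build a wordlist from the site's own content (-m: min word length)
hydra -L words.txt -P words.txt domain.com https-post-form "/login.php:user=^USER^&pass=^PASS^:F=Invalid" # placeholders: form path, params, failure string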

You should also check for:

Insert into/Create Object

Check for SQL INSERT INTO injections.

Upload Files

Check for these vulnerabilities:

User input Web Vulnerabilities list

More references for each Web Vulnerability: https://cyberzombie.in/bug-bounty-methodology-techniques-tools-procedures/
Another checklist: https://six2dez.gitbook.io/pentest-book/others/web-checklist