08-08-2023, 12:45 AM
Anyone got a writeup?
08-08-2023, 11:19 AM
(This post was last modified: 08-08-2023, 11:19 AM by ReBreached.)
TIPS that can help complete the AWS fortress.
INTRODUCTION
This article is not a write-up. You will not find any flags or copy-paste solutions here. Instead, there are plenty of reference links and commands that I found helpful in the process of passing the AWS fortress.

SERVICES DISCOVERY
Always enumerate every IP address you have during the engagement.

MANUAL WAY
For this purpose, you can conduct the recon of the target manually (a rough nmap sketch follows the crawling scripts below).

AUTOMATIC WAY
You can also choose a more automatic way of service enumeration with:
Source: https://github.com/Karmaz95/crimson#diam...n-diamonds

WEB ENUMERATION
There are many steps in the web reconnaissance phase. Make sure you do it thoroughly so you do not miss any information.

VHOST DISCOVERY
If you find any web servers, do not forget to enumerate virtual hostnames (see the ffuf sketch after the crawling scripts below).

DIRECTORY BRUTEFORCING
I found it hard to brute-force paths and parameters because of the fortress instability, but to be sure, you can still run a directory brute-force (the same ffuf sketch below covers it). An additional tip regarding directory brute-forcing: always try to guess the API version number if you ever encounter an /api/ path in your dir wordlist.

WEB CRAWLING
I prepared a short script to automate this task a long time ago. I still use it today and recommend it for the web crawling process:
Code:
file_path='domains.txt'
for domain in $(cat "$file_path"); do
    echo "[+] $domain"
    echo "$domain" | httpx -silent | gospider -c 10 -q -r -w -a --sitemap --robots --subs -H "$cookie" >> urls.txt
    python3 "$HOME"/tools/ParamSpider/paramspider.py -d "$domain" --output ./paramspider.txt --level high > /dev/null 2>&1
    cat paramspider.txt 2>/dev/null | grep http | sort -u | grep "$domain" >> urls.txt
    rm paramspider.txt 2>/dev/null
    get-all-urls "$domain" >> urls.txt
    waybackurls "$domain" >> urls.txt
    echo "$domain" | httpx -silent | hakrawler >> urls.txt
    echo "$domain" | httpx -silent | galer -s >> urls.txt
done
cat urls.txt | grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" | sort -u | qsreplace -a > temp1.txt
mv temp1.txt urls.txt
Code:
domain=TARGET
cookie='Cookie: a=1;'
cat urls.txt | grep "\.js" | grep "$domain" >> js_urls.txt
sort -u urls.txt js_urls.txt | getJS --timeout 3 --insecure --complete --nocolors -H "$cookie" | grep "^http" | grep "$domain" | sed "s/\?.*//" | anew js_urls.txt
httpx -silent -l js_urls.txt -H "$cookie" -fc 304,404 -srd source_code/ >> temp
mv temp js_urls.txt

PROXY THE RESULTS TO THE BURP SUITE
After the above steps, you should have gathered quite a lot of data to analyze. It is good to proxy it to the Burp Suite using httpx (see the httpx sketch below).

AUTOMATIC WAY
You can also choose a more automatic way of web enumeration with:
Source: https://github.com/Karmaz95/crimson#diam...t-diamonds
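For the manual services discovery step mentioned at the top of this post, here is a minimal sketch of the kind of scans I would run. The IP address, port list, and output file names are placeholders, and the exact flags are my own preference, not something the fortress requires.
Code:
# Full TCP port sweep, then a targeted script/version scan on the ports that came back open
sudo nmap -sS -p- -T4 --min-rate 1000 -oA full_tcp 10.10.10.10
sudo nmap -sC -sV -p 22,80,443 -oA detailed 10.10.10.10
# Quick pass over the most common UDP ports
sudo nmap -sU --top-ports 100 -oA top_udp 10.10.10.10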
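For the vhost discovery and directory brute-forcing steps above, a hedged ffuf sketch is below. TARGET, target.htb, the SecLists wordlist paths, and the filter values are placeholders you would tune per host; this is one possible approach, not the original command from the article.
Code:
# Virtual host discovery: fuzz the Host header and filter out the default response size
ffuf -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt \
     -u http://TARGET/ -H "Host: FUZZ.target.htb" -fs 10918

# Directory brute-forcing (keep the thread count low if the fortress is unstable)
ffuf -w /usr/share/seclists/Discovery/Web-Content/common.txt \
     -u http://TARGET/FUZZ -mc 200,204,301,302,307,401,403 -t 10

# Guessing the API version whenever an /api/ path shows up
seq 1 5 > versions.txt
ffuf -w versions.txt -u http://TARGET/api/vFUZZ/ -fc 404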
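And for proxying the gathered URLs into the Burp Suite with httpx, a minimal sketch, assuming Burp is listening on the default 127.0.0.1:8080 and the URL list is in urls.txt:
Code:
# Replay every discovered URL through Burp so it lands in the proxy history / site map
httpx -l urls.txt -silent -fr -http-proxy http://127.0.0.1:8080 -H "$cookie"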
08-08-2023, 11:20 AM
HARDCODED CREDENTIALS
Do not forget to always analyze the code you gather for plain-text credentials that may be hardcoded in it. The easiest way is to use grep with your own regex (a rough sketch follows below).
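A minimal grep sketch for that credential hunt; the patterns are just examples of what I would look for (generic secret keywords plus the AWS access key ID prefix), and source_code/ is assumed to be the directory where httpx dumped the responses:
Code:
# Generic credential keywords across everything gathered so far
grep -rniE "(password|passwd|secret|token|api[_-]?key|authorization)" source_code/ 2>/dev/null

# AWS-specific: access key IDs start with AKIA followed by 16 uppercase alphanumerics
grep -rnE "AKIA[0-9A-Z]{16}" source_code/ 2>/dev/null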
Source: https://lelinhtinh.github.io/de4js/
Another way could be to pipe the JS file into js-beautify:
Code:
cat "$file" | js-beautify

JS ANALYSIS
Make sure you read the source code of every JS file.
Source: Own study.
The next step is to extract the endpoints from the JS files (a rough sketch of this, together with the fuzzing step, appears at the end of this post). You can always fuzz those new endpoints, using a file that contains the discovered domains, to find out whether the endpoints exist on any of them.
Source: Own study (combining endpoints from JS files with the discovered domain names).
Moreover, you should proxy the results to the Burp Suite and use the meth0dman extension for HTTP method probing.
Source: Own study (HTTP method probing).

JSON ANALYSIS
If you leak any JSON files, try to extract the same type of information from them as from the JavaScript files.
Source: Own study (parsing JSON files).

GITHUB REPOSITORY ANALYSIS
It is good to download the repository using git-dumper and then analyze it using GitKraken.
Source: https://github.com/arthaud/git-dumper#usage
Source: Own study (checking the source code of a specific commit using GitKraken).

DATABASE FILES
Use sqlitebrowser for viewing files with the .db extension.
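The exact commands for the JS endpoint extraction and the cross-domain fuzzing are not reproduced above, so here is my own rough sketch of both steps. The regex is deliberately loose, source_code/ is assumed to hold the downloaded JS files, and domains.txt is assumed to hold the discovered domain names.
Code:
# Pull path-looking strings out of the downloaded JS files
grep -rhoE '"/[a-zA-Z0-9_/.?=&-]+"' source_code/ | tr -d '"' | sort -u > endpoints.txt

# Try every extracted endpoint against every discovered domain (clusterbomb mode)
ffuf -w domains.txt:DOMAIN -w endpoints.txt:ENDPOINT \
     -u https://DOMAIN/ENDPOINT -fc 404 -t 10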
08-08-2023, 01:51 PM
You're a gem, boss.
08-08-2023, 06:16 PM
Super work, thanks
08-09-2023, 02:47 PM
good read
08-15-2023, 11:16 AM
That sounds really good, man, I'm 100% with you on that.
08-17-2023, 06:38 AM
super
08-30-2023, 07:15 PM
12-20-2023, 08:53 PM
Does anyone have a cloned image, or can anyone recommend an environment to replicate this in? Thanks in advance.