AWS Fortress
by telegramweb - 08-08-2023, 12:45 AM
#1
Anyone got a writeup?
Reply
#2
TIPS that can help complete the AWS fortress.
INTRODUCTION
This article is not a write-up. You will not find any flags or copy-paste solutions here. Instead, there are plenty of reference links and commands that I found helpful in the process of passing the AWS fortress.
SERVICES DISCOVERY
Always enumerate every IP address you have during the engagement.
MANUAL WAY
For this purpose, you can conduct the recon of the target manually using:
 [Image: 1*Jou0CQjl0IzdhOoyjPHc0w.png] 
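If the screenshot does not load, a minimal manual sketch could look like the following (the IP address is a placeholder, not the real fortress address, and any port scanner of your choice works just as well):
Code:
# Placeholder IP - substitute the fortress address you were given.
sudo nmap -sC -sV -oA tcp_default 10.13.37.11          # default scripts + version detection
sudo nmap -p- --min-rate 1000 -oA tcp_all 10.13.37.11  # full TCP port sweep
sudo nmap -sU --top-ports 50 -oA udp_top 10.13.37.11   # quick UDP sanity check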
AUTOMATIC WAY
You can also choose a more automatic way of service enumeration with:
 [Image: 1*860mndXiQsWewzOEwgp40Q.png]

Source: https://github.com/Karmaz95/crimson#diam...n-diamonds
WEB ENUMERATION
There are many steps in the web reconnaissance phase. Ensure you do it thoroughly, so you will not miss any information.
VHOST DISCOVERY
If you find any web servers, do not forget to enumerate virtual hostnames.
 [Image: 1*9B1HIl0iM49rC5kgCvCSXg.png]
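A hedged example of vhost fuzzing, assuming ffuf and a SecLists wordlist (TARGET and fortress.htb are placeholders; the real base domain will differ):
Code:
ffuf -u http://TARGET/ -H "Host: FUZZ.fortress.htb" \
     -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt \
     -ac   # auto-calibrate filtering so the default "not found" response is hidden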
DIRECTORY BRUTEFORCING
I found it hard to brute-force the paths and parameters because of the fortress instability, but to be sure, you can use the command below:
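For reference, one way to phrase such a command with ffuf (the wordlist path and TARGET are assumptions, adjust them to your setup):
Code:
ffuf -u http://TARGET/FUZZ \
     -w /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt \
     -mc 200,204,301,302,307,401,403 -t 20   # low thread count, since the fortress tends to be unstable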
 [Image: 1*IFMbvA412BbHGSrKGXYTDQ.png] 
— directory brute-forcing.
Additionally, a tip regarding directory brute-forcing: always try to guess the API version number if you ever encounter the 
Code:
/api/
 endpoint:
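A quick, hypothetical way to do that version guessing with curl (TARGET is a placeholder):
Code:
for v in $(seq 1 9); do
  printf 'api/v%s -> ' "$v"
  curl -s -o /dev/null -w '%{http_code}\n' "http://TARGET/api/v$v/"
done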
 [Image: 1*HOh6qc1n3G8stKYAG-YrKg.png]

— dir wordlist.
WEB CRAWLING
I prepared a short script to automate this task a long time ago.
I still use it today and recommend it for the web crawling process:
  • You have to prepare a 
    Code:
    domains.txt
     file with one domain per line.
  • You can replace the 
    Code:
    Cookie
     header if you have any session IDs.
# Crawl every domain from domains.txt and aggregate all discovered URLs into urls.txt
cookie='Cookie: a=1;'
file_path='domains.txt'

for domain in $(cat "$file_path"); do
    echo "[+] $domain"
    # Crawl the live host with gospider (robots.txt, sitemap and subdomains included)
    echo "$domain" | httpx -silent | gospider -c 10 -q -r -w -a --sitemap --robots --subs -H "$cookie" >> urls.txt
    # Mine parameterized URLs with ParamSpider
    python3 "$HOME"/tools/ParamSpider/paramspider.py -d "$domain" --output ./paramspider.txt --level high > /dev/null 2>&1
    cat paramspider.txt 2>/dev/null | grep http | sort -u | grep "$domain" >> urls.txt
    rm paramspider.txt 2>/dev/null
    # Pull archived URLs and crawl with hakrawler and galer
    get-all-urls "$domain" >> urls.txt
    waybackurls "$domain" >> urls.txt
    echo "$domain" | httpx -silent | hakrawler >> urls.txt
    echo "$domain" | httpx -silent | galer -s >> urls.txt
done

# Normalize, deduplicate and strip query-string values from the collected URLs
cat urls.txt | grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" | sort -u | qsreplace -a > temp1.txt
mv temp1.txt urls.txt
  • As a result, you will get the URLs in the 
    Code:
    urls.txt
     file.
JS EXTRACTION
After gathering URLs, choose one domain and collect JS files for analysis:
# Pick one target domain and collect its JS files for analysis
domain=TARGET
cookie='Cookie: a=1;'
# Grab JS URLs already present in urls.txt
cat urls.txt | grep "\.js" | grep "$domain" >> js_urls.txt
# Extract additional script URLs with getJS and deduplicate with anew
sort -u urls.txt js_urls.txt | getJS --timeout 3 --insecure --complete --nocolors -H "$cookie" | grep "^http" | grep "$domain" | sed "s/\?.*//" | anew js_urls.txt
# Keep only live JS files and save their responses under source_code/
httpx -silent -l js_urls.txt -H "$cookie" -fc 304,404 -srd source_code/ >> temp
mv temp js_urls.txt
PROXY THE RESULTS TO THE BURP SUITE
After the above steps, you should have gathered quite a lot of data to analyze.
It is good to proxy them to the Burp Suite using httpx.
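A minimal sketch of that step, assuming Burp listens on its default 127.0.0.1:8080:
Code:
httpx -l urls.txt -silent -http-proxy http://127.0.0.1:8080
Burp then records every request and response in its site map for manual review.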
 [Image: 1*JzR2KuNPqXPBBkLbAsJNvw.png] 
AUTOMATIC WAY
You can also choose a more automatic way of web enumeration with:
 [Image: 1*_rIrln5fJ9GCPZXaAM-2lA.png]
Source: https://github.com/Karmaz95/crimson#diam...t-diamonds
Reply
#3
HARDCODED CREDENTIALS
Do not forget to always analyze your code for plaintext credentials that may be hardcoded in it. The easiest way is to use grep with a regular expression.
  • An example of such a regular expression is shown below:
 [Image: 1*Pmdg_LR1mcKiMadHLi-L0w.png] Source: Own study — searching for the hardcoded credentials using grep.
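If the screenshot is hard to read, a comparable (not identical) grep over the downloaded sources could be:
Code:
grep -rniE "(password|passwd|pwd|secret|token|api[_-]?key|aws_access_key_id|aws_secret_access_key)[[:space:]]*[:=]" source_code/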
  • Another way is to use open source tools:
  [Image: 1*HyVGi1FabxF-nFk_lXOabg.png] Source: Own study.
JS DEOBFUSCATION
For JS deobfuscation, use de4js:
  [Image: 1*-G6RHFQAmeuV9OnsDCOz5Q.png] Source: https://lelinhtinh.github.io/de4js/
Another way could be to pipe the JS file into js-beautify:
cat $file" | js-beautifyJS ANALYSISMake sure you read every JS file source code.
  [Image: 1*jxU4VliNwMrh7kknaXSvjQ.png] Source: Own study.
The command below helps you extract the endpoints from the JS files:
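As a rough stand-in for that command (not the author's exact one), you can pull quoted paths out of the downloaded JS files like this:
Code:
grep -rhoE '"(/[a-zA-Z0-9_?&=/.-]+)"' source_code/ | tr -d '"' | sort -u > endpoints.txt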
  [Image: 1*ivfyTWP3r5v3WpBJAdsG8Q.png]
You can always fuzz those new endpoints, using a file that contains the discovered domains, to check whether the endpoints exist on any of them:
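One hedged way to do that is ffuf's multi-wordlist (clusterbomb) mode; domains.txt and endpoints.txt are assumed to come from the earlier steps, and the endpoint entries already start with a leading slash, so the two keywords are simply concatenated:
Code:
ffuf -w domains.txt:DOMAIN -w endpoints.txt:ENDPOINT \
     -u "http://DOMAINENDPOINT" -mc all -fc 404 -t 20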
  [Image: 1*I8DmsNqqarI3x-Pk9IJrsg.png] Source: Own study — combining endpoints from JS files with the discovered domain names.
Moreover, you should proxy the results to the Burp Suite and use the meth0dman extension for HTTP method probing:
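If you prefer to double-check a single endpoint by hand, a rough curl loop works too (the endpoint below is hypothetical):
Code:
for m in GET POST PUT DELETE PATCH OPTIONS; do
  printf '%-7s -> ' "$m"
  curl -sk -o /dev/null -w '%{http_code}\n' -X "$m" "https://TARGET/api/v1/example"
done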
  [Image: 1*o0jJD5xT3w13BgApmvV4Eg.png] Source: Own study — HTTP method probing.
JSON ANALYSIS
If you leak any JSON files, try to extract the same type of information from them as you did from the JavaScript files.
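A small sketch, assuming jq is installed (the file name is a placeholder): dump every string value and grep for interesting bits.
Code:
jq -r '.. | strings' leaked.json | grep -iE "http|/api/|key|token|pass" | sort -u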
  [Image: 1*X8uXWvnDvul30Fh-kaZ0_g.png] Source: Own study — parsing JSON files.
GITHUB REPOSITORY ANALYSIS
It is good to download the repository using git-dumper and then analyze it using GitKraken.
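For orientation, the usual git-dumper invocation looks roughly like this (the URL and output directory are placeholders):
Code:
git-dumper http://TARGET/.git/ dumped_repo/
cd dumped_repo && git log --oneline   # then inspect individual commits in GitKraken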
  [Image: 1*a_3RIPYyGTZMM_5o27ApAg.png] Source: https://github.com/arthaud/git-dumper#usage
  [Image: 1*QMIpDeRtftaYEO4vB9aCaw.png] Source: Own study — checking the specific commit source code using GitKraken.
DATABASE FILES
Use sqlitebrowser for viewing files with the 
Code:
.db
 extension.
  [Image: 1*M-XIgdsle5ykDl6_v22riA.png]
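If you prefer the terminal over the GUI, the sqlite3 client gives the same view (file and table names below are placeholders):
Code:
sqlite3 database.db ".tables"
sqlite3 database.db "SELECT * FROM users LIMIT 10;"   # hypothetical table name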
Reply
#4
You're a gem, boss.
Reply
#5
Super work, thanks
Reply
#6
good read
Reply
#7
That sounds really good, man. I'm 100% with you on that.
Reply
#8
super
Reply
#9
(08-08-2023, 12:45 AM)telegramweb Wrote: Anyone got a writeup?
Reply
#10
Does anyone have a cloned image, or can anyone recommend an environment to replicate this in? Thanks in advance.
Reply

