Finding and Reporting Your First Reflected XSS: From A-Z

This guide is for someone who knows the basics of exploiting XSS (someone who has likely completed about 75% of the Reflected XSS labs at https://portswigger.net/web-security/cross-site-scripting/reflected ). What I'm sharing is not groundbreaking or a secret, but I want to give people aspiring to find and report vulnerabilities an easy-to-follow methodology for finding their first cut-and-dried vulnerability.

Install All Your Tools

I'm going to assume you're using Linux.

  1. Install Go: (https://go.dev/doc/install) (Some tools are written in go and this makes installation/management of these tools convenient)
  2. Install Paramspider (https://github.com/devanshbatham/ParamSpider) (Paramspider will be vital for passive endpoint enumeration)
  3. Install Subfinder (https://github.com/projectdiscovery/subfinder) (Subfinder will be important for enumeration of additional subdomains based on the scope you're working with)
  4. Install httpx (https://github.com/projectdiscovery/httpx?tab=readme-ov-file#installation-instructions) (httpx will be used to check what subdomains actually contain a webserver)
  5. Install qsreplace (https://github.com/tomnomnom/qsreplace) (This tool will replace query string parameter values with our chosen canary)
  6. Install uro (https://github.com/s0md3v/uro) (This tool will filter out uninteresting extension types though paramspider on its own already does a good job of doing so with a hardcoded list of undesirable extensions)

Now that installation of all of these tools is done, we can begin.

Step One: Reviewing Our Scope

Imagine we are given the following scope:

  • *.domain1.com
  • *.domain2.com
  • *.domain3.com

Essentially, we are allowed to test any endpoints/locations within these domains and their respective subdomains. The first thing we need to do is identify as many subdomains as possible. We need to enumerate as much scope as possible to give ourselves the best shot at identifying exploitable reflected XSS.

Step Two: Enumerating Subdomains

Subfinder comes in handy here with the following command:

subfinder -dL domains.txt -all -o subfinder.txt        

The above command will take in a list of domains in your domains.txt file, apply all methods of passive subdomain enumeration, and output the findings into the file subfinder.txt.
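For the example scope above, domains.txt is just the bare apex domains, one per line (domain1.com and friends are placeholders for your actual targets):

```shell
# Build the input file subfinder expects: one apex domain per line.
printf '%s\n' domain1.com domain2.com domain3.com > domains.txt
cat domains.txt
```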

With our list of results of subdomains, we need to keep in mind that not all these subdomains are guaranteed to have a web server. To gain a list of subdomains that have a web server, we can use the following command:

httpx -l subfinder.txt -o aliveSubdomains.txt        

With the -l option, we supply a list of subdomains for the tool to check. Those that have an active web server will be outputted to our aliveSubdomains.txt file.

Step Three: Enumerating Endpoints for Each Web Server

With our list of active web servers, we can now search each for endpoints. We can use Paramspider to accomplish this with the following command.

paramspider -l aliveSubdomains.txt        

Paramspider will take our list and query the web.archive.org (Wayback Machine) API for endpoints related to each subdomain. What I like about Paramspider is that it only records endpoints that contain parameters and that don't match a restricted list of extensions. Since we're hunting XSS, we don't care about links ending in .js, .txt, .json, etc., and we can adjust the restricted-extension list as necessary to reflect this. Paramspider creates a results directory in the directory it was invoked from and, for each subdomain, writes the discovered endpoints to a file named after that subdomain.
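As an illustration of that filtering (a sketch of the idea, not Paramspider's actual code), the principle is simply to drop URLs whose path ends in a boring static-asset extension:

```shell
# Sketch of extension-based filtering: keep parameterized endpoints,
# drop static assets like .js/.json/.txt/.css.
printf '%s\n' \
  'https://a.domain1.com/search?q=1' \
  'https://a.domain1.com/app.js?v=2' \
  'https://a.domain1.com/data.json?id=3' \
  | grep -Ev '\.(js|json|txt|css)(\?|$)'
```

Only the /search endpoint survives; the .js and .json links are discarded.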

Finally, when it's done running, move into the results directory and issue the following command:

cat *.txt > all.txt        

all.txt will contain all of the endpoints we discovered. With all of this done, we can now move on to the exciting part!

Step Four: Identifying Interesting Instances of Reflection

We can now take our list of endpoints, plug in a canary value for each parameter, and check for reflection of the canary in the response to each request with the following command:

cat all.txt | uro | grep = | qsreplace "<panda92>" | httpx -silent -nc -mc 200 -mr "<panda92>" | tee potential-xss        

We pipe the contents of all.txt into uro, which strips out uninteresting endpoints such as blog posts or incremental links. The grep = filter then keeps only URLs that actually carry query parameters. Next, qsreplace swaps each parameter value for the canary <panda92>, and httpx silently requests each URL. Only endpoints that return a 200 response code and include the canary in the response body are written to our potential-xss file.
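Under the hood, the qsreplace step amounts to swapping every =value pair in the query string for the canary. A rough sed equivalent (qsreplace itself parses URLs more carefully, so treat this only as an illustration):

```shell
# Rough equivalent of the qsreplace step: replace each query-string
# value with the canary <panda92>.
url='https://sub.domain1.com/search?q=test&page=2'
printf '%s\n' "$url" | sed -E 's/=[^&]*/=<panda92>/g'
```

This prints https://sub.domain1.com/search?q=&lt;panda92&gt;&page=&lt;panda92&gt; style output, i.e. both parameter values become the canary.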

Why this canary? If an application reflects reserved characters such as angle brackets into the response without output encoding them, odds are favorable that you've found XSS, though it all depends on the context in which the canary is reflected.
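Concretely, the difference between an encoded and an unencoded reflection looks like this (the sample response bodies are made up for illustration):

```shell
# If the canary survives verbatim, the angle brackets were not output encoded.
vulnerable='<p>You searched for <panda92></p>'
safe='<p>You searched for &lt;panda92&gt;</p>'

printf '%s' "$vulnerable" | grep -q '<panda92>' && echo 'unencoded: worth investigating'
printf '%s' "$safe" | grep -q '<panda92>' || echo 'encoded: likely safe'
```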

With our candidates collected in potential-xss, comb over each one and confirm whether XSS is actually possible. From here, it's just a matter of thoroughly checking each lead. With some diligence and luck, you will find your XSS!


(Screenshot: an XSS fires, with a payload that uses an alert call to print the subdomain.)

Shortcomings and How to Take This Further

What I've given you is a detailed, step-by-step guide to finding your first reflected XSS (and potentially many more). The steps here are not the be-all and end-all, though. First, think about how the command in step four decides what is interesting. We may miss opportunities for XSS if the response code is anything but 200: perhaps reflection is taking place on a 404 page that leads to XSS, or on a 500 error page. Maybe the HTML reserved characters < and > are appropriately filtered or output encoded, but other characters are not, and we're missing an opportunity there. Ultimately, think hard about what each command is doing and look for ways to broaden your search.
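One cheap way to broaden the search along those lines: test each special character individually and see which survive encoding. A sketch against a made-up response body, where the double quote is reflected raw but the angle brackets are HTML-encoded:

```shell
# Hypothetical response body: " survives, < and > are encoded away.
response='<input value="pnd"92"> and pnd&lt;92 pnd&gt;92'
for c in '<' '>' '"' "'"; do
  if printf '%s' "$response" | grep -qF "pnd${c}92"; then
    echo "unencoded: ${c}"
  fi
done
```

Here only the double quote is reported as unencoded, which is still interesting: a raw " can be enough to break out of an attribute context.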

When discussing subfinder, I didn't go into detail about adding API keys so subfinder can query certain services. That can also give you an edge if it leads you to a subdomain you would not otherwise have discovered. Also keep in mind that subfinder only performs passive enumeration; it does not brute-force subdomains against a wordlist, so you might be missing potential subdomains.

There are other tools and sources for passive endpoint enumeration besides Paramspider as well (waybackurls and gau, for example).

We should keep the limits of our tools in mind so we can figure out ways to compensate and leave no vulnerabilities on the table!

How to Patch Reflected XSS?

When you submit this finding to a bug bounty program or Vulnerability Disclosure Program, you will likely need to explain how to fix and prevent such issues. The answer depends on the context the payload is reflected in and whether there is a legitimate need for HTML to be authored or passed in the value. If there is, HTML sanitization will be needed; otherwise, output encoding for the appropriate context should do. For more details, please review https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html

Other protections include WAFs and Content Security Policies, but both have shortcomings. Signature-based WAFs are frequently bypassed and do not address the root cause of the issue in the code. CSPs are easy to implement incorrectly, and a policy that stops the browser from rendering inline JS may break the application if developers relied on inline JS elsewhere.
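For context, restricting script sources via CSP is done with a response header along these lines (an example only; real policies need per-application tuning):

```
Content-Security-Policy: script-src 'self'; object-src 'none'
```

Note that script-src 'self' blocks inline scripts by default, which is exactly the breakage risk mentioned above.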

Where Can I Apply This?

If you're new to this, it may be best to try this against Responsible Disclosure programs, also known as Vulnerability Disclosure Programs (VDPs). To search for VDPs from Synack, for instance, try the following google dork:

 site:*.responsibledisclosure.com        

These programs don't offer monetary rewards, but they're a great way to apply what you've learned in a safe and legal manner, and you likely won't face heavy competition to report valid findings. These programs also provide public acknowledgement for your efforts. I used to handle triage for such findings, so I may even get to triage your report!

Final Thoughts

When testing, think about using a VPN. People in your household may not be happy if your IP suddenly gets blocked from all sorts of sites by an overzealous WAF!

I hope I've given someone out there easy to follow steps and an actionable way to hunt for their first bug. If you find something applying this methodology, let me know!

Appreciation goes to the authors of the tools mentioned and to people like Sergio who have mentored me along the way! I hope I've done my part to continue spreading awareness of this tried-and-true method for finding valid reflected XSS. If you're an expert and want to add something I may have missed or offer insight or feedback, I welcome you to do so.
