Software and data integrity failures: An OWASP Top 10 risk
Software and data integrity failures are increasingly common and dangerous cybersecurity vulnerabilities, recognized as web application security risk category A08:2021 in the OWASP Top 10. These failures occur when critical application updates, dependencies, or data pipelines are not properly validated or protected, opening the door to tampering, unauthorized changes, or supply chain attacks.

With attackers increasingly targeting CI/CD pipelines, open-source components, and update mechanisms, it’s more important than ever to secure your web app’s code and data integrity. This article goes through the dangers of software and data integrity failures and shows how a defense-in-depth approach that also incorporates a continuous vulnerability scanning process can minimize the risk of compromise.
What are software and data integrity failures?
Software and data integrity failures happen when systems don’t adequately protect against unauthorized changes to code, configuration, or data. This could mean an application downloads a malicious software update, runs tampered scripts, or unquestioningly accepts altered input from an untrusted source.
At their core, these issues result from trusting components or processes without verifying their integrity. This makes it possible for attackers to manipulate software behavior, inject malicious code, or compromise sensitive operations.
Common causes of software and data integrity failures
- Unsigned or unverified software updates: Without digital signatures or integrity checks, software updates can be tampered with and used to distribute malicious code.
- Insecure use of plugins or modules: Relying on third-party components without reviewing their security can introduce vulnerabilities and increase the attack surface.
- Poor validation of data in transit or storage: Failing to validate or sanitize data can allow attackers to inject or corrupt sensitive information during transfer or at rest.
- Lack of integrity checks in CI/CD workflows: When code and artifacts aren’t verified throughout the pipeline, attackers can inject malicious changes that go undetected into production.
- Lack of data authenticity verification: When an application fails to confirm the origin of incoming data, it may accept and process untrusted input. This opens the door for attackers to inject malicious data into the system.
- No support for integrity validation: If the software doesn’t include mechanisms to verify the integrity of data, attackers can alter or delete information without detection, compromising system reliability.
- Use of untrusted search paths: Allowing users or external inputs to influence library or module search paths can result in malicious code being loaded instead of trusted components.
- Code download without integrity verification: Downloading external code or dependencies without checking their authenticity or integrity enables attackers to substitute or inject harmful code.
- Unsafe deserialization of untrusted input: Deserializing untrusted data without validation can lead to code execution attacks, as malicious payloads can be embedded within the serialized objects.
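To illustrate the last point, here is a minimal Python sketch contrasting risky native deserialization with accepting only plain JSON. The function name and payload are illustrative, not from any specific application:

```python
import json

# Risky: pickle.loads() can execute arbitrary code embedded in the payload,
# so it must never be called on data from an untrusted source:
#   obj = pickle.loads(untrusted_bytes)  # DON'T do this

def parse_user_payload(raw: bytes) -> dict:
    """Safely parse untrusted input as plain JSON instead of a native
    serialization format, and reject anything that isn't a JSON object."""
    obj = json.loads(raw)
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object")
    return obj

print(parse_user_payload(b'{"user": "alice", "role": "viewer"}'))
```

Restricting input to a data-only format like JSON removes the code-execution primitive entirely, whereas formats like pickle, Java serialization, or PHP `unserialize()` let the payload dictate behavior.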
What’s the difference between Software and Data Integrity Failures (A08:2021) and Vulnerable and Outdated Components (A06:2021)?
While there’s a lot of practical overlap between the two risk categories and they both relate to the software supply chain, they focus on different core concerns. In short, A08:2021 is about tampering and trust issues, while A06:2021 is about exposure due to neglect or outdated components.
More specifically, software and data integrity failures involve issues where software updates, critical data, or CI/CD pipelines are not properly protected against integrity violations. This category focuses on trust boundaries being breached, such as unauthorized access to build processes, compromised dependencies, or misconfigured deployment systems that allow malicious changes.
In contrast, the Vulnerable and Outdated Components category refers specifically to the use of libraries, frameworks, or other software modules that are known to have security flaws or are no longer supported. This category is about the inherent risks of running components with known vulnerabilities (often due to lack of visibility or asset management), not necessarily a breach of integrity during their use.
Software and data integrity failure examples
Probably the most notorious example is the SolarWinds supply chain attack. After infiltrating the development pipeline, attackers inserted malicious code into a trusted software update for a network monitoring tool, which was then distributed to thousands of organizations, including government agencies. The compromised update gave attackers remote access to critical systems and went undetected for months.
More recently, the Polyfill.io security crisis saw a third party buy the domain and CDN used to distribute a trusted and popular JavaScript library and use them to serve a version containing malicious code to websites that relied on that library. More general examples include:
- JavaScript dependencies injected with crypto miners via npm or CDN links
- Malicious plugins in content management systems
- Configuration drift in infrastructure-as-code pipelines
- Exploitable default settings in CI tools that allow script injection
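One standard defense against CDN tampering of the Polyfill.io variety is Subresource Integrity (SRI): the page pins a hash of the expected script in the `integrity` attribute of the `<script>` tag, and the browser refuses to run the file if the fetched bytes no longer match. A short sketch of computing an SRI value with the Python standard library (the sample script content is illustrative):

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity (SRI) value for a script file,
    suitable for the integrity="..." attribute of a <script> tag."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# If a CDN later serves different bytes for the same URL, the pinned
# hash no longer matches and the browser blocks the script.
print(sri_hash(b"console.log('hello');"))
```

The resulting value goes into the HTML as `<script src="..." integrity="sha384-..." crossorigin="anonymous">`, turning an implicit trust in the CDN into an explicit, verifiable pin.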
What are the dangers of software and data integrity failure?
Unlike simple misconfigurations, these failures affect the trust model of your entire software lifecycle. If attackers can tamper with code, data, or deployment processes, they can:
- Insert backdoors that compromise users or internal systems
- Exfiltrate sensitive data under the guise of legitimate operations
- Undermine trust in software vendors and platforms
- Spread malware across customer environments via updates
- Bypass other controls by injecting malicious functionality into trusted workflows
These types of attacks are hard to detect and can have far-reaching consequences that go well beyond a “regular” data breach, especially when third-party components or automation tools are involved.
How to mitigate software and data integrity failures
The key to mitigation is reducing trust in unverified sources and ensuring robust validation across your software delivery lifecycle. This means applying security controls not just at the perimeter or in production but throughout the entire build and deployment process.
Common data integrity issues
- Lack of cryptographic signing: Software or scripts are not signed or verified before execution.
- Insecure third-party integrations: Applications pull from remote sources and APIs without verifying data contents.
- Pipeline compromise: Repositories or CI/CD tools are misconfigured, allowing unverified code to be injected into builds.
- Improper input validation: Data fields or configuration files are manipulated without detection.
These problems are especially dangerous because they’re often overlooked or simply don’t surface during traditional testing.
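To make the first issue concrete, here is a simplified sketch of verify-before-use integrity tagging using a stdlib HMAC. Production code signing uses asymmetric signatures (e.g. via a library such as `cryptography` or platform signing tools) rather than a shared key, and the key here is a placeholder, but the principle of rejecting any payload whose tag doesn’t match is the same:

```python
import hashlib
import hmac

SECRET_KEY = b"demo-key"  # illustration only; load from a secrets manager in practice

def sign(payload: bytes) -> str:
    """Produce an integrity tag for a payload before it is stored or shipped."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    """Reject any payload whose tag does not match: it has been altered."""
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

config = b'{"feature_flags": {"beta": false}}'
tag = sign(config)
assert verify(config, tag)
assert not verify(config.replace(b"false", b"true"), tag)  # tampering detected
```

Note the use of `hmac.compare_digest()` for the comparison, which avoids leaking information through timing differences.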
How can you prevent software and data integrity failures?
To prevent integrity failures, organizations must focus on visibility, validation, and control across their code, data, and the entire software pipeline. Key security measures include:
- Implement a policy of signed updates and packages: Require digital signatures for all updates and third-party components.
- Use hash validation: Check cryptographic hashes for downloaded code or container images to detect tampering.
- Secure CI/CD pipelines: Apply access controls, secrets management, and artifact validation in your automation processes.
- Monitor and test in production: Validate behavior continuously using automated dynamic testing of production-parity environments to find and remediate any security vulnerabilities introduced after static checks.
- Review dependencies regularly: Automate vulnerability checks for all components and remove or update outdated plugins.
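The hash validation step from the list above can be sketched in a few lines of Python: stream the downloaded artifact, compute its SHA-256 digest, and compare it against the checksum published by the vendor before installing anything. The artifact name and contents below are placeholders:

```python
import hashlib
import hmac
from pathlib import Path

def verify_sha256(path: Path, expected_hex: str) -> bool:
    """Compare a downloaded artifact's SHA-256 digest against the
    checksum published by the vendor before installing it."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large artifacts don't need to fit in memory
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return hmac.compare_digest(digest.hexdigest(), expected_hex.lower())

# Example with a dummy "artifact" checked against its known digest
artifact = Path("demo-artifact.bin")
artifact.write_bytes(b"release-1.2.3")
print(verify_sha256(artifact, hashlib.sha256(b"release-1.2.3").hexdigest()))
```

The same pattern applies to container images and packages; the crucial detail is that the expected checksum must come from a trusted channel (e.g. a signed release page), not from the same location as the download itself.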
How taking a DAST-first approach can also help with software and data integrity
Most integrity failures surface only when applications are running. Static security tools like SAST or SCA might detect some issues, but unlike dynamic application security testing (DAST), they can’t prove whether exploitable vulnerabilities are actually present. For integrity violations that introduce security misconfigurations or other vulnerabilities to serve as backdoors, taking a DAST-first approach helps minimize risk by testing what you’re actually running, including any vulnerable components you might not be aware of.
When used as part of a comprehensive zero-trust application security program, DAST can help you identify many cases where:
- A tampered script can execute in your environment
- An input manipulation bypasses validation
- An exposed update endpoint is vulnerable to abuse
- Insecure deserialization may allow malicious code to load
On Invicti’s DAST-first platform specifically, proof-based scanning automatically confirms many exploitable vulnerabilities to cut through false positives and focus your teams’ efforts where they matter most. Instead of reacting to theoretical risks flagged at build time, you address actual threats at runtime, which is where attackers operate.
Final thoughts on preventing software and data integrity failures
Software and data integrity failures strike at the heart of trust in modern development. As applications become more complex and reliant on automation, third-party code, and cloud infrastructure, the risk of tampering increases.
To stay ahead of threats, your application security strategy must validate not only what’s in your code but also how that code behaves in the real world. A DAST-first approach supported by software inventory best practices and extensive automation enables you to find and fix the issues that attackers can actually exploit.