Steps to Secure Your SDLC
Security through all phases of the SDLC
The software development lifecycle (SDLC) describes the six phases that software moves through, from gathering initial requirements through long-term monitoring and maintenance. However, the SDLC often focuses on software functionality and performance at the expense of security.
The Secure SDLC (SSDLC) integrates security into the SDLC. At each stage of the SSDLC, companies and developers can take steps to improve the security of their applications.
Requirements gathering
The requirements gathering phase of the SDLC defines the functions that the software is expected to perform. This typically includes descriptions of core functionality as well as non-functional requirements, such as the performance of the software.
The process of integrating security into software development starts with requirements gathering. At this point in the SDLC, developers should define security requirements, such as appropriately securing sensitive data or protecting against the exploitation of web application vulnerabilities.
These requirements should be based on applicable regulations, standards, and best practices. For example, regulations such as PCI DSS, HIPAA, and GDPR all specify security controls that should be in place and reflected in an application’s security requirements.
Design
During this phase of the SDLC, the requirements from the previous phase are converted into a design for the software. This includes sub-functions designed to address specific functionality requirements as well as an overall plan for integrating these sub-functions into a coherent whole.
If security requirements were defined in the previous phase, they are integrated into the design during this phase. Regulations, standards, and security best practice frameworks often include recommended implementations or design patterns for translating security requirements into real-world designs.
Development
During the development phase of the SSDLC, the focus should be on writing functional, secure code. This includes ensuring that all code committed to repositories is authentic and free of vulnerabilities and backdoors.
In recent years, supply chain exploits have become a focus of cyber threat actors. If an attacker can successfully commit malicious code into a widely-used application or trusted library, then all users of that application or software that depends on that library are affected.
To secure the development process, organizations should take steps to prevent malicious or vulnerable code from entering the codebase, including:
- Allowing commits only from verified identities: If anyone can commit code to a repository, then an attacker may be able to insert vulnerabilities or backdoors into an organization’s codebase. Companies should only allow code commits from verified identities to prevent these attacks and to provide an audit trail if an incident occurs.
- Automated security testing: Often, testing doesn't occur until the next phase of the SDLC, when the application is complete and little time remains to fix any issues before release. Integrating automated static application security testing (SAST), software composition analysis (SCA), and other security testing into automated CI/CD pipelines allows potential issues to be found and fixed immediately, minimizing the associated cost and delays. A minimal example of such a pipeline gate follows this list.
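The sketch below is one way such a CI gate might look. It assumes a Python codebase and two open-source tools, Bandit for SAST and pip-audit for SCA; the source directory and requirements file path are placeholders, and the tool choice would vary with your language and toolchain.

```python
# ci_security_gate.py - a minimal sketch of a CI security gate, assuming a
# Python codebase and two open-source tools: Bandit (SAST) and pip-audit (SCA).
# The "src" directory and requirements file path are placeholders.
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", "src"],                  # static analysis of first-party code
    ["pip-audit", "-r", "requirements.txt"],  # known-vulnerability scan of dependencies
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        print(f"Running: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            failed = True
    # A nonzero exit code fails the pipeline stage, so issues are fixed
    # before the change is merged rather than after release.
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```

Running the gate on every pull request keeps findings attached to the change that introduced them, which is far cheaper than triaging them during the testing or maintenance phases.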
Testing
During the testing phase, developers verify that the software meets the requirements created in the first phase of the SDLC. If security requirements are included in this list, then test cases should have been developed to evaluate these requirements.
At this stage, integrating security primarily requires running these tests. It is also important to validate that all code—and all test code—was digitally signed by a verified identity to protect against malicious or vulnerable code being inserted into the codebase.
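As an illustration of that check, the sketch below verifies that every commit in a range carries a good signature from a known key. It assumes GPG- or SSH-signed commits and a git client on the PATH; the revision range and the trusted key IDs are placeholders, and a real implementation would pull the allowed identities from your identity provider or signing service.

```python
# verify_commit_signatures.py - a minimal sketch, assuming GPG- or SSH-signed
# commits and a git client on the PATH. The revision range and the trusted
# key IDs are placeholders for identities managed by your organization.
import subprocess
import sys

TRUSTED_KEYS = {"ABCDEF1234567890"}   # placeholder signing-key IDs
REV_RANGE = "origin/main..HEAD"       # placeholder commit range to check

def main() -> int:
    # %H = commit hash, %G? = signature status ("G" means good), %GK = signing key
    log = subprocess.run(
        ["git", "log", "--format=%H %G? %GK", REV_RANGE],
        capture_output=True, text=True, check=True,
    ).stdout
    ok = True
    for line in log.splitlines():
        commit, status, *rest = line.split()
        key_id = rest[0] if rest else ""
        if status != "G" or key_id not in TRUSTED_KEYS:
            print(f"Unverified commit {commit}: status={status}, key={key_id or 'none'}")
            ok = False
    return 0 if ok else 1

if __name__ == "__main__":
    sys.exit(main())
```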
Deployment
During the deployment phase, the main threat that organizations face is fake or corrupted code being distributed to customers. If an attacker can substitute their own malicious update for a legitimate one, then customers will deploy and execute malicious code within their environments.
Digital signatures are a common way to prove the authenticity and integrity of software and software updates; however, they only work if the private key used to generate them is properly secured. A compromised private key, or a build and deployment process that allows malicious code to be legitimately signed by an organization (as occurred in the SolarWinds hack), can result in customers trusting malicious or vulnerable software supplied by an attacker.
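For illustration, the sketch below shows the basic sign-and-verify flow using Ed25519 keys from the open-source cryptography package. The artifact name is a placeholder, and in practice the private key would be kept in an HSM or a managed signing service, never alongside the build output.

```python
# sign_and_verify_release.py - a minimal sketch using Ed25519 keys from the
# open-source "cryptography" package. The artifact name is a placeholder, and
# in practice the private key would live in an HSM or managed signing service.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: generate (or load) a signing key and sign the release artifact.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

with open("release.tar.gz", "rb") as f:          # placeholder artifact
    artifact = f.read()
signature = private_key.sign(artifact)           # published alongside the artifact

# Customer side: verify the downloaded artifact against the vendor's public key.
try:
    public_key.verify(signature, artifact)
    print("Signature valid: the artifact is authentic and unmodified")
except InvalidSignature:
    print("Signature check failed: do not deploy this artifact")
```

Note that the verification step only tells a customer that the vendor signed the artifact; it says nothing about what went into the build, which is why securing the upstream commit and build process matters just as much.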
Maintenance
If an organization has properly integrated security into its development process, then its software should be free of known vulnerabilities at the time of release. However, as new classes of vulnerabilities are discovered and libraries are updated, new vulnerabilities may surface within an application.
When making changes to an application's code, whether modifying internal code or updating libraries and dependencies, it is vital to test for vulnerabilities before deployment. Additionally, active applications should be regularly scanned with up-to-date vulnerability scanners so that newly disclosed vulnerabilities are identified and remediated.
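As a rough illustration of recurring scanning, the sketch below re-checks a deployed Python environment against known-vulnerability databases using the open-source pip-audit tool. The daily interval is a placeholder; a production setup would use a scheduler rather than a sleep loop and route findings into an alerting or ticketing system.

```python
# scheduled_dependency_scan.py - a minimal sketch of a recurring scan, assuming
# a Python deployment and the open-source pip-audit tool. The daily interval is
# a placeholder; a real setup would use a scheduler and an alerting system.
import subprocess
import time

SCAN_INTERVAL_SECONDS = 24 * 60 * 60   # placeholder: scan once a day

while True:
    # With no arguments, pip-audit checks the packages installed in the current
    # environment against known-vulnerability databases.
    result = subprocess.run(["pip-audit"], capture_output=True, text=True)
    if result.returncode != 0:
        print("Potential vulnerabilities reported:\n" + result.stdout + result.stderr)
    else:
        print("No known vulnerabilities found in installed dependencies")
    time.sleep(SCAN_INTERVAL_SECONDS)
```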
Secure your code with Beyond Identity Secure DevOps
The security threats that developers face can be divided into two main categories. Some vulnerabilities and security issues exist because legitimate developers accidentally introduce issues into the codebase. These issues can largely be addressed via regular vulnerability scanning of committed code.
A more complex issue is the possibility that an attacker might introduce malicious or vulnerable code into an application at some point within the SDLC. Managing this threat requires validating that every line of code in the codebase was written and committed by a verified identity. Learn more about securing your organization’s software against supply chain attacks with Beyond Identity Secure DevOps with a free demo.