If your organization writes code, application security is key to protecting those applications, the data they handle, and the infrastructure they run on. A breach can result in direct financial loss, loss of customer trust, and loss of the compliance certifications the business needs to function.
The Payment Card Industry Data Security Standard (PCI DSS), in particular, requires rigorous application security practices. Failing PCI DSS is costly, with fines running up to $100,000 per month until the merchant achieves compliance. Ultimately, credit card providers can revoke your ability to process cards at all.
Breaches can be even more expensive. For example, Target said its breach of credit card data cost the company over $200 million, including an $18.5 million legal settlement with 47 state attorneys general.
Many companies rush to fix vulnerabilities either after a breach, or just in time for a compliance review. This is expensive in many ways. Developers may have forgotten the code or left the company, increasing the time needed to fix the issues. It’s a huge task to do all at once, so you may need to hire consultants to come in and help. Add to these challenges the big one: a loss of business and competitive agility. When you have fires to put out, you can’t be working on new features or new products.
Instead, it’s far cheaper and faster to ensure code is secure before it goes live. Besides building actual security into your product, this approach lets developers fix the code they just wrote, while it’s still fresh in their minds. When it’s time for the PCI audit, you just print out the report – no fire drill needed.
We’ve been helping companies insert security practices into their Software Development Lifecycle (SDLC), with the result of zero high-severity vulnerabilities. Our staff of developers and security experts can help even those who have failed or almost failed PCI compliance due to application security vulnerabilities.
So how do you build a pre-emptive security process into your code, for every release? Here are the best practices for setting up security touchpoints in your software development lifecycle.
Moving from SDLC to SSDLC
Nearly every SDLC has the same four phases: requirements, coding, QA, and release. Here are the security touchpoints for each phase to turn your SDLC into a Secure SDLC. A good rule of thumb is to expect developers to spend 10% of total dev time on security once they’re experienced (20% until then). And before asking developers to fix code, train them in secure coding. The remediations themselves aren’t complex, but researching them is, so enable your developers with secure coding training for the best results.
Requirements Phase: Architecture Review
Every sprint, review the proposed architecture from a security perspective, identifying problems before any code is written. This is usually done by the security team in collaboration with the dev lead. This review should build on and update an annual Threat Model of the entire application. Threat modeling is a structured approach to identifying and prioritizing potential threats to a system, and to determining the value potential mitigations would have in reducing or neutralizing those threats.
This architecture review is primarily a non-automated task, but tools such as IriusRisk can help.
Coding Phase: Scan & Fix
We recommend scanning the code every night, using Jenkins or whatever build technology your organization uses. The DevOps team can help you configure the process.
As part of the job, the scan should push new findings into your bugtracker (such as Jira), because this is the workflow that devs understand. You may need help from the scan vendor to integrate the bugtracker. Work with the dev team to ensure they’re fixing issues identified by the scan.
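To illustrate the bugtracker integration, scan findings can be mapped to Jira’s issue-creation payload. This is a minimal sketch, not a full integration: the finding fields (`title`, `file`, `severity`) are assumptions about your scanner’s export format, and the project key and priority names must match your Jira configuration.

```python
# Sketch: map one scan finding to a Jira "create issue" payload.
# Finding fields and the "APPSEC" project key are illustrative assumptions;
# adapt them to your scanner's export and your Jira project.

def finding_to_jira_issue(finding, project_key="APPSEC"):
    """Build the JSON body for Jira's create-issue REST endpoint."""
    severity_to_priority = {
        "critical": "Highest",
        "high": "High",
        "medium": "Medium",
        "low": "Low",
    }
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": f"[Security] {finding['title']} in {finding['file']}",
            "description": finding.get("description", ""),
            "priority": {"name": severity_to_priority.get(finding["severity"], "Medium")},
            "labels": ["security-scan"],
        }
    }

issue = finding_to_jira_issue({
    "title": "SQL injection",
    "file": "src/db/query.py",
    "severity": "high",
    "description": "Unsanitized user input concatenated into SQL.",
})
```

In a nightly job, you would POST each payload to Jira’s create-issue endpoint, after deduplicating against findings already filed.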
You should conduct two separate scans: one for your code and one for its libraries. Code scanners are called SAST (Static Application Security Testing) tools; examples include Coverity, Fortify, Checkmarx, and Veracode. The vulnerabilities they find are fixed by changing the application code. Note that you want a dedicated security scanner, because code quality tools like SonarQube are less thorough on security and can miss serious issues.
Library scanners are called SCA (Software Composition Analysis) tools. They check your libraries against known Common Vulnerabilities and Exposures (CVEs). This often-missed step is essential: the Equifax breach (143M records stolen, $575M settlement) was caused by an unpatched Struts2 library whose vulnerability had been public for months. You address these vulnerabilities by updating the libraries; in some cases that requires refactoring, which then needs to be scheduled into a future sprint. Tools to help you scan libraries include Black Duck, Veracode, and the free, open-source OWASP Dependency-Check.
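An SCA report can gate the nightly build automatically. The sketch below walks a report shaped like OWASP Dependency-Check’s JSON output (dependencies, each with a list of vulnerabilities carrying a severity); treat the exact field names as an assumption and verify them against the report version you run.

```python
# Sketch: flag high-severity CVEs in an SCA report so CI can fail the build.
# The report shape mirrors OWASP Dependency-Check's JSON output
# (dependencies -> vulnerabilities -> severity), but verify field names
# against the version you actually run.

def high_severity_cves(report):
    """Return (library, CVE id) pairs at HIGH or CRITICAL severity."""
    hits = []
    for dep in report.get("dependencies", []):
        for vuln in dep.get("vulnerabilities", []):
            if vuln.get("severity", "").upper() in ("HIGH", "CRITICAL"):
                hits.append((dep.get("fileName", "unknown"), vuln["name"]))
    return hits

sample = {
    "dependencies": [
        {"fileName": "struts2-core-2.3.31.jar",
         "vulnerabilities": [{"name": "CVE-2017-5638", "severity": "CRITICAL"}]},
        {"fileName": "commons-lang3-3.12.0.jar", "vulnerabilities": []},
    ]
}
hits = high_severity_cves(sample)
```

In CI, exit non-zero whenever the list is non-empty, so a vulnerable library blocks the build rather than slipping into production.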
The nightly scans identify new vulnerabilities, which should be fixed in the same sprint.
When you first scan your code, you will identify dozens or hundreds of high-severity vulnerabilities. Think of “fixing the backlog of security findings” as a product feature, and follow the same process the company follows for all other features, working with the business to prioritize the effort. After these are fixed, there will typically be fewer than 10 findings each sprint.
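When working through that initial backlog, a simple, repeatable ordering helps the business prioritize. Here is one way to sketch it; the severity ranking and finding fields are illustrative assumptions, not tied to any particular scanner’s output.

```python
# Sketch: order the initial backlog of security findings for remediation.
# Severity ranks and finding fields are illustrative assumptions.

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(findings):
    """Highest severity first; within a severity, oldest finding first."""
    return sorted(findings,
                  key=lambda f: (SEVERITY_RANK[f["severity"]], f["first_seen"]))

backlog = [
    {"id": "F-101", "severity": "medium", "first_seen": "2024-01-10"},
    {"id": "F-102", "severity": "critical", "first_seen": "2024-02-01"},
    {"id": "F-103", "severity": "high", "first_seen": "2024-01-05"},
]
ordered = triage(backlog)
```

A deterministic ordering like this makes sprint planning straightforward: the top of the list is always the next item to schedule.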
QA Phase: Security Gate
Before code goes live, we recommend that a security expert sign off. They review the architecture review and the SAST and SCA scan results, and confirm the application meets your security standards, looking in particular for any high-severity findings that have not been fixed. The signer can be someone in the security organization, or a Security Champion within the dev team.
Follow whatever process your organization uses for QA signoffs, whether that’s a checklist owned by QA, a Change Review Board, or something else. Add a “Security Signoff” as one line in that process.
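The signoff rule itself is mechanical enough to express in a few lines. This is a minimal sketch of the gate; the finding fields (“severity”, “status”) are assumptions, and in practice you would pull the open findings from your bugtracker rather than pass them in by hand.

```python
# Sketch: a minimal "security signoff" check for the release checklist.
# Finding fields are illustrative assumptions; in practice, query your
# bugtracker for the release's open security findings.

def security_signoff(findings):
    """Approve release only when no high/critical findings remain open."""
    blockers = [f for f in findings
                if f["severity"] in ("high", "critical") and f["status"] != "fixed"]
    return len(blockers) == 0, blockers

ok, blockers = security_signoff([
    {"id": "F-201", "severity": "high", "status": "fixed"},
    {"id": "F-202", "severity": "low", "status": "open"},
])
```

Returning the list of blockers, not just a yes/no, gives the release meeting something concrete to act on when signoff is refused.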
Make sure you know when releases are coming up: it’s essential to always be ready for signoff, and to be proactive if you will need to refuse it. Set up an email alias (like appsec@) so there’s no single point of failure for signoffs.
Release Phase: Web Application Firewall (WAF)
All code should be covered by a WAF, such as Akamai or Imperva. Review the WAF settings annually to ensure they are configured securely.
By injecting security into the coding process as a regular feature, you reduce the risk of insecure code leading to breaches, compliance failures, and other expensive security failures. Your code is secure, from the start.
If you’d like some help through the SSDLC process, our team of security experts can supply training, process development, and code evaluation services.
Contact us today at email@example.com to ensure your code is safe, secure, and ready for audit.
About the Author
Mike Peters leads cybersecurity and is a part of our Security, Privacy & Compliance Practice at Unify. He’s been in security since 2008, is a Certified Ethical Hacker, and has expertise in secure development, patch management, cloud security, and more.