Software Security: On the Wrong Side of History

Software has become critical to our society today, so much so that its producers can no longer say to consumers, “Trust me.”


We are at a critical point in the history of software. As a society, we are becoming increasingly reliant on code, whether it’s managing our banking, controlling our vehicles and critical infrastructure, or operating our medical devices. Meanwhile, every business now relies on software as a source of strategic differentiation, competitive advantage and top-line revenue generation. Cyber attackers have taken note of this increasing attack surface, compromising systems at an alarming rate.

One of the leading causes of data breaches over the past two years has been vulnerable Web applications, according to Verizon’s 2015 Data Breach Investigations Report. And according to analytics from Veracode’s most recent State of Software Security report, 72 percent of commercial software fails the OWASP Top 10, the application-security policy the credit card industry incorporates into its PCI DSS standard.

Peter “Mudge” Zatko, my colleague from L0pht Heavy Industries, a security-researcher think tank started in the ’90s, had this to say on Twitter: “Those impeding progress by refusing to allow their software and systems to be critiqued and improved upon, are on the wrong side of history.” Software has become critical to our society today — so much so that its producers can no longer tell consumers, “Trust me.”

We now rely on software for everything — health, safety and well-being — and a policy of “just trust me to handle the security of our software” puts us all at risk. Failing to demonstrate that you actually produce secure software is no longer acceptable. There’s too much at stake, and customers are well attuned to the risks created by their software supply chain. They want assurances and independent validation that the software they procure from their providers complies with their corporate security policies.

After all, many other industries — such as transportation, food and pharmaceuticals — require independent audits and assessments related to product safety. This is a common practice of checks and balances aimed at addressing product issues that would otherwise harm consumers. Why should software be any different, particularly when poorly written software can lead to injury resulting from things like medicine delivery-pump failure, or loss of control of a vehicle traveling on a highway?

This week, Mary Ann Davidson, Oracle’s CISO, ignited a firestorm in the security industry when she published a blog post titled “No, You Really Can’t” (archived here). Oracle has since removed it because it did not “reflect our beliefs or our relationship with our customers,” according to a statement by Edward Screven, the company’s executive vice president and chief corporate architect.

Yet it certainly reflected the views of Davidson, who runs security for Oracle, the producer of Java as well as of the critical ERP and financial applications that run many of our largest enterprises.

The gist of the piece is that there is no need for external testing of Oracle’s software, and that if you attempt to validate its “Trust us” claim with a third-party assessment or as an independent security researcher, you are breaking its end-user license agreement, or EULA. That could earn you the attention of Oracle’s lawyers and a strongly worded letter demanding that you stop. I know, because I’ve received such a letter for the third-party code audits my company performs for our enterprise customers.

Nothing in the blog post surprised me, because I knew Davidson’s position very well. Our history dates back nearly to the beginning of Veracode, when, in 2007, we asked her to advise on our approach to becoming a third-party assessor. She accepted.

So, with the counsel of Mary Ann Davidson and security leaders at other well-respected companies, we designed a solution that could be used by enterprises to evaluate the security of their software-supply chains. Our approach was based on binary static analysis, a new technology that allowed the detection of vulnerabilities without requiring access to source code, thus protecting the software provider’s intellectual property. Because we delivered this capability as an automated cloud-based service, with a self-service model accessible directly by the software provider, we were able to act as a neutral and independent assessor of software supply-chain security.
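To make that concrete, here is a deliberately crude Python sketch of the idea, not Veracode’s actual technology: real binary static analysis models control and data flow through compiled code, while this toy merely checks a binary’s embedded symbol strings for notoriously unsafe C library functions. The file name and the function list are hypothetical.

    # Toy illustration of inspecting a compiled artifact with no source code.
    # Real binary static analysis models control and data flow; this sketch
    # only checks a binary's embedded symbol strings for unsafe libc calls.

    UNSAFE_LIBC_CALLS = {b"gets", b"strcpy", b"sprintf", b"strcat"}

    def flag_unsafe_symbols(binary_path: str) -> set:
        """Return the unsafe function names whose symbols appear in the binary."""
        with open(binary_path, "rb") as f:
            data = f.read()
        # ELF string-table entries are NUL-terminated, so requiring NUL bytes
        # on both sides cuts down on accidental substring matches.
        return {name for name in UNSAFE_LIBC_CALLS
                if b"\x00" + name + b"\x00" in data}

    if __name__ == "__main__":
        # "vendor_app.bin" stands in for a hypothetical uploaded binary.
        for name in sorted(flag_unsafe_symbols("vendor_app.bin")):
            print("potentially unsafe call linked:", name.decode())

The point of the sketch is only that the artifact being inspected is the compiled binary itself, which is why the provider’s source code never has to change hands.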

This is how it works: The enterprise customer specifies a minimum security policy that a software provider has to meet (such as the OWASP Top 10). To avoid running afoul of EULAs, the provider uploads a binary form of its software for automated analysis. We provide the results of the assessment directly to the provider so they can remediate their code. The enterprise never has access to the code itself, nor to the vulnerability report — it’s sent directly to the provider, who benefits from being alerted to critical vulnerabilities that it previously didn’t know about.
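Here is a minimal Python sketch of that data flow, with hypothetical names and a made-up five-point severity scale rather than our production system. The key design point it illustrates: only a pass/fail verdict against the agreed policy ever reaches the enterprise.

    from dataclasses import dataclass, field

    @dataclass
    class Finding:
        issue: str     # e.g. "SQL injection"
        severity: int  # made-up scale: 5 = very high ... 1 = informational

    @dataclass
    class Assessment:
        # The full findings list is returned only to the software provider.
        provider_report: list = field(default_factory=list)

        def enterprise_verdict(self, max_allowed_severity: int) -> bool:
            """All the enterprise sees: compliance with its stated policy."""
            return all(f.severity <= max_allowed_severity
                       for f in self.provider_report)

    # Hypothetical run: the enterprise's policy forbids anything above
    # severity 3, so the severity-5 SQL injection finding fails the scan.
    scan = Assessment(provider_report=[Finding("XSS", 3), Finding("SQLi", 5)])
    print("Compliant with enterprise policy:", scan.enterprise_verdict(3))

In practice the policy would be expressed as something like “no OWASP Top 10 flaws,” but the separation is the same: the verdict goes to the customer, the details go to the provider.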

The result is that enterprises can now hold their software suppliers accountable for vulnerable code that would otherwise introduce risk to their corporate networks and sensitive data. Software providers now have to meet their customers’ expectations about supply-chain security. This was a new concept when we introduced it. In the past, vendors could always say, “Trust us.” Now there is a viable third-party assessment process that ameliorates concerns about intellectual property protection and keeps sensitive vulnerability information private between the assessor and the assessed.

As we rolled out the solution, our enterprise customers began to contractually require software-provider assessments as a standard part of their purchasing processes. The system worked, and many software providers sought our help. Surprisingly, one of the first to come out against the process was Mary Ann Davidson, in a 2011 blog post.

The gist of her argument was, “We have security covered, so trust us.” I’ve yet to meet a security professional from any walk of life who doesn’t believe in the “trust but verify” motto. In fact, standard certifications like SOC 2 for cloud service providers are specifically designed to provide third-party audits and verification of a service provider’s security posture.

Davidson does make some good points about the confusion raw analysis results can cause, whether through false positives or through a lack of knowledge of mitigations in the code or environment that prevent exploitation. I agree! That is why Veracode created our third-party assessment program, in which we engage directly with software providers. Our remediation consultants help them determine which results are false positives and which are mitigated by design or by the environment, and help their developers learn secure coding practices and remediate vulnerabilities more quickly and efficiently. The system works, and Veracode has helped organizations remediate 4.7 million vulnerabilities in the last year alone.
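A minimal Python sketch of that triage, with invented finding names, shows the three buckets a consultant-led review sorts raw results into:

    from enum import Enum, auto

    class Triage(Enum):
        FALSE_POSITIVE = auto()  # analysis artifact, not a real flaw
        MITIGATED = auto()       # real pattern, but design or environment blocks exploitation
        ACTIONABLE = auto()      # must be fixed in the code

    # Hypothetical outcomes of a consultant-led review of raw results.
    raw_findings = {
        "CWE-89 in build_query()": Triage.ACTIONABLE,
        "CWE-79 in render_hint()": Triage.MITIGATED,        # output encoded upstream
        "CWE-798 in test fixtures": Triage.FALSE_POSITIVE,  # test credential, never shipped
    }

    to_fix = [f for f, t in raw_findings.items() if t is Triage.ACTIONABLE]
    print("Findings requiring remediation:", to_fix)

Only the last bucket should consume a development team’s time, which is exactly the distinction that raw, un-triaged results obscure.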

It is now clear that digitization and interconnectedness are the biggest factors that will drive societal progress and economic growth for the foreseeable future. For this to happen, however, we must be able to trust the security of the software that runs the world and our lives. And that requires everyone involved — including commercial software providers, enterprises, independent third-party assessors and security researchers — to be transparent and collaborative.

We all want the same thing: a safer world that comes from secure software.


Chris Wysopal is chief technology officer and chief information security officer of Veracode, which he co-founded in 2006. He oversees technology strategy and information security. Prior to Veracode, he was vice president of research and development at security consultancy @stake, which was acquired by Symantec. In the 1990s, Wysopal was one of the original vulnerability researchers at The L0pht, a hacker think tank, where he was one of the first to publicize the risks of insecure software. He has testified before the U.S. Congress on the subjects of government security and how vulnerabilities are discovered in software. He is the author of “The Art of Software Security Testing.” Reach him @WeldPond.

This article originally appeared on Recode.net.
