
Can CNAP Succeed Without Building on Past Lessons in Safety?

The sheer number of breaches we see every day suggests the private sector will not correct itself when it comes to security.


Last week, President Obama published an op-ed arguing that cyber security is among the most urgent threats we face today, writing, “Cyberthreats are among the most urgent dangers to America’s economic and national security.” He then went on to announce the newly created Cybersecurity National Action Plan.

Among the many ways CNAP aims to improve national cyber security is by increasing the number of security professionals through education subsidies, updating federal systems, raising cyber security awareness and doubling the number of advisers available to critical infrastructure companies for security resiliency testing.

The list of areas the plan will focus on is long, and the budget large, yet it falls short. What I didn’t see in the government fact sheet was any mention of setting security standards for the software underpinning our digital world, or of the kind of transparent information sharing that has driven great progress in other areas of technology.

For decades, cities were built and developed with functionality and convenience in mind. It wasn’t until the Great Chicago Fire destroyed an entire city and cost the lives of hundreds of people that cities began creating fire codes. They realized there were diminishing returns on building more fire stations; the buildings themselves needed to become more fireproof. Like a rapidly growing city, we’ve built our applications quickly and without regard for the fact that they exist in a hostile environment. Every application that holds valuable data will be attacked, just as every car will drive on a slippery road and every person will be exposed to pathogens. We have to stop pretending we can keep the bad guys from attacking the code that protects our data.

Applications run our critical infrastructure and our businesses, which makes them a primary target for those looking to infiltrate companies, critical infrastructure, personal devices and federal systems. We have guidelines such as the OWASP Top 10, which companies that recognize the need for security can look to, but no company is required to adhere to minimum standards of security for the applications it produces.

This is unlike any other industry in our country. The FDA regulates minimum standards for the food we eat and the medications brought to market. OSHA, under the Department of Labor, creates standards to ensure safe and healthy work environments. We should not wait for our equivalent of the Chicago Fire to create standards that will secure applications and, as every part of our society inevitably digitizes, potentially protect thousands of lives.

I am not necessarily advocating for full-on government regulations, but if the committee will be discussing recommendations for making the public and private sectors more secure, why should it stop short of creating security standards for applications? Vulnerable application code is the root cause of a majority of successful attacks.

Another approach that would improve security overall would be to create a regulatory board that oversees breach data and makes this information public. With the information made public, companies can learn from others’ breaches and improve their own security.

Google’s transparency after the Operation Aurora attacks of 2009 exposed the tactics and goals of the attackers, providing information that improved corporate security at many organizations. Under forced disclosure, companies above a certain size would be required to report breaches, and how each breach occurred, to this regulatory commission within a reasonable time frame.

You can’t keep the cause of an airplane or train crash secret. Understanding the root causes of technology failures, whether errors in design, manufacturing or procedures, is how we have been able to make our society continually safer. It’s my belief that this understanding can make us more secure. In addition, just as transportation manufacturers and operators are required to follow new safety guidance learned from investigating failures, companies would be required to make improvements that counter newly understood weaknesses or attacks in order to secure their IT environments.

This way we are constantly improving security — a necessity, given that cyber criminals are constantly evolving their attack methods and new technologies are being deployed rapidly.

Again, this approach is not novel; we already have a National Transportation Safety Board that collects data and provides information on transportation disasters; OSHA, which creates guidelines and regulations for workplace safety; and the Consumer Product Safety Commission, which ensures the products we are purchasing adhere to minimum safety guidelines.

Each of these agencies was created at a time in our country’s history when it became clear that the private sector wasn’t going to correct itself to create safer transportation, work environments or products. The sheer number of breaches we see every day suggests the private sector will not correct itself when it comes to security.

In 2014, Admiral Michael Rogers reported that China and perhaps several other countries had the capability to cripple our electrical grid. Not much has changed since then — legislation to help address breaches and vulnerabilities has been proposed, yet most of these laws have languished in Congress for various reasons.

The CNAP initiative is the most comprehensive program to date, yet it still falls short. As the Commission on Enhancing National Cybersecurity is established, it will be essential for its members to set actionable goals that take this history into account and avoid yet another ineffective plan.


Chris Wysopal is chief technology officer and chief information security officer of Veracode, which he co-founded in 2006. He oversees technology strategy and information security. Prior to Veracode, he was vice president of research and development at security consultancy @stake, which was acquired by Symantec. In the 1990s, Wysopal was one of the original vulnerability researchers at The L0pht, a hacker think tank, where he was one of the first to publicize the risks of insecure software. He has testified before the U.S. Congress on the subjects of government security and how vulnerabilities are discovered in software. He is the author of “The Art of Software Security Testing.” Reach him @WeldPond.

This article originally appeared on Recode.net.