A $6 billion security system intended to keep hackers out of computers belonging to federal agencies isn’t living up to expectations, an audit by the Government Accountability Office has found.
A public version of the audit was released last week; a classified version containing more sensitive findings was circulated to government agencies in November. The audit concerns the Einstein system, formally called the National Cybersecurity Protection System and operated by the U.S. Department of Homeland Security.
The GAO found that the system has limited capability to detect attempts to attack a network. What it can do is scan for attacks that match a list of known methods, or signatures. Only a few of the signatures the system uses were developed specifically for the government; the rest are available in commercial-grade products that any consumer could access. A security system that relies on signatures is only as good as the list of signatures it uses.
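To see why that limitation matters, here is a toy sketch of how signature-based detection works. The patterns and payloads below are invented for illustration; real systems such as Snort use far richer rule languages than simple substring matching, and nothing here reflects Einstein's actual rules.

```python
# Toy signature-based detector: flag traffic only if it contains a
# byte pattern already on the known-bad list. (Patterns are made up.)
KNOWN_SIGNATURES = [
    b"GET /admin.php?cmd=",   # hypothetical exploit request pattern
    b"\x90\x90\x90\x90\x90",  # hypothetical shellcode NOP-sled fragment
]

def matches_signature(payload: bytes) -> bool:
    """Return True only when the payload contains a known signature."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)

# A known attack pattern is caught...
print(matches_signature(b"GET /admin.php?cmd=whoami"))  # True
# ...but a brand-new attack with no signature on file sails through.
print(matches_signature(b"GET /new-exploit?x=1"))       # False
```

The weakness the GAO describes is visible in the last line: any attack whose pattern isn't already on the list is invisible to the detector.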
In addition, the system relies only on signatures and doesn’t use more complex methods for detecting attacks. For instance, it doesn’t analyze anomalies or odd patterns in network traffic that might indicate an attack. Analyzing anomalies can sometimes be useful in detecting attacks using “zero-day” vulnerabilities, so called because they rely on weaknesses in systems that are completely unknown, giving defenders “zero days” to figure out how to head them off.
“By employing only signature-based intrusion detection, NCPS is unable to detect intrusions for which it does not have a valid or active signature deployed. This limits the overall effectiveness of the program,” the report reads.
Even a less effective system is better than no protection at all. Yet the system was properly deployed at only five of the 23 non-military government agencies it was intended to cover. And only one agency had deployed it to scan email, a common attack vector.
The stinging report provides a reminder of just how bad government agencies have been in protecting their computers and the sensitive data on them. Last year, the federal Office of Personnel Management, the government’s human resources branch, disclosed a data breach that revealed information on some 22 million people who had worked for the government. The information stolen dated back decades, and included fingerprint data on nearly six million people. Private sector researchers later traced the hack to a group based in China.
It’s also the latest proof that government agencies suck at securing their systems. The main reason is that, when putting security in place, agencies check off a list of vague requirements created by lawmakers and regulatory agencies, but tend not to account for the risk that those requirements aren’t sufficient.
None of this is exactly news in government circles. A study by the security firm Veracode last year found that after discovering security flaws in the software they use, government agencies fixed them by applying patches only 27 percent of the time versus 81 percent for private companies. Why? Because no specific laws or regulations require it.
This article originally appeared on Recode.net.