New York’s cyber security regulations aren’t perfect, but other states should pay attention to them

The new rules, which go into effect March 1, call for banks and insurers to scrutinize security at third-party vendors that provide them goods and services.

New York Governor Andrew Cuomo has said, “These strong, first-in-the-nation protections will help ensure this industry has the necessary safeguards in place” to protect businesses and clients “from the serious economic harm caused by these devastating cybercrimes.”

I have no doubt there was plenty of back-patting and handshaking when New York Governor Andrew Cuomo released his proposed cyber security regulations for banks and other financial institutions.

These regulations are groundbreaking; they're the first of their kind in the nation. Taking effect March 1, they arrive at a time when organizations are finally waking up to the realities of cyber vulnerability. Breaches, both high-profile and under the radar, are an almost daily occurrence, and public and private organizations alike have started taking concrete steps to safeguard their systems.

New York is the first state to take this bold step, but it's only a matter of time before other states follow suit. And while the financial sector is the first to be regulated in this way, it's important not to forget that cyberattacks are a huge threat to all industries, one that needs to be addressed quickly.

So, now that they’re here, the question is: Should New York’s regulations serve as a model going forward?

Yes and no. They’re a huge step forward in this space, but I see a few critical flaws that can be improved as other states consider their own regulations.

Unfortunately, the rules are already being outpaced by the reality of business in the internet age. The regulations outline solid traditional security practices, such as limiting the distribution of personally identifiable information or requiring multifactor authentication, in addition to stipulating that organizations must test their cyber security systems in order to comply.

While good in theory, the problem lies in the cadence of cyber risk certification: the regulations require that systems be certified only once per year, which is far from what's needed. It's akin to checking the weather forecast once a year and hoping it holds for the other 364 days.

Calling for "yearly" or "quarterly" tests fails to account for the speed at which digital systems, and their associated risks, change. Traditional business time windows simply cannot keep pace with the rapid and accelerating rate of modern technology.

The premise of this routine compliance certification is well-intentioned, but by its very nature, it implies that systems should and will remain static for the given certification period. If your entire business were static for 12, six or even three months at a time, it would quickly cease to be a business. Top-performing companies excel in constant change and constant improvement, not in maintaining static systems.

I've seen this firsthand while pushing for DevOps practices within Australian banking institutions to improve the speed and quality of software development. Working with the clients and customers my company and I have gotten to know has made one thing clear: business in this age is about understanding, measuring and managing change. The frequency and magnitude of changes within IT organizations are only increasing, and each change carries its own risk for every application, server and the business as a whole.

Simply put, the implicit safety these certifications provide will not be able to keep up with an organization that is out to innovate.

In addition, the proposed regulations only call for the regular testing of “Information Systems” that inform the design of the cyber security program — but what about regular testing of all systems?

Misconfigurations are so dangerous because everyone assumes they don’t exist until they have been exploited. Consider the following: “Through 2020, 99 percent of all firewall breaches will be caused by misconfigurations, not firewall flaws” (Gartner, One Brand of Firewall Is a Best Practice for Most Enterprises, Feb. 18, 2016).

But how can you protect against this? Only regular testing of those systems against an accurate policy creates trust in the systems themselves. Without that kind of visibility and proactive notification, misconfigurations linger until an attacker discovers them.
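The idea of continuously testing systems against a declared policy can be made concrete with a small sketch. Everything here is hypothetical, for illustration only: the policy keys, the thresholds, and the `check_compliance` function are invented, not drawn from the New York regulations or any particular tool.

```python
# Minimal sketch: validate a system's live settings against a declared
# policy and report any drift. Policy keys and thresholds are hypothetical.

POLICY = {
    # Each entry maps a setting name to a predicate that returns True
    # when the observed value satisfies the policy.
    "mfa_enabled": lambda v: v is True,
    "password_min_length": lambda v: isinstance(v, int) and v >= 12,
    "tls_min_version": lambda v: v in ("1.2", "1.3"),
}

def check_compliance(live_config):
    """Return the list of setting names that violate the policy.

    An empty list means the configuration is compliant. Missing
    settings count as violations, since absence is itself a drift.
    """
    return [
        key for key, is_ok in POLICY.items()
        if not is_ok(live_config.get(key))
    ]

# A server whose MFA setting has silently drifted out of compliance:
drifted = {"mfa_enabled": False, "password_min_length": 14, "tls_min_version": "1.3"}
print(check_compliance(drifted))  # flags "mfa_enabled"
```

The point of the sketch is the cadence, not the code: a check like this costs almost nothing to run on every deployment or configuration change, which is exactly the continuous visibility a once-a-year certification cannot provide.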

What do the regulations have right? Regular testing to verify resiliency and measure risk. What do they have wrong? The scope and cadence. Technological change is accelerating, and staying competitive in the digital marketplace requires a similar acceleration of business processes. Any attempt to shoehorn in legacy practices with quarterly or yearly turnaround times is bound to bottleneck, stagnate and make business processes less resilient.

New York has stepped up to lead the charge. As other states and industries adopt similar regulations, they can glean insight from them, but there’s still room for improvement.

Mike Baukes is the co-CEO of UpGuard, the world’s first cyber resilience platform, which offers operational awareness for complex IT environments. An entrepreneur at heart, he is as comfortable finessing technical details as he is with developing a vision for strategic outcomes. Reach him @mikebaukes.
