A version of this essay was originally published at Tech.pinions, a website dedicated to informed opinions, insight and perspective on the tech industry.
We’re on the edge of a precipice in the enterprise-tech industry, and most people don’t even realize it.
The problem? The technology developments that have driven some of the greatest advancements in business — things like software as a service (SaaS), virtualization and analytics — have come with costs. Not just monetary costs, but also what I call complexity costs.
Many of the tools used to create these capabilities in both local and offsite data centers (or private and public clouds, to use the popular vernacular) are now so specialized and so complex that it’s getting harder and harder to find people with the skill sets necessary to run and/or manage them.
It’s not just the individual tools. It’s the fact that most IT operations now consist of numerous complex tools that are tied together in even more complex webs of connection.
Examples abound. Want to create an app for employees’ smartphones so they can check the status of a client’s order while visiting that client? Well, it’s likely that the initial order is kept in a sales-management tool based in the cloud, and that needs to be linked to an inventory tool managed internally, which, in turn, has multiple connections both to a supplier’s database at an external location, as well as an internal shipping tool. Plus, once the results have been found, they have to be translated and delivered in a mobile-friendly format.
If you don’t want the performance of that app to suffer, you’ll need to deliver the results from a site outside the corporate firewall, like a co-located data center or cloud exchange with speedy connections to a service provider. Oh, and if you’re delivering it via a virtualized app to maintain security, you’ve got to deal with desktop and/or app virtualization and connection broker software as well. Finally, if you also want to provide insight into how the customer’s orders have arrived over a period of time versus an agreed-upon standard, you’ll need to pull in data from a separate analytics engine so that it can fill out the chart in the mobile app’s dashboard UI.
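The integration chain described above can be sketched roughly as follows. This is a minimal illustration, not anyone's actual implementation: every system name, field and function here is hypothetical, standing in for whatever sales, inventory, supplier and shipping systems a real deployment would wire together.

```python
# A minimal sketch of the integration chain described above.
# All names are hypothetical; each stub stands in for a call to a real
# system (cloud sales tool, internal inventory tool, shipping tool).

def fetch_sales_order(order_id):
    # Stand-in for the cloud-based sales-management tool.
    return {"order_id": order_id, "sku": "WIDGET-42", "qty": 100}

def fetch_inventory_status(sku):
    # Stand-in for the internally managed inventory tool, which in a
    # real deployment would itself query the supplier's database.
    return {"sku": sku, "in_stock": 60}

def fetch_shipping_status(order_id):
    # Stand-in for the internal shipping tool.
    return {"order_id": order_id, "shipped_qty": 60, "eta_days": 3}

def order_status_for_mobile(order_id):
    """Aggregate the separate systems into one mobile-friendly payload."""
    order = fetch_sales_order(order_id)
    inventory = fetch_inventory_status(order["sku"])
    shipping = fetch_shipping_status(order_id)
    return {
        "order": order["order_id"],
        "quantity": order["qty"],
        "shipped": shipping["shipped_qty"],
        "backordered": max(0, order["qty"] - inventory["in_stock"]),
        "eta_days": shipping["eta_days"],
    }

print(order_status_for_mobile("SO-1001"))
```

The sketch looks deceptively simple because each stub hides an entire system behind one function call; in practice, every one of those calls crosses a network, a security boundary and often an organizational boundary, which is exactly where the complexity costs pile up.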
Throw in the very real possibility of a merger or an acquisition or two, with the attendant need to tie one company’s systems into another company’s often completely different set, and you have all the ingredients of an IT disaster.
In theory, there are tools that are supposed to help solve these problems. In practice, unless you’re willing to throw out every relevant system you already own and start from scratch, you will still have to deal with a complex web of connections. As a result, many companies end up outsourcing these kinds of projects, or at least portions of them, to dedicated consulting firms or to the services arms of tech hardware and software vendors.
Not surprisingly, many new IT projects in this kind of environment move at a very slow pace because of the enormous range of potential issues that have to be accounted for and tested. Faced with that slow pace, business managers in organizations of all sizes have begun taking matters into their own hands, funding and bootstrapping solutions of their own. This creates the dreaded shadow IT.
Shadow IT essentially consists of skunkworks projects that provide some of the services or capabilities IT traditionally offers, but that are done without the permission, or in some cases even the knowledge, of the IT department. For example, a shadow IT project might leverage a cloud-based service to put together a simplified version of a mobile application like the one described earlier, delivering only, say, 80 percent of the functionality, but in significantly less time.
How is this happening? Well, ironically, in a world where the previously described complexity has become the norm, a number of large established players as well as nimble startups have created intriguing solutions dedicated to solving some real-world problems. Companies like HP, Dell, Lenovo, VMware and Citrix, as well as Pivot3, Nutanix, NetApp and more, are creating data center appliances and cloud-based services of various types that can be set up by relatively sophisticated end users, without the help of IT. The end result is that non-IT portions of the business are starting to enable their own IT solutions.
Not surprisingly, many in the IT world are horrified at the mere thought of this. Think of the potential security, privacy, regulatory and other issues that could conceivably arise in these kinds of scenarios. Yet, at the same time, as non-IT business leaders have grown more comfortable with some of the basic cloud computing principles behind many of these new products, and as vendors have worked hard to make their new tools accessible, there’s an obvious crossing point between these two trends. Many business leaders are eager to exploit this convergence, particularly in light of the almost ridiculous levels of complexity that now surround them within their own IT organizations.
Many established IT vendors, as well as IT departments themselves, painted their organizations into this complexity corner over the last 10 to 15 years, and the big question now is how they get out of it. The truth is there is no easy answer; as long as companies continue to depend on older, legacy systems, these kinds of complexity challenges will persist.
But forward-looking CIOs who are willing to take some risks and re-architect some of their systems can benefit from these new simplified data center appliances in a number of ways. Only then can they step back from the precipice looming before them.
Bob O’Donnell is the founder and chief analyst of Technalysis Research LLC, a technology consulting and market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. Reach him @bobodtech.
This article originally appeared on Recode.net.