Throughout the past 15 years that I’ve been working as an independent project recovery consultant and interim CIO, I have observed executives’ frustration – even exasperation – with information technology in general and with their IT departments in particular. Some of the more common refrains are:
“I don’t understand IT well enough to manage it.”
“Although they work hard, my IT people don’t seem to understand the very real business problems we’re facing.”
In fact, the complaint I hear most often from CEOs, COOs, CFOs, and other high-ranking officers is that they haven’t reaped the business value of their high-priced technology. Meanwhile, the list of seemingly necessary IT capabilities continues to grow, and IT spending consumes an increasing percentage of their budgets.
So why is this happening, and what can you do to prevent it?
Though it may come as a surprise, one of the most effective measures you can take is to ensure a senior business executive plays a leadership role in a handful of key IT decisions. I say this because when business executives hand over their responsibility for these decisions to IT executives, disaster often ensues. You need look no further than my project failure case studies to see the sheer number of botched adoptions of large-scale customer relationship management (CRM) and enterprise resource planning (ERP) systems.
It would be easy to assume that the CRM and ERP disasters resulted from technological glitches. However, the problems generally occurred because senior executives failed to realize that adopting the systems would create business challenges – not just technological ones.
To be clear, IT executives are the go-to people for numerous managerial decisions, including choosing technology standards, advising on the design of the IT operations center, providing the technical expertise the organization needs, and developing the standard methodology for implementing new systems. But an IT department should not be left to its own devices to determine the impact these choices and processes will have on a company’s business strategy.
In an effort to help executives avoid IT disasters and, more importantly, generate real value from their IT investments, I have put together a list of the measures they should take and the decisions they should oversee. The first three items bear on strategy; the last three relate to execution. At the risk of a spoiler: IT people should not be making any of these decisions because, in the end, that’s not their job, nor their area of expertise.
1) How much should we spend on IT?
Given the uncertain returns on IT spending, many executives wonder whether they will reap the benefits of their investment. So the thinking goes: If we can just get the dollar amount right, the other IT issues will take care of themselves. For this, they look to industry benchmarks to determine “appropriate” spending levels.
In my experience, they should be approaching the question very differently. Executives should first determine the strategic role that IT will play in their organization, and only then establish an organization-wide funding level that will enable technology to fulfill those objectives. After all, IT goals vary considerably across organizations – from streamlining administrative processes to feeding a global supply chain, providing flawless customer service, or supporting cutting-edge research and development.
Clearly, fulfilling these objectives requires different levels of spending, planning, and administrative oversight.
2) Which projects should be funded?
I’ve seen relatively small companies with 100 IT projects underway. Although those projects are never equally important, executives are often reluctant to distinguish between the ones that will likely have a significant impact on the company’s success and those that will provide some benefits but aren’t essential.
Leaving such decisions in the hands of the IT department means that IT staff will be the ones prioritizing business issues – or, just as troubling, attempting to deliver on every project a business manager regards as important. When presented with a list of approved and funded projects, most IT units will do their best to carry each of them out. But this typically leads to a backlog of delayed initiatives, not to mention overwhelmed and demoralized staff.
3) Which IT capabilities need to extend company-wide?
Business leaders are increasingly recognizing the significant cost savings and strategic benefits that come with centralizing IT capabilities and standardizing infrastructure throughout an organization. Leveraging technological expertise across a company enables cost-effective contracts with software suppliers, just as it facilitates global business processes. On the other hand, standards can restrict the flexibility of individual business units, limit the company’s responsiveness to diverse customer segments, and give rise to strong resistance from managers.
When IT executives are left to make decisions about what will and will not be centralized and standardized, they typically take one of two approaches. Depending on the company’s culture, they either insist on standardizing everything to reduce costs, or they grant exceptions to any business unit manager who raises a stink (recognizing the importance of business unit autonomy). Whereas the former approach restricts flexibility, the latter is expensive, limits business synergies, and drains human resources.
It’s worth keeping in mind that, in some instances, using different standards can be counter-productive – resulting in a corporate IT infrastructure whose total value is less than the sum of its parts. Knowing this, executives must play a lead role in weighing these crucial trade-offs.
4) What IT services do we really need?
I’ll get right to the point: An IT system that doesn’t work is useless. Reliability, responsiveness, and data accessibility come at a cost, but that doesn’t mean every system must be gold-plated. Ultimately, executives must decide how much they are willing to spend on various features and services.
For some companies, top-of-the-line service is non-negotiable. As a case in point, investment banks cannot afford to engage in a debate over how much data they would be willing to lose if a trading system crashes. They require 100% recovery. Similarly, cloud providers cannot compromise on response time or allow for any downtime, because their contracts penalize them when their systems become unavailable. This not only incentivizes providers to ensure that their services keep running despite floods, tornadoes, power outages, and telecommunications breakdowns; it also gives clients peace of mind and justifies higher costs.
Granted, not every company is a Google or a Goldman Sachs. Most can tolerate limited downtime or occasionally slow response times, and they must weigh the problems this creates against the costs of preventing them. Once again, decisions concerning the appropriate levels of IT service need to be made by senior business managers. Left to their own devices, IT units are likely to opt for the highest levels – Ferrari service when a Ford’s will do – because the IT unit will be judged on such things as how often the system goes down.
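To make that Ferrari-versus-Ford trade-off concrete, it helps to translate an availability target into the downtime it actually permits. The short Python sketch below is a back-of-the-envelope illustration; the uptime tiers are assumptions chosen for the example, not terms from any real contract.

# Back-of-the-envelope sketch: how much downtime per year does a given
# availability target actually allow? The tiers below are illustrative
# assumptions, not terms from a real SLA.

MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (99.0, 99.9, 99.99, 99.999):
    downtime = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{availability:>7}% uptime allows ~{downtime:,.0f} minutes "
          f"(~{downtime / 60:,.1f} hours) of downtime per year")

Each additional “nine” cuts the permitted downtime by a factor of ten, while the cost of achieving it typically rises disproportionately. A senior manager can then ask what, say, eight hours of downtime per year would actually cost the business and compare that figure with the price of the next service tier.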
5) What security and privacy risks are we willing to accept?
As with reliability and responsiveness, companies must weigh the level of security they want against the amount they are willing to spend. In doing so, there’s another trade-off to consider: increasing security not only raises costs but also inconveniences users. And as governments increasingly mandate privacy protections, security takes on a new level of importance, because even well-designed privacy protections can be compromised by inadequate system security. Executives must assess these trade-offs.
Bear in mind that many IT units will adopt the philosophy that absolute security is their responsibility, and thus deny access anytime safety cannot be guaranteed. They would do well to float that idea by a bank’s marketing executives, who are counting on simplified online transactions to attract new customers.
6) Can we assign blame when an IT project fails?
Finger-pointing often ensues when teams fail to benefit from new systems. “There must be something wrong with the IT function in our company,” they presume. More often than not, however, the real problem lies in the way that non-IT executives are managing IT-enabled change in the organization.
I invite you to reflect on the well-publicized examples of ERP and CRM initiatives that failed to generate quantifiable value. Invariably, the failures resulted from the assumption that IT units or consultants could implement the systems while business managers went about their daily tasks. The bottom line is that new systems have no inherent value; their value derives from new or redesigned business processes.
Quite simply: To avoid disasters, executives must take responsibility for realizing the business benefits of an IT initiative. These “sponsors” need the authority to assign resources to projects and the time to oversee their creation and implementation.
This includes scheduling regular meetings with IT personnel, organizing training sessions, and working with the IT department to establish clear metrics for determining the initiative’s success. Such sponsors can ensure that new IT systems deliver real business value.
In a nutshell: one of the most effective measures you can put in place is giving senior business executives a leadership role in a handful of key IT decisions.
Posted on Sunday, October 18, 2020 by Henrico Dolfing