Application rationalization is a crucial aspect of the business and IT lifecycle, and a topic we have covered previously.
Ferroque Systems has teamed up with Stephen O’Grady, a trusted colleague to our team, an application rationalization expert, and an IT management veteran from the UK, for a series of guest posts. These articles deal with application rationalization and its importance to the modern organization. This article is the first in a five-part series covering application rationalization in greater depth.
Application rationalization and application portfolio management are imperatives, and have been keystones of business and technology operations within organizations for decades. The spirit of the concept has changed little over this time, while the mechanics have. The effectiveness of managing and maintaining an organization’s applications is also constantly subject to the whims of ever-changing corporate policy, shifting priorities, and conflicting ideologies, and the negative results can often undermine IT and business strategy.
Over the next few weeks, I shall be looking at where we are in the world of application rationalization, how we got here, what the issues are, and what we can do about it.
I shall look at how fragmentation was allowed to happen in the first place, and what the consequences of that are. I shall also consider how reorganizations, in particular, due to mergers and acquisitions, have played and continue to play a part in this, and how one might avoid the pitfalls.
Finally, I shall look at some of the approaches to avoiding the trap in the heady future of an ideal IT landscape.
All of this is a very personal view, based on nothing more than three decades of trying to get it right. I should also stress that no former employer or client should feel singled out for criticism. The examples I give are real and come from personal experience, whether I was directly or tangentially involved, or merely took an interest in what the problems were. All organizations suffer from these problems to a greater or lesser extent, so I am not holding up any one organization, or even several, as a bad example to the rest of the industry. Ultimately, we are all to blame, and we all need to work on getting to a better place.
A little history
A problem that has dogged the world of IT almost since its inception is an unwillingness to maintain applications in a “current” state. By current, I mean kept up to date in terms of functionality and kept on a current version. Ideally, applications also need to be kept rationalized across the business so that for any given function there is one solution.
As we will see below, with the shift away from mainframes toward client-server architecture and thereafter application virtualization, keeping applications in a current state became more challenging, time-consuming, and costly. Some may argue that the evolution to software-as-a-service applications has solved some of these challenges by shifting ownership to an external party; however, this model is not possible for all applications and has its own share of caveats that may prevent its adoption.
So, how did we get here?
Historically, this was far less of a problem. Back in the steam-powered days of mainframes, IT was much more closely governed, partly due to the sheer cost. In general, line-of-business applications were almost always hosted as a single version in a single place. All users across the enterprise connected remotely to that single instance and used exactly the same application as everyone else in the business to perform the same function.
New names for old ideas?
At this point, I’ll float the idea that this ancient world of mainframes has obvious, perhaps even remarkable, similarities to cloud solutions, Software as a Service, Platform as a Service, and/or other labels you may wish to apply to modern single hosted instances. I’ll leave that thought there for now, but we’ll come back to it in a while.
Progression to fragmentation
As time progressed, applications moved away from mainframes to what were then called “departmental systems”, which were often UNIX-based, and then to PC file servers, before Windows Server emerged as a de facto standard.
These systems often ran locally on servers tucked under desks and in cupboards and were part of a pattern of fragmentation of IT services. Departments within businesses began to make unilateral decisions as to what applications they wished to use in “their” business, often without reference to corporate IT, or sometimes to “onsite” IT professionals at all. In many instances, these business groups would adopt their own “shadow IT” teams.
In some more regulated sectors, where greater governance remained in force for longer, this was less often the case. Here, corporate standards were commonly followed even where systems were implemented locally. Over time, however, this fragmentation became the norm in almost all businesses.
With this increasing fragmentation there began to be an accompanying divergence, wherein different departments requiring similar functionality would select and implement different solutions within the same business. They would also do so with apparently little thought for mutual compatibility or the effect on the corporate bottom line.
Causes of fragmentation
Fragmentation has occurred for a variety of reasons, such as cost or personal preference.
In some cases, it was down to cost, a view which often ignored the fact that a single corporate solution could cost less on a per-seat basis (i.e. have an overall lower TCO) than two departmental systems.
That could be especially true when support costs were ignored: it was often more expensive in staff costs to support two similar systems than one, but this was never quantified separately because it was just “so-and-so’s job”. Such staff costs could easily go unnoticed in the general salaries bucket.
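To make the per-seat arithmetic concrete, here is a minimal sketch. All figures are entirely hypothetical assumptions invented for illustration, but they show the pattern described above: departmental systems can carry cheaper licences yet still cost more per seat once each one needs its own support effort.

```python
# Illustrative sketch only: every figure below is a hypothetical assumption,
# chosen to show how duplicated support effort can outweigh cheaper licences.

def per_seat_tco(license_per_seat, seats, annual_support_cost):
    """Per-seat total cost of ownership: licences plus support, divided by seats."""
    return (license_per_seat * seats + annual_support_cost) / seats

# One corporate solution serving both departments (200 seats, one support effort).
corporate = per_seat_tco(license_per_seat=100, seats=200, annual_support_cost=30_000)

# Two departmental systems with cheaper licences, each needing its own support.
dept_a = per_seat_tco(license_per_seat=80, seats=120, annual_support_cost=25_000)
dept_b = per_seat_tco(license_per_seat=80, seats=80, annual_support_cost=25_000)
blended = (dept_a * 120 + dept_b * 80) / 200  # weighted average across 200 seats

print(f"Corporate per-seat TCO:    {corporate:.2f}")  # 250.00
print(f"Departmental per-seat TCO: {blended:.2f}")    # 330.00
```

With these assumed numbers, the departments each pay 20% less per licence, yet the organization as a whole pays roughly a third more per seat, because the duplicated support cost is hidden in salaries rather than itemized.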
In other cases, it could be simply a personal preference for one application over another. This could be something as simple as the look and feel of the screen or the colour palette used.
More seriously, and surprisingly perhaps more commonly, it could be the case that someone wanted to introduce the application they had used at their previous place of work with no thought as to how well it fitted into their new environment, or simply that the supplier was a friendly, familiar face. We’ll come back to this comfort factor as well, in a bit.
There were, of course, instances where the reason for the selected solution was the ultimate supplier.
Many of us recall certain advertising of the 1980s and 1990s where a few companies would push themselves as the only answer to an executive’s IT problems. They would win business on that basis with no thought by either party as to whether they could actually meet the users’ needs.
This was not unique to departmental systems, but was a method used to gain a foothold in a business, and was often achieved by selling into one department to use as a springboard for a sales assault on the wider organization.
Of course, this seldom achieved total dominance and just reinforced the problem of fragmentation by having various competitors supplying similar functionality to separate parts of the same organization.
The situation today
So, here we are now in a fragmented world, both in business and IT terms, which is something we are all familiar with, but the question is how do we deal with it?
The key issue is the unwillingness of businesses to maintain currency in their systems, which is exacerbated to some extent by IT being unwilling or unable to enforce currency. This can be due to a lack of backing from executive leadership, although it can often also be because no-one has asked for the necessary executive support.
Why aren’t systems kept up-to-date?
To my mind, this lack of willingness to maintain currency can stem from several reasons, not always all of them, but surprisingly often most of them. These “reasons” include:
- Kicking the can down the road – leaving rationalization to be somebody else’s problem as it feels too difficult to achieve.
- “Change is bad” – resistance to change is innate in people and organizations.
- “Too expensive” – only looking at the cost of making the change, whilst ignoring escalating support costs, and declining expertise and knowledge.
- Departments’ own choice – leaving decisions on IT at the local/individual level, even where the same need exists across the entire business in multiple departments.
- The increasing fragility of systems makes rationalization increasingly unattractive (risky) but increasingly vital – this is a real conundrum but can be managed.
- Departments being unwilling to invest as it impacts “their” bottom line, without consideration for the broader organizational impact or the overall organizational bottom line.
How can we change the mindset?
As in most things, there are numerous, often opposing viewpoints, and businesses (like politicians and families) tend to lurch from one extreme to the other.
The problem is that neither extreme works well. Imposed decisions from the ivory tower are rarely well received and usually betray a lack of understanding of what really happens at ground level. Decisions made locally have the opposite problem: they work well in the local bubble but largely ignore the big picture, and completely ignore the duplication of functionality and licensing.
What is needed is something in between, where the local voice is not just heard but heeded, and governance is retained at the centre to keep costs under control and avoid unnecessary duplication.
The key, as ever, is communication. Departments need to be listened to, and to be able to express their requirements, and corporate leaders need to deliver on these requirements whilst ensuring that they are met efficiently and at a reasonable cost.
Governance needs to be in place to ensure that applications are kept no more than one version behind current, with a target of always being on the current version.
The business needs to adopt this approach to currency, and corporate IT needs to have the right to assign escalating support charges to parts of the business that fail to conform.
This will both offset the increased support costs within IT and discourage parts of the business from letting support lapse and running obsolescent or obsolete applications in an effort to save money.
If the choice for a part of the business is either to pay a vendor for an up-to-date application or pay the same or more to corporate IT to support a clunky legacy system, it is fairly obvious which path is the most attractive.
So, what can we do about it?
In upcoming blog posts, we shall examine in more detail where this leaves us, and how we might navigate our way to where we need to be.
We shall look at approaches to rationalization that can lead enterprises away from this fragmented world and onto the sunlit uplands of a well-managed, rationalized application portfolio.
Ferroque Systems is well positioned to assist enterprises in moving through this rationalization process, by supporting the analysis and management of change, and supplying the technical expertise to assure a smooth, well-planned migration to a rationalized state.
Stephen O’Grady is a UK-based IT professional with over three decades’ experience across numerous market sectors and geographies, and has variously been a business change specialist, a technical architecture consultant, and now a programme manager in business and technology change. He has a fine appreciation of the foibles of the IT market and the way that businesses use and abuse IT, and some clear thoughts as to how they could do it better if only anyone would listen.