
Foreword

Application rationalization is a crucial aspect of the business and IT lifecycle, and a topic we have covered previously.

Ferroque Systems has teamed up with Stephen O’Grady, a trusted colleague to our team, application rationalization expert, and IT management veteran from the UK, for a series of guest posts. These articles deal with application rationalization and its importance to the modern organization.

In the previous post in this series (Part 1), we took a brief journey through the background of why we ended up needing to think about application rationalization at all, and outlined some of the common reasons why little is generally done to remedy matters. In this post, we will dig a little deeper into those challenges.

“Change is bad”

There is a well-known issue that all of us in IT have met repeatedly: managers don’t want change because they don’t want change.

Change feels uncomfortable and unfamiliar, and most people will do almost anything to maintain the status quo. As someone once put it to me, babies are often quite comfortable sitting in a dirty nappy (diaper) because it’s warm, comfortable, and familiar – it’s only a problem for everyone else.

You can rationalize the benefits of change, demonstrate how the business can be helped and how everything will be better once the change is made (whatever it may be), yet still, people resist change quite strenuously. If I had the perfect solution to this issue, I would be writing this piece on my private island in the Caribbean; however, people can be “taken on the journey” to change.

It may not be simple or comfortable, and there is no “one size fits all” solution. However, if a sense of proprietorship can be fostered, wherein people feel that the change is their choice and decision, maybe even their idea, change can be made and made for the better.

In general, people do understand where the failings are in their business and are keen to make a change, but it has to be their change. Change cannot be imposed; it needs to feel as though it comes from within. That, however, is for another chapter!

“Too expensive”

Probably the one reason heard most often for not upgrading or rationalizing systems is cost. To be uncharitable, this is a specious argument, as costs escalate over time, and the high cost is frequently a product of changes or upgrades being repeatedly put off.

To be fairer, with constrained budgets and resources there will always be the temptation to capture in-year savings whenever possible. Unfortunately, this ignores the longer-term issues that can be created as a result: escalating costs and, ultimately, the diminishing viability of future rationalization or upgrades. This is why it is crucial to understand an application’s overall TCO, ensuring both hard and soft ownership costs are taken into consideration (perhaps a topic for another day).

It is, of course, in software vendors’ interests to tie customers into maintenance and support agreements, and it is always an attractive option for customers to capture savings there. After all, software doesn’t need oiling, does it? So how can it go wrong?

However, in making that supposed saving, the need to maintain currency is all too easily overlooked. Especially in this modern world of “Evergreening” (a term that seems to have fallen out of use, though the practice remains), there is an inherent assumption that all components will be kept up to date. Because the various platform components may change over time, there is no guarantee that an unmaintained application will continue to function as expected on the platform, as there may be a dependency that is no longer met.

Now, in the relatively short term that is unlikely to make a huge difference, as it is in the platform vendors’ interests to maintain a degree of backward compatibility so as not to alienate customers or make life unnecessarily difficult. However, in the longer term that may constrain the development of the platform, so ongoing compatibility can never be assumed.

If a customer has an application that was last updated over a decade ago and is used in multiple instances as key systems across an organization, this could be considered to be a threat. There may be no support for the application at all, let alone on the platform that is having to be held back to host that application.

In this case, the option to upgrade directly to a newer version will long since have disappeared. The possibility to migrate to a competitor’s product using supplied tools designed to assist in this will also be waving at you in the rear-view mirror. This is, incidentally, an entirely real example I have seen in more than one organization, which to date remains unresolved for any of them.

The only option here is to run an expensive and time-consuming project that involves a great deal of analysis and custom code to migrate data from a legacy application to a new one, all as a result of “capturing savings”. Needless to say, these projects only happen when the situation becomes critical, the problem is considered urgent by senior management, and a lot of blame gets thrown around as to why it wasn’t addressed earlier. This blame game can often carry on for a while, whilst the problem remains unaddressed and the issues continue to escalate. And yet, how often do we see the same organization forget this simple lesson with the next business-critical system in the same state?

Glaringly, this deferral of updates and failure to maintain currency contributes to “technical debt”; I am sure many of us are familiar with this term and the consequences of accrued “interest”. Most organizations ignore it. However, I am aware of organizations in which IT managers maintain a “Technical Debt Register”: a simple spreadsheet of applications and their update state. Any time a decision is made to impose a workaround or bolt on a new component (rather than applying a properly designed change), a new entry is made in the register, and periodically the register is reviewed to see how much “interest” has accrued.
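To make the idea concrete, here is a minimal sketch of what such a register might look like in code. The field names, applications, and figures are illustrative assumptions, not a prescribed schema; a spreadsheet serves the same purpose.

```python
from dataclasses import dataclass
from datetime import date

# Sketch of a "Technical Debt Register" entry; field names are
# illustrative assumptions, not a standard schema.
@dataclass
class DebtEntry:
    application: str
    owner: str
    description: str              # e.g. "workaround", "bolt-on component"
    raised: date                  # when the shortcut was taken
    update_state: str             # e.g. "current", "unsupported"
    remediation_days: int         # rough effort to do it properly

def review(register: list[DebtEntry], today: date) -> None:
    """Periodic review: show how long each item has been accruing 'interest'."""
    for entry in sorted(register, key=lambda e: e.raised):
        age = (today - entry.raised).days
        print(f"{entry.application}: {entry.description} "
              f"({entry.update_state}), open {age} days, "
              f"~{entry.remediation_days} days to remediate")

review([
    DebtEntry("HR portal", "A. Jones", "workaround for login bug",
              date(2019, 6, 15), "unsupported", 30),
    DebtEntry("Finance ledger", "J. Smith", "bolt-on export script",
              date(2021, 3, 1), "one version behind", 10),
], date.today())
```

The point is not the tooling but the discipline: every shortcut is recorded when it is taken, and the periodic review keeps the accumulated “interest” visible to management.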

Departments’ own choice

I realize that the next statement will make me sound like a mainframe-obsessed dinosaur, but leaving IT procurement decisions to departments with no reference to corporate IT or senior management was and is a recipe for disaster.

This can be the case even where there is some true IT expertise in the department, if the corporate strategy is not made clear and enforced, or if the local IT experts have an axe to grind about a better way of doing things or a pet preference for or aversion to particular technical solutions.

It will always be attractive and feel empowering to “do your own thing”. Who hasn’t thought “I could do a better job than them”, meaning corporate IT, higher management, or whoever is in control? And, to be fair, that can be the case; you may well be able to make better decisions locally, as you very likely understand your own business better. Another common challenge is time: IT resources are limited, so some departments may not want to wait for their pet project to be resourced, and may take it upon themselves to run a project of their own with limited consideration of the wider consequences.

However, and it is a massive “however”, by not knowing the bigger picture it is very easy to make an inadvertent poor decision. For example, at the local level, there may be limited knowledge of corporate licensing agreements. I have seen cases where departments have spent their own budget on software licenses, which duplicated functionality that was already being paid for on a corporate basis and effectively resulted in paying twice with no option to back out of the overarching agreement. This may not seem like a problem at the local level, but has a direct impact on the overall bottom line.

I am not saying that departments should have no say in their own IT, far from it. They are often the only people who truly understand the “coalface” business, and should rightly be considered the experts in terms of what is required. Where they have a truly unique requirement, it may well be best catered for at the local level. However, this would still need to be visible at the corporate level, as the requirement may not be unique forever, and the local capability may need to be reused elsewhere.

The corollary to that is the “unique” local requirement that is in fact commonplace. A truth that software vendors would rather not address is that there is remarkably little variation in functionality between many systems. After all, all competitors’ products exist in the same regulatory frameworks and have to conform to identical functionality and reporting requirements. The way they compete is through the “nice to haves”, which may be of more or less relevance to individual departments’ and/or users’ needs.

Pro Tip: consider using a MoSCoW model to capture business requirements by organizing them into “Must Haves”, “Should Haves”, “Could Haves”, and “Won’t Haves (this time)”, enabling prioritization that delivers the greatest and most immediate benefits early.
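As a simple illustration, a MoSCoW breakdown can be captured and reported in a few lines. The four categories follow the standard model; the requirements themselves are invented examples.

```python
from enum import Enum

# The four MoSCoW categories are standard; the requirements below
# are invented examples.
class Priority(Enum):
    MUST = "Must have"
    SHOULD = "Should have"
    COULD = "Could have"
    WONT = "Won't have (this time)"

requirements = [
    ("Statutory payroll reporting", Priority.MUST),
    ("Single sign-on integration", Priority.SHOULD),
    ("Custom dashboard themes", Priority.COULD),
    ("Offline mobile access", Priority.WONT),
]

# Group and report by priority, so delivery starts with the Must Haves.
for priority in Priority:
    items = [name for name, p in requirements if p is priority]
    if items:
        print(f"{priority.value}: {', '.join(items)}")
```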

This requires proper engagement between departmental-level users and corporate IT, so that users’ needs are met and all functionality is known and understood. This way, when “new” requirements arise (which may not be new at all), the complete inventory of what is available is known. It will often be the case that these requirements can be met by something that is already available elsewhere in the business, and these existing applications can be deployed for new users at little or no cost and with minimal delay.

The increasing fragility of systems

Of course, as I have said above, software doesn’t wear out. There is no need for routine maintenance on an application itself, as it will not change in the slightest unless someone does something to it.

Unfortunately, that “something” will either be changes imposed from outside, for example regulatory updates in HR or accounting systems, or changes to the platform on which it resides. These imposed changes can, over time, cause problems to occur.

There is no option with regard to regulatory changes, but applications receiving those kinds of updates will normally be fully supported, although they may not be on a current version. There is, however, an option for platform changes, albeit a poor one: you can opt not to accept changes to the platform so the application can continue to function as-is with no need for updates.

This appears to be an attractive option as it means cost savings now, but it ignores the fact that at some point this will cease to be viable, and the cost to upgrade will likely be higher than the aggregate cost of maintaining currency would have been. It is also probable that the disruption of a major upgrade or change will be more significant than incremental change would have been. As mentioned above, this is an example of “interest” that accrues due to “technical debt”.
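As a back-of-the-envelope illustration of that “interest”, consider steady annual upkeep versus a big-bang remediation whose effort grows each year it is deferred. All figures here are invented assumptions, not benchmarks.

```python
# All figures are invented assumptions for illustration only.
years = 8                 # how long the upgrade is deferred
annual_upkeep = 20_000    # assumed yearly cost of maintaining currency
base_migration = 60_000   # assumed cost of migrating today
interest = 0.25           # assumed yearly growth in remediation effort

keep_current = annual_upkeep * years
big_bang = base_migration * (1 + interest) ** years

print(f"Maintain currency for {years} years: {keep_current:,.0f}")
print(f"Defer, then remediate in year {years}: {big_bang:,.0f}")
# With these assumptions: 160,000 vs roughly 357,628.
```

The exact numbers matter far less than the shape: deferred remediation compounds, while maintaining currency is roughly linear.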

Departments unwilling to invest

Who among us has not used the phrase “If it ain’t broke, don’t fix it”? This is an admirable sentiment, and highly applicable in a fairly static, mechanical world.

In IT we have a somewhat different situation. Whilst the “it” in question – the application – may not be “broke”, it exists in an ever-changing environment. At the business level, requirements will change, both internally and those imposed from outside, and at the technical level the platform on which it resides, both hardware and software, will continue to evolve.

Departments are often reluctant to invest in IT changes or even ongoing support, as, in their view, the application is fine as it is, so why spend the money?

From a short-term perspective this is understandable, as there are always other things to do with the money, or simply savings to be had. But as time progresses and IT in the wider world changes, the time will almost certainly come when something happens that forces a change, and for which the business may not be prepared.

For example, many businesses clung to Windows XP, using applications that were only supported on XP, well past the point at which XP reached end-of-life. This was often to save costs, as businesses had higher-priority spending to deal with.

We all remember the WannaCry ransomware, which had an enormous impact on certain businesses that had left themselves exposed by sticking with unpatched, elderly operating systems. They did this for various reasons, but the most commonly discussed were to save money by not paying for upgrades to Windows, and the need to keep using older applications that would not run on newer Windows versions.

This apparent saving became a far higher cost for many when the ransomware struck, as the costs of removing it and upgrading systems to prevent further attacks were almost certainly higher than the cost of continuous support and upgrades to maintain currency would have been. And that ignores the ransoms that some businesses paid before a fix was found.

So what do we do?

This is the biggie!

IT and business need to work together on this, and again communication is the key.

IT needs to make it clear to the business what the risks are of not maintaining application currency, and of keeping numerous applications that fulfill the same business need and/or numerous versions of the same application (including legacy versions that are no longer supported by the vendor). Whilst there can be a risk of “having all your eggs in one basket” (i.e. maintaining one application version), at least you know where and what the basket is, and probably have a good idea about the basket’s strengths and weaknesses. With a limited range of applications to support, your IT staff or support supplier should develop far higher levels of expertise in the specifics of each application as they will be spread less thinly.

The business needs to express clearly what it needs in terms of requirements rather than solutions. All too often the business will decide on an IT application that may well appear to be the ideal solution to its needs, while taking no account of the environment in which it must exist and the other applications across the business with which it must interface.

To my mind, the simple key focuses here need to be:

  • The business expressing clearly what its requirements are, not leaping straight to a solution.
  • IT defining an application architecture to serve as an approved framework within which all applications (and all other components) must exist.
  • A clear inventory being produced of all applications and supporting infrastructure, with a designated owner for each application (a minimal sketch follows this list).
  • A roadmap being defined jointly between IT and the business for moving from the current situation to the sunlit uplands of a rationalized portfolio of applications on a properly supported infrastructure.
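
As promised above, here is a minimal sketch of what an application inventory entry might capture, and how such an inventory could flag rationalization candidates. The fields, applications, and logic are illustrative assumptions; a real inventory would be agreed between IT and the business.

```python
from dataclasses import dataclass

# Illustrative fields only; a real inventory schema would be agreed
# between IT and the business.
@dataclass
class Application:
    name: str
    owner: str                     # designated application owner
    version: str
    vendor_supported: bool         # is this version still supported?
    fulfils: str                   # the business need it serves
    dependencies: list[str]        # platform components it relies on

inventory = [
    Application("LedgerPro", "Finance", "9.2", True,
                "general ledger", ["Windows Server 2022", "SQL Server"]),
    Application("OldLedger", "Finance", "4.1", False,
                "general ledger", ["Windows Server 2008"]),
]

# Flag rationalization candidates: unsupported applications whose
# business need is already met by a supported one.
covered = {a.fulfils for a in inventory if a.vendor_supported}
for app in inventory:
    if not app.vendor_supported and app.fulfils in covered:
        print(f"Candidate for retirement: {app.name} ({app.fulfils})")
```

Even this trivial structure makes the key questions answerable: who owns each application, which versions are still supported, and where duplicate functionality is hiding.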