
Foreword

Application rationalization is a crucial aspect of the business and IT lifecycle, and a topic we have covered previously.

Ferroque Systems has teamed up with Stephen O’Grady, a trusted colleague of our team, an application rationalization expert, and an IT management veteran from the UK, for a series of guest posts. These articles deal with application rationalization and its importance to the modern organization.

Series Articles:

In the previous post (Part 3) in this series, we looked at the common pitfalls of fragmentation that can be brought about by reorganizations and M&As, and outlined a few basics that may help to mitigate those pitfalls. In this post, we shall examine a few approaches to dealing with fragmentation, and the advantages and serious drawbacks of application containerization.

How should we deliver in this brave new world?

Probably the three principal ways of delivering functionality to the user’s desktop in the modern world are desktop installation, virtualized environments, and applications delivered directly via a Web interface.

There are of course nuanced versions of Web interfaces, and one could debate where Software as a Service, Platform as a Service, or Infrastructure as a Service fit into these definitions, and how they are similar or different in terms of delivering functionality to users. How a user gains access to functionality, however, seems to me to boil down to the three ways outlined above.

At this point, I shall again raise the notion that virtualized environments and web-delivered applications can be viewed as remarkably similar to the old mainframe days when applications were built and installed in a centralized way, and users logged on remotely. Whilst the User Experience (UX) may be very different, as it often feels like they are using a local application or simply a website, the control mechanisms are far more akin to the “old school” approach.

In terms of application rationalization, any of the above three approaches is a possibility, each with different advantages and drawbacks. However, the key point remains application currency.

Whichever approach is taken, to maintain usability, compliance, and supportability, a current or very-nearly-current version of each application should be in use. This maximizes supportability, gives a higher likelihood of good interoperability (allowing for early gremlins, which should be identified and resolved during testing before delivery to real users), and crucially gives technical control to the IT professionals, who will be able to make sure that each application delivers as intended.

The key thing here is not the platform or model used for delivery; rather, it is the consistency, the currency, and the required level of integration. Only by ensuring good integration will the necessary enterprise-level quality of delivery actually be assured. For a business to function optimally in IT terms, it follows that optimal integration between related applications must be enabled.

This doesn’t presuppose any particular technology, reliance on a single vendor, or even reliance on a single solution for a given requirement (subject to cost constraints as I have discussed elsewhere).

What it does mean is that when applications are selected, they need to be supportable and they need to integrate or interface properly with other applications in the business. It is a given that the most important thing is that applications meet users’ requirements, but if applications only do that, and cannot be supported over their lifetime or integrated as required with other systems, they are never going to work well in the enterprise sense.

On that note, it is also important that what results is a best-of-breed total solution, which works together in an integrated sense, rather than a set of best-of-breed applications which work splendidly in isolation but which do not provide a well-integrated set of end-to-end functionality. An example of this could be a set of reporting or monitoring tools, for business or technology, each of which was the absolute best available but which taken together did not provide complete end-to-end monitoring due to the gaps and overlaps resulting from poor integration and coverage.

As I have said elsewhere, rationalization at the point of migration need not be seen as a bad thing, or even a major challenge; however, it is imperative that it is properly planned and executed.

Some of you may recall the days of Total Quality Management, and the slogan “Quality is Free”. The point being made there was that quality is free in terms of the bottom line, and should even improve it. However, you need to invest in quality in order to reap the rewards; it is not just a freebie.

Rationalization during migration is in the same boat: you must invest in proper planning and execution to reap the rewards. It can be a very effective way of rationalizing the application landscape, moving from legacy to current applications, and sweeping away venerable, hard-to-maintain applications. However, if done badly, almost as an afterthought, it will just create a world of pain and frustration for both the business and IT communities, as they struggle to make changes to both process and technology on-the-fly with no idea of the effort or resources required.

Once again, communication is key, and in order to inform the plan there is a need for a full inventory of applications, owners, and users. The rationalization and migration plan then needs to take account of every functional requirement and every bit of data, as well as all human factors such as knowledge elicitation and transfer, and of course training.
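To make that inventory a little more concrete, here is a minimal sketch in Python of the kind of record such an inventory might hold. The field names and the example entry are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ApplicationRecord:
    """One entry in an application inventory (illustrative fields only)."""
    name: str                      # application name as known to the business
    version: str                   # installed version, used to judge currency
    business_owner: str            # accountable owner on the business side
    it_owner: str                  # accountable owner on the IT side
    users: list = field(default_factory=list)         # who actually uses it
    functions: list = field(default_factory=list)     # functional requirements it satisfies
    data_stores: list = field(default_factory=list)   # data it holds or depends on
    integrations: list = field(default_factory=list)  # other applications it must interface with
    vendor_supported: bool = True  # is the installed version still supported?

# Hypothetical example: a legacy reporting tool relied upon by two users
legacy_reports = ApplicationRecord(
    name="LegacyReports",
    version="7.2",
    business_owner="Finance",
    it_owner="EUC Team",
    users=["analyst-1", "analyst-2"],
    functions=["month-end reporting"],
    data_stores=["reports-db"],
    integrations=["ERP"],
    vendor_supported=False,
)
print(f"{legacy_reports.name} still vendor-supported: {legacy_reports.vendor_supported}")
```

Even a record as simple as this makes the awkward cases visible early in planning: applications with no clear owner, versions no longer supported, or integrations nobody had accounted for.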

Containerization as a solution

Containerization is often viewed as a way out of the application maintenance problem.

For those who aren’t familiar with the technology, in very simplistic terms a container in this sense is a bubble in which you can install an application. As far as that application is concerned, it is in a legacy environment with all of the old bells and whistles, but the bubble actually resides on a modern platform and handles the interactions between the application inside the container and the platform outside the bubble.
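As a purely conceptual illustration of that bubble (and not a depiction of how any particular containerization product actually works), the Python sketch below shows the idea of redirection: the application keeps asking for its old, legacy locations, and a container layer quietly maps those requests onto locations the modern platform really provides. All paths and names here are hypothetical.

```python
# Toy illustration of the "bubble": the legacy application keeps using its
# old paths, and the container layer redirects them to sandboxed locations
# on the modern platform. Paths and mappings are hypothetical.

LEGACY_TO_MODERN = {
    r"C:\LegacyApp\config.ini": r"C:\ProgramData\ContainerSandbox\LegacyApp\config.ini",
    r"C:\LegacyApp\reports":    r"D:\ContainerSandbox\LegacyApp\reports",
}

def resolve_path(requested: str) -> str:
    """Return the real location for a path the legacy application asked for."""
    return LEGACY_TO_MODERN.get(requested, requested)

# The application "thinks" it is opening its old configuration file...
print(resolve_path(r"C:\LegacyApp\config.ini"))
# ...but the request has been redirected to a location the new platform owns.
```

Real products intercept far more than file paths, of course, which is exactly why there is extra work going on between the application, the container, and the platform.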

This form of containerization is distinct from application virtualization using tools such as App-V or MSIX, which are standard application delivery approaches. It instead uses tools specifically intended to preserve legacy applications and make them run on modern platforms, rather than to deal with the foibles of slightly different versions of current or near-current platforms.

It should also be noted that containerization can come at a price in terms of performance: there is so much going on between the application, the container, and the platform that performance will suffer. It is true that you can throw more machine resources at the problem to overcome that, but this reinforces the argument that more resources will be required, in this case more than would have been needed for an up-to-date application running on an up-to-date platform.

Nonetheless, containerization often feels like a good option. After all, parking a legacy application inside a container deals with the problem of an application that will not work on a new platform; however, there are several drawbacks to the approach.

Firstly, it merely moves the problem: instead of the application having to be made to work with this up-to-date, ever-changing “evergreen” platform, the container now has to do so. Granted, the container vendor will work to assure this, but can you really rely on that being the answer forever?

Secondly, and to my mind more importantly, it reinforces the behavior of not maintaining currency in systems, which just makes everyone’s lives harder. The divergence between the application in the “old world” inside the bubble and the new platform outside the bubble will simply increase over time, making support of the container increasingly difficult.

Containerization vendors and suppliers are not generally charitable institutions, and they will be looking to make revenue from this approach. Inevitably, the harder maintenance becomes and the longer it goes on, the more you will pay. It is also almost certain that at some point, whether soon or some years in the future, the approach will cease to work for any given case, because the handoffs between the application, the container, and the platform can no longer be easily sustained once these components have drifted too far apart.

At this point, the migration will probably be more painful than would have been the case earlier, as the option to exploit vendor-supplied migration tools, to support the move between applications or versions, will probably have disappeared. In this case, you will be looking at custom migration and all of its inherent costs and risks.

So what does this mean? What is the best approach?

There is, of course, no single best way to approach and deliver rationalization. So many factors come into play, such as the state of legacy systems, the state of the physical infrastructure, the business landscape, and its stability (that is to say, is it in a steady state or in a state of change due to some form of reorganization?).

That said, there are some basics that must always be borne in mind.

As usual, communication is key (and I know I keep saying that, but it is the most important point and one I shall keep reinforcing). The business requirements need to be understood by IT, and IT needs to make its voice heard in terms of the art of the possible.

Rationalization means just that. According to Lexico.com, the definition of rationalization is “the action of making a company, process, or industry more efficient, especially by dispensing with superfluous personnel or equipment” or “the action of reorganizing a process or system so as to make it more logical and consistent”. We’ll forgive the use of “equipment”, which ignores our modern “soft” world, but the intent is clear. Rationalization in our terms is increasing efficiency by dispensing with superfluous or redundant applications.

It does not mean that we come down to a tiny core and do away with necessary applications along the way, as that does not breed efficiency. Nor does it mean clinging to outdated and hard-to-maintain applications when modern alternatives are available, as that is also inefficient. What it does mean is that we should have a convergent roadmap for applications under which, over time (and that time can be short), we decompose the estate into a core set of applications, with necessary exceptions for truly special cases, controlled centrally by the business and IT jointly and kept under constant review to maintain currency.

Even where there are exceptions, for instance a small application offering some critical piece of functionality which only serves one or two users, the ongoing cost of maintenance, and even the long-term maintainability, needs to be balanced against the cost of the change to move this small requirement for these few users onto a different, common, maintainable platform. It is very rare that a requirement is so unique that it cannot be dealt with in this way, but admittedly it can happen; this is not a perfect world after all.
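As a rough illustration of spotting convergence candidates, the sketch below (building on the hypothetical inventory above) groups applications by the business function they serve and flags any function served by more than one application. The grouping key and the data are assumptions made for illustration; real rationalization decisions obviously need far more context than this.

```python
from collections import defaultdict

# Hypothetical inventory excerpt: (application, business function it serves).
# In practice this would come from the inventory records gathered earlier.
inventory = [
    ("LegacyReports", "month-end reporting"),
    ("CloudBI",       "month-end reporting"),
    ("CloudBI",       "ad-hoc dashboards"),
    ("NicheTool",     "regulatory filing"),
]

# Group applications by the function they serve.
by_function = defaultdict(set)
for app, function in inventory:
    by_function[function].add(app)

# Any function served by more than one application is a candidate for convergence.
for function, apps in by_function.items():
    if len(apps) > 1:
        print(f"Review for convergence: '{function}' is served by {sorted(apps)}")
```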

Centralized control is important. This does not necessarily mean that all IT is controlled absolutely in one place, nor that the business can have no say in its applications, but it does mean that everything needs to sit within an agreed architectural framework so that there is a single “guiding mind” for all IT decisions. It also means that locally purchased and installed applications should generally have no place in enterprise IT, as this makes it impossible to assure the necessary levels of integration and support, and the necessary ongoing maintainability.

Containerization simply to preserve legacy applications is not a great idea. It has its uses, and in some cases can be the only alternative, but it should never be seen as anything other than a short-term solution, and really, really should only be allowed as part of a longer-term plan that will lead to the replacement of the legacy application as soon as possible (which admittedly may not be all that soon).

And just to get this one in again, this whole centralized approach to IT governance and maintenance will be entirely familiar to aged practitioners like me, who will recall that in the mainframe days this was far less of a challenge, as IT really did have control. The move to virtualized and web-based applications is taking us in this direction once again, which can only be a good thing, as it allows for greater governance and control and less uncontrolled deviation; but this must be delivered in partnership between IT and the business, so that the business gets what it needs with IT ensuring efficient delivery.

Stephen O’Grady

    Stephen is a UK-based IT professional with over thirty years of experience, currently a programme manager focused on business and technology change. He has a keen understanding of IT market dynamics and offers valuable insights on how businesses can better utilize IT.
