
In his comprehensive blog series on application rationalization, special guest star Stephen O’Grady reviewed the merits of application rationalization to address the challenges of fragmentation that plague many organizations.  Stephen’s insightful series reviewed the history of fragmentation from the early days of mainframes to the modern era, common causes of fragmentation and why it can be so difficult to eradicate, and the importance of correcting fragmentation and several methods for doing so.

Several methods for application defragmentation and rationalization are sprinkled throughout the blog series, including approaches such as lift-and-shift vs. rationalized migrations, as well as application containerization.  After completing the blog series, we put our heads together and decided that summarizing all the recommendations into one post would make a nice epilogue.  And as always, we decided to take it one step further, by analyzing those recommendations in the context of various risk management strategies.

What is the best approach?

We’ll start with the question we imagine is top-of-mind for everyone:  of all the recommendations included in the blog series, what is the best approach?

As always, the answer is it depends.

It depends upon several factors, such as the volume and types of applications, in-house technologies and IT skillsets, the current level of technical debt, risk appetite, and risk tolerance. I will point out that, above all of these, the single largest factor remains the willingness of the business to face up to the fact that it has a problem in the first place.

For example, if an organization has already accrued a fairly high level of technical debt, then that organization may choose to deal with the problem right then and there.  However, if the same organization also has a high risk appetite, the decision may be made to follow the quickest path and deal with the problem later.  As well, if that same organization also has a robust application virtualization and delivery platform, certain options, such as containerization, may be much easier than if the organization were installing everything directly on endpoint desktops (i.e. no virtualization).

Weighing the options

The blog series shares numerous methods and approaches for dealing with the challenge of application fragmentation.  Some of these methods have worked well, and some have gone very badly.

Above, I mentioned that risk appetite and risk tolerance play a role in determining an organization’s best options.  Let’s continue down this path and organize the methods of dealing with fragmentation according to four particular risk strategies: risk avoidance, transference, mitigation, and acceptance.

Risk avoidance

Risk avoidance seeks to prevent the risk from occurring in the first place.  In a perfect world, this is the best strategy; however, factors such as time, money, and resources may not make this feasible.  Risk avoidance strategies tend – at best – to focus on establishing and adhering to the proper processes (and in the worst cases focus on doing nothing to avoid doing the wrong thing, but that is a whole other argument for another day); the following should be included as core principles of such processes:

  • Establish regular business-IT forums wherein business groups are able to express their requirements (avoiding expressing solutions in the form of requirements) and IT leaders can share their game plan for delivering on those requirements efficiently and at a reasonable cost. This reinforces two-way trust that business groups will have their needs met in a fashion that maintains IT architecture and technology standards.
  • Establish IT standards regarding application currency (e.g. no more than one version behind current, with a target of always being current) and a standard application architecture as an approved framework into which all applications must integrate. The business should adopt this approach to application currency, and corporate IT must have the right to allocate support costs to business groups that fail to conform.
  • Foster a sense of proprietorship within the groups for whom change is sought, wherein people feel that accepting a proposed change is their choice, and perhaps even their idea. In general, people do understand where the failings are in their business and are keen to make a change, but often need to feel it is their own decision. Change cannot be imposed; it needs to feel as though it comes from within.
  • Map user roles and requirements to application functions; this necessitates proper engagement between departmental users and corporate IT, so that users’ needs are met and all functionality is known and understood by IT. This way, when “new” requirements arise (which may not necessarily be new at all) the complete inventory of requirements and applications is known; note that this inventory should also include a designated owner for each application.  Pro tip:  consider using a Kano model to capture/model business requirements by organizing into must-haves, satisfiers, and delighters.  This can provide context for business needs and help prioritize decisions.
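The Kano categorization in the pro tip above can be sketched in code. The example below is illustrative only: the requirement names, owners, and the simple must-have-first priority ordering are all invented for the sketch.

```python
from dataclasses import dataclass

# Hypothetical priority ordering for the three Kano categories:
# must-haves are planned first, delighters last.
PRIORITY = {"must-have": 0, "satisfier": 1, "delighter": 2}

@dataclass
class Requirement:
    name: str
    category: str  # "must-have", "satisfier", or "delighter"
    owner: str     # designated business owner, per the inventory above

def prioritize(requirements):
    """Return requirements sorted so must-haves come first."""
    return sorted(requirements, key=lambda r: PRIORITY[r.category])

# Invented example requirements for illustration.
reqs = [
    Requirement("Export monthly report", "satisfier", "Finance"),
    Requirement("Single sign-on", "must-have", "IT Security"),
    Requirement("Dark mode UI", "delighter", "Operations"),
]

for r in prioritize(reqs):
    print(f"{r.category:10} {r.name} (owner: {r.owner})")
```

Even a tiny structure like this makes the requirement inventory queryable, which helps when a "new" requirement arrives that may already be covered.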

Risk transference

Risk transference seeks to transfer the risk to another entity, such as when deciding to engage a SaaS solution, in which case the risk is transferred to the vendor that is providing the service.  Risk transference strategies typically involve some sort of fee-for-service contract wherein ongoing operational responsibilities are managed by a third-party provider; following are a few of the more popular solutions:

  • Engaging a SaaS solution can be attractive since it transfers all responsibility (and risk) to the SaaS provider, which keeps its application(s) consistently up-to-date and patched, always supports its application, and in most cases makes the application accessible on any device on any network (i.e. always available). Potential challenges are interoperating with other in-house applications, concerns over data security/sensitivity, and potential concerns around the ongoing availability of new features.
  • Engaging a Managed Services provider (MSP) allows organizations to keep their own systems in-house while transferring all operational responsibilities (and risk) to the MSP; in this case, all data remains in-house, and the customer controls the roadmap for their IT applications and landscape. Potential challenges are that the organization is still responsible for architecture/engineering elements (e.g. selecting and implementing new applications) and the quality of MSPs can vary significantly.
  • Engaging a cloud service provider (CSP), such as Microsoft Azure or Amazon Web Services (AWS), transfers all responsibilities (and risk) for the underlying hardware infrastructure to the CSP. Note that this only transfers responsibilities and risks of the hardware to the CSP; the customer may still remain responsible for supporting and managing the operating system, platforms (middleware), and applications, depending on the provider and the nature of the engagement terms.  Potential challenges can occur if the organization hosts some applications/systems in-house and some in the cloud, depending upon the connectivity between the cloud and the data center in which the in-house applications/systems reside.

Risk mitigation

Risk mitigation seeks primarily to decrease the probability of the risk occurring and/or decrease the impact if it does occur, such as when employing high-availability (HA) and/or fully redundant solutions.  Following are a few techniques we have seen employed at various organizations:

  • Allow departments to chart their own paths, based upon their understanding of their own unique needs, to ensure all business groups’ needs will be met. In this model, IT plays a consultative role as well as maintains the underlying infrastructure and platforms in what is essentially an IaaS and PaaS model to the business.  The decision-making authority regarding applications and versions rests with the business groups, and ongoing funds and resources necessary to maintain the applications come from the business groups.
  • “Containerize” the application, wherein the application resides in a legacy environment (container) but the container actually resides on a modern platform (OS) and manages the interactions between the applications inside the container and the platform outside the container. This allows legacy applications to remain yet does not prevent supporting platforms from being kept up-to-date.  Note that this strategy is often satisfactory for quite some time (years); however, eventually, a point in time is reached wherein the legacy application must be upgraded.  Nevertheless, the use of containers often provides more than enough time for a proper solution to be planned and implemented.
  • Conduct periodic application lifecycle management initiatives, wherein (for example) every 3-4 years IT and business groups meet to review new business needs and applications available to meet those needs, identify applications to retire based upon outdated business needs, identify applications to rationalize for those that are providing duplicate functionality, and determine how unfulfilled requirements will be met, including potential procurement of a new application(s). This periodic initiative should also include a roadmap being defined jointly between IT and the business for moving from the current situation to a rationalized portfolio of applications on a properly supported infrastructure; the roadmap timeframe should encompass the timeframe between such initiatives.
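As a rough illustration of how such a periodic review might surface candidates, the sketch below groups a hypothetical application inventory by business function to flag duplicate functionality and applications that have not been reviewed recently. All application names, functions, and thresholds are invented for the example.

```python
from collections import defaultdict

# Hypothetical inventory: (application, business function, year last reviewed).
inventory = [
    ("LegacyCRM", "customer management", 2018),
    ("CloudCRM", "customer management", 2023),
    ("PayrollPro", "payroll", 2022),
    ("OldTimesheets", "time tracking", 2016),
]

def rationalization_candidates(apps, review_year, stale_after=4):
    """Flag duplicate functionality and applications overdue for review."""
    by_function = defaultdict(list)
    for name, function, reviewed in apps:
        by_function[function].append(name)
    # Two or more applications serving the same function are
    # rationalization candidates.
    duplicates = {f: names for f, names in by_function.items()
                  if len(names) > 1}
    # Applications not reviewed within the review cycle are
    # retirement/upgrade candidates.
    stale = [name for name, _, reviewed in apps
             if review_year - reviewed > stale_after]
    return duplicates, stale

dups, stale = rationalization_candidates(inventory, review_year=2024)
```

The output of a pass like this is a starting agenda for the joint IT-business review, not a decision; each flagged application still needs its designated owner to weigh in.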

Risk acceptance

Risk acceptance is a conscious decision to accept the risk, typically because the risk’s probability and/or impact is perceived as being very low, such as when a decision is made to maintain all systems in a single data center, with no DR data center.  It may also be that the probability and/or impact is high enough to be of concern, but does not outweigh the hard and soft costs that another risk response strategy would entail.

  • Continue to maintain legacy applications to avoid challenges such as a new end-user learning curve, investing time and money on upgrading to a new system, and other disruptions. This includes opting not to accept changes to the hosting platform so the application can continue to function as-is with no need for updates.  Although typically not desirable, this is a valid solution if the need to avoid any potential disruptions outweighs the gradual escalation in funds, time, and resources to support legacy applications.
  • Adopt a chargeback model wherein legacy applications continue to be supported, and the costs of supporting those applications are charged to the cost center of the business groups that are using the legacy application, rather than being charged to IT. This requires IT to have a system in place to track/capture ongoing usage of compute resources by legacy application users, to assign a monetary value (cost) to compute resource usage, and to provide (for example) a quarterly report to the relevant business groups detailing these costs.  This may also necessitate that dedicated hardware and/or virtual machines be allocated to these business groups (depending upon the granularity of the chargeback tracking system), rather than multi-tenant hardware/VMs.  As well, this goes hand-in-hand with keeping track of technical debt and understanding an application’s overall TCO, to ensure hard and soft ownership costs are taken into consideration.
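A minimal sketch of the quarterly chargeback calculation is shown below. The rates, metrics, and usage records are hypothetical; a real system would pull rates from capacity planning and usage from monitoring tooling.

```python
# Hypothetical unit rates assigned to compute resource usage.
RATES = {"cpu_hours": 0.05, "gb_storage_month": 0.10}

# Hypothetical monthly usage captured per business group.
usage = [
    {"group": "Finance",   "cpu_hours": 1200, "gb_storage_month": 500},
    {"group": "Logistics", "cpu_hours": 300,  "gb_storage_month": 2000},
]

def quarterly_chargeback(records, rates, months=3):
    """Roll monthly usage up into a per-business-group quarterly cost."""
    report = {}
    for rec in records:
        monthly_cost = sum(rec[metric] * rate
                           for metric, rate in rates.items())
        report[rec["group"]] = round(monthly_cost * months, 2)
    return report

report = quarterly_chargeback(usage, RATES)
```

Publishing a report like this each quarter puts a visible price on keeping a legacy application alive, which is exactly the signal the chargeback model is meant to send.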

Organizing risk management strategies in this fashion can help an organization identify the strategies that are the best fit and adopt consistent risk management practices that align with its risk appetite and risk tolerance, as well as with in-house technologies, skillsets, and process maturity.

Ferroque Systems has dealt with these sorts of challenges many times and addressed them via some of the solutions outlined above; if your organization is facing similar challenges, we are confident we have the expertise to help you understand the problem and define and/or implement a solution that is most appropriate for your organization.

  • Patrick Robinson

    Pat is a veteran of Citrix Systems with over 20 years of technology services experience, having served as Director of Citrix Managed Services and designed IT structures and processes servicing global corporations to SMBs. Now at Ferroque, he oversees service delivery, ensuring positive outcomes for customers in every engagement.

Redefine Your Approach to Technology and Innovation

Schedule a call to discover how customized solutions crafted for your success can drive exceptional outcomes, with Ferroque as your strategic ally.