Application rationalization is a crucial aspect of the business and IT lifecycle, and a topic we have covered previously.
Ferroque Systems has teamed up with Stephen O’Grady, a trusted colleague of our team, application rationalization expert, and IT management veteran from the UK, for a series of guest posts. These articles deal with application rationalization and its importance to the modern organization.
In the previous post (Part 2) in this series, we delved into some of the common causes of application fragmentation and the reasons it so often goes unaddressed. In this post, we will review the fragmentation pitfalls often brought about by reorganizations and mergers and acquisitions (M&As).
The Consequences of Reorganizations and Mergers & Acquisitions
Once IT found itself in the new world of fragmented departmental applications, the added fun of reorganizations and of mergers & acquisitions began to play a part.
There was, and is, always a conundrum when a business is reorganized, or a new business is added to an existing enterprise: should you force this new element to migrate all of its applications onto existing platforms within its new business, or should you leave it intact and interface with it as if it were a subsidiary organization?
I have seen it done both ways, and in the first instance, it tends to end in tears and resignations as the new business is stripped of its identity and forced into the new corporate monolith. This is not just down to merging IT systems, but it definitely plays a major part. In the second case, you end up wondering what “merger” is supposed to mean, as the new business simply carries on as before, and adopts a sort of parallel existence with the enterprise of which it is supposed to be a part.
Neither way works!
The “hands-off” approach to reorganizations and mergers can introduce additional challenges. If you will permit me a brief anecdote at this point, there was one instance I observed where the decision was initially made not to force the acquired business into the existing enterprise; instead, they were left to run their own systems. Ignoring the costs involved in double-licensing practically everything and maintaining parallel sets of hardware, this introduced other challenges that proved hard to overcome.
For example, a serious issue arose over something as mundane as serial numbers: those used by the new business were too long to be accommodated by the existing enterprise systems, which would be needed to provide support once products had been sold. When analyzed, the extra length turned out to be because each serial number embedded the number of the factory at which the device was produced. Only one factory was involved in the acquisition, so this was redundant data, but nonetheless they insisted that it had to be retained. The other factories, of course, remained with the original company and were at this point completely unrelated.
Unfortunately, within the new parent organization, there had been another acquisition in the recent past that used a “hands-on” approach. This other business had been forced into the corporate straitjacket, causing massive fallout as people left in droves, and the new business declined until it was almost worthless.
For this reason, it was perceived as better to treat this later acquisition with kid gloves and leave it almost intact as a standalone unit, which had the opposite, but almost equally damaging, effect. The business plowed on in an almost ungovernable fashion, and from an IT perspective, it effectively meant doubling costs for everything. This second merger was less of a failure than the first, but it was nonetheless far from an optimal way of doing things.
So is there a better way?
There is, and it is relatively simple: there needs to be a recognition that for the wider business to operate efficiently, and to keep bottom-line costs under control, there is a minimum level of integration required in a reorganization or merger.
There is a compelling need to rationalize onto a core set of applications that can operate across the entire organization, providing supportable uniform functionality at the most achievable price.
This does not mean that the new business has to be forced into the new organization in every respect. As part of the change, the duplications and overlaps need to be considered, and the most pragmatic solution needs to be found for each. This may mean retaining some legacy functionality that fulfills a specific purpose, where the acquiring business’s apparently similar functionality is in fact sufficiently different. It may even mean adopting applications from the newly acquired element across the legacy business, if what is being brought to the party is better than what they had.
The key point is that this needs to be analyzed and planned as part of the change, not after the event, and sensible, fact-based decisions need to be made early on.
Rationalization during migration
It is often viewed as a bad idea to rationalize applications during migration, for example when transferring support services between departments or suppliers, or when bringing together two different businesses or parts of a business.
It can seem more comfortable to lift-and-shift applications as-is, because then the investment in the legacy application is not lost, and the knowledge transfer appears (on the face of it) to be easier. Migrations often have time constraints, so there is pressure to complete the migration as quickly as possible (i.e. as-is) and worry about “cleanup” after the new system is in place; unfortunately, more often than not (if I’m being generous), the cleanup never happens.
I would argue that lift-and-shift is not always easier. Rather than lifting and shifting legacy applications as-is, a migration presents a clear opportunity to move to a current platform and to rationalize to a slimmer set of applications.
To look at a very simple example, if migrating multiple departments onto a new, single platform, it may be that there are numerous identical utilities in use that do exactly the same job. This is, therefore, the ideal time to come down to a single instance. I have seen this in several very different environments for, of all things, FTP clients. In each case, there had been a culture of different parts of each of these businesses selecting their own FTP client, sometimes in the face of a corporate standard that they either chose to ignore or of which they were unaware.
Come migration time, arguments were had about why they needed this specific FTP client, which always came down simply to a matter of comfort, as they all did EXACTLY the same thing. In these cases, corporate IT was able to prevail as it was such a simple argument with an obvious saving.
Yet in other cases, it was not seen to be so simple. In another example, there were numerous legacy finance systems, mainly multiple instances of the same application but run separately for different parts of the business. The choices were lift-and-shift, or a rolling migration to a single, new, up-to-date platform. The lift-and-shift was the obvious choice for the business, as it felt like easier knowledge transfer because the departing staff could do that before they left and transitioned to the new support service.
However, as is so often the case, this ignored the human factor: people who are losing their jobs may not be overly co-operative, and there is little that can be done to make them so. Ignoring that human factor (just as the business did), there is, to my way of thinking, a logical fallacy here.
The two options were:
- Lift-and-shift everything as-is to a new set of users, who will have to learn all of this from scratch, and will then have to contend indefinitely with a disjointed set of out-of-support legacy applications.
- Migrate each of these legacy applications to a single, properly maintained, up-to-date platform, which the new set of users will have to learn from scratch.
You will see that the common theme here is that the new users would have to learn it all either way. The lift-and-shift preserves all of the legacy issues, whereas the rolling application rationalization onto a single modern platform may be a bit harder for users to pick up, due to the supposed absence of subject matter experts to assist (who may well not assist anyway, as they are losing their jobs), but will be far easier going forward. The only real effort in this example is therefore in the data migration, as there are plenty of users of the new platform already who will be able to assist the incomers with knowledge transfer.
Of course, it would probably not have been quite that straightforward in reality, but it would certainly have been simpler to move to a new platform with a few teething issues than preserving the legacy at all costs.
So, rationalization during migration should not be seen as a no-no. As long as it is properly planned and executed, and sufficient time and resources are allowed, it can be a very effective way of streamlining the number of applications. To do so, an inventory is required not just of applications but also:
- The requirements each application meets.
- Each application’s full set of functionality (even if some of it is not currently in use).
- A complete list of all users (never done, and always an issue after the event, as both users and functional requirements will be missed).
- The business functions the users fulfill (i.e. user roles and responsibilities).
- The application support arrangements.
- The application licensing arrangements, and what each license covers in terms of who can use it, where, and how.
- The platform upon which the application resides (and which platforms each license covers).
- Any plans for the application’s future.
- Application support/management needs, such as update frequency.
This then needs to be considered at the enterprise level, to assess what overall application coverage looks like, and where there are gaps, overlaps, or duplications. Some of the overlaps and duplications may be valid, such as where there are genuine special requirements, but each should be considered so that an appropriate level of application convergence can be achieved whilst allowing for unique requirements.
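As a rough sketch only (the record type, field names, and helper below are illustrative, not a real tool or the author’s method), the inventory above can be captured as a simple structured record per application, with duplications surfaced by grouping applications on the requirements they meet:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ApplicationRecord:
    """One inventory entry; fields mirror the checklist above."""
    name: str
    requirements_met: set[str]   # business requirements the app satisfies
    functionality: set[str]      # full feature set, even if some is unused
    users: set[str]              # complete user list
    user_roles: set[str]         # business functions the users fulfill
    support_arrangement: str     # who supports it, and how
    licensing: str               # what the license covers: who, where, how
    platform: str                # where it runs (and which platforms are licensed)
    future_plans: str = "none recorded"
    update_frequency: str = "unknown"

def find_overlaps(inventory: list[ApplicationRecord]) -> dict[str, list[str]]:
    """Map each requirement to the applications meeting it; any requirement
    met by more than one application is a candidate for convergence."""
    by_requirement: dict[str, list[str]] = defaultdict(list)
    for app in inventory:
        for req in app.requirements_met:
            by_requirement[req].append(app.name)
    return {req: apps for req, apps in by_requirement.items() if len(apps) > 1}
```

The enterprise-level review then becomes a walk through `find_overlaps`’ output, deciding for each duplicated requirement whether the overlap reflects a genuine special requirement or a candidate for consolidation.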
So where does that leave us?
You will see from what I have said above (and previously) that mergers and acquisitions, business reorganizations, and migrations share common themes, and once again the key is communication.
The business needs to express its requirements, and of course, have the final say, but IT must be allowed to define the optimal approach in technology terms and must push back when it is suggested that a suboptimal path be followed.
Ensuring that someone is in place in the middle, to act as the go-between, who understands both business and IT, and can act as the arbiter when difficult decisions need to be made, is crucial to the success of any of these kinds of projects (or even to ANY IT project, but that is a separate theme).
It’s not as though the tools don’t exist to support all of this, nor that these issues haven’t been seen before, but I shall leave you to ponder that a common product of an IT project is the “Lessons Learned” document; I have frequently seen the lessons fully documented and carefully filed, but far less frequently do they seem to have been learned…
Stephen O’Grady is a UK-based IT professional with over three decades’ experience across numerous market sectors and geographies, and has variously been a business change specialist, a technical architecture consultant, and now a programme manager in business and technology change. He has a fine appreciation of the foibles of the IT market and the way that businesses use and abuse IT, and some clear thoughts as to how they could do it better if only anyone would listen.