Legacy Application Rewrites: The Siren Song of IT
When it comes to modernization, momentum is great, but holistic, strategic thinking is better


By Dale Vecchio, mLogica Group V.P. of Marketing and Strategy

It’s not uncommon for organizations to see the solution to their legacy application problems as simply (if not quickly) rewriting the application from scratch, in a modern language on a modern computing platform. External service providers that are in the business of selling billable bodies are keen to push this approach as well.

And why not? The meter starts running when they start to rewrite the application and keeps running right up to the point where the implementation fails! They get paid and you get stuck with the failure! In fact, some of the biggest failures in the history of IT are arguably large-scale rewrites of legacy systems, or even transformations to packaged solutions. But that’s a problem for another article.

Studies have shown that the success rate of application rewrites is relatively low. The Standish Group, a project management research and consulting firm, reports that only about 16% of application rewrites are successful, while the majority (around 53%) are challenged and 31% fail outright. Size matters here: small projects fare better, but large projects succeed only 6% of the time!

Even Gartner, Inc. reports that rewrite projects are unsuccessful a whopping 70% of the time. Another study by CAST, an IT analysis and measurement company, found that around 75% of application rewrites result in cost overruns or delays, while only 25% are successful!

Why are application rewrites so problematic? There’s a long list of stumbling blocks, each of which can affect an organization differently:

1) The complexity of legacy systems

2) The critical need for business process continuity

3) Large volumes of data requiring migration/transformation

4) A lack of documentation of existing systems

5) Open-ended project costs and timelines

6) Compatibility of business applications with the destination system

7) Smooth adoption of new systems by your legacy workforce

Things Are Never as Easy as They Look on Paper

It’s common for an application development department, or a third-party external service provider, to promote the complete rewrite of the application that supports a legacy business process or government agency service. The logic goes that the legacy system is built on old technology and doesn’t represent current best-practice business processes, so why not write a new one that uses the latest technology and reflects current needs?

These “needs” of course vary by industry. But organizations in every sector have a mandate to adopt new technologies that will fulfill current and future business process requirements.

But how do they achieve that? Why—by writing one from scratch, either alone, or more commonly in conjunction with an external service provider. But what about the existing system? Oh, we can ignore that! We’re building a NEW system with NEW business processes that reflect modern best practices!

Well okay…but that legacy system is doing SOMETHING right now that reflects current and ongoing needs, regulations or third-party integrations. Do we just ignore those existing tasks and functionalities?

One of the reasons so many rewrites fail is precisely because application development organizations or external service providers DO ignore the intricacies of the existing system, then struggle to solve these problems AFTER the new system is written. This introduces even greater risk, complexity and delays to an already-complex project.

All That Changes is Not Code

Internal Resistance

What many organizational leaders need to remember is that their legacy application system doesn’t just represent the technological implementation decisions and business processes of the past. It has also ingrained behaviors in your workforce that are now deeply entrenched!

Your staffers have developed work habits based on how they interact with your technology, and tribal knowledge has accrued around the workarounds created to overcome its inevitable shortcomings. Entire “shadow IT” systems have grown up around those workarounds. All of that has to be assessed and documented so it can be appropriately modified as the new system is developed.

Forget for a moment the risks of getting the new system developed and implemented on time: you STILL have an entire workforce to retrain, and that retraining runs against their ingrained beliefs. “We’ve always done it this way, and that’s what I know, so it MUST be right!”

Hidden Obstacles

Rewriting an application may change the language and the architecture of the implementation, but it also affects the data the legacy system has been using. Not only are the data structures a reflection of past data storage models, they may also rely on a different data encoding. This is particularly true for mainframe-based applications, which use a proprietary character encoding (EBCDIC) that no other modern application platform in use today employs.
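To make the encoding point concrete, here is a minimal sketch, in Python and purely for illustration, of how the same bytes read correctly under a common EBCDIC code page (cp037) but turn to garbage if a migration tool assumes an ASCII-compatible encoding. The record content is hypothetical.

```python
# Minimal illustration: the same four bytes decode very differently
# depending on the encoding assumed. "cp037" is a common EBCDIC code
# page on IBM mainframes; the record content here is hypothetical.
raw_record = bytes([0xC1, 0xC3, 0xD4, 0xC5])  # EBCDIC bytes for "ACME"

as_ebcdic = raw_record.decode("cp037")    # correct read -> "ACME"
as_latin1 = raw_record.decode("latin-1")  # misread      -> "ÁÃÔÅ"

print(as_ebcdic, as_latin1)
```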

Furthermore, the data validation rules may live in the existing applications rather than in the storage system itself. Many relational database management systems (RDBMS) can enforce data formats, but pre-relational DBMSs often cannot, which means those rules are buried in the application code.

Consequently, when migrating data from the legacy system to the new implementation, data transformations can be opaque or even hidden within the code. That leaves you with a data cleansing exercise to resolve these problems, presuming you even know they exist, because they are NOT evident from the data itself.
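As a hedged illustration of what such a cleansing pass might look like, the sketch below re-applies rules that, in this hypothetical scenario, were only ever enforced in the legacy program code: a sentinel “no end date” value, a text date format, and a blanks-mean-default convention. The field names and rules are invented for illustration, not taken from any particular system.

```python
# Hypothetical cleansing pass for rules that lived in legacy application
# code rather than in the database. Field names and rules are invented.
from datetime import datetime

def clean_legacy_row(row: dict) -> tuple[dict, list[str]]:
    issues = []

    # Legacy convention: '99991231' was a sentinel meaning "no end date".
    if row.get("end_date") == "99991231":
        row["end_date"] = None

    # Legacy convention: dates stored as YYYYMMDD text, validated only
    # by the program, never by the pre-relational data store.
    raw = row.get("order_date", "")
    try:
        row["order_date"] = datetime.strptime(raw, "%Y%m%d").date().isoformat()
    except ValueError:
        issues.append(f"unparseable order_date: {raw!r}")

    # Legacy convention: a blank region code meant 'US' by default.
    if not row.get("region", "").strip():
        row["region"] = "US"

    return row, issues

cleaned, problems = clean_legacy_row(
    {"end_date": "99991231", "order_date": "19980214", "region": "  "}
)
print(cleaned, problems)  # a bad order_date would be reported in `problems`
```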

The Keepers of the Tribal Knowledge Are Leaving…or Gone

Many legacy systems have little to no documentation, and what little exists is most likely woefully out of date. So, who knows how the system works? Well, the code knows, and hopefully so do the subject matter experts who have been working on your system for years or even decades. But these people (a) have already retired, (b) are about to retire, or (c) aren’t involved in the rewrite because they may not be familiar with the technologies of the new implementation!

How can any organizational leader, or outside technical provider, expect a successful rewrite project when they don’t understand the data, weren’t privy to the original implementation and have little to no critical documentation of the modifications over the life of the system? It’s frustrating to see the sheer arrogance of external service providers who insist they KNOW how to build your new system because “they’ve done it before” for someone else!

A Business Process is MORE than the Sum of its Parts!

Is it any wonder that cost and time overruns during a modernization can seem inevitable given the above-mentioned challenges? And these overruns can become a “death of a thousand cuts.” No single problem seems life-threatening, but cumulatively they can all add up to project failure!

And the problems of successfully replicating a legacy system by rewrite are not limited to the actual code and data implementation. These systems also require SIGNIFICANT testing time and end-user training. You can’t depend on your existing workforce to test the new system, as they have no idea yet how it works! Moreover, their knowledge is entrenched in decades-old work habits that take time to overcome.

Finally, compatibility and user adoption are two more challenges that work against successful application rewrites. Application rewrites almost always move to dramatically different technology than the legacy system. Sure, applications that are rewritten in COBOL on the mainframe don’t have this problem, but those are few and far between. Even rewriting an application from COBOL to Java ON THE MAINFRAME introduces compatibility issues.

Existing users are not only resistant to this new way of doing business, they may also lack the skills to work in the new environment. Granted, people are becoming more knowledgeable about modern distributed database and internet technology, but these issues can’t simply be ignored.

Application modernization is a continuum. Trying to solve it in one massive rewrite may seem like the right approach, but you need to consider ALL the risks mentioned in this article. It took your organization decades to dig this legacy application hole, and you can’t step out of it quickly and easily.

Because modernization is a continuum, take the time to assess the array of options that support such an evolutionary model. Fortunately, this seemingly monumental process can take place in phases that gradually evolve your application portfolio toward your optimal end state.
