Being Disrupted Can Be Good For You!

The word “disrupt” typically carries a lot of negative connotations. Dictionary definitions include “to break apart”, “to throw into disorder”, “to interrupt the normal course or unity of”, or “to cause upheaval”. In business, disruption puts us into a reactive mode, and often forces us to abandon plans and chart new courses (unless, of course, we’re the upstart who is disrupting an industry). Change can be painful and unpleasant, especially if that change is forced upon us by circumstances outside of our control.

But like so many things in life, disruption isn’t innately bad or good; our impression of it depends on how we look at it. We can grumble about how unfair fate is, or we can view it as an opportunity for change and growth that might never happen without the impetus of a crisis. (See my related article Every Problem is an Opportunity.)

In this article, I share a story of a major disruption that challenged my software team, and how confronting it made us much stronger and better positioned for continued success.

 

Onset of the Storm

The year was 2004, and my software team was in a pretty good position. Our chromatography data system product was fast becoming a market favorite in laboratories, and our market share was growing. We had overcome the challenges of merging software from divisions in California and Germany, and we had released important capabilities that competitors had not matched.


Then we got news that would shake our foundations: Microsoft announced that it would be discontinuing support for the MFC/C++ development environment on which our software had been built, and putting all its effort into its new .NET Framework.

Choosing a path forward would not be easy. We had a code base of some 6 million lines, created by about 40 developers over roughly 15 years, some of it rather fragile and insufficiently documented. We had many thousands of customers who had invested lots of money and time into our software and would demand ongoing support. Migrating our software to another platform would be a huge undertaking that, in and of itself, would not deliver any new value to customers.

 

Decisions, Decisions…


In essence, we were faced with 3 unappealing choices:

  • Keep the existing code base as it is, on the MFC/C++ platform
  • Migrate the code to the .NET Framework
  • Migrate the code to a different platform

Given that our customers were all running Windows, that all our team’s expertise was in Windows-based software, and that .NET was going to be the most relevant and best-supported platform for Windows applications going forward, we quickly ruled out the possibility of an alternate platform.

We also came up with compelling reasons that we couldn’t stay on MFC/C++:

  • Microsoft would be discontinuing support of the platform, so we’d no longer get any new capabilities, and we’d be on our own to deal with any future problems we encountered with it.
  • Tools and libraries for the platform would become increasingly scarce.
  • Competitors using .NET would likely outpace us in new feature development.
  • Customers would eventually require us to use the new platform (even though the platform wasn’t directly visible to end users, their IT departments often imposed requirements for operating systems, development platforms, etc.).
  • We would be unable to hire top developers, because they want to work with the latest tools and technologies, not dying platforms.

It was clear that we needed to migrate to .NET, which would be a monumental undertaking that, in our initial estimation, would require some 3-5 years for our team to accomplish. Adding to that challenge, we wouldn’t be able to keep up our regular cadence of major releases, which had reliably come out on 12-18 month cycles for many years. There was a risk that customers would develop concerns about our commitment to supporting our software.

 

Getting Management Buy-In

Naturally, corporate managers were not enthusiastic about the idea of investing millions into rebuilding our software if doing so would not generate substantial incremental business. They accepted our arguments about why the migration was necessary, but they challenged us to get the job done in 3 years rather than 4 or 5, and also insisted that we deliver tangible, measurable incremental customer value.


We pointed out that we could not be sure that we would be able to meet the timeline target, given the immaturity of the .NET platform, the huge number of unknowns we would encounter as we started working with it, the unknown ways in which our existing code base might break, and other factors. Nevertheless, we agreed to aim for the 3-year target, and to ensure that the new software would deliver substantial incremental value.

We also proposed a way to continue releasing new capabilities while we built out the new software, without pulling most of our software developers away from that effort. We would continue to release Service Packs with new device drivers and a few minor enhancements, and Product Management would release Application Packs containing pre-defined configurations that would enable customers to get common applications up and running more quickly.

 

Outlining Strategies to Attack the Problem

Our first task was to take inventory and start prioritizing. In Product Management, we reviewed documents like design specifications, functional specifications, and user guides, and built a list of capabilities (with hundreds of items) that would need to be preserved. We also added new feature requirements from drafts of development projects we had been planning before we found out about the impending discontinuance of MFC/C++.

I defined a simple 3-level scheme for prioritizing the items in our feature inventory:

  • Priority 1: Software won’t be shipped without it
  • Priority 3: Software won’t be shipped with it in this development cycle
  • Priority 2: Item will be reclassified as Priority 1 or Priority 3 as project progresses

Designating some items as Priority 2 gave us flexibility to adapt, based on how well our development efforts were going.


At the same time, Software Engineering assessed the code base, and designated how various code modules would be handled:

  • Wrap: Leave the existing code module as it is, and wrap it in a shell that allows it to operate on the new platform
  • Migrate: Port the existing code to the new platform
  • Rebuild: Discard the existing code, and re-create the functionality with clean new code on the new platform
  • Abandon: Discard the existing code and its functionality altogether

Of course, there was little functionality that could be abandoned, and the Wrap strategy made sense only for certain modules, so most of the code base came down to a choice between Migrate (if the code was reasonably clean and tight) and Rebuild (if it wasn’t).
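
To make the Wrap strategy concrete, here is a minimal, hypothetical sketch in C++/CLI (compiled with /clr). It is not our actual product code; the class and method names are invented for illustration. A thin managed wrapper exposes an unchanged native C++ module to code running on the new platform:

#include "LegacyIntegrator.h"   // hypothetical header for an existing native (MFC-era) class

public ref class IntegratorWrapper
{
public:
    IntegratorWrapper()  { _native = new LegacyIntegrator(); }
    ~IntegratorWrapper() { this->!IntegratorWrapper(); }          // destructor: release native memory deterministically
    !IntegratorWrapper() { delete _native; _native = nullptr; }   // finalizer: safety net if the destructor is never called

    // Forward each call into the proven native implementation.
    double ComputePeakArea(double startTime, double endTime)
    {
        return _native->ComputePeakArea(startTime, endTime);
    }

private:
    LegacyIntegrator* _native;   // owned instance of the unchanged legacy code
};

The existing module keeps doing the real work; only the thin wrapper is new, which is why Wrap was attractive for modules we did not want to touch.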

 

Managing Rapidly Changing Information

One of the biggest challenges we faced was that the .NET Framework was neither mature nor stable. It promised to make application development faster and easier, because you could take advantage of libraries to perform common tasks like display management, data exchange, security, inter-machine communication, and so forth. By making library calls, you could implement functionality using just a few lines of code, rather than the hundreds you’d need if you weren’t using libraries.

The problem was that, for any given type of functionality you wanted to implement, .NET could offer multiple libraries that could do it, and figuring out which approach was optimal required research and testing. Furthermore, new functionality was being released frequently, so the best option now might not be the best option a few months later.
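
As a simple illustration of the “few lines instead of hundreds” point (again a hypothetical sketch, not code from our product), fetching a document over HTTP takes a single framework call, where an MFC/C++ implementation typically needed dozens of lines of hand-written connection, buffering, and cleanup code:

// Illustrative C++/CLI example (compiled with /clr); the URL is a placeholder.
#using <System.dll>

using namespace System;
using namespace System::Net;

int main()
{
    WebClient^ client = gcnew WebClient();

    // One call handles the connection, the request, and decoding the response.
    // (A lower-level alternative, HttpWebRequest, offers finer control; exactly
    // the kind of "which library should we use?" choice described above.)
    String^ page = client->DownloadString("http://example.com/status");

    Console::WriteLine(page);
    return 0;
}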

To stay on top of the rapidly-evolving information about .NET, we created a Wiki that enabled software engineers to easily organize and share knowledge. The Wiki also became a useful tool for organizing other project-related information, such as early feedback we gathered on prototypes and alpha releases.

 

Preventing the Problems of Big-Bang Integration


With so much functionality to implement, we needed a way to divide the work into manageable pieces. We started by mapping out an architecture, defining major functional components and how they would need to communicate with each other.

Rather than follow old practices of having individual software developers thoroughly build out functionality of separate components before combining the components—which typically resulted in multiple compatibility problems when the pieces were integrated together—we started by creating a vertical ‘slice’: a very lightweight, feature-sparse skeleton of the software that had end-to-end functionality. Our developers then built up the functionality to put ‘flesh on the skeleton’, with all checked-in code immediately integrated. Any incompatibilities were immediately visible, and the appropriate developer got notified that their work had ‘broken the build’.
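
To illustrate what the initial skeleton looked like conceptually (a simplified, hypothetical sketch in plain C++; the component names are invented, and the real slice was far more involved), each major component existed from day one but did only the bare minimum, so the end-to-end path could be built, integrated, and exercised immediately:

#include <iostream>
#include <vector>

// Stand-in for the instrument-control component: returns canned data for now.
struct AcquisitionStub {
    std::vector<double> Acquire() const { return {0.0, 1.2, 0.7, 0.3}; }
};

// Stand-in for the data-processing component: trivial "analysis" for now.
struct ProcessingStub {
    double PeakHeight(const std::vector<double>& signal) const {
        double best = 0.0;
        for (double v : signal) {
            if (v > best) best = v;
        }
        return best;
    }
};

// Stand-in for the reporting component.
struct ReportingStub {
    void Print(double peak) const { std::cout << "Peak height: " << peak << "\n"; }
};

int main() {
    // The whole acquire -> process -> report chain runs end to end, even though
    // each piece is trivial; real functionality is then layered on, with every
    // check-in integrated and built immediately.
    AcquisitionStub acquisition;
    ProcessingStub  processing;
    ReportingStub   reporting;

    reporting.Print(processing.PeakHeight(acquisition.Acquire()));
    return 0;
}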


The ‘slice’ approach enabled us to put working software into user hands early, and gather feedback that we needed to optimize the user interface and ensure that we were satisfying user needs. Though the users couldn’t do much with the software at first, it wasn’t too many months before they were able to use the new software for their regular work.

Rethinking the User Interface


Since we were being forced to rebuild our software’s foundations, we took the opportunity to re-assess the fundamental structure of the user interface (UI). The existing UI consisted of a monolithic Windows program with menus and toolbars, and a Multiple Document Interface (i.e., users could open many child windows within the main parent window). Many users had found the software somewhat overwhelming; it offered over 100 menu commands at the top level, and no clear path for navigating through the software to complete desired tasks. Sometimes users were unaware of how many child windows they had opened, and had no idea why they experienced declines in performance.


Our challenge was that we needed to provide fast, easy access to a lot of functionality and options without overwhelming users. As it turned out, Microsoft had faced similar issues with its Office suite, and with its Office 2007 release it introduced a new solution called the Ribbon: a special type of toolbar near the top of the window that provides contextual access to functionality. By adopting a similar design, we could address our complexity issues while giving customers a user interface whose behavior would be familiar to Office 2007 users (which most of our customers would eventually be).


By itself, the Ribbon design would not solve our UI issues; we still needed to provide an easy-to-understand solution for navigating a laboratory’s instruments and data. After much deliberation and exploration, we decided to retire the monolithic model of our existing software, and implement the new UI as two main components: a single Console window for navigation, and one or more Studio window instances (launched from the Console) for analyzing data. The Console would provide an always-available central navigation point, and Studio instances could be opened and closed as appropriate, without getting lost in the background as the old child windows could. Both the Console and the Studio were designed with Category Bars in the lower left, which provided easy context switching for focusing on different tasks.

Users who were accustomed to the old design took some time to get used to the new UI, but after working with it for a while, they reported that it made more sense and was easier to use. New users who had never seen either design quickly learned how to use the software, so we were confident that redesigning the UI was the right decision.


Improving Software Development Practices

While switching to a new development environment and new development tools (such as a central source code repository, a build server, and an automated code compliance checker), our team also took the opportunity to adopt some industry best practices, such as automated unit tests and formal code review sessions. The additional discipline of these practices produced a notable reduction in defects, and also improved the readability and maintainability of the code base, allowing modifications to be made more quickly and with less risk.


Another practice we instituted was cross-departmental software design reviews, attended by representatives from Product Management, R&D, and Customer Service. Getting feedback and input well before the software was finished enabled the team to resolve non-obvious issues before the software was distributed for beta testing.

One more improvement we made was in how we managed defects found during testing of an impending major release. Every code change carries a risk of breaking something that works, so the very process of fixing defects can introduce new ones. Thus, when code was frozen and tested, we needed to be judicious about which newly discovered defects made sense to fix.

We defined a series of tests of critical functionality; any failure of these tests would represent a Class A defect that would require remedy and retesting before the software could be released. Tests of secondary functionality would also be performed, but the corresponding Class B defects would be fixed only if discovered before final builds, and would otherwise have to wait for the next development cycle to be resolved. Other minor defects, designated Class C, might or might not be fixed during a development cycle, depending on their impact and on the availability of developer time; Class C defects would never hold back a release.

And finally, we implemented formal Project Retrospective meetings to note what went well, what problems came up, and how we could improve our practices to increase our chances of success in future endeavors.

 

Outcomes of the Project

By any measure, our replatforming project was a resounding success. We released the new software to the market in 3 years, with a completely refreshed look, a much better user interface, and several major new time-saving capabilities. The improvements were appreciated by customers, most of whom upgraded to the new software within a couple of years. We filed 4 patent applications (see related article Patent Awarded for MiniPlots), and greatly strengthened our position against entrenched competitors.


This is not to suggest that the project was easy, or that everything went smoothly. We had to work through many design challenges, and we frequently had to adjust our priorities based on progress toward our goals. We had to overcome setbacks, manage expectations, and keep management, employees, and customers believing that we were doing the right things and that we would succeed. We also had to resolve conflicts between different groups in the greater software team, and keep everyone aligned.

Ultimately, the payoff was worth all the effort. We ended up with a much better (and more maintainable) product, and a much better set of practices around prioritization, estimation, development, and testing. Within a few years, we became widely recognized as the market leader, with software far superior to any other in the industry, and our software became an even more important means of establishing a beachhead in laboratories that had not considered buying our instrumentation.

Our competitors noticed, too. One ended up buying our whole 1200-person, $500 M company at 20% over its market value, citing the software as one of the 3 main motives for making the premium purchase.

 

Summing Up: Disruption Can Be a Good Thing

Disruption can be scary, intimidating, and painful, but it can also present great opportunities to implement changes that can put you in a better position than you were before the disruption occurred.

When your status quo gets disrupted, consider how you might do more than merely survive the disruption. Find ways to use the disruption as a force to make change for the better—you’ll seldom find a more powerful one!

 

Jim Schibler leads product management teams that deliver software experiences customers love, and he coaches professionals on job search and career management. He writes on a broad range of topics; see more of his articles at his website.

Copyright © 2021 Jim Schibler — All rights reserved

