25 Reasons Why Monolithic Commerce Platforms Are Obsolete


At the enterprise level, Cloud Native and specifically Microservices are now clearly the default approach to building new commerce applications or extending existing ones. But what's so bad about the old, monolithic approach to building commerce applications? Having spent my career (and many long nights) architecting, building, deploying and troubleshooting monolithic applications, I would like to share some observations on why the old approach has outlived its usefulness.

  1. Releases to production occur monthly or even quarterly because every time a change is made to the monolithic application, the entire application and its tightly coupled dependencies (ERP, CRM, WMS, ESB, etc) must also be re-tested and deployed at the same time. A seemingly harmless change to one part of a monolithic application can have disastrous consequences to another part of the application. This makes it nearly impossible to do iterative development. Iterating four times to get a feature right could take a year with a legacy monolithic application whereas it could take a day if you deploy to production four times a day.
  2. It's hard to do agile with a monolithic application because each team is working on exactly one (nearly) indivisible application. All of the developers write code and have to check it in at the same time so that QA can begin their work. Then QA does their job. Then ops takes over to deploy the code to production. Yes, you can branch, but that introduces a lot of extra complexity. Agile works best when you can break apart an application into small, loosely coupled but tightly aligned pieces (a la microservices). Waterfall, whether in name or not, is what nearly always naturally occurs when developing monolithic applications
  3. Monolithic commerce applications, especially larger ones, almost require the use of supporting commercial software, like application servers and databases. A big monolithic application might have 5 or 10 million lines of code, which exercises the supporting software in unique ways. Commercial software can be expensive and harder to deploy than open source. A 10,000 line microservice is unlikely to ever exercise the underlying platform that much, which allows you to use just about anything
  4. Monolithic applications do not scale well to handle extremely heavy traffic - think Super Bowl, World Cup, Olympics, primetime commercials, re-Tweets from world-famous celebrities, etc. With a monolithic application, you have to horizontally and vertically scale a single stack. It often means one datacenter, one database, one network, etc. While you can employ tricks like sharding, using expensive commercial software, etc, you're still left with the fact that every single stack has its limits. For example, monolithic databases cannot scale indefinitely and most databases don't auto-scale at all. Instead, it's best to break up the application into small vertical pieces (microservices) and then scale each of those vertical stacks independently
  5. Individual parts of a monolithic application cannot be scaled independently. Why scale the entire application when just search/browse is getting taxed, for example? 
  6. Auto-scaling a monolithic application is hard to do because it often takes tens of minutes to start up a big monolithic application server and then run a big monolithic application. Loading all of those libraries takes a long time! Smaller microservices can be started in a second or two, allowing you to better react in real-time to customer traffic
  7. Monolithic applications require that everyone use the exact same tech stack, which often leads to the lowest common denominator (Java or .NET, relational database, etc). While the lowest common denominator is often acceptable, there are parts of an application that would significantly benefit from different, more specialized tech stacks. For example, you may want to use Erlang to build the chat part of your application, due to its inbuilt clustering capabilities. Maybe you want to use C++ to write the HTTP request-handling pipeline. Maybe you want to deploy some image processing code to GPU-based compute instances. With a monolithic application, you must standardize on one stack
  8. Packaged monolithic commerce platforms, like Demandware, ATG, WebSphere Commerce, etc, require ownership of all of the commerce data - products, customers, orders, etc. You can't develop orders, for example, on your own. They have to own everything. This makes you dependent on their functionality and their roadmap. It's all or nothing
  9. It is exceedingly difficult to have more than one version of a monolithic application deployed at one time because data (say a shopping cart) can be touched by dozens or even hundreds of different methods within an application. You can't have versions 1.1, 1.2, and 2.0 deployed at the same time, for example, because they would all be trying to read/write the same data but using different versions of code. Not supporting versioning causes all sorts of problems with "real" omnichannel clients where you have dozens of different clients all hitting the same APIs. It means you have to upgrade all of your clients (web, mobile, IoT, POS, etc) at the same time you update your monolithic application. Versioning (or evolvable APIs) is the default with microservices, which allows the dozens of clients to be upgraded independently (see the versioning sketch after this list). Amazon.com has 36 live versions of its product catalog, for example. While there are clearly trade-offs, in 2017 it is absolutely necessary to allow clients to have their own release cycles and not be 100% coupled to the release cycle of the monolithic application on which they depend
  10. It's hard to containerize big monolithic applications because the artifact is a big EAR or WAR file, meant to be deployed to a running application server. Newer, smaller applications are meant to be containerized, where the artifact that's produced via a Dockerfile is simply a container that can be run. Without containers, you can't take advantage of all the new interesting things in the container ecosystem - service discovery, higher infrastructure utilization, organizational primitives (labels in Kubernetes, groups in Marathon, etc). Over the past few years, there have been a few billion dollars of venture capital investment in this space
  11. Monolithic commerce applications naturally attract "state" - shopping cart contents, login status, pages viewed, etc. With one application and an HTTP session object to code to, developers naturally do it. It's easy - it's like putting a Post-it note above your desk when you have a big stack of them. You can't do that with microservices - instances are stateless by nature. State is attached to a customer object and persisted to a cluster-wide data grid (see the stateless-handler sketch after this list). In a containerized world, it's assumed a container could live for just a few seconds. You can't "cheat" and dump the state into an HTTP session object in the application server
  12. You can't really cache HTTP responses for calls into a monolithic application. Every single HTTP request into a monolithic application requires that a unique HTTP response be generated because every response is for the entire page, not just a piece of it. With smaller, more API-based microservices, it's pretty easy to cache responses. HTTP GET /product-service?productId=12345 will always generate the same HTTP response - so why not cache it in an intermediary layer (see the caching sketch after this list)? Doing so saves huge amounts of internal processing when you can cache discrete request/response pairs like that
  13. If a monolithic application is down, the whole application is down. With a more distributed microservice-based commerce platform, you can have individual services fail and the system overall can generally handle the failures gracefully (see the fallback sketch after this list). But monolithic apps are written in a way that they often entirely fail if there's a problem
  14. Security is harder with monolithic applications. Say there's a SQL injection vulnerability on the product page. That allows someone to hit the customer data too because there's only one database behind the scenes. But if someone breaks into the product microservice database, they only have access to the product data and nothing else
  15. Monolithic applications are developed in horizontally-focused teams. There's a development team. There's a QA team. There's an ops team. And so on. The application is similarly tiered. See Conway's Law. While this may build competency within each team, it comes at the expense of communication between teams. Rather than working together, teams end up endlessly ticketing each other to do things. This slows down releases while increasing errors due to the communication overhead
  16. Business flows that require coordination across different systems (say a customer returning a product) lead to top-down, tightly-coupled workflows. This is generally known as orchestration. Orchestration requires that every actor in the workflow be tightly coupled to all of the upstream and downstream systems. This results in brittleness, as a change to one method within the application could impact many different workflows that rely on that function working a certain way. This can have huge, cascading consequences that are unknown by the developer making the change. Instead, it's better to have smaller, discrete APIs, with bottom-up choreography (see the choreography sketch after this list)
  17. Developers writing monolithic applications don't really own anything. Instead, they're simply working on a project. Next week they could work on a different project. There's no ownership. This leads to low employee morale, increases technical debt (Do you take your shoes off at home? Do you take your shoes off in hotel rooms?), disincentivizes long-term investments, and leads to riskier architecture choices (You're not the one to get woken up in the middle of the night when something goes wrong...why do you care?)
  18. Outsourcing development becomes harder because freelancers and system integrators must be part of the development team, with full access to all of your development-related systems and access to all of your intellectual property. With microservices, you can define an API specification and hand it off to a 3rd party to build in isolation. They don't have to know anything about your other systems or even how you build software. They just hand over code that conforms to the API that was defined at the start of the engagement
  19. Monolithic applications are hard to deploy because the applications tend to be large and complicated. There's this configuration parameter and that. Environment variables all over the place. This complexity is exacerbated by the fact that the people who add the complexity (developers) are often not the people who deploy the application (ops). This leads to infrequent deployments, which leads to integration issues, and often availability problems in production. It's far easier when each developer can build their own application in a few seconds and run it locally
  20. Monolithic applications suffer from a homogenization of technology. You typically have one technology used for each layer. For example, many use Java for the middle tier. But every day there's new innovation. If everyone is forced to use the same technology at each layer, there's no real opportunity for organizations to innovate and really try out something. Not tinker but actually run something in production. If a team is competent in Go, why not let them build a new microservice using Go? If it works, it could and should catch on as a go-to programming language. But if everyone is forced to use just Java in perpetuity, there's no room for experimentation
  21. Developer onboarding becomes challenging because monolithic applications become extremely complex over time. It may take five or 10 VMs to set up a development environment. There's a complicated system for logging. There's another complicated system for messaging. Within the application, there's probably a highly customized object relational mapping system for accessing data. With most monolithic applications having millions of lines of code, it's easy for things to get complicated very quickly. Onboarding a developer can take months and cause a lot of frustration for all involved
  22. Monolithic applications tend to be extremely complex on the inside, but expose very little in the way of APIs to the outside world. APIs are often an after-thought because the application itself deals with the business logic. It is exceptionally rare for APIs to be defined first, and then have the code actually implement the nicely defined APIs. This makes it hard for clients to consume the APIs. Most microservices, on the other hand, start out as an API specification, with the implementation adhering to the specification from the very beginning
  23. Because a monolithic application's APIs tend to be relatively sparse, the messaging layer tends to introduce additional business logic to compensate. The multi-billion dollar Enterprise Service Bus industry is built around this concept. Martin Fowler famously advocated "smart endpoints and dumb pipes" for microservices; ESB-centric monolithic architectures are the inverse - dumb endpoints and smart pipes. This additional business logic in the "pipes" basically creates another application that is tightly coupled to the monolithic application. Change the monolithic application and you have to then re-test the Enterprise Service Bus as well
  24. Teams working on monolithic applications are often unable to build competency around solving specific business problems because they are focused on their horizontal layers. DBAs build competency in database administration, for example. Developers are shuttled from project to project within the monolithic application. They're not owners. If you build small vertical teams of owners, each team can own a single business function and develop competency in that one area. For example, a team building out inventory for a telco will develop deep expertise on the business requirements for inventory after having seen the past few iPhone launches. In addition to business logic, this extends down to technology. Ops supporting an inventory microservice, for example, have deep expertise around the locking strategies of various datastores
  25. Refactoring a monolithic application becomes challenging because refactoring one part of the application could have very negative unintended consequences elsewhere in the application. As a developer, you often have no idea who else is relying on your code or why. It all becomes spaghetti code over time. Refactoring monolithic applications becomes nearly impossible over time as complexity increases
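
To make the versioning point in item 9 concrete, here's a minimal Go sketch. The routes, payload shapes, and SKUs are illustrative assumptions, not taken from any particular platform; the point is simply that two API versions of the same resource can run side by side so each client migrates on its own schedule:

```go
// Hypothetical cart service exposing two API versions side by side,
// so web, mobile, POS, and IoT clients can upgrade independently.
package main

import (
	"encoding/json"
	"net/http"
)

// v1 shape: a flat list of SKUs.
type cartV1 struct {
	Skus []string `json:"skus"`
}

// v2 shape: line items with quantities (an additive, evolvable change).
type lineItem struct {
	Sku string `json:"sku"`
	Qty int    `json:"qty"`
}

type cartV2 struct {
	Items []lineItem `json:"items"`
}

func main() {
	// v1 keeps serving older clients unchanged.
	http.HandleFunc("/v1/carts/", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(cartV1{Skus: []string{"12345"}})
	})

	// v2 adds quantities; newer clients opt in whenever they're ready.
	http.HandleFunc("/v2/carts/", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(cartV2{Items: []lineItem{{Sku: "12345", Qty: 1}}})
	})

	http.ListenAndServe(":8080", nil)
}
```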
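
For item 11, a minimal sketch of what "stateless by nature" looks like in practice. The in-memory store below is only a stand-in for a shared data grid or key-value store (names and routes are made up); the handler itself keeps nothing between requests, so any instance can serve any customer:

```go
// Stateless cart handler: all state lives in an external store keyed by
// customer ID, never in an in-process HTTP session object.
package main

import (
	"encoding/json"
	"net/http"
	"sync"
)

// CartStore is the only place cart state is allowed to live.
type CartStore interface {
	Get(customerID string) []string
	Add(customerID, sku string)
}

// memoryStore stands in for a cluster-wide data grid or shared key-value store.
type memoryStore struct {
	mu    sync.Mutex
	carts map[string][]string
}

func (m *memoryStore) Get(id string) []string {
	m.mu.Lock()
	defer m.mu.Unlock()
	return m.carts[id]
}

func (m *memoryStore) Add(id, sku string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.carts[id] = append(m.carts[id], sku)
}

func main() {
	store := &memoryStore{carts: map[string][]string{}}

	// The handler holds nothing between requests; kill this container and
	// another instance picks up with the same externalized state.
	http.HandleFunc("/cart", func(w http.ResponseWriter, r *http.Request) {
		customer := r.URL.Query().Get("customerId")
		if sku := r.URL.Query().Get("addSku"); sku != "" {
			store.Add(customer, sku)
		}
		json.NewEncoder(w).Encode(store.Get(customer))
	})
	http.ListenAndServe(":8081", nil)
}
```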
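
For item 12, a sketch of a cacheable product endpoint. The route and JSON payload are hypothetical; what matters is the Cache-Control header, which lets a CDN or reverse proxy absorb repeat GETs for the same productId without the service doing any work:

```go
// Hypothetical product service: the response for a given productId is
// deterministic, so intermediaries are told they may cache it.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/product-service", func(w http.ResponseWriter, r *http.Request) {
		productID := r.URL.Query().Get("productId")

		// Same productId, same response: safe to cache at an intermediary
		// layer. Tune max-age to how often catalog data actually changes.
		w.Header().Set("Cache-Control", "public, max-age=600")
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprintf(w, `{"productId":%q,"name":"Example product"}`, productID)
	})
	http.ListenAndServe(":8082", nil)
}
```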
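
For item 13, a sketch of graceful degradation. The recommendation-service URL is a placeholder for whatever service discovery would resolve; the product page calls it with a short timeout and simply renders without recommendations if it's slow or down, rather than failing the whole page:

```go
// Graceful degradation: a failing downstream service degrades one feature
// instead of taking the whole experience down.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// Short timeout so a slow dependency can't stall the page.
var recsClient = &http.Client{Timeout: 250 * time.Millisecond}

func recommendations(productID string) []string {
	resp, err := recsClient.Get("http://recommendation-service/recs?productId=" + productID)
	if err != nil {
		return nil // downstream failure: degrade, don't die
	}
	defer resp.Body.Close()

	var recs []string
	if json.NewDecoder(resp.Body).Decode(&recs) != nil {
		return nil
	}
	return recs
}

func main() {
	http.HandleFunc("/product-page", func(w http.ResponseWriter, r *http.Request) {
		id := r.URL.Query().Get("productId")
		recs := recommendations(id) // may be empty; the page still renders
		fmt.Fprintf(w, "product %s, %d recommendations shown\n", id, len(recs))
	})
	http.ListenAndServe(":8083", nil)
}
```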
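
For item 16, a toy sketch of choreography. The in-memory bus stands in for a real message broker, and the event and service names are invented for illustration: the returns service publishes the fact that a product came back and has no knowledge of which services react, so adding a new consumer touches no existing code:

```go
// Choreography: publish an event, let interested services react on their own,
// instead of a top-down orchestrator calling every system in a fixed order.
package main

import "fmt"

type ProductReturned struct {
	OrderID string
	Sku     string
}

// bus is an in-memory stand-in for a message broker.
type bus struct {
	subscribers []func(ProductReturned)
}

func (b *bus) Subscribe(fn func(ProductReturned)) {
	b.subscribers = append(b.subscribers, fn)
}

func (b *bus) Publish(e ProductReturned) {
	for _, fn := range b.subscribers {
		fn(e)
	}
}

func main() {
	b := &bus{}

	// Each of these would be its own microservice listening on the broker.
	b.Subscribe(func(e ProductReturned) { fmt.Println("inventory: restock", e.Sku) })
	b.Subscribe(func(e ProductReturned) { fmt.Println("payments: refund order", e.OrderID) })
	b.Subscribe(func(e ProductReturned) { fmt.Println("email: confirm return for", e.OrderID) })

	// The returns service only publishes the fact; no workflow engine
	// coordinates the downstream systems.
	b.Publish(ProductReturned{OrderID: "A-1001", Sku: "12345"})
}
```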

This is not to say that Cloud Native and Microservices are perfect. But clearly there are problems with monolithic applications that cannot be ignored, given all of the exciting alternatives on the market.

I have spent the past few years researching and building alternatives to the antiquated world of commerce platforms as we've known them. Take a look at the other work I've posted to LinkedIn, as well as my new book: Microservices for Modern Commerce (O'Reilly 2016)

Good article Kelly. However, I'm not sure what it has to do with commerce platforms. What you've written is true for any platform today. An article focused on Microservices for Commerce would be great, or I could just read your new book :-)

Clemens Utschig-Utschig, MBA

Head of IT Technology Strategy / CTO at Boehringer Ingelheim | ex-Oracle

7y

Rename to 'why monolithic apps can't be as customer centric as ones based on microservices'. It reminds me of the vision we had with SOA as a paradigm. The key piece however is governance and true container architecture, otherwise you just build a new fat app without recognizing it - because of all the interlinks between those microservices :-)

Alexandra Huff

High Tech and Aerospace Consulting

7y

Great article Kelly. I was going to say "you should write a book" and now I see that you have already. I am buying it now, and will borrow some of your thoughts from this excellent piece in upcoming events and trainings! With your permission and due credit given, of course!

Tommy Tynjä

Senior Engineering Manager at Spotify

7y

A good summary! I've been working with microservices for the past few years, and I thought I should add a few thoughts. I think that even a 10,000 lines of code service is too large, which is the size of the biggest service we have in our team. It does too much. Our other services typically tend to be around 1,000 lines. We strive for our services to do just one thing and to do it well. They should be small enough so they're easy to reason about. The best services are those that we develop once and don't need to touch ever again (until decommissioned). Auto-scaling of a small service might be faster than of a bigger application, but if you for instance run VMs on AWS, auto-scaling still takes time as VM creation and bootstrapping is slow. If you need instant scaling you might need to consider other strategies as well. Regardless of whether you're developing a monolith or a small service, stable APIs are key. You do not change existing APIs. In both cases supporting several APIs is possible, but one has to pay attention to both backward and forward compatibility. The beauty of microservices is that if you already have version 1.x, you're probably better off developing 2.0 as a completely new service than to bundle them together. Going for microservices requires more maturity in the software development process. Things like logging frameworks and monitoring should be centralized and in place, and creating new services should require minimal effort in terms of setup cost. I would also argue that having fully automated continuous delivery pipelines in place is vital for succeeding with a microservice-style architecture, together with a great engineering culture. Thanks for a good read!

Nunzio Esposito

value creator \ chief Design officer \ professor

7y

This is an excellent article and covers a lot of the questions my design team is asking itself on a daily basis, especially given the flexibility from an end-user experience standpoint that we are designing for. Sharing this out today!
