API Management Possible Scaling Problems
I was looking into Apigee and I wanted to get some feedback from people who have looked into this as well.
A major objection I found to going with Apigee is that the actual API payload needs to run through their infrastructure, which is why the install tends to grow and why it tends to have difficulty scaling. It therefore works well for a POC and in early projects, but as API velocity increases the issues become more prevalent. By that point there has usually been a heavy investment (capital and time) in the solution, which makes it more difficult to change course.
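To make that concrete, here is a rough back-of-envelope comparison of a full-payload proxy versus a policy-check-only gateway. All numbers are illustrative assumptions, not Apigee measurements:

```python
# Back-of-envelope: bandwidth a gateway must handle when every request
# body flows through it, versus checking policy on headers/metadata only.
# The request rate and sizes below are made-up illustrative figures.
requests_per_sec = 10_000
payload_bytes = 100 * 1024        # assume a 100 KB average body
header_bytes = 2 * 1024           # assume ~2 KB of headers/metadata

full_proxy_gbps = requests_per_sec * payload_bytes * 8 / 1e9
policy_only_gbps = requests_per_sec * header_bytes * 8 / 1e9

print(f"full-payload proxy: {full_proxy_gbps:.1f} Gbps")   # 8.2 Gbps
print(f"policy-check only:  {policy_only_gbps:.2f} Gbps")  # 0.16 Gbps
```

The point is only that gateway load scales with payload size in the first model and stays roughly flat in the second, which is why topology growth tracks traffic growth.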
Here is an example of the installation topologies recommended by Apigee.
Based on these recommendations, this does not appear to scale well without continued significant investment. The key component in the Apigee topology is the message router, and it never gets beyond two in a single cluster. Notice that the 12-host install option spans two datacenters, which means those four router nodes are not clustered together. It looks like the message router is actually active-passive.
What I am interested in knowing is:
Has anyone scaled to more than one active message router in a single cluster?
How are you dealing with traffic when there is potential to overwhelm the server, especially if it is processing the payload?
Is the synchronous communication giving you any problems, since you need to wait until every thread of execution finishes before you get a response? Have you found a workaround for that?
Considering Apigee uses a home-built traffic manager, has anyone run an analysis against something like NGINX as a baseline (since it is proven at scale) to see the performance differences as scaling is required?
What has your experience been when comparing it to other API management solutions like 3scale, which check policy and do not require the full payload to flow through their infrastructure?
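For the NGINX baseline question, a thin reverse proxy with rate limiting is probably enough to generate comparable numbers. A sketch of the kind of config I have in mind; every address, zone name, and limit here is a placeholder, not a recommendation:

```nginx
# Hypothetical baseline: NGINX as a rate-limiting reverse proxy
# in front of the same backends the gateway would front.
limit_req_zone $binary_remote_addr zone=api:10m rate=1000r/s;

upstream backend_api {
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
}

server {
    listen 443 ssl;

    location /api/ {
        limit_req zone=api burst=200 nodelay;
        proxy_pass http://backend_api;
    }
}
```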
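On the synchronous-communication question above: one general workaround (not anything Apigee-specific) is to fan out upstream calls concurrently instead of waiting on each one in turn, so total latency is the slowest call rather than the sum. A minimal Python sketch, with made-up backend names and delays:

```python
import asyncio
import time

async def call_backend(name: str, delay: float) -> str:
    # Stand-in for an upstream service call; the delay is a placeholder.
    await asyncio.sleep(delay)
    return name

async def main():
    start = time.perf_counter()
    # Fan out to three upstreams at once; gather waits for all of them,
    # but the waits overlap instead of adding up.
    results = await asyncio.gather(
        call_backend("a", 0.1),
        call_backend("b", 0.1),
        call_backend("c", 0.1),
    )
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results, f"{elapsed:.2f}s")  # three 0.1 s calls finish in ~0.1 s total
```

Whether a given gateway lets you restructure flows this way is exactly what I am asking about.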
Todd, thanks for sharing!