NetScaler Packet Engines
Andrew Scott
NetScaler Pre-Sales Specialist - Trying to make the complex stuff accessible to all.
I have a small lab system, and in the lab I have a copy of NetScaler Console (of course) with a local 20Gbps of pooled capacity to assign to the NetScalers in the lab. I was just looking around on the CLI of a VPX1000 (licensed from the pooled capacity) and happened to run this command. I know, I should get a life, but I was curious... :-)
The output was something of a surprise: I had only two NSPPE (NetScaler Packet Processing Engine) cores plus a management core. It got me thinking:
Who stole my packet engine!
I should have had more cores than that, based on support doc CTX139485 (which now seems to be unavailable). Here are the important bits...
It said three packet engine cores plus a management core for a VPX1000. I don't do performance testing on this NetScaler at the moment, but the VPX does all its traffic processing on its assigned vCPUs, so the number of packet engines matters, particularly for SSL offload performance. Checking in the shell and running 'top' showed me the packet engines at work.
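If you want to reproduce the check, it's only a couple of commands. A quick sketch (nothing beyond a standard VPX assumed; the NSPPE process names can vary slightly between builds):

    > shell                        (drops from the NetScaler CLI into the FreeBSD shell)
    root@ns# top                   (each packet engine appears as an NSPPE-xx process)
    root@ns# ps -ax | grep NSPPE   (or count the packet engine processes directly)

Don't be alarmed if the NSPPE processes sit near 100% CPU; the packet engines poll for work by design.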
In the top output I could see two NSPPE processes, plus CPU0, the management core, which was actually running the top command. So why did I only have two? A VPX1000 should have more.
In this case, the memory was too low: I had 7G allocated, which isn't really enough to let the NetScaler fire up all the cores at boot time. I had some more memory available, so I shut the VPX down and added some. How much should I add? 20G seemed like a good place to start. Maybe more cores would be handy too, so the vCPU count was switched from 6 to 12.
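The lab runs on Hyper-V (more on that below), where that resize is two PowerShell one-liners. This is just a sketch with a hypothetical VM name, run while the VPX is powered off:

    # Hypothetical VM name 'NS-VPX01'; run while the VM is powered off
    Set-VMMemory    -VMName NS-VPX01 -StartupBytes 20GB
    Set-VMProcessor -VMName NS-VPX01 -Count 12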
What did I get?
This surprised me: I now had eight packet engines plus the management core on a VPX1000. Then I remembered that one of the features of pooled licensing is that cores are not actually limited. This eight-core VPX was still limited to 1Gbps of traffic, but I could spread the load across the cores. I could also dynamically allocate more throughput capacity and process more traffic without a reboot.
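As a sketch of that last point (the exact syntax can vary between firmware versions, and the pool obviously needs spare capacity), the allocation can be checked and changed from the CLI without touching the hypervisor:

    > show ns capacity                          (current pooled bandwidth allocation)
    > set ns capacity -unit Gbps -bandwidth 3   (flex this instance up from 1Gbps)

On recent builds the bandwidth change applies without a reboot, which is the whole point here.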
Why is that significant?
Cores are assigned at boot time. You can autoscale based on a template of some sort, but this would let you assign spare cores up front and flex up the running instance instead of adding instances. That could be handy depending on the workload: some are core heavy, some are throughput heavy, and others are a bit of both...
Two things came to mind.
1. Fixed licensing on a VPX.
I pulled out a VPX5000 Premium license and allocated some quite ridiculous amounts of memory and processing cores from the hypervisor. I used Hyper-V; other hypervisors are available!
The fixed license looks to be capped at a set number of packet engine cores, which seems to be one core fewer than the support document suggests. No amount of additional resources gave me more packet engines. Based on that, sticking with eleven gig of memory and five cores for processing traffic looks like a good balance for this 5Gbps license.
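To confirm how many packet engines actually came up without dropping to the shell, the native CLI should show one line per packet CPU (again a sketch; I'd treat the exact output format as version-dependent):

    > stat cpu         (one entry per packet CPU, so you can count the engines)
    > show ns license  (confirms which license is in effect)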
Going Large - how big?
I thought I would try 48G and 20 cores assigned at the hypervisor, to see what my VPX1000 (pooled) would come up with.
Top shows a lot of cores: 18 NSPPEs!
Summary
What does this show? The amount of memory assigned to a VPX instance can have a big impact on the cores you actually get to process traffic. Naturally, you do need to assign the vCPUs in the hypervisor for them to get picked up.
I had made assumptions based on the support doc, and everyone knows that assumptions are not always good practice. Test it out yourself.
The way Pooled and Flexed capacity licensing works is a bit different from fixed-capacity licenses; keeping this behaviour in mind might save a reboot if you keep a few cores 'in your back pocket'.
Also, I have raised this internally, as it seems that the support doc might need to be updated.
I hope that helps someone.
Have a good weekend.
Comments

System Specialist at Alm Brand (3 months ago): The linked support doc does not exist anymore.

CEO @ Ferroque Systems | Masters of End-User Infrastructure (7 months ago): Good catch! Would love for there to be a calculator for that article.