Tracking your queue in Openstack

If you grow your OpenStack environment to many compute nodes and multiple users and tenants, you need to pay attention to the default number of sockets RabbitMQ is allowed to open, or your cluster will stop accepting new requests.

Wherever RabbitMQ is running, execute this command:

[root@controller ~]# rabbitmqctl status | grep sockets_
 {sockets_limit,892},
 {sockets_used,506}]},

I ran into a situation where a tenant user reported they could no longer connect to the VNC console on their Windows instances, had trouble deleting instances, and could not complete other trivial tasks in the Dashboard.

The socket limit had been reached, and the queue had jammed up.

Restarting RabbitMQ on its own will only cause the existing unfinished requests to be dropped and create problems for OpenStack. Instead, stop the OpenStack services on the controller, then stop RabbitMQ and perform the following to avoid future issues.
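The exact OpenStack service names depend on what is deployed on your controller, so treat the following as a sketch rather than a definitive list:

# stop the OpenStack services that use the queue, then RabbitMQ itself
systemctl stop openstack-nova-api openstack-nova-scheduler openstack-nova-conductor
systemctl stop neutron-server
systemctl stop rabbitmq-server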

This is on CentOS 7, which uses systemd. First, raise the open file limit for the rabbitmq user in pam_limits:

cat >> /etc/security/limits.d/rabbitmq.conf <<EOF
# rabbitmq
# Increase maximum number of open files from 1024 to 4096 for RabbitMQ

#<domain> <type> <item> <value>
rabbitmq soft nofile 4096
EOF

To verify that the setting took effect, run:

# su - rabbitmq -s /bin/sh -c 'ulimit -n'
4096

Because the RabbitMQ daemon is started by systemd rather than through a login session, the pam_limits setting alone is not enough; the service unit also needs a higher LimitNOFILE. Edit the unit file:

vi /usr/lib/systemd/system/rabbitmq-server.service

[Service]
Type=notify
User=rabbitmq
Group=rabbitmq
WorkingDirectory=/var/lib/rabbitmq
ExecStart=/usr/lib/rabbitmq/bin/rabbitmq-server
ExecStop=/usr/lib/rabbitmq/bin/rabbitmqctl stop
LimitNOFILE=32768
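Alternatively, if you would rather not edit the packaged unit file (a package update can overwrite it), the same limit can go in a systemd drop-in; this is an equivalent sketch, and the drop-in file name is an arbitrary choice:

# create a drop-in override instead of editing the unit file directly
mkdir -p /etc/systemd/system/rabbitmq-server.service.d
cat > /etc/systemd/system/rabbitmq-server.service.d/limits.conf <<EOF
[Service]
LimitNOFILE=32768
EOF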

Now reload systemd so it picks up the changed unit, then restart RabbitMQ and the associated OpenStack services.
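Again, the exact OpenStack service names will vary by deployment; something along these lines:

systemctl daemon-reload
systemctl start rabbitmq-server
systemctl start openstack-nova-api openstack-nova-scheduler openstack-nova-conductor
systemctl start neutron-server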

You should see a much higher number:

[root@controller ~]# rabbitmqctl status | grep sockets_
 {sockets_limit,29399},
 {sockets_used,506}]},
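To double-check that the running daemon really picked up the new limit, you can also look at the Erlang VM process itself; the pgrep pattern below is just one way of finding its pid:

# show the "Max open files" limit of the running Erlang VM
cat /proc/$(pgrep -f beam.smp | head -1)/limits | grep 'open files'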
