Notes on haproxy.cfg and mTLS
Henrik Holst
Ethology researcher spending most of his time deep down in the comments section searching for intelligent life
I have two applications, Grafana and a custom app, that I wish to expose via a load balancer. In this case I chose HAProxy for the task. If I were less sensitive to K8s complexity and fancied doing this again (for a third time), I would probably have opted for Nginx and its ingress controller.
Grafana has built-in support for OAuth2, which makes it relatively straightforward to set up secure access. With Grafana, simply exposing a Let's Encrypt TLS certificate is enough for my needs.
For the custom application, however, it is much more sensible to use mTLS (Mutual TLS) for user authentication. HAProxy can handle this combo setup easily, as I will show below.
Although I initially attempted to use the HAProxy Ingress Controller, I ran into issues. Specifically, it seems challenging (or perhaps not yet possible) to expose the CA certificate globally with verify optional in the configuration and then enforce verification as required in the ingress annotations. Since I couldn't make this approach work, I decided to skip the ingress controller and instead used a standard LoadBalancer service, pointing it to a traditional HAProxy deployment.
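For reference, the exposing Service can look something like the sketch below. The name, namespace, selector and port mapping are assumptions for illustration, not my actual manifest; the only essential part is that the target port matches the bind port in haproxy.cfg.

# Minimal sketch of the LoadBalancer Service in front of the HAProxy Deployment
apiVersion: v1
kind: Service
metadata:
  name: haproxy
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: haproxy          # must match the labels on the HAProxy pods
  ports:
    - name: https
      port: 443
      targetPort: 8443    # matches 'bind *:8443' in haproxy.cfg below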
Here are the relevant parts of the haproxy.cfg to achieve this setup.
1) crt-store is a new feature that possibly requires HAProxy 3.0.x. It does, however, make life in K8s a bit easier because certs and keys do not have to be baked together in a single file or named in a very particular way.
2) the 'ca-file [...] verify optional' part is key for the mTLS aspect. This is where the CA certificate is provided, against which client certificates will be validated (or not).
3) TLS SNI ("vhost for TLS") is used to identify which backend app to connect to.
4) Kubernetes SRV records are used to expose the named service ports. Yes, the ports have to be named, so don't forget. The great benefit here is that the app itself does not need a ClusterIP: HAProxy will dynamically figure out which pods and IPs to use in the backend. (This is called a "headless" service; see the sketch after these notes.)
And yes, you can have multiple frontend sections, but you cannot bind to the same IP and port in both. It will work, but it will not do what you think it does. The effect, as far as I can tell, is some kind of load balancing of the requests between the frontends. It is not a pattern-matching feature that routes each request to the frontend whose hostname matches.
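To make point 4 concrete, a headless Service could look like the sketch below. The service name "app", namespace "default" and port 8080 are assumptions chosen to line up with the SRV name used in the backend further down; the essential parts are clusterIP: None and the named port.

apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: default
spec:
  clusterIP: None          # headless: no virtual IP, DNS returns the pod IPs directly
  selector:
    app: app               # must match the labels on the application pods
  ports:
    - name: http           # the port must be named, or no SRV record is published
      port: 8080
      targetPort: 8080

With this in place, kube-dns publishes SRV records under _http._tcp.app.default.svc.cluster.local, which is exactly what the server-template line in the backend resolves.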
crt-store
    load crt "/etc/haproxy/server-cert/tls.crt" key "/etc/haproxy/server-cert/tls.key"
    load crt "/etc/haproxy/grafana-cert/tls.crt" key "/etc/haproxy/grafana-cert/tls.key"
frontend https
    bind *:8443 ssl crt /etc/haproxy/grafana-cert/tls.crt crt /etc/haproxy/server-cert/tls.crt ca-file /etc/haproxy/client-cert/tls.crt verify optional
    mode http
    # Define SNI-based ACLs (TLS is terminated on the bind line, so use ssl_fc_sni)
    acl host_grafana ssl_fc_sni -i grafana.example.com
    acl host_app ssl_fc_sni -i app.example.com
    # Require a client certificate for the app domain only
    http-request deny if host_app !{ ssl_c_used }
    # default_backend grafana_backend
    use_backend grafana_backend if host_grafana
    use_backend app_backend if host_app
backend app_backend
    mode http
    balance roundrobin
    server-template pod 10 _http._tcp.app.default.svc.cluster.local check resolvers kube-dns resolve-prefer ipv4

backend grafana_backend
    mode http
    balance roundrobin
    server-template pod 10 _http._tcp.grafana.default.svc.cluster.local check resolvers kube-dns resolve-prefer ipv4
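The server-template lines reference a resolvers section called kube-dns, which is not shown above. A minimal sketch of such a section, assuming HAProxy runs in a pod whose /etc/resolv.conf already points at the cluster DNS (the timeouts and payload size are reasonable guesses, not tuned values):

resolvers kube-dns
    parse-resolv-conf            # pick up the cluster DNS from the pod's /etc/resolv.conf
    resolve_retries 3
    timeout resolve 1s
    timeout retry   1s
    hold valid     10s
    accepted_payload_size 8192   # allow larger DNS responses when many pods back the service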
Resources