The reliable, high-performance TCP/HTTP load balancer.
Simple admin listener:
haproxy:
  proxy:
    enabled: True
    listen:
      admin_page:
        type: admin
        binds:
        - address: 0.0.0.0
          port: 8801
        user: fsdfdsfds
        password: dsfdsf
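A pillar like this is expected to render HAProxy's built-in statistics page protected by basic authentication. A rough sketch of the resulting haproxy.cfg fragment (the exact output of the formula's template may differ in details such as the stats uri):

listen admin_page
  bind 0.0.0.0:8801
  mode http
  stats enable
  stats uri /
  stats auth fsdfdsfds:dsfdsf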
Simple stats listener:
haproxy:
  proxy:
    enabled: True
    listen:
      admin_page:
        type: stats
        binds:
        - address: 0.0.0.0
          port: 8801
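The stats type is expected to render the same statistics page but without the stats auth line, so no credentials are required; a hedged sketch:

listen admin_page
  bind 0.0.0.0:8801
  mode http
  stats enable
  stats uri /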
Sample pillar with admin:
haproxy:
  proxy:
    enabled: True
    mode: http/tcp
    logging: syslog
    maxconn: 1024
    timeout:
      connect: 5000
      client: 50000
      server: 50000
    listen:
      https-in:
        binds:
        - address: 0.0.0.0
          port: 443
        servers:
        - name: server1
          host: 10.0.0.1
          port: 8443
        - name: server2
          host: 10.0.0.2
          port: 8443
          params: 'maxconn 256'
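The top-level proxy keys (mode, logging, maxconn, timeout) feed the global and defaults sections, while each entry under listen becomes its own listen block. A rough sketch of the rendered configuration, assuming the template follows the usual HAProxy layout (the log target shown for logging: syslog is an assumption):

global
  log /dev/log local0
  maxconn 1024

defaults
  log global
  timeout connect 5000
  timeout client 50000
  timeout server 50000

listen https-in
  bind 0.0.0.0:443
  server server1 10.0.0.1:8443
  server server2 10.0.0.2:8443 maxconn 256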
Sample pillar with custom logging:
haproxy:
  proxy:
    enabled: True
    mode: http/tcp
    logging: syslog
    maxconn: 1024
    timeout:
      connect: 5000
      client: 50000
      server: 50000
    listen:
      https-in:
        binds:
        - address: 0.0.0.0
          port: 443
        servers:
        - name: server1
          host: 10.0.0.1
          port: 8443
        - name: server2
          host: 10.0.0.2
          port: 8443
          params: 'maxconn 256'
haproxy:
  proxy:
    enabled: true
    mode: tcp
    logging: syslog
    max_connections: 1024
    listen:
      mysql:
        type: mysql
        binds:
        - address: 10.0.88.70
          port: 3306
        servers:
        - name: node1
          host: 10.0.88.13
          port: 3306
          params: check inter 15s fastinter 2s downinter 1s rise 5 fall 3
        - name: node2
          host: 10.0.88.14
          port: 3306
          params: check inter 15s fastinter 2s downinter 1s rise 5 fall 3 backup
        - name: node3
          host: 10.0.88.15
          port: 3306
          params: check inter 15s fastinter 2s downinter 1s rise 5 fall 3 backup
      rabbitmq:
        type: rabbitmq
        binds:
        - address: 10.0.88.70
          port: 5672
        servers:
        - name: node1
          host: 10.0.88.13
          port: 5673
          params: check inter 5000 rise 2 fall 3
        - name: node2
          host: 10.0.88.14
          port: 5673
          params: check inter 5000 rise 2 fall 3 backup
        - name: node3
          host: 10.0.88.15
          port: 5673
          params: check inter 5000 rise 2 fall 3 backup
      keystone-1:
        type: general-service
        binds:
        - address: 10.0.106.170
          port: 5000
        servers:
        - name: node1
          host: 10.0.88.13
          port: 5000
          params: check
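For TCP services such as MySQL, each listen entry renders into a plain TCP proxy whose health checking comes from the per-server params. A hedged sketch of the mysql listener above (the mysql type may add further options beyond what the pillar shows):

listen mysql
  bind 10.0.88.70:3306
  mode tcp
  server node1 10.0.88.13:3306 check inter 15s fastinter 2s downinter 1s rise 5 fall 3
  server node2 10.0.88.14:3306 check inter 15s fastinter 2s downinter 1s rise 5 fall 3 backup
  server node3 10.0.88.15:3306 check inter 15s fastinter 2s downinter 1s rise 5 fall 3 backup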
Sample pillar with port range and port offset:
This is useful in listen blocks for defining multiple servers that differ only by port number within a port range. Such a situation can result from multiple single-threaded servers being deployed in a multi-core environment to better utilize the available cores.
For example, five contrail-api workers occupy ports 9100-9104. This can be expressed by setting port_range_length in the pillar (port_range_length: 5 in this case).
To skip the first worker (worker_id 0), because it has other responsibilities and should not be overloaded with HTTP requests, use port_range_start_offset in the pillar (port_range_start_offset: 1 in this case); only ports 9101-9104 will then be used, skipping 9100.
- The port_range_length parameter is used to calculate the end of the port range.
- The port_range_start_offset parameter skips the first n ports in the port range.
- For backward compatibility, the name of the first server in the port range has no pN suffix.
The sample will result in the following output:
listen contrail_api
  bind 172.16.10.252:8082
  balance leastconn
  server ntw01p1 172.16.10.95:9101 check inter 2000 rise 2 fall 3
  server ntw01p2 172.16.10.95:9102 check inter 2000 rise 2 fall 3
  server ntw01p3 172.16.10.95:9103 check inter 2000 rise 2 fall 3
  server ntw01p4 172.16.10.95:9104 check inter 2000 rise 2 fall 3
  server ntw02 172.16.10.96:9100 check inter 2000 rise 2 fall 3
  server ntw02p1 172.16.10.96:9101 check inter 2000 rise 2 fall 3
  server ntw02p2 172.16.10.96:9102 check inter 2000 rise 2 fall 3
  server ntw02p3 172.16.10.96:9103 check inter 2000 rise 2 fall 3
  server ntw02p4 172.16.10.96:9104 check inter 2000 rise 2 fall 3
  server ntw03 172.16.10.94:9100 check inter 2000 rise 2 fall 3
  server ntw03p1 172.16.10.94:9101 check inter 2000 rise 2 fall 3
  server ntw03p2 172.16.10.94:9102 check inter 2000 rise 2 fall 3
  server ntw03p3 172.16.10.94:9103 check inter 2000 rise 2 fall 3
  server ntw03p4 172.16.10.94:9104 check inter 2000 rise 2 fall 3
haproxy:
  proxy:
    listen:
      contrail_api:
        type: contrail-api
        service_name: contrail
        balance: leastconn
        binds:
        - address: 10.10.10.10
          port: 8082
        servers:
        - name: ntw01
          host: 10.10.10.11
          port: 9100
          port_range_length: 5
          port_range_start_offset: 1
          params: check inter 2000 rise 2 fall 3
        - name: ntw02
          host: 10.10.10.12
          port: 9100
          port_range_length: 5
          port_range_start_offset: 0
          params: check inter 2000 rise 2 fall 3
        - name: ntw03
          host: 10.10.10.13
          port: 9100
          port_range_length: 5
          params: check inter 2000 rise 2 fall 3
Sample pillar with a custom and more complex listener (for Artifactory and sub-domains for Docker registries):
haproxy:
  proxy:
    listen:
      artifactory:
        mode: http
        options:
        - forwardfor
        - forwardfor header X-Real-IP
        - httpchk
        - httpclose
        - httplog
        sticks:
        - stick on src
        - stick-table type ip size 200k expire 2m
        acl:
          is_docker: "path_reg ^/v[12][/.]*"
        http_request:
        - action: "set-path /artifactory/api/docker/%[req.hdr(host),lower,field(1,'.')]%[path]"
          condition: "if is_docker"
        balance: source
        binds:
        - address: ${_param:cluster_vip_address}
          port: 8082
          ssl:
            enabled: true
            # This PEM file needs to contain key, cert, CA and possibly
            # intermediate certificates
            pem_file: /etc/haproxy/ssl/server.pem
        servers:
        - name: ${_param:cluster_node01_name}
          host: ${_param:cluster_node01_address}
          port: 8082
          params: check
        - name: ${_param:cluster_node02_name}
          host: ${_param:cluster_node02_address}
          port: 8082
          params: backup check
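The options, sticks, acl, and http_request entries map almost one-to-one onto HAProxy directives, so the listener above can be expected to render roughly as follows (a sketch; the <...> placeholders stand for the ${_param:...} values, and ordering or extra defaults may differ):

listen artifactory
  bind <cluster_vip_address>:8082 ssl crt /etc/haproxy/ssl/server.pem
  mode http
  balance source
  option forwardfor
  option forwardfor header X-Real-IP
  option httpchk
  option httpclose
  option httplog
  stick on src
  stick-table type ip size 200k expire 2m
  acl is_docker path_reg ^/v[12][/.]*
  http-request set-path /artifactory/api/docker/%[req.hdr(host),lower,field(1,'.')]%[path] if is_docker
  server <cluster_node01_name> <cluster_node01_address>:8082 check
  server <cluster_node02_name> <cluster_node02_address>:8082 backup check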
You can also use multiple certificates for one listener, for example, when it is bound on multiple interfaces:
haproxy:
  proxy:
    listen:
      dummy_site:
        mode: http
        binds:
        - address: 127.0.0.1
          port: 8080
          ssl:
            enabled: true
            key: |
              my super secret key follows
            cert: |
              certificate
            chain: |
              CA chain (if any)
        - address: 127.0.1.1
          port: 8081
          ssl:
            enabled: true
            key: |
              my super secret key follows
            cert: |
              certificate
            chain: |
              CA chain (if any)
The definition above results in the creation of the /etc/haproxy/ssl/dummy_site directory containing the files 1-all.pem and 2-all.pem (one per bind).
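Each bind line is then expected to reference its own generated bundle, numbered in the order of the binds list; a sketch:

listen dummy_site
  bind 127.0.0.1:8080 ssl crt /etc/haproxy/ssl/dummy_site/1-all.pem
  bind 127.0.1.1:8081 ssl crt /etc/haproxy/ssl/dummy_site/2-all.pem
  mode http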
Sample pillar with a custom listener with HTTP-check options specified:
haproxy:
  proxy:
    enabled: true
    forwardfor:
      enabled: true
      except: 127.0.0.1
      header: X-Forwarded-For
      if-none: false
    listen:
      glance_api:
        binds:
        - address: 192.168.2.11
          port: 9292
          ssl:
            enabled: true
            pem_file: /etc/haproxy/ssl/all.pem
        http_request:
        - action: set-header X-Forwarded-Proto https
        mode: http
        options:
        - httpchk GET /
        - httplog
        - httpclose
        servers:
        - host: 127.0.0.1
          name: ctl01
          params: check inter 10s fastinter 2s downinter 3s rise 3 fall 3
          port: 9292
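The proxy-level forwardfor block typically lands in the defaults section as a single option forwardfor line, while the listener-level keys render into the listen block. A hedged sketch:

defaults
  option forwardfor except 127.0.0.1 header X-Forwarded-For

listen glance_api
  bind 192.168.2.11:9292 ssl crt /etc/haproxy/ssl/all.pem
  mode http
  option httpchk GET /
  option httplog
  option httpclose
  http-request set-header X-Forwarded-Proto https
  server ctl01 127.0.0.1:9292 check inter 10s fastinter 2s downinter 3s rise 3 fall 3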
Sample pillar with a custom listener with the tcp-check options specified (for a Redis cluster with Sentinel):
haproxy:
  proxy:
    listen:
      redis_cluster:
        service_name: redis
        health-check:
          tcp:
            enabled: True
            options:
            - 'send PING\r\n'
            - expect string +PONG
            - 'send info\ replication\r\n'
            - expect string role:master
            - 'send QUIT\r\n'
            - expect string +OK
        binds:
        - address: ${_param:cluster_address}
          port: 6379
        servers:
        - name: ${_param:cluster_node01_name}
          host: ${_param:cluster_node01_address}
          port: 6379
          params: check inter 1s
        - name: ${_param:cluster_node02_name}
          host: ${_param:cluster_node02_address}
          port: 6379
          params: check inter 1s
        - name: ${_param:cluster_node03_name}
          host: ${_param:cluster_node03_address}
          port: 6379
          params: check inter 1s
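The tcp health-check options render into HAProxy's tcp-check sequence, which is the standard way to make the proxy send traffic only to the current Redis master. A rough sketch (exact quoting and escaping are template-dependent; <...> stands for the ${_param:...} values):

listen redis
  bind <cluster_address>:6379
  mode tcp
  option tcp-check
  tcp-check send PING\r\n
  tcp-check expect string +PONG
  tcp-check send info\ replication\r\n
  tcp-check expect string role:master
  tcp-check send QUIT\r\n
  tcp-check expect string +OK
  server <cluster_node01_name> <cluster_node01_address>:6379 check inter 1s
  server <cluster_node02_name> <cluster_node02_address>:6379 check inter 1s
  server <cluster_node03_name> <cluster_node03_address>:6379 check inter 1s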
Front-end for routing between existing listeners via URL, with SSL and redirects. You can use one backend for several URLs:
haproxy:
  proxy:
    listen:
      service_proxy:
        mode: http
        balance: source
        format: end
        binds:
        - address: ${_param:haproxy_bind_address}
          port: 80
          ssl: ${_param:haproxy_frontend_ssl}
          ssl_port: 443
        redirects:
        - code: 301
          location: domain.com/images
          conditions:
          - type: hdr_dom(host)
            condition: images.domain.com
        acls:
        - name: gerrit
          conditions:
          - type: hdr_dom(host)
            condition: gerrit.domain.com
        - name: jenkins
          conditions:
          - type: hdr_dom(host)
            condition: jenkins.domain.com
        - name: docker
          backend: artifactory
          conditions:
          - type: hdr_dom(host)
            condition: docker.domain.com
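Each acls entry becomes an acl plus a use_backend rule (falling back to a backend named after the acl when no explicit backend is given), and each redirects entry becomes a redirect guarded by its conditions. A hedged sketch of the kind of front-end this describes (directive details such as the redirect form are assumptions, not the literal template output):

frontend service_proxy
  bind <haproxy_bind_address>:80
  mode http
  acl gerrit hdr_dom(host) gerrit.domain.com
  acl jenkins hdr_dom(host) jenkins.domain.com
  acl docker hdr_dom(host) docker.domain.com
  redirect location domain.com/images code 301 if { hdr_dom(host) images.domain.com }
  use_backend gerrit if gerrit
  use_backend jenkins if jenkins
  use_backend artifactory if docker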
Enable the customizable forwardfor option in the defaults section:
haproxy:
  proxy:
    enabled: true
    mode: tcp
    logging: syslog
    max_connections: 1024
    forwardfor:
      enabled: true
      except:
      header:
      if-none: false

haproxy:
  proxy:
    enabled: true
    mode: tcp
    logging: syslog
    max_connections: 1024
    forwardfor:
      enabled: true
      except: 127.0.0.1
      header: X-Real-IP
      if-none: false
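All three sub-keys end up as arguments of a single option forwardfor line in the defaults section; with the second pillar above the rendered directive is expected to be:

defaults
  option forwardfor except 127.0.0.1 header X-Real-IP

Setting if-none: true would append the if-none keyword as well.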
Sample pillar with a multi-process, multi-core configuration:
haproxy:
  proxy:
    enabled: True
    nbproc: 4
    cpu_map:
      1: 0
      2: 1
      3: 2
      4: 3
    stats_bind_process: "1 2"
    mode: http/tcp
    logging: syslog
    maxconn: 1024
    timeout:
      connect: 5000
      client: 50000
      server: 50000
    listen:
      https-in:
        bind_process: "1 2 3 4"
        binds:
        - address: 0.0.0.0
          port: 443
        servers:
        - name: server1
          host: 10.0.0.1
          port: 8443
        - name: server2
          host: 10.0.0.2
          port: 8443
          params: 'maxconn 256'
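The multi-process keys map onto HAProxy's nbproc, cpu-map, and bind-process directives: cpu_map entries pin each process to a core, stats_bind_process restricts the stats page to the listed processes, and a listener's bind_process restricts which processes serve it. A rough sketch:

global
  nbproc 4
  cpu-map 1 0
  cpu-map 2 1
  cpu-map 3 2
  cpu-map 4 3

listen https-in
  bind-process 1 2 3 4
  bind 0.0.0.0:443
  server server1 10.0.0.1:8443
  server server2 10.0.0.2:8443 maxconn 256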
Implement rate limiting to prevent excessive requests. This feature only works when using format: end:
haproxy:
  proxy:
    ...
    listen:
      nova_metadata_api:
        ...
        format: end
        options:
        - httpchk
        - httpclose
        - httplog
        rate_limit:
          duration: 900s
          enabled: true
          requests: 125
          track: content
        servers:
        ...
        type: http
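The rate_limit block is turned into stick-table based tracking; the exact directives depend on the track type, but the general HAProxy idiom for a limit of 125 requests per 900s looks like the following (an illustrative sketch of the technique, not a literal dump of the template output):

listen nova_metadata_api
  mode http
  stick-table type ip size 200k expire 900s store http_req_rate(900s)
  http-request track-sc0 src
  http-request deny if { sc_http_req_rate(0) gt 125 }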
Implement an HAProxy configuration without specifying a certain type, or with type: None. This approach allows you to set all major HAProxy parameters manually. Sample pillar:
haproxy:
  proxy:
    listen:
      manila_api:
        type: None
        mode: tcp
        balance: roundrobin
        timeout:
          check: 10
          client: 20
        http_request:
        - action: "add-header X-Forwarded-Proto https"
          condition: "if { ssl_fc }"
        options: ${_param:haproxy_https_check_options}
        capture:
        - cookie ASPSESSION len 32
        - request header Host len 15
        compression:
        - algo gzip
        - type text/html text/plain
        declare_capture: request len 50
        email_alert:
        - myhostname myserver
        - from server@localhost
        - level warning
        errorfile:
          file_500:
            code: 500
            file: /tmp/error_500.log
          file_404:
            code: 400
            file: /tmp/error_400.log
        max_keep_alive_queue: 100
        maxconn: 10000
        reqadd:
        - X-Proto:\ SSL if is-ssl
        reqirep:
        - ^Host:\ www.mydomain.com Host:\ www
        modify_headers:
        - reqallow ^Host:\ www\.
        - reqdel ^Host:\ .*\.local
        - reqdeny ^Host:\ .*\.local
        - reqiallow ^Host:\ www\.
        - reqidel ^Host:\ .*\.local
        - reqideny ^Host:\ .*\.local
        - reqipass ^Host:\ .*\.local
        - reqpass ^Host:\ .*\.local
        - reqitarpit ^Host:\ .*\.local
        - reqtarpit ^Host:\ .*\.local
        retries: 10
        stats:
        - enable
        - auth admin1:AdMiN123
        rate_limit_sessions: 1000
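Most of these keys are named after the HAProxy keywords they produce, so the listener renders into a block containing, among others, lines like the following (a partial sketch; ordering and any template-added defaults may differ):

listen manila_api
  mode tcp
  balance roundrobin
  timeout check 10
  timeout client 20
  http-request add-header X-Forwarded-Proto https if { ssl_fc }
  capture cookie ASPSESSION len 32
  capture request header Host len 15
  compression algo gzip
  compression type text/html text/plain
  declare capture request len 50
  email-alert myhostname myserver
  email-alert from server@localhost
  email-alert level warning
  errorfile 500 /tmp/error_500.log
  errorfile 400 /tmp/error_400.log
  max-keep-alive-queue 100
  maxconn 10000
  retries 10
  rate-limit sessions 1000
  stats enable
  stats auth admin1:AdMiN123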
Implement rate limiting to prevent excessive requests using format: listen:
haproxy:
  proxy:
    ...
    listen:
      nova_metadata_api:
        ...
        rate_limit:
          duration: 3s
          enabled: true
          requests: 60
          track: connection
        servers:
        ...
Implement rate limiting to prevent excessive requests using format: listen and an acls/request/backend stick list:
haproxy:
  proxy:
    listen:
      nova_metadata_api:
        options:
        - httplog
        rate_limit:
          enabled: true
          type: string
          len: 36
          size: 10m
          duration: 60s
        acls:
          101:
            enabled: true
            value: acl too_many_requests_3 sc0_gpc0_rate() gt 3
          102:
            enabled: true
            value: acl mark_seen sc0_inc_gpc0 gt 0
          110:
            enabled: true
            value: acl x_instance_id hdr(x-instance-id) -i 4777e8e0-16e8-46ce-a3fe-0a1ad9b3ebdc
          111:
            enabled: true
            value: acl x_instance_id hdr(x-instance-id) -i ca2395dd-f73f-4d43-8fe7-f7078a0920af
          201:
            enabled: true
            value: acl too_many_requests_6 sc0_gpc0_rate() gt 6
          202:
            enabled: true
            value: acl mark_seen sc0_inc_gpc0 gt 0
          210:
            enabled: true
            value: acl x_tenant_id hdr(x-tenant-id) -i 2b76cc56a437404bb8cb6cb20dbb0ea4
        tcp_request:
          001:
            enabled: true
            value: tcp-request inspect-delay 5s
          101:
            enabled: true
            value: tcp-request content track-sc0 hdr(x-instance-id) if ! too_many_requests_3
          201:
            enabled: true
            value: tcp-request content track-sc0 hdr(x-tenant-id) if ! too_many_requests_6
        use_backend:
          101:
            enabled: true
            value: use_backend nova_metadata_api-rate_limit if mark_seen too_many_requests_3 x_instance_id
          201:
            enabled: true
            value: use_backend nova_metadata_api-rate_limit if mark_seen too_many_requests_6 x_tenant_id
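Because the value strings are already literal HAProxy directives, the rendered listener essentially contains them in numeric key order, preceded by a stick-table built from the rate_limit parameters (the store columns shown here are an assumption, not confirmed template output). A partial sketch:

listen nova_metadata_api
  option httplog
  stick-table type string len 36 size 10m expire 60s store gpc0,gpc0_rate(60s)
  acl too_many_requests_3 sc0_gpc0_rate() gt 3
  acl mark_seen sc0_inc_gpc0 gt 0
  acl x_instance_id hdr(x-instance-id) -i 4777e8e0-16e8-46ce-a3fe-0a1ad9b3ebdc
  tcp-request inspect-delay 5s
  tcp-request content track-sc0 hdr(x-instance-id) if ! too_many_requests_3
  use_backend nova_metadata_api-rate_limit if mark_seen too_many_requests_3 x_instance_id
  ...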