I've been running two HAProxy boxes with Keepalived for years. Classic active/standby setup: one does the work, one sits there waiting for disaster. Works fine, but it always bugged me that half my capacity was just... idle.
Last week I finally dug into better options. Turns out you can do proper BGP anycast with a UniFi UDM Pro Max. No expensive Cisco gear needed. Here's how I got both HAProxy instances handling traffic simultaneously.

Where I Started
The original setup was dead simple:
```
Internet → VIP (Keepalived) → HAProxy01 (active)
                            → HAProxy02 (sitting idle)
```

Keepalived does its job well. HAProxy01 goes down, the VIP floats to HAProxy02 in a couple of seconds. But here's the thing: HAProxy02 has zero clue what sessions existed on HAProxy01. Users get kicked out, sticky sessions break. Not great.
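For context, the whole active/standby trick is a few lines of Keepalived config. A minimal sketch — the interface name, virtual_router_id, and priority are placeholders, and HAProxy02 runs the same block with `state BACKUP` and a lower priority:

```
# /etc/keepalived/keepalived.conf on haproxy01 (the active node)
vrrp_instance VI_1 {
    state MASTER
    interface eth0          # placeholder: your LAN interface
    virtual_router_id 51    # placeholder: must match on both nodes
    priority 200            # haproxy02 uses e.g. 100
    advert_int 1
    virtual_ipaddress {
        192.168.0.200/24
    }
}
```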
First Fix: Sync Those Sessions
Before going crazy with BGP, there's a quick win. HAProxy can sync stick-tables between instances. When failover happens, the backup already knows about existing sessions.
Add this to both configs:
```
peers haproxy_cluster
    peer haproxy01 192.168.0.210:10000
    peer haproxy02 192.168.0.211:10000
```

Then hook it into your backends:
```
backend myapp_backend
    balance roundrobin
    stick-table type ip size 200k expire 30m peers haproxy_cluster
    stick on src
    server app01 10.0.0.10:8080 check
    server app02 10.0.0.11:8080 check
```

Now sessions survive failover. Users don't get logged out. This alone made a huge difference.
Keeping Configs in Sync
Two HAProxy boxes means two configs to maintain. I got tired of SSH'ing into both machines every time I changed something. HAProxy Data Plane API fixes this. Push changes via REST:
```shell
# Grab current config
curl -u admin:password "http://haproxy01:5555/v3/services/haproxy/configuration/raw"

# Push updated config
curl -u admin:password -X POST \
  "http://haproxy01:5555/v3/services/haproxy/configuration/raw?version=1" \
  -H "Content-Type: text/plain" \
  --data-binary @haproxy.cfg
```

I wrote a quick script that pushes to both nodes. Config drift problem solved.
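The push script doesn't need to be fancy. A sketch, assuming the same placeholder hostnames, port, and credentials as above — it only defines a function, which you'd call with your config file and the current configuration version:

```shell
#!/bin/sh
# Sketch: push one haproxy.cfg to both Data Plane API nodes.
# Node names, port, credentials, and the version handling are
# placeholders -- adapt them to your environment.
set -eu

# CURL is overridable so the script can be dry-run (CURL="echo curl").
CURL="${CURL:-curl}"

push_config() {
  cfg="$1"      # path to haproxy.cfg
  version="$2"  # current configuration version expected by the API
  for node in haproxy01 haproxy02; do
    $CURL -fsS -u admin:password -X POST \
      "http://${node}:5555/v3/services/haproxy/configuration/raw?version=${version}" \
      -H "Content-Type: text/plain" \
      --data-binary "@${cfg}"
  done
}

# Example call (commented out so sourcing the file has no side effects):
# push_config haproxy.cfg 1
```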
The Real Upgrade: Both Boxes Working
Active/standby is fine, but why waste half your hardware? The goal was getting both HAProxy instances handling traffic at the same time.
Option 1: Two VIPs, DNS Round-Robin
```
VIP1 (192.168.0.200) → HAProxy01 (primary)
VIP2 (192.168.0.201) → HAProxy02 (primary)
DNS: lb.example.com  → both IPs
```

Each HAProxy owns one VIP and backs up the other. DNS returns both, clients pick randomly. If one node dies, its VIP moves over. Works, but you're depending on DNS TTLs for failover.
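For reference, here's roughly what the dual-VIP arrangement looks like in Keepalived on HAProxy01; HAProxy02 mirrors it with the MASTER/BACKUP roles and priorities swapped. Interface name and router IDs are placeholders:

```
# haproxy01: owns VIP1, backs up VIP2
vrrp_instance VIP1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 200
    virtual_ipaddress { 192.168.0.200/24 }
}
vrrp_instance VIP2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 100
    virtual_ipaddress { 192.168.0.201/24 }
}
```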
Option 2: BGP Anycast on UDM Pro Max
This is what I actually went with. Both HAProxy boxes announce the same IP to the router. The router (my UDM Pro Max) sees two paths and load-balances between them.
I figured this needed fancy network gear. Nope. Turns out UniFi added BGP support in UniFi OS 4.1.13. If you've got a UDM Pro Max, UDM Pro, UDM-SE, or UXG-Enterprise, you can do this right now.

How It Looks
```
     ┌─────────────────────┐
     │     UDM Pro Max     │
     │      AS 65000       │
     │     192.168.0.1     │
     └──────────┬──────────┘
                │
   BGP (eBGP)   │   ECMP Load Balancing
                │
       ┌────────┴────────┐
       │                 │
┌──────▼──────┐   ┌──────▼──────┐
│  HAProxy01  │   │  HAProxy02  │
│  AS 65010   │   │  AS 65010   │
│    .210     │   │    .211     │
└──────┬──────┘   └──────┬──────┘
       │                 │
       └────────┬────────┘
                │
           Anycast VIP
          192.168.0.200
```

Both HAProxy nodes announce 192.168.0.200. The UDM sees two equal-cost paths and splits traffic between them. If one node goes down, BGP withdraws its route and traffic flows to the survivor. No DNS delays.
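One detail worth knowing: Linux ECMP hashes the flow tuple, so it balances per-flow, not per-packet — a given client conversation sticks to one HAProxy node. A toy model of that stickiness (the kernel's real hash is different; this just illustrates the idea):

```shell
#!/bin/sh
# Toy per-flow ECMP: derive a stable checksum from a flow identifier
# and map it to one of the two next hops. Same flow -> same node.
pick_nexthop() {
  h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
  if [ $((h % 2)) -eq 0 ]; then
    echo 192.168.0.210
  else
    echo 192.168.0.211
  fi
}

# Example: hash a (client:port -> VIP:port) tuple to a next hop.
pick_nexthop "10.0.5.7:51823->192.168.0.200:443"
```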
Setting It Up
Step 1: FRRouting on the HAProxy Boxes
```shell
# On both HAProxy servers
apt update && apt install -y frr frr-pythontools

# Turn on BGP
sed -i 's/bgpd=no/bgpd=yes/' /etc/frr/daemons
systemctl restart frr
```

Step 2: FRR Config (HAProxy01)
```
# /etc/frr/frr.conf
frr version 8.5
frr defaults traditional
hostname haproxy01
!
router bgp 65010
 bgp router-id 192.168.0.210
 no bgp ebgp-requires-policy
 !
 neighbor 192.168.0.1 remote-as 65000
 neighbor 192.168.0.1 description UDM-Pro-Max
 !
 address-family ipv4 unicast
  network 192.168.0.200/32
  neighbor 192.168.0.1 activate
  neighbor 192.168.0.1 soft-reconfiguration inbound
 exit-address-family
!
```

HAProxy02's config is identical; just change the hostname and the router-id to 192.168.0.211.
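Since the two files differ only in hostname and router-id, you can generate both from one template instead of hand-editing each node. A sketch (the output filenames are arbitrary):

```shell
#!/bin/sh
# Generate the per-node FRR config from one template. The IPs and AS
# numbers match the setup above; only hostname and router-id vary.
set -eu

gen_conf() {
  node="$1"; router_id="$2"
  cat <<EOF
frr version 8.5
frr defaults traditional
hostname ${node}
!
router bgp 65010
 bgp router-id ${router_id}
 no bgp ebgp-requires-policy
 !
 neighbor 192.168.0.1 remote-as 65000
 neighbor 192.168.0.1 description UDM-Pro-Max
 !
 address-family ipv4 unicast
  network 192.168.0.200/32
  neighbor 192.168.0.1 activate
  neighbor 192.168.0.1 soft-reconfiguration inbound
 exit-address-family
!
EOF
}

gen_conf haproxy01 192.168.0.210 > frr-haproxy01.conf
gen_conf haproxy02 192.168.0.211 > frr-haproxy02.conf
```

Copy each file to `/etc/frr/frr.conf` on its node and restart FRR.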
Step 3: Add the VIP to Loopback
The anycast IP needs to exist on both boxes:
```
# /etc/network/interfaces.d/anycast
auto lo:0
iface lo:0 inet static
    address 192.168.0.200/32
```

```shell
# Bring it up
ifup lo:0
```

Step 4: UDM Pro Max BGP Config
Create a text file with this and upload it via UniFi Network → Settings → Routing → BGP:
```
router bgp 65000
 bgp router-id 192.168.0.1
 !
 neighbor 192.168.0.210 remote-as 65010
 neighbor 192.168.0.210 description HAProxy01
 !
 neighbor 192.168.0.211 remote-as 65010
 neighbor 192.168.0.211 description HAProxy02
 !
 address-family ipv4 unicast
  neighbor 192.168.0.210 activate
  neighbor 192.168.0.210 soft-reconfiguration inbound
  neighbor 192.168.0.211 activate
  neighbor 192.168.0.211 soft-reconfiguration inbound
  maximum-paths 2
 exit-address-family
!
```

Step 5: Open the Firewall
BGP runs on TCP 179. In UniFi Network, add a firewall rule:
- Type: LAN In
- Source: 192.168.0.210, 192.168.0.211
- Destination: Gateway
- Port: TCP 179
- Action: Allow
Did It Work?
Check BGP status on the HAProxy boxes:
```shell
# Should show Established
sudo vtysh -c "show ip bgp summary"

# Check what you're advertising
sudo vtysh -c "show ip bgp neighbors 192.168.0.1 advertised-routes"
```

On the UDM Pro Max (SSH in):

```shell
# Should show two nexthops for the VIP
ip route show 192.168.0.200

# Expected output:
# 192.168.0.200 proto bgp
#     nexthop via 192.168.0.210 weight 1
#     nexthop via 192.168.0.211 weight 1
```

What I Run Now
- HAProxy Peers for session sync
- Data Plane API for config management
- BGP anycast via UDM Pro Max
Both boxes handle traffic. One dies, the other picks up everything automatically. Sessions survive because of peer sync. It's proper active-active without any DNS hacks or slow failovers.
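One gap worth closing: FRR keeps announcing the VIP even if HAProxy itself dies, so a node can keep attracting ECMP traffic it can't serve. A watchdog sketch along those lines — the health-check URL and the stop-FRR action are assumptions, not part of the setup above (run the loop under systemd or cron):

```shell
#!/bin/sh
# Sketch: withdraw this node's BGP route when HAProxy stops answering,
# by stopping FRR. Both commands are injectable for testing/dry runs.
HEALTH_CMD="${HEALTH_CMD:-curl -fsS -o /dev/null http://127.0.0.1:8404/health}"
WITHDRAW_CMD="${WITHDRAW_CMD:-systemctl stop frr}"

check_once() {
  if $HEALTH_CMD >/dev/null 2>&1; then
    return 0          # HAProxy healthy: keep announcing
  fi
  $WITHDRAW_CMD       # HAProxy down: withdraw the route
  return 1
}

# Main loop (commented out so sourcing the file has no side effects):
# while sleep 2; do check_once || break; done
```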
Quick Reference
| Approach | Effort | When to Use |
|---|---|---|
| Keepalived only | Low | Simple setups, you don't mind idle hardware |
| Dual VIPs + DNS | Medium | Router doesn't support BGP |
| BGP Anycast | Medium | You want real active-active with fast failover |
Handy Commands
```shell
# HAProxy peer status
echo "show peers" | socat stdio /run/haproxy/admin.sock

# BGP summary
sudo vtysh -c "show ip bgp summary"

# What routes am I advertising?
sudo vtysh -c "show ip bgp neighbors 192.168.0.1 advertised-routes"

# Reload HAProxy without dropping connections
systemctl reload haproxy
```
Bonus: AI-Powered HAProxy Management
Here's something I didn't expect to love this much. I deployed the HAProxy Data Plane API on both load balancers. It exposes the full HAProxy configuration via REST endpoints.
The fun part? I connected it to my local AI assistant. Now I manage HAProxy in plain English (or French):
- "Drain the ghost backend server for maintenance"
- "Show me the stats for all backends"
- "Add a new server to the gitlab backend"
- "Block external access to the admin panel"
The AI translates my request into the right API calls, executes them, and confirms the result. No more digging through haproxy.cfg or remembering curl syntax.
```
Me: "Put the blog server in maintenance mode"

AI: Done. Drained docker01 in ghost_backend.
    Active connections will finish, new requests
    go to other servers. Want me to re-enable it later?
```

It's like having a junior sysadmin who never sleeps and knows the entire HAProxy documentation by heart. The Data Plane API handles the heavy lifting. The AI handles the translation between human intent and API calls.
For critical actions, it asks for confirmation. For read-only queries, it just answers. The cognitive load of managing load balancers dropped significantly.
If you want to try this yourself, the Data Plane API is straightforward to deploy. Pair it with any LLM that can make HTTP requests, give it the API docs, and you're set.
- UniFi BGP docs: help.ui.com/hc/en-us/articles/16271338193559
- HAProxy Peers: haproxy.com/documentation
- FRRouting: frrouting.org