Although this helps in some cases, it's a bit disingenuous: if an attacker has more bandwidth than you (easily achievable with amplification attacks), it's game over.
The majority of DDoS protection is about raising the bar, so that attackers have more fun attacking someone else and/or are happy attacking your www instead of your service (because taking down www requires no research to find the target and provides more lols). If you upset the wrong people you may attract more determined attackers; you still want a high bar, though, because that makes the attack more likely to be traceable.
There are a few classes of effective DDoS:
A) Volumetric traffic/packet floods: the only effective defense here is to have tons of bandwidth or use a service that does. Some of the UDP reflectors out there have a very high amplification rate. Null routing can absorb the bandwidth, but smarter attackers will notice when you move your service and change the target.
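To put the amplification point in numbers, here's a rough sketch (the amplification factors are approximate public figures from CERT-style advisories, not my own measurements) of how little upstream bandwidth a reflection attack needs:

```python
# Back-of-envelope sketch: attacker bandwidth needed for a reflection attack.
# Amplification factors below are rough public estimates, not measurements.
AMPLIFICATION = {
    "DNS (open resolver)": 54,
    "NTP (monlist)": 556,
    "memcached (UDP)": 10000,   # reported far higher in the wild
}

def attacker_bandwidth_needed(target_gbps: float, factor: float) -> float:
    """Gbps the attacker must send to reflectors to deliver target_gbps."""
    return target_gbps / factor

for proto, factor in AMPLIFICATION.items():
    need = attacker_bandwidth_needed(10, factor)  # to fill a 10 Gbps pipe
    print(f"{proto}: send ~{need * 1000:.1f} Mbps to deliver 10 Gbps")
```

With NTP-level amplification, saturating a 10 Gbps link takes well under 20 Mbps of attacker upstream, i.e. a home connection.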
B) Application-related, but not application-specific: things like Slowloris to hold connections open, or HTTPS floods to burn all your handshakes per second. Some of these you can filter, but sometimes you just need more machines to process everything. HAProxy can apparently help with Slowloris by detecting and dropping slow connections.
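For the Slowloris case, the usual HAProxy knob is `timeout http-request`, which bounds how long a client may take to send its complete request headers. A minimal, untested sketch (timeout values are illustrative, not recommendations):

```
# Illustrative values only -- tune for your traffic.
defaults
    mode http
    timeout connect      5s
    timeout client       30s
    timeout server       30s
    # A Slowloris client dribbles headers out indefinitely; this caps
    # how long a connection may take to deliver a full request.
    timeout http-request 10s
```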
C) Expensive requests / processing at the application level: if you have a public endpoint that takes 10 seconds to process a request that costs the attacker effectively nothing to generate, that's a prime attack target, and something HAProxy can definitely help with.
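For expensive endpoints, one common HAProxy approach is per-source rate limiting with a stick table. A hedged sketch (the `/search` path, table size, and threshold are hypothetical, not tuned values):

```
frontend fe_web
    bind :80
    # Track per-source HTTP request rate over a 10-second window.
    stick-table type ip size 100k expire 60s store http_req_rate(10s)
    acl expensive path_beg /search
    http-request track-sc0 src if expensive
    # Reject sources hammering the expensive endpoint.
    http-request deny deny_status 429 if expensive { sc_http_req_rate(0) gt 5 }
    default_backend be_app
```

The point is that the attacker's cheap-to-send requests now hit a cheap check in HAProxy instead of your 10-second backend path.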
D) Issues in the IP/TCP stack: sometimes there are gray areas or infrequently used corners of the processing that are very expensive if exercised frequently (e.g. IP fragment reassembly). HAProxy won't help there.
If HAProxy (or whatever) can help you with low-bandwidth DDoS, I think that's still pretty useful.
Let's say you have an office on a gigabit fiber connection going to a network switch capable of processing 10 GbE (assume 10x the speed). Hypothetically, would the switch be able to respond quickly enough that even if the gigabit WAN connection were saturated, performance would merely degrade? Is it just a matter of computing power to deal with the network traffic, or do you actually need bigger network pipes than the attack bandwidth?
Potentially; your equipment could be incapable of handling packets at line rate (which is harder when the packets are small). That's fairly easy to solve, though, especially if you're only looking at GigE: get better hardware and/or software.
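To get a feel for what "line rate with small packets" means, a quick back-of-envelope calculation (my own arithmetic, using the standard Ethernet minimum frame of 64 bytes plus 20 bytes of per-frame wire overhead):

```python
# Worst-case packet rate on gigabit Ethernet. A minimum frame is 64 bytes;
# on the wire each frame also costs 7B preamble + 1B SFD + 12B inter-frame
# gap = 20 bytes of overhead.
LINK_BPS = 1_000_000_000          # 1 Gbps
MIN_FRAME = 64                    # bytes
WIRE_OVERHEAD = 20                # preamble + SFD + inter-frame gap, bytes

pps = LINK_BPS / ((MIN_FRAME + WIRE_OVERHEAD) * 8)
print(f"{pps:,.0f} packets/sec")  # → 1,488,095 packets/sec
```

So a saturated gigabit link of minimum-size packets is ~1.49 million packets per second your gear has to classify and drop; at 10 GbE that's ~14.9 Mpps, which is where commodity kernels start to struggle.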
The problem is if your attacker is sending you more traffic than your incoming bandwidth. Packets will be dropped, and in most cases you won't be able to control which ones. Depending on how the other side is configured, the packets you do get could be highly delayed. That means actual connections to you are likely to see a lot of retransmits, both to you and from you. It's possible to still make some progress in these conditions, but not much, and processing power won't help.
I've done very effective application-layer DDoS attacks in the past with a meager ADSL line over Tor. Completely free. But of course you can also spend thousands on a botnet to flood someone off the planet. That's not the point.
Booters are cheaper than $thousands. And while it may cost someone $thousands to keep you offline for days, it probably doesn't. And even if it did, you probably want a better response than "guess we're offline now, lol, l8r" when someone chooses to spend $dozens shutting you down for a couple hours.
If DDoS protection isn't the point, what is?
I don't think the various broken aspects of the internet are discussed enough in our circles. Like how easy it is for anyone to take you offline. How easy it is to spoof IP addresses. How useless IP address blocking is. How we demand infinite bandwidth for a low, fixed monthly price, yet don't want to be on the hook when our toaster is DoSing our neighbor and causing real financial damage.
But at the same time we share these little haproxy/fail2ban tips that don't work under actual threat, and then we lament that people use services like CloudFlare instead of talking seriously about how we depend on the free services of large companies, whether it's CloudFlare's DDoS protection or Google's reCaptcha, to prevent real abuse.
I don't think they use HAProxy (or at least they don't rely on it heavily). But once you start with properly scalable tools, you "just" need high bandwidth and many machines, and everything becomes easy. Think about it for a second: put a 40 GbE NIC into a single-socket HAProxy 1U pizza box, and you get this for $800. Take 25 of these in a rack, connect them to an L3 switch doing ECMP, and you have 1 Tbps of DDoS absorption capacity. For $20K. I know pretty well that I'm oversimplifying the problem, but it always starts this way, and after that you adjust for various aspects (small packets, reflection using tools like PacketShield, TLS handshakes using more CPU cores, large connection counts using more RAM) and that's about all.
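Spelling out that back-of-envelope math (prices and specs are the commenter's rough figures, not current hardware quotes):

```python
# Rack-level absorption capacity and cost, using the commenter's rough
# estimates (not current hardware prices).
boxes = 25
nic_gbps = 40          # one 40 GbE NIC per 1U box
cost_per_box = 800     # USD, hypothetical

total_gbps = boxes * nic_gbps
total_cost = boxes * cost_per_box
print(f"{total_gbps} Gbps (~{total_gbps / 1000:.0f} Tbps) for ${total_cost:,}")
# → 1000 Gbps (~1 Tbps) for $20,000
```

Note that 25 boxes at $800 comes to $20K; with ECMP spreading flows across the boxes, capacity scales roughly linearly with the number of NICs.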
The heaviest and hardest to maintain features in these environments are the fat stuff that customers want (WAF, monitoring, UI, config versioning, etc). But basic protection is trivial if you can afford the bandwidth.