So I was scratching my head for a good couple of minutes trying to figure out how this works, being familiar only with HTTP Response Splitting/HTTP Cache Poisoning.
So it seems that somewhere along the years while I hadn't been paying any attention to websec, it became common practice to send requests from different clients through the same TLS connection. And due to the non-conforming way HTTP/2–HTTP/1.1 interop was implemented by these web servers/load balancers, request boundaries were not delimited correctly, making it possible to inject requests on behalf of follow-up clients.
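For intuition, here's a minimal sketch of that boundary confusion (Python, with made-up request bytes, not a real exploit). In HTTP/2 the body length comes from the framing layer, so a downgrading front-end can forward a Content-Length header that disagrees with the actual body; a back-end that trusts Content-Length then treats the surplus bytes as the start of the *next* request on the shared connection:

```python
# Hypothetical downgraded request: the front-end forwarded 4 + 46 body bytes,
# but the Content-Length header (which HTTP/2 framing ignored) says 4.
downgraded = (
    b"POST /search HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: 4\r\n"
    b"\r\n"
    b"abcdGET /admin HTTP/1.1\r\nHost: example.com\r\n\r\n"
)

# Naive back-end parsing: read headers, then exactly Content-Length body bytes.
head, _, rest = downgraded.partition(b"\r\n\r\n")
headers = dict(line.split(b": ", 1) for line in head.split(b"\r\n")[1:])
body_len = int(headers[b"Content-Length"])
body, leftover = rest[:body_len], rest[body_len:]

print(body)      # what the back-end thinks the request body is: b'abcd'
print(leftover)  # the smuggled prefix, parsed as the next client's request
```

The leftover bytes get glued onto whatever the next (innocent) client sends down the same back-end connection, which is what makes this a cross-user attack.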
Fine, I get the issue. Sounds like another "good enough" optimization that backfired.
What is the solution, aside from playing the patching whack-a-mole game? Should a hypothetical HTTP/2.1 work without a strict TLS requirement, so that protocol downgrades aren't necessary to squeeze out extra performance (unless I misunderstand why the HTTP/2–HTTP/1.1 bridge was in place)? Or is the problem that some application servers still don't support HTTP/2 out of the box?
It's worth knowing that this is an extension of a very widespread attack on HTTP/1.1; many (maybe most?) HTTP/1.1 implementations were broken by desync attacks just a couple of years ago.
Out of the ecosystems I’m familiar with, Python application servers have terrible HTTP/2 support: neither gunicorn nor uwsgi supports it, and even the new hotness like uvicorn is pretty far from it.
I don’t think Ruby is doing much better? Correct me if I’m wrong.
But why would you need perfect HTTP/2 support in a real-world application server? It is never going to terminate client traffic; it will speak with a load balancer, which can speak HTTP/1.1 with it. Sure, if you are at web scale (or even well below it) you want everything on HTTP/2 for optimization's sake. But in the rest of the cases, even if you are on a solo project, you can easily enough put an nginx in front of it, or a cloud-native solution, or HAProxy, or whatever.
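A minimal sketch of that setup with nginx (hostnames, certificate paths, and the upstream port are placeholders): terminate HTTP/2 over TLS at the edge, and speak plain HTTP/1.1 to the application server behind it.

```nginx
# Edge: HTTP/2 over TLS toward clients, HTTP/1.1 toward the backend.
server {
    listen 443 ssl http2;
    server_name app.example.com;                   # placeholder

    ssl_certificate     /etc/nginx/certs/app.pem;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/app.key;

    location / {
        proxy_http_version 1.1;          # backend speaks HTTP/1.1
        proxy_set_header Connection "";  # allow keep-alive to the backend
        proxy_pass http://127.0.0.1:8000;  # e.g. a gunicorn/uwsgi upstream
    }
}
```

Of course, this is exactly the HTTP/2-to-HTTP/1.1 bridge the article is about, so it only helps if the proxy does the downgrade strictly.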
The whole point of this article is that proxies speaking HTTP/2 with clients and HTTP/1.1 with servers introduce new vulnerabilities. The author found such vulnerabilities in AWS ALB, several WAF solutions, F5 BIG-IP, and others.
Yeah, but serving traffic from an application server directly is probably even worse, with a plethora of other failure modes.
EDIT: and yes, I understand that you should use HTTP/2 on the LB and HTTP/2 on the backend to get the best of both worlds.
EDIT2: anyway my opinion is that the general reaction to a security discovery like this one shouldn't be "let's stop using this tech immediately" but "let's get this patched ASAP"
I forwarded this discussion to the lead maintainer of HAProxy and he confirmed that HAProxy is not impacted by this. It doesn't surprise me. He implements things to the strictest interpretation of the specs.
I think the main problem is indeed missing HTTP/2 support in backend servers. This is often just a case of people not being willing to upgrade for various reasons, even if the technology they are using would support HTTP/2 in newer versions.
The problem is that HTTP/2 pretty much forces encryption. Most people don't want to deal with certificate management/rotation on every single microservice's application server.
In practice HTTP/2 forces encryption. For example, Amazon's ALB docs say "Considerations for the HTTP/2 protocol version: The only supported listener protocol is HTTPS." [1]
That assumes your application can connect to the internet and can be accessed from the internet. There is a vast array of offline-only kubernetes clusters.
You can of course still use self-signed certificates (or set up your own "CA"), but you'll hit other problems related to runtime certificate reloads and so on. It's still a lot of work to enable SSL for fully offline services.
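For what it's worth, minting the self-signed certificate itself is the easy part (service name and file paths below are placeholders; `-addext` needs OpenSSL 1.1.1+). It's everything after this step that's the work:

```shell
# Self-signed cert for a hypothetical internal service, valid one year.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout svc.key -out svc.crt -days 365 \
  -subj "/CN=payments.internal" \
  -addext "subjectAltName=DNS:payments.internal"
```

You then still have to distribute trust for it and get every service to reload it on rotation, which is where the pain described above comes in.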
On Kubernetes, cert-manager might get certificates for you, but you'll still need to make sure the application correctly reloads those certs (many application frameworks have no way to reload a certificate at runtime).