Hacker News

So I was scratching my head for a good couple of minutes trying to figure out how this works, being familiar only with HTTP Response Splitting/HTTP Cache Poisoning.

So it seems that somewhere along the years, while I haven't been paying any attention to websec, it became common practice to send requests from different clients through the same backend connection. And because of the non-conforming way HTTP/2 to HTTP/1.1 interop was implemented by these webservers/load balancers, request boundaries were not delimited correctly, making it possible to inject requests that get attributed to subsequent clients.
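A toy sketch of that framing mismatch (not a working exploit; the paths and header values are made up). HTTP/2 frames the body by length on the wire, so a lying Content-Length header can survive an HTTP/2-to-HTTP/1.1 translation if the front-end forwards it without validating it against the real body length:

```python
# Attacker-chosen request body: itself a complete HTTP/1.1 request.
body = b"GET /admin HTTP/1.1\r\nHost: victim\r\n\r\n"

# What the proxy might emit after downgrading the HTTP/2 request.
downgraded = (
    b"POST /x HTTP/1.1\r\n"
    b"Host: victim\r\n"
    b"Content-Length: 0\r\n"   # lie: the real body is len(body) bytes
    b"\r\n"
) + body

# The back-end trusts Content-Length: it reads a 0-byte body and leaves
# the rest of the buffer to be parsed as the *next* request on the shared,
# reused connection -- i.e. it gets attributed to a different client.
header_end = downgraded.index(b"\r\n\r\n") + 4
smuggled = downgraded[header_end:]   # claimed Content-Length was 0
print(smuggled.split(b"\r\n")[0].decode())  # -> GET /admin HTTP/1.1
```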

Fine I get the issue. Sounds like another "good enough" optimization that backfired.

What is the solution, aside from playing the patching whack-a-mole game? Should maybe an HTTP/2.1 protocol work without a strict TLS requirement, so that protocol downgrades aren't necessary to squeeze out extra performance (unless I misunderstand why the HTTP/2-to-HTTP/1.1 bridge was in place)? Or is the problem that some application servers still don't support HTTP/2 out of the box?



It's worth knowing that this is an extension of a very widespread attack on HTTP/1.1; many (maybe most?) 1.1 implementations were shown to be broken by desync attacks just a couple of years ago.


Out of the ecosystems I’m familiar with, Python application servers have terrible http2 support: neither gunicorn nor uwsgi supports it, and even new hotness like uvicorn is pretty far from it.

I don’t think Ruby is doing much better? Correct me if I’m wrong.


But why would you need perfect HTTP/2 support in a real-world application server? It's never going to terminate client traffic; it will talk to a load balancer, which can speak HTTP/1.1 with it. Sure, if you're at webscale (or even well below it) you want everything on HTTP/2 for optimization's sake. But in the rest of the cases, even for a solo project, you can easily enough put nginx in front of it, or a cloud-native solution, or HAProxy, or whatever.
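For instance, a minimal nginx front for a plain HTTP/1.1 app server might look like this (a sketch; hostname, certificate paths, and the upstream port are placeholders):

```nginx
server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate     /etc/nginx/app.pem;
    ssl_certificate_key /etc/nginx/app.key;

    location / {
        # nginx speaks HTTP/1.1 to the backend -- exactly the kind of
        # downgrade hop the article is about
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }
}
```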


The whole point of this article is that proxies speaking HTTP/2 with clients and HTTP/1.1 with servers introduce new vulnerabilities. The author found such vulnerabilities in AWS ALB, several WAF solutions, F5 BIG-IP, and others.


Yeah, but serving traffic from an application server directly is probably even worse in a plethora of other failure modes.

EDIT: and yes I understand that you should use http/2 on the LB and http/2 on the backend to get the best of both worlds.

EDIT2: anyway my opinion is that the general reaction to a security discovery like this one shouldn't be "let's stop using this tech immediately" but "let's get this patched ASAP"


No one was talking about serving from the application directly. The issue is in the scenario you are describing. Please read the article.


Multiplexing API requests could be a thing if only HTTP/2 pass-through from proxies were more popular.


I forwarded this discussion to the lead maintainer of HAProxy and he confirmed that HAProxy is not impacted by this. It doesn't surprise me. He implements things to the strictest interpretation of the specs.


The building blocks are there. In Python we have the wonderful (and wonderfully sans-io) h2[1] by Cory Benfield. E.g. here's a Twisted h2-using implementation: https://python-hyper.org/projects/hyper-h2/en/stable/twisted...

h2: https://github.com/python-hyper/h2


I think the main problem is indeed missing HTTP2 support in backend servers. This is often just a case of people not being willing to upgrade for various reasons, even if the technology they are using would support HTTP2 in newer versions.


The problem is that HTTP/2 pretty much forces encryption. Most people don't want to deal with certificate management/rotation on every single microservice's application server.


Why does HTTP/2 force encryption? The HTTP/2 RFC (RFC 7540) also defines how to run HTTP/2 over plaintext.

Terminating HTTP/2 over TLS on a web frontend and then HTTP/2 over plaintext to the application servers sounds like a viable model.
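HAProxy, for instance, can keep HTTP/2 end-to-end by speaking cleartext h2c to the backend (a sketch; the certificate path and addresses are made up):

```
frontend fe_web
    mode http
    bind :443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
    default_backend be_app

backend be_app
    mode http
    # cleartext HTTP/2 (h2c) to the app server: no HTTP/2 -> HTTP/1.1
    # downgrade in the middle, so no downgrade-smuggling surface
    server app1 10.0.0.10:8080 proto h2
```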


In practice HTTP/2 forces encryption. For example, Amazon's ALB docs say "Considerations for the HTTP/2 protocol version: The only supported listener protocol is HTTPS." [1]

[1]: https://docs.aws.amazon.com/elasticloadbalancing/latest/appl...


My Apache server is fine speaking HTTP/2 over port 80:

  curl -v --http2 --http2-prior-knowledge http://localhost
  * Connected to localhost (::1) port 80 (#0)
  * Using HTTP2, server supports multi-use
  * Connection state changed (HTTP/2 confirmed)
  * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
  * Using Stream ID: 1 (easy handle 0x559a7c6545c0)
  > GET / HTTP/2
  > Host: localhost
  > user-agent: curl/7.74.0
  > accept: */*
  > 
  * Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
  < HTTP/2 301 
  < date: Fri, 06 Aug 2021 11:16:05 GMT
  *snip*
Sadly none of the services that I reverse proxy through Apache support HTTP/2..
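For reference, cleartext HTTP/2 ("h2c") in Apache is enabled with mod_http2's Protocols directive (assuming the module is loaded):

```
# httpd.conf -- allow HTTP/2 over both TLS (h2) and cleartext (h2c)
Protocols h2 h2c http/1.1
```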


It makes sense that internet-facing clients and servers only support HTTP/2 over TLS. But that's different for internal connections or debug tools.


If you're running microservices, won't you be running them on a platform?

If on Kubernetes, just install cert-manager. Or if using FaaS, your platform will already do TLS termination, no?


That assumes your application can connect to the internet and can be accessed from the internet. There is a vast array of offline-only kubernetes clusters.

You can still of course use self-signed certificates (or setting up your own "CA"), but you'll hit other problems related to runtime certificate reloads and so on. It's still a lot of work to enable SSL for fully offline services.
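The self-signed part itself is one command (names are hypothetical; `-addext` needs OpenSSL 1.1.1+) -- it's everything around it, distribution and reloading, that's the work:

```shell
# Self-signed cert for an internal service; with a private CA you would
# instead sign a CSR with the CA key.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout svc.key -out svc.crt -days 365 \
  -subj "/CN=payments.internal" \
  -addext "subjectAltName=DNS:payments.internal"

# Inspect what was produced
openssl x509 -in svc.crt -noout -subject
```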


On kubernetes, cert-manager might get certificates for you, but you'll still need to make sure the application correctly reloads those certs (many application frameworks have no way to reload a certificate at runtime).


Your reverse proxy can handle the TLS termination. Nginx or Traefik or whatever.

Besides, if you're doing microservices, whatever is managing them should be able to restart them gracefully. No need for reload as such.


This kind of terrible implementation is a lot of why encrypted QUIC exists.


Though I think these same problems would happen over HTTP/3/QUIC.

HTTP/3 is almost exactly HTTP/2, just over QUIC, which means it should be possible to force the same desyncs talked about in the article.



