Bugzilla – Bug 1216181
VUL-0: haproxy: Rapid reset attack impact (CVE-2023-44487)
Last modified: 2024-04-19 10:05:18 UTC
The haproxy upstream team posted the following statement about the impact of the "HTTP/2 Rapid Reset Attack" vulnerability:
https://www.mail-archive.com/haproxy@formilux.org/msg44136.html

... Since I was bored wasting my time trying to harm the process from scripts, I finally wrote a small program that does the same but much faster. For now it's boring as well. In short:

- The concurrency issue was addressed 5 years ago with the commit above, so all maintained versions are immune to it. The principle is that each H2 connection tracks both the number of protocol-level streams attached to it and the number of application-level streams, and it is the latter that enforces the limit, preventing the connection from processing more requests until the number of active streams is back within the limit. In the worst case (i.e. if the attacker is downloading and its window updates can no longer enter), the streams will simply time out, then the connection, just like on a single non-multiplexed connection, so nothing new here. It also means that no more than the configured limit of streams per connection will reach the hosted application at once.

- The risk of CPU abuse that was also mentioned is not very relevant either. These aborted requests actually cost less CPU than completed ones, and on my laptop I reached up to 100-150k req/s per core (depending on CPU thermal throttling), which is perfectly within what we normally observe with a standard h2load injection. With fewer streams I could even reach 1 million requests per second in total, because they were aborted before being turned into a regular stream, so the load was essentially between haproxy and the client.

Here is the reference to the 2018 mitigation commit: f210191dc ("BUG/MEDIUM: h2: don't accept new streams if conn_streams are still in excess")
https://github.com/haproxy/haproxy/issues/2312

We are tracking all "HTTP/2 Rapid Reset Attack" related bugs within bsc#1216123.
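The accounting described in the first bullet can be illustrated with a toy model. This is a hedged sketch, not haproxy's actual code: the class and method names (`H2Connection`, `open_stream`, `rapid_reset`, `app_complete`) and the limit value are illustrative assumptions. It shows why a client that opens and immediately resets streams cannot exceed the configured concurrency: the limit is enforced on the application-level stream count, which a protocol-level RST_STREAM does not release.

```python
class H2Connection:
    """Toy model of per-connection stream accounting (illustrative only)."""

    def __init__(self, limit):
        self.limit = limit            # configured max concurrent streams
        self.protocol_streams = 0     # streams open at the H2 framing layer
        self.app_streams = 0          # streams handed to the application

    def open_stream(self):
        """Client sends HEADERS. The application-level counter, not the
        protocol-level one, gates new work: if it is at the limit, the
        request is not processed until the count drops again."""
        if self.app_streams >= self.limit:
            return False
        self.protocol_streams += 1
        self.app_streams += 1
        return True

    def rapid_reset(self):
        """Client immediately sends RST_STREAM: the protocol-level stream
        closes, but the application-level stream lingers until the
        application side completes, so resets cannot bypass the limit."""
        self.protocol_streams -= 1

    def app_complete(self):
        """Application finishes a stream; capacity is released."""
        self.app_streams -= 1


conn = H2Connection(limit=3)
# Attacker opens and instantly resets streams in a loop:
for _ in range(3):
    assert conn.open_stream()
    conn.rapid_reset()
# All protocol-level streams are closed, yet a new stream is still refused,
# because the application-level count still holds the slots:
assert conn.protocol_streams == 0
assert conn.open_stream() is False
# Only once the application completes work does capacity return:
conn.app_complete()
assert conn.open_stream() is True
```

In the worst case the lingering application-level streams simply time out, as the statement notes, which is the same behavior as an idle non-multiplexed connection.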
I notified Willy about this and opened https://github.com/haproxy/haproxy/issues/2312 for tracking.