Bugzilla – Bug 843234
VUL-0: CVE-2013-2014: openstack-keystone: no limitation for requests and headers size which can cause a crash
Last modified: 2013-12-06 13:23:52 UTC
Via Red Hat Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=957028
Bug 957028 - (CVE-2013-2014) CVE-2013-2014 OpenStack keystone: no limitation for requests and headers size which can cause a crash

Yaguang Tang reports: concurrent requests with a large POST body can crash the keystone process. This can be exploited by a malicious user and lead to a denial of service against the cloud service provider.

The OpenStack project has confirmed: concurrent Keystone POST requests with large body messages are held in memory without filtering or rate limiting; this can lead to resource exhaustion on the Keystone server.

External references:
https://bugs.launchpad.net/keystone/+bug/1098177
https://bugs.launchpad.net/ossn/+bug/1155566
bugbot adjusting priority
The following "Best-Practice" to get around this issue was summarized by Robert Clark:
https://bugs.launchpad.net/ossn/+bug/1155566/comments/14

HTTP POST limiting advised to avoid Essex/Folsom Keystone DoS
---

### Summary ###
Concurrent Keystone POST requests with large body messages are held in memory without filtering or rate limiting; this can lead to resource exhaustion on the Keystone server.

### Affected Services / Software ###
Keystone, Databases

### Discussion ###
Keystone stores POST messages in memory before validation. Concurrent submission of multiple large POST messages can cause the Keystone process to be killed due to memory exhaustion, resulting in a remote denial of service. In many cases Keystone will be deployed behind a load-balancer or proxy that can rate limit POST messages inbound to Keystone. Grizzly is protected against this through the sizelimit middleware.

### Recommended Actions ###
If you are in a situation where Keystone is directly exposed to incoming POST messages and not protected by the sizelimit middleware, there are a number of load-balancing/proxy options; we suggest you consider one of the following:

Nginx: Open-source, high-performance HTTP server and reverse proxy.
Nginx config: http://wiki.nginx.org/HttpCoreModule#client_max_body_size

Apache: HTTP Server Project
Apache config: http://httpd.apache.org/docs/2.4/mod/core.html#limitrequestbody

### Contacts / References ###
This OSSN bug: https://bugs.launchpad.net/ossn/+bug/1155566
Original Launchpad bug: https://bugs.launchpad.net/keystone/+bug/1098177
OpenStack Security ML: <email address hidden>
OpenStack Security Group: https://launchpad.net/~openstack-ossg
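To illustrate the Nginx option from the advisory above, a minimal reverse-proxy sketch could look like the following. This is only a sketch under assumptions, not part of the OSSN: the addresses, ports and the 100k limit are illustrative, and it assumes Keystone itself is bound to 127.0.0.1 so only the proxy is reachable externally. client_max_body_size and proxy_pass are standard nginx directives.

# Illustrative nginx front-end for the public Keystone endpoint;
# addresses, ports and limit value are assumptions.
server {
    listen 5000;
    client_max_body_size 100k;    # oversized POST bodies get a 413 before reaching Keystone

    location / {
        proxy_pass http://127.0.0.1:5000;   # assumed local-only Keystone listener
    }
}

A second server block with the same limit would be needed for the admin endpoint (port 35357) if that is also exposed.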
So for Cloud 1, the solution would be twofold. If SSL is enabled, we're using Apache, and this would be sufficient:

diff --git a/chef/cookbooks/keystone/templates/default/keystone-apache-ssl.conf.erb b/chef/cookbooks/keystone/templates/default/keystone-apache-ssl.conf.erb
index 7cc59cc..44ac20a 100644
--- a/chef/cookbooks/keystone/templates/default/keystone-apache-ssl.conf.erb
+++ b/chef/cookbooks/keystone/templates/default/keystone-apache-ssl.conf.erb
@@ -37,6 +37,8 @@ Listen 5000
         Order allow,deny
         Allow from all
     </Directory>
+
+    LimitRequestBody 102400
 </VirtualHost>
@@ -67,6 +69,8 @@ Listen 35357
         Order allow,deny
         Allow from all
     </Directory>
+
+    LimitRequestBody 102400
 </VirtualHost>
 </IfDefine>

100k should be more than enough, even with large-ish tokens.

If SSL is disabled, we're using keystone's default WSGI server; for that we would have to backport / use the sizelimit middleware. For Cloud 2, all WSGI pipelines behind the root API routes (/, /v2.0, /v3) already have the sizelimit middleware enabled.
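For the non-SSL case, a sketch of what enabling the sizelimit middleware in keystone's paste pipeline could look like, assuming the Grizzly-era RequestBodySizeLimiter is available (or backported); the pipeline contents are abbreviated and illustrative, and the exact filter factory path should be checked against the keystone version in use:

# keystone-paste.ini (excerpt, illustrative)
[filter:sizelimit]
paste.filter_factory = keystone.middleware:RequestBodySizeLimiter.factory

[pipeline:public_api]
# sizelimit placed first so oversized bodies are rejected before any other
# processing; the remaining filters are elided here ("...").
pipeline = sizelimit ... public_service

If I recall correctly, the enforced limit is then controlled by a max_request_body_size option in keystone.conf, but that should be verified against the backported middleware.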
Cloud 1 is dead and for Cloud 2 we use the (native) Python WSGI servers.