Bug 1205025 - (CVE-2022-3854) VUL-0: CVE-2022-3854: ceph: possible DoS issue in ceph URL processing on RGW backends
Status: NEW
Classification: Novell Products
Product: SUSE Security Incidents
Component: Incidents
Version: unspecified
Hardware: Other
OS: Other
Priority: P3 - Medium
Severity: Normal
Target Milestone: ---
Assigned To: E-Mail List
Security Team bot
https://smash.suse.de/issue/347076/
CVSSv3.1:SUSE:CVE-2022-3854:6.5:(AV:N...
Depends on:
Blocks:
Reported: 2022-11-04 07:52 UTC by Alexander Bergmann
Modified: 2023-02-01 06:13 UTC (History)
6 users

See Also:
Found By: Security Response Team
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---


Attachments

Description Alexander Bergmann 2022-11-04 07:52:32 UTC
rh#2139925

Hello.

We have discovered an issue in URL processing on RGW backends. According to
our tests, the inputs are not properly sanitized. When we tried a URL with
a tenant but without a bucket, the RGW backend crashed.

For example:
curl -vk "https://s3.endpointname.cz/aaa:"
This results in the following crash report:
{
"archived": "2022-10-19 12:19:29.252786",
"backtrace": [
"/lib64/libpthread.so.0(+0x12ce0) [0x7f97376d4ce0]",
"(rgw::ARN::ARN(rgw_bucket const&)+0x42) [0x7f9742366312]",
"(verify_bucket_permission(DoutPrefixProvider const*,
perm_state_base*, rgw_bucket const&, RGWAccessControlPolicy*,
RGWAccessControlPolicy*, boost::optional<rgw::IAM::Policy> const&,
std::vector<rgw::IAM::Policy, std::allocator<rgw::IAM::Policy> > const&,
std::vector<rgw::IAM::Policy, std::allocator<rgw::IAM::Policy> > const&,
unsigned long)+0xa2) [0x7f97423b6b62]",
"(verify_bucket_permission(DoutPrefixProvider const*,
req_state*, unsigned long)+0x83) [0x7f97423b7993]",
"(RGWListBucket::verify_permission(optional_yield)+0x12e)
[0x7f974258b53e]",
"(rgw_process_authenticated(RGWHandler_REST*, RGWOp*&,
RGWRequest*, req_state*, optional_yield, bool)+0x7f7) [0x7f97422376b7]",
"(process_request(rgw::sal::RGWRadosStore*, RGWREST*,
RGWRequest*, std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> > const&, rgw::auth::StrategyRegistry const&,
RGWRestfulIO*, OpsLogSink*, optional_yield, rgw::dmclock::Scheduler*,
std::__cxx11::basic_string<char, std::char_traits<char>,
std::allocator<char> >*, std::chrono::duration<unsigned long,
std::ratio<1l, 1000000000l> >*, int*)+0x2861) [0x7f974223b831]",
"/lib64/libradosgw.so.2(+0x440cd0) [0x7f97421bacd0]",
"/lib64/libradosgw.so.2(+0x4425ea) [0x7f97421bc5ea]",
"make_fcontext()"
],
"ceph_version": "16.2.10",
"crash_id":
"2022-10-18T11:00:29.547546Z_b24a93fd-011a-45f4-96c4-4ebb771548ec",
"entity_name": "client.rgw.server4",
"os_id": "centos",
"os_name": "CentOS Stream",
"os_version": "8",
"os_version_id": "8",
"process_name": "radosgw",
"stack_sig":
"3d8bd0ab19b12dacd44b0317148da8e88c43c00daf88b40957cff16d03e92725",
"timestamp": "2022-10-18T11:00:29.547546Z",
"utsname_hostname": "server4.cz",
"utsname_machine": "x86_64",
"utsname_release": "4.18.0-338.el8.x86_64",
"utsname_sysname": "Linux",
"utsname_version": "#1 SMP Fri Aug 27 17:32:14 UTC 2021"
}

It takes a while to restart the gateway, so if we run the mentioned command
in a loop we can easily DoS all S3/Swift traffic.
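The backtrace suggests that RGW splits the first path segment on ':' into a tenant and a bucket name, and that the resulting empty bucket name reaches rgw::ARN::ARN(rgw_bucket const&) unvalidated. A minimal Python sketch of that split, to illustrate why "aaa:" is the problematic input (the real parser is C++ inside radosgw; this function is a hypothetical illustration, not Ceph code):

```python
def split_tenant_bucket(segment: str):
    """Split a path segment of the form 'tenant:bucket' (illustrative only).

    With the input 'aaa:' the bucket comes back as an empty string, which
    is the kind of unexpected value the backtrace suggests later
    permission-check code does not handle.
    """
    if ':' in segment:
        tenant, bucket = segment.split(':', 1)
    else:
        tenant, bucket = '', segment
    return tenant, bucket

print(split_tenant_bucket('aaa:'))   # ('aaa', '')
```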

Another problem we have discovered involves percent-encoded URLs. It leads
to the same result: a crash of the RGW backend.

For example:
curl -vk "https://s3.endpointname.cz/aaa%3A"

As a quick fix, we set up discarding of every request that contains a
tenant without a specified bucket, or the encoded form of that URL:

http-request deny if { path -m end : } || { path -m end %3A }
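The two ACLs above reject any request path that ends in a literal ':' or in its encoded form '%3A'. The same condition, written as a small Python predicate for clarity (a hypothetical helper mirroring the workaround, not part of it):

```python
def should_deny(path: str) -> bool:
    # Mirrors the two ACLs: deny paths ending in ':' or '%3A'.
    return path.endswith(':') or path.endswith('%3A')

assert should_deny('/aaa:')          # bare tenant, no bucket
assert should_deny('/aaa%3A')        # encoded variant
assert not should_deny('/aaa:bkt')   # tenant with a bucket passes through
```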

We are using the Pacific (16.2.10) version of Ceph.
Can you please look into it?
Thank you.

Best regards,
Michal Strnad

References:
https://bugzilla.redhat.com/show_bug.cgi?id=2139925
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2022-3854
Comment 2 Tim Serong 2022-12-22 04:40:19 UTC
From the backtrace, this is https://tracker.ceph.com/issues/55765 which has since been fixed upstream and backported to Ceph Pacific.  We'll be picking up the fix in our forthcoming Ceph 16.2.11 maintenance update.
Comment 3 Hu 2023-01-11 14:17:50 UTC
Affected:
- SUSE:SLE-15-SP2:Update/ceph                            15.2.16
- SUSE:SLE-15-SP3:Update/ceph                            16.2.9                               
- SUSE:SLE-15-SP3:Update:Products:SES7:Update/ceph       16.2.9                               
- SUSE:SLE-15-SP4:Update/ceph                            16.2.9                               
- openSUSE:Factory/ceph                                  16.2.9                               

Not Affected:
- SUSE:SLE-11-SP3:Update/ceph                            0.80.11                          
- SUSE:SLE-12-SP2:Update/ceph                            10.2.5
- SUSE:SLE-12-SP3:Update/ceph                            12.2.13
- SUSE:SLE-15:Update/ceph                                13.2.4
- SUSE:SLE-15-SP1:Update/ceph                            14.2.22