Bug 1213430 (CVE-2023-38403) - VUL-0: CVE-2023-38403: iperf: integer overflow leading to heap buffer overflow
Summary: VUL-0: CVE-2023-38403: iperf: integer overflow leading to heap buffer overflow
Status: RESOLVED FIXED
Alias: CVE-2023-38403
Product: SUSE Security Incidents
Classification: Novell Products
Component: Incidents
Version: unspecified
Hardware: Other Other
Priority: P3 - Medium  Severity: Major
Target Milestone: ---
Assignee: Security Team bot
QA Contact: Security Team bot
URL: https://smash.suse.de/issue/372741/
Whiteboard: CVSSv3.1:SUSE:CVE-2023-38403:7.4:(AV:...
Keywords:
Depends on:
Blocks:
 
Reported: 2023-07-18 11:57 UTC by Thomas Leroy
Modified: 2023-09-26 12:30 UTC (History)
4 users

See Also:
Found By: Security Response Team
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---


Attachments

Description Thomas Leroy 2023-07-18 11:57:10 UTC
CVE-2023-38403

iperf3 uses the length to determine the size of a dynamically allocated memory buffer in which to store the incoming message. If the length equals 0xffffffff, an integer overflow can be triggered in the receiving iperf3 process (typically the server), which can in turn cause heap corruption and an abort/crash. While this is unlikely to happen during normal iperf3 operation, a suitably crafted client program could send a sequence of bytes on the iperf3 control channel to cause an iperf3 server to crash.

Upstream fix:
https://github.com/esnet/iperf/commit/0ef151550d96cc4460f98832df84b4a1e87c65e9

Reference:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1040830
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2023-38403
https://bugzilla.redhat.com/show_bug.cgi?id=2222204
https://www.cve.org/CVERecord?id=CVE-2023-38403
https://cwe.mitre.org/data/definitions/130.html
https://downloads.es.net/pub/iperf/esnet-secadv-2023-0001.txt.asc
https://github.com/esnet/iperf/issues/1542
Comment 1 Thomas Leroy 2023-07-18 12:00:23 UTC
There is no maintainer for iperf.
@Michal, I assigned the bug to you since you are the last one who updated iperf (in 2018...). Feel free to reassign to someone else if you think there is a better fit.

Affected:
- SUSE:SLE-15:Update
- openSUSE:Factory
Comment 4 Michal Svec 2023-07-24 12:28:36 UTC
iperf is more or less actively maintained in OBS:
https://build.opensuse.org/package/show/network:utilities/iperf

Perhaps someone can just take the latest version and submit as
a MU to 15.4/15.5/Factory?
Comment 5 Dirk Mueller 2023-07-25 07:58:28 UTC
(In reply to Michal Svec from comment #4)

> iperf is more or less actively maintained in OBS:
> https://build.opensuse.org/package/show/network:utilities/iperf
> 
> Perhaps someone can just take the latest version and submit as
> a MU to 15.4/15.5/Factory?

I submitted the security backport separately, so the immediate need is solved. A version update requires an ECO; we can do a version update, however. Would you be able to create the ECO and push that through the approvals?
Comment 6 Michal Svec 2023-07-25 08:08:08 UTC
Since iperf is only in PH, it's enough to submit to Leap 15.4/15.5 and
PH will inherit it automatically (and no ECO is needed).
Comment 7 Maintenance Automation 2023-07-26 16:47:05 UTC
SUSE-SU-2023:2987-1: An update that solves one vulnerability can now be installed.

Category: security (important)
Bug References: 1213430
CVE References: CVE-2023-38403
Sources used:
openSUSE Leap 15.4 (src): iperf-3.5-150000.3.3.1
openSUSE Leap 15.5 (src): iperf-3.5-150000.3.3.1
SUSE Package Hub 15 15-SP4 (src): iperf-3.5-150000.3.3.1
SUSE Package Hub 15 15-SP5 (src): iperf-3.5-150000.3.3.1
SUSE Enterprise Storage 7.1 (src): iperf-3.5-150000.3.3.1
SUSE Enterprise Storage 7 (src): iperf-3.5-150000.3.3.1

NOTE: This line indicates an update has been released for the listed product(s). At times this might be only a partial fix. If you have questions please reach out to maintenance coordination.