Bug 1222439 (CVE-2024-26712) - VUL-0: CVE-2024-26712: kernel: powerpc/kasan: error page alignment
Summary: VUL-0: CVE-2024-26712: kernel: powerpc/kasan: error page alignment
Status: RESOLVED FIXED
Alias: CVE-2024-26712
Product: SUSE Security Incidents
Classification: Novell Products
Component: Incidents
Version: unspecified
Hardware: Other Other
Priority: P5 - None Severity: Normal
Target Milestone: ---
Assignee: Kernel Bugs
QA Contact: Security Team bot
URL: https://smash.suse.de/issue/400183/
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2024-04-08 08:08 UTC by SMASH SMASH
Modified: 2024-04-08 08:09 UTC (History)
1 user

See Also:
Found By: Security Response Team
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---


Attachments

Description SMASH SMASH 2024-04-08 08:08:44 UTC
In the Linux kernel, the following vulnerability has been resolved:

powerpc/kasan: Fix addr error caused by page alignment

In kasan_init_region(), when k_start is not page-aligned, k_cur = k_start & PAGE_MASK at the start of the for loop is less than k_start, so
`va = block + k_cur - k_start` is less than block. This va is invalid:
the address range from va up to block was not allocated by
memblock_alloc(), will not be reserved by memblock_reserve() later, and
may therefore be used elsewhere.

As a result, memory overwriting occurs.

For example:
int __init __weak kasan_init_region(void *start, size_t size)
{
[...]
	/* suppose block = dcd97000, k_start = feef7400, k_end = feeff3fe */
	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
	[...]
	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
		/* at the beginning of the for loop:
		 * block(dcd97000) va(dcd96c00) k_cur(feef7000) k_start(feef7400)
		 * va(dcd96c00) is less than block(dcd97000), so va is invalid
		 */
		void *va = block + k_cur - k_start;
		[...]
	}
[...]
}

Therefore, page alignment is performed on k_start before
memblock_alloc() to ensure the validity of the VA address.

References:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2024-26712
https://www.cve.org/CVERecord?id=CVE-2024-26712
https://git.kernel.org/stable/c/0516c06b19dc64807c10e01bb99b552bdf2d7dbe
https://git.kernel.org/stable/c/0c09912dd8387e228afcc5e34ac5d79b1e3a1058
https://git.kernel.org/stable/c/230e89b5ad0a33f530a2a976b3e5e4385cb27882
https://git.kernel.org/stable/c/2738e0aa2fb24a7ab9c878d912dc2b239738c6c6
https://git.kernel.org/stable/c/4a7aee96200ad281a5cc4cf5c7a2e2a49d2b97b0
https://git.kernel.org/stable/c/70ef2ba1f4286b2b73675aeb424b590c92d57b25
https://git.kernel.org/pub/scm/linux/security/vulns.git/plain/cve/published/2024/CVE-2024-26712.mbox
https://bugzilla.redhat.com/show_bug.cgi?id=2273158
Comment 1 Thomas Leroy 2024-04-08 08:09:32 UTC
Kernels are not shipped with KASAN. Closing