Bug 1199011 - io_uring based I/O often gets into "hung task" state on x86_64
Status: RESOLVED FIXED
Classification: openSUSE
Product: PUBLIC SUSE Linux Enterprise Server 15 SP4
Component: Kernel
Version: unspecified
Hardware: x86-64 Other
Priority: P2 - High
Severity: Major
Assigned To: David Disseldorp
Reported: 2022-04-29 08:45 UTC by Dirk Mueller
Modified: 2022-08-01 13:37 UTC
CC List: 5 users

Description Dirk Mueller 2022-04-29 08:45:52 UTC
Recently we switched the Open Build Service based VM workers from aio=threads to aio=io_uring, which appears to have scalability advantages on compute hosts with larger core counts.
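
For context, the switch amounts to changing the aio= property of the qemu drive configuration, roughly like this (a sketch; the disk path, format and remaining options are placeholders, not our actual worker setup):

    # thread-pool based AIO (old)
    qemu-system-x86_64 ... -drive file=/path/to/disk.qcow2,format=qcow2,cache=none,aio=threads
    # io_uring based AIO (new; needs qemu >= 5.0 built against liburing)
    qemu-system-x86_64 ... -drive file=/path/to/disk.qcow2,format=qcow2,cache=none,aio=io_uring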

However, it turned out that this is highly unstable on x86_64; for some reason it works fine on aarch64. The symptom is that after a few minutes of uptime, all I/O starts to hang.

sysrq-w is empty; sysrq-t shows things like this:

[ 1522.209449] task:qemu-kvm        state:S stack:    0 pid:31369 ppid: 28067 flags:0x00000220
[ 1522.209454] Call Trace:
[ 1522.209456]  <TASK>
[ 1522.209459]  __schedule+0x2cd/0x10a0
[ 1522.209464]  ? enqueue_hrtimer+0x2f/0x80
[ 1522.209470]  ? hrtimer_start_range_ns+0x136/0x300
[ 1522.209478]  schedule+0x41/0xc0
[ 1522.209482]  schedule_hrtimeout_range_clock+0x8d/0x100
[ 1522.209489]  ? hrtimer_init_sleeper+0x80/0x80
[ 1522.209496]  do_sys_poll+0x3e0/0x5f0
[ 1522.209505]  ? ioapic_service+0x120/0x140 [kvm fa6457d3aa9e02c8a430fcc7389d471963b8f339]
[ 1522.209597]  ? ioapic_set_irq+0xbb/0x250 [kvm fa6457d3aa9e02c8a430fcc7389d471963b8f339]
[ 1522.209688]  ? poll_select_finish+0x220/0x220
[ 1522.209694]  ? poll_select_finish+0x220/0x220
[ 1522.209699]  ? poll_select_finish+0x220/0x220
[ 1522.209705]  ? poll_select_finish+0x220/0x220
[ 1522.209710]  ? poll_select_finish+0x220/0x220
[ 1522.209715]  ? poll_select_finish+0x220/0x220
[ 1522.209721]  ? poll_select_finish+0x220/0x220
[ 1522.209725]  ? poll_select_finish+0x220/0x220
[ 1522.209731]  ? recalibrate_cpu_khz+0x10/0x10
[ 1522.209737]  ? ktime_get_ts64+0x4c/0xe0
[ 1522.209743]  ? __x64_sys_ppoll+0xa5/0xe0
[ 1522.209748]  __x64_sys_ppoll+0xa5/0xe0
[ 1522.209753]  do_syscall_64+0x5b/0x80
[ 1522.209758]  ? syscall_exit_to_user_mode+0x18/0x40
[ 1522.209763]  ? do_syscall_64+0x67/0x80
[ 1522.209767]  ? do_syscall_64+0x67/0x80
[ 1522.209771]  ? syscall_exit_to_user_mode+0x18/0x40
[ 1522.209776]  ? do_syscall_64+0x67/0x80
[ 1522.209779]  ? exc_page_fault+0x67/0x150
[ 1522.209785]  entry_SYSCALL_64_after_hwframe+0x44/0xae


I can also reproduce the issue without any qemu/kvm involvement just by launching "io_uring-bench", a test utility that is part of the upstream Linux kernel tree (not packaged in SUSE as far as I can see). When launching 10-30 of those programs in parallel, the host I/O locks up after just a few seconds.
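
Roughly like this (the instance count and target file are illustrative; io_uring-bench is built from the kernel tree's tools/io_uring directory):

    # hammer the same target from 20 io_uring-bench instances in parallel
    for i in $(seq 1 20); do
        ./io_uring-bench /dev/nvme0n1 &
    done
    wait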

It is not easy for me to debug what is going on, as attaching gdb to processes is no longer possible.

Using the "Kernel:stable:backports" builds, it appears the issue can be reproduced with

5.14: https://download.opensuse.org/repositories/home:/tiwai:/kernel:/5.14/backport/x86_64/kernel-default-5.14.15-lp153.1.1.g2ba76d0.x86_64.rpm


and it can no longer be reproduced with

5.15: https://download.opensuse.org/repositories/home:/tiwai:/kernel:/5.15/backport/x86_64/kernel-default-devel-5.15.13-lp153.1.1.g01786ae.x86_64.rpm

So something in 5.15 appears to have fixed the issue. fs/io_uring.c saw 128 commits between those two versions, unfortunately including larger reworks, so it's not easy to bisect. Looking over the commit messages, there are about 9 commits that were cc'ed to stable@ but that we do not have in the SLE kernel.
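
For the record, this is roughly how I am trying to narrow it down (a sketch against a mainline checkout, bracketing the package versions above):

    # only bisect commits touching the io_uring sources
    git bisect start -- fs/io_uring.c fs/io-wq.c
    git bisect good v5.14
    git bisect bad v5.15
    # per step: build + boot the kernel, run the parallel io_uring-bench
    # reproducer, then mark the result
    git bisect good   # or: git bisect bad

    # list the commits in the range that were cc'ed to stable@
    git log --oneline v5.14..v5.15 --grep='stable@' -- fs/io_uring.c fs/io-wq.c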
Comment 1 Dirk Mueller 2022-04-29 08:49:27 UTC
Adding these patches from the 5.14->5.15 diff does NOT fix the issue:

+       patches.suse/0001-io_uring-be-smarter-about-waking-multiple-CQ-ring-wa.patch
+       patches.suse/0002-io_uring-use-kvmalloc-for-fixed-files.patch
+       patches.suse/0003-io_uring-inline-fixed-part-of-io_file_get.patch
+       patches.suse/0004-io_uring-rename-io_file_supports_async.patch
+       patches.suse/0005-io_uring-avoid-touching-inode-in-rw-prep.patch
+       patches.suse/0006-io_uring-clean-io-wq-callbacks.patch
+       patches.suse/0007-io_uring-remove-unnecessary-PF_EXITING-check.patch
+       patches.suse/0008-io_uring-refactor-io_alloc_req.patch
+       patches.suse/0009-io_uring-don-t-halt-iopoll-too-early.patch
+       patches.suse/0010-io_uring-add-more-locking-annotations-for-submit.patch
+       patches.suse/0011-io_uring-optimise-io_cqring_wait-hot-path.patch
+       patches.suse/0012-io_uring-extract-a-helper-for-ctx-quiesce.patch
+       patches.suse/0013-io_uring-move-io_put_task-definition.patch
+       patches.suse/0014-io_uring-move-io_rsrc_node_alloc-definition.patch
+       patches.suse/0015-io_uring-inline-io_free_req_deferred.patch
+       patches.suse/0016-io_uring-deduplicate-open-iopoll-check.patch
+       patches.suse/0017-io_uring-improve-ctx-hang-handling.patch
+       patches.suse/0018-io_uring-kill-unused-IO_IOPOLL_BATCH.patch
+       patches.suse/0019-io_uring-drop-exec-checks-from-io_req_task_submit.patch
+       patches.suse/0020-io_uring-optimise-putting-task-struct.patch
+       patches.suse/0021-io_uring-move-io_fallback_req_func.patch
+       patches.suse/0022-io_uring-cache-__io_free_req-d-requests.patch
+       patches.suse/0023-io_uring-remove-redundant-args-from-cache_free.patch
+       patches.suse/0024-io_uring-use-inflight_entry-instead-of-compl.list.patch
+       patches.suse/0025-io_uring-inline-struct-io_comp_state.patch
+       patches.suse/0026-io_uring-remove-extra-argument-for-overflow-flush.patch
+       patches.suse/0027-io_uring-inline-io_poll_remove_waitqs.patch
+       patches.suse/0028-io_uring-clean-up-tctx_task_work.patch
+       patches.suse/0029-io_uring-remove-file-batch-get-optimisation.patch
+       patches.suse/0030-io_uring-run-timeouts-from-task_work.patch
+       patches.suse/0031-io_uring-run-linked-timeouts-from-task_work.patch
+       patches.suse/0032-io_uring-run-regular-file-completions-from-task_work.patch
+       patches.suse/0033-io_uring-remove-IRQ-aspect-of-io_ring_ctx-completion.patch
+       patches.suse/0034-io_uring-move-req_ref_get-and-friends.patch
+       patches.suse/0035-io_uring-remove-req_ref_sub_and_test.patch
+       patches.suse/0036-io_uring-remove-submission-references.patch
+       patches.suse/0037-io_uring-skip-request-refcounting.patch
Comment 2 Dirk Mueller 2022-04-29 08:58:19 UTC
These are the messages you typically see when the hang hits:

[  976s] [  971.826717][  T128] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.                                                    
[  976s] [  971.827003][  T128] INFO: task rpm:1109 blocked for more than 491 seconds.                                                                       
[  976s] [  971.827110][  T128]       Not tainted 5.17.4-1-default #1
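
(The dumps above come from the standard hung-task watchdog and sysrq interfaces, e.g.:)

    # dump blocked (uninterruptible) tasks, then all task states, to dmesg
    echo w > /proc/sysrq-trigger
    echo t > /proc/sysrq-trigger
    # lower the watchdog timeout (typically 120s by default) to get reports sooner
    echo 30 > /proc/sys/kernel/hung_task_timeout_secs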
Comment 3 Dirk Mueller 2022-05-03 20:01:28 UTC
Still bisecting; the fix is likely somewhere in

https://github.com/torvalds/linux/commit/60f8fbaa954452104a1914e21c5cc109f7bf276a
Comment 4 Dirk Mueller 2022-05-05 08:08:45 UTC
Bisecting points to:

f95dc207b93da9c88ddbb7741ec3730c6657b88e is the first bad commit
commit f95dc207b93da9c88ddbb7741ec3730c6657b88e
Author: Jens Axboe <axboe@kernel.dk>
Date:   Tue Aug 31 13:57:32 2021 -0600

    io-wq: split bounded and unbounded work into separate lists
    
    We've got a few issues that all boil down to the fact that we have one
    list of pending work items, yet two different types of workers to
    serve them. This causes some oddities around workers switching type and
    even hashed work vs regular work on the same bounded list.
    
    Just separate them out cleanly, similarly to how we already do
    accounting of what is running. That provides a clean separation and
    removes some corner cases that can cause stalls when handling IO
    that is punted to io-wq.
    
    Fixes: ecc53c48c13d ("io-wq: check max_worker limits if a worker transitions bound state")
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

Indeed, Takashi imported the patch that this commit fixes:

commit e27807761f00266a659c2f9f4b34d787d53e3ce7
Author: Takashi Iwai <tiwai@suse.de>
Date:   Wed Sep 15 11:32:38 2021 +0200

    io-wq: check max_worker limits if a worker transitions bound
    state (stable-5.14.4).

but not the fixes afterwards. Takashi, any reason for that?
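
(Where the fixing commit first landed, and whether stable picked it up, can be checked from a clone with the stable remote added, e.g.:)

    # first mainline tag containing the fixing commit
    git describe --contains f95dc207b93da9c88ddbb7741ec3730c6657b88e
    # stable cherry-picks reference the upstream sha in their commit message
    git log --oneline stable/linux-5.14.y --grep=f95dc207b93da9c88ddbb7741ec3730c6657b88e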
Comment 5 David Disseldorp 2022-05-05 10:03:42 UTC
(In reply to Dirk Mueller from comment #4)
> Bisecting points to:
> 
> f95dc207b93da9c88ddbb7741ec3730c6657b88e is the first bad commit
...
> Indeed, Takashi imported the patch that this commit fixes:
...
> but not the fixes afterwards. Takashi, any reason for that?

The upstream stable/linux-5.14.y branch seems to be getting mainline cherry-picked changes for io-uring at random, which we appear to have inherited here. One further inconsistency I've seen is that we're carrying a cherry pick of:

commit 08bdbd39b58474d762242e1fadb7f2eb9ffcca71
Author: Jens Axboe <axboe@kernel.dk>
Date:   Tue Aug 31 06:57:25 2021 -0600

    io-wq: ensure that hash wait lock is IRQ disabling
...
Fixes: a9a4aa9fbfc5 ("io-wq: wqe and worker locks no longer need to be IRQ safe")

without:

commit a9a4aa9fbfc5b87f315c63d9a317648774a46879
Author: Jens Axboe <axboe@kernel.dk>
Date:   Mon Aug 30 06:33:08 2021 -0600

    io-wq: wqe and worker locks no longer need to be IRQ safe

and corresponding io-uring IRQ context changes.

I have some changes queued up (see expanded tree at https://gitlab.suse.de/dmdiss/linux/-/tree/bsc1198811_iouring_timeout_uaf_15sp4), but this doesn't yet include f95dc207b93da9c88ddbb7741ec3730c6657b88e, etc.
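
(One way to spot such gaps, as a sketch from a mainline clone: search the range for follow-ups that declare a Fixes: tag against a commit the tree carries.)

    git log --oneline v5.14..v5.15 --grep='Fixes: ecc53c48c13d'
    git log --oneline v5.14..v5.15 --grep='Fixes: a9a4aa9fbfc5'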
Comment 6 Takashi Iwai 2022-05-09 11:25:44 UTC
(In reply to Dirk Mueller from comment #4)
> but not the fixes afterwards. Takashi, any reason for that?

As David already clarified, we merely take the upstream 5.14.x stable tree, which doesn't seem to cherry-pick the full set of fixes.
Comment 7 David Disseldorp 2022-05-10 21:06:44 UTC
(In reply to David Disseldorp from comment #5)
...
> I have some changes queued up (see expanded tree at
> https://gitlab.suse.de/dmdiss/linux/-/tree/bsc1198811_iouring_timeout_uaf_15sp4),
> but this doesn't yet include f95dc207b93da9c88ddbb7741ec3730c6657b88e, etc.

I've included the f95dc207b93da9c88ddbb7741ec3730c6657b88e fix, attempted to clean up some of the inherited stable/linux-5.14.y mess, and pushed to https://gitlab.suse.de/dmdiss/linux/ -> bsc1198811_iouring_timeout_uaf_15sp4_with_bsc1199011. It's passing the liburing unit tests, but I'd appreciate it if you could give it a spin in your repro test environment. I'll submit against the quilt tree tomorrow.
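
(For reference, the liburing unit tests are run from a checkout of https://github.com/axboe/liburing, roughly as follows; some tests need root:)

    git clone https://github.com/axboe/liburing.git
    cd liburing
    ./configure && make -j
    make runtests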
Comment 8 David Disseldorp 2022-05-11 11:42:01 UTC
@Takashi: JFYI, my pending quilt-tree submission reverts quite a few of the stable/linux-5.14.y patches back to mainline. E.g.:

> -From 9f9d088a4b7d0b6451e9cdd5225d7b192608ca38 Mon Sep 17 00:00:00 2001
> +From 0242f6426ea78fbe3933b44f8c55ae93ec37f6cc Mon Sep 17 00:00:00 2001
>  From: Jens Axboe <axboe@kernel.dk>
>  Date: Tue, 31 Aug 2021 13:53:00 -0600
>  Subject: [PATCH] io-wq: fix queue stalling race
>  Git-commit: 0242f6426ea78fbe3933b44f8c55ae93ec37f6cc
>  Patch-mainline: v5.15-rc1
> -References: stable-5.14.19
> -
> -commit 0242f6426ea78fbe3933b44f8c55ae93ec37f6cc upstream.
>  
>  We need to set the stalled bit early, before we drop the lock for adding
>  us to the stall hash queue. If not, then we can race with new work being
> @@ -14,15 +11,16 @@ queued between adding us to the stall hash and io_worker_handle_work()
>  marking us stalled.
>  
>  Signed-off-by: Jens Axboe <axboe@kernel.dk>
> -Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
>  Acked-by: Takashi Iwai <tiwai@suse.de>
> +[ddiss: revert from stable-5.14.19 to mainline fix]
> +Acked-by: David Disseldorp <ddiss@suse.de>
...
Comment 19 Stefan Weiberg 2022-05-20 07:16:51 UTC
I don't see a reason for respinning the GMC for this issue, but we should target the fix for the first kernel update before FCS. During the acceptance week we would only consider respinning and including fixes for SHIP_STOPPER+ bugs affecting the installation media; for this bug a maintenance update should be a safe way to resolve the issue.
Comment 20 Dirk Mueller 2022-06-04 18:05:09 UTC
Appears resolved in the update kernel.
Comment 33 Swamp Workflow Management 2022-07-21 22:34:58 UTC
SUSE-SU-2022:2520-1: An update that solves 49 vulnerabilities, contains 26 features and has 207 fixes is now available.

Category: security (important)
Bug References: 1055117,1061840,1065729,1071995,1089644,1103269,1118212,1121726,1137728,1156395,1157038,1157923,1175667,1179439,1179639,1180814,1183682,1183872,1184318,1184924,1187716,1188885,1189998,1190137,1190208,1190336,1190497,1190768,1190786,1190812,1191271,1191663,1192483,1193064,1193277,1193289,1193431,1193556,1193629,1193640,1193787,1193823,1193852,1194086,1194111,1194191,1194409,1194501,1194523,1194526,1194583,1194585,1194586,1194625,1194765,1194826,1194869,1195099,1195287,1195478,1195482,1195504,1195651,1195668,1195669,1195775,1195823,1195826,1195913,1195915,1195926,1195944,1195957,1195987,1196079,1196114,1196130,1196213,1196306,1196367,1196400,1196426,1196478,1196514,1196570,1196723,1196779,1196830,1196836,1196866,1196868,1196869,1196901,1196930,1196942,1196960,1197016,1197157,1197227,1197243,1197292,1197302,1197303,1197304,1197362,1197386,1197501,1197601,1197661,1197675,1197761,1197817,1197819,1197820,1197888,1197889,1197894,1197915,1197917,1197918,1197920,1197921,1197922,1197926,1198009,1198010,1198012,1198013,1198014,1198015,1198016,1198017,1198018,1198019,1198020,1198021,1198022,1198023,1198024,1198027,1198030,1198034,1198058,1198217,1198379,1198400,1198402,1198410,1198412,1198413,1198438,1198484,1198577,1198585,1198660,1198802,1198803,1198806,1198811,1198826,1198829,1198835,1198968,1198971,1199011,1199024,1199035,1199046,1199052,1199063,1199163,1199173,1199260,1199314,1199390,1199426,1199433,1199439,1199482,1199487,1199505,1199507,1199605,1199611,1199626,1199631,1199650,1199657,1199674,1199736,1199793,1199839,1199875,1199909,1200015,1200019,1200045,1200046,1200144,1200205,1200211,1200259,1200263,1200284,1200315,1200343,1200420,1200442,1200475,1200502,1200567,1200569,1200571,1200599,1200600,1200608,1200611,1200619,1200692,1200762,1200763,1200806,1200807,1200808,1200809,1200810,1200812,1200813,1200815,1200816,1200820,1200821,1200822,1200824,1200825,1200827,1200828,1200829,1200830,1200845,1200882,1200925,1201050,1201080,1201160,1201171,1201177,1201193,1201196,1201218,1201222,1201228,1201251,1201381,1201471,1201524
CVE References: CVE-2021-26341,CVE-2021-33061,CVE-2021-4204,CVE-2021-44879,CVE-2021-45402,CVE-2022-0264,CVE-2022-0494,CVE-2022-0617,CVE-2022-1012,CVE-2022-1016,CVE-2022-1184,CVE-2022-1198,CVE-2022-1205,CVE-2022-1462,CVE-2022-1508,CVE-2022-1651,CVE-2022-1652,CVE-2022-1671,CVE-2022-1679,CVE-2022-1729,CVE-2022-1734,CVE-2022-1789,CVE-2022-1852,CVE-2022-1966,CVE-2022-1972,CVE-2022-1974,CVE-2022-1998,CVE-2022-20132,CVE-2022-20154,CVE-2022-21123,CVE-2022-21125,CVE-2022-21127,CVE-2022-21166,CVE-2022-21180,CVE-2022-21499,CVE-2022-2318,CVE-2022-23222,CVE-2022-26365,CVE-2022-26490,CVE-2022-29582,CVE-2022-29900,CVE-2022-29901,CVE-2022-30594,CVE-2022-33740,CVE-2022-33741,CVE-2022-33742,CVE-2022-33743,CVE-2022-33981,CVE-2022-34918
JIRA References: SLE-13513,SLE-13521,SLE-15442,SLE-17855,SLE-18194,SLE-18234,SLE-18375,SLE-18377,SLE-18378,SLE-18382,SLE-18385,SLE-18901,SLE-18938,SLE-18978,SLE-19001,SLE-19026,SLE-19242,SLE-19249,SLE-19253,SLE-19924,SLE-21315,SLE-23643,SLE-24072,SLE-24093,SLE-24350,SLE-24549
Sources used:
openSUSE Leap 15.4 (src):    dtb-aarch64-5.14.21-150400.24.11.1, kernel-64kb-5.14.21-150400.24.11.1, kernel-debug-5.14.21-150400.24.11.1, kernel-default-5.14.21-150400.24.11.1, kernel-default-base-5.14.21-150400.24.11.1.150400.24.3.6, kernel-docs-5.14.21-150400.24.11.1, kernel-kvmsmall-5.14.21-150400.24.11.1, kernel-obs-build-5.14.21-150400.24.11.1, kernel-obs-qa-5.14.21-150400.24.11.1, kernel-source-5.14.21-150400.24.11.1, kernel-syms-5.14.21-150400.24.11.1, kernel-zfcpdump-5.14.21-150400.24.11.1
SUSE Linux Enterprise Workstation Extension 15-SP4 (src):    kernel-default-5.14.21-150400.24.11.1
SUSE Linux Enterprise Module for Live Patching 15-SP4 (src):    kernel-default-5.14.21-150400.24.11.1, kernel-livepatch-SLE15-SP4_Update_1-1-150400.9.5.3
SUSE Linux Enterprise Module for Legacy Software 15-SP4 (src):    kernel-default-5.14.21-150400.24.11.1
SUSE Linux Enterprise Module for Development Tools 15-SP4 (src):    kernel-docs-5.14.21-150400.24.11.1, kernel-obs-build-5.14.21-150400.24.11.1, kernel-source-5.14.21-150400.24.11.1, kernel-syms-5.14.21-150400.24.11.1
SUSE Linux Enterprise Module for Basesystem 15-SP4 (src):    kernel-64kb-5.14.21-150400.24.11.1, kernel-default-5.14.21-150400.24.11.1, kernel-default-base-5.14.21-150400.24.11.1.150400.24.3.6, kernel-source-5.14.21-150400.24.11.1, kernel-zfcpdump-5.14.21-150400.24.11.1
SUSE Linux Enterprise High Availability 15-SP4 (src):    kernel-default-5.14.21-150400.24.11.1

NOTE: This line indicates an update has been released for the listed product(s). At times this might be only a partial fix. If you have questions please reach out to maintenance coordination.
Comment 35 Swamp Workflow Management 2022-08-01 13:37:24 UTC
SUSE-SU-2022:2615-1: An update that solves 48 vulnerabilities, contains 26 features and has 202 fixes is now available.

Category: security (important)
Bug References: 1055117,1061840,1065729,1071995,1089644,1103269,1118212,1121726,1137728,1156395,1157038,1157923,1175667,1179439,1179639,1180814,1183682,1183872,1184318,1184924,1187716,1188885,1189998,1190137,1190208,1190336,1190497,1190768,1190786,1190812,1191271,1191663,1192483,1193064,1193277,1193289,1193431,1193556,1193629,1193640,1193787,1193823,1193852,1194086,1194111,1194191,1194409,1194501,1194523,1194526,1194583,1194585,1194586,1194625,1194765,1194826,1194869,1195099,1195287,1195478,1195482,1195504,1195651,1195668,1195669,1195775,1195823,1195826,1195913,1195915,1195926,1195944,1195957,1195987,1196079,1196114,1196130,1196213,1196306,1196367,1196400,1196426,1196478,1196514,1196570,1196723,1196779,1196830,1196836,1196866,1196868,1196869,1196901,1196930,1196942,1196960,1197016,1197157,1197227,1197243,1197292,1197302,1197303,1197304,1197362,1197386,1197501,1197601,1197661,1197675,1197761,1197817,1197819,1197820,1197888,1197889,1197894,1197915,1197917,1197918,1197920,1197921,1197922,1197926,1198009,1198010,1198012,1198013,1198014,1198015,1198016,1198017,1198018,1198019,1198020,1198021,1198022,1198023,1198024,1198027,1198030,1198034,1198058,1198217,1198379,1198400,1198402,1198412,1198413,1198438,1198484,1198577,1198585,1198660,1198802,1198803,1198806,1198811,1198826,1198835,1198968,1198971,1199011,1199024,1199035,1199046,1199052,1199063,1199163,1199173,1199260,1199314,1199390,1199426,1199433,1199439,1199482,1199487,1199505,1199507,1199605,1199611,1199626,1199631,1199650,1199657,1199674,1199736,1199793,1199839,1199875,1199909,1200015,1200019,1200045,1200046,1200144,1200205,1200211,1200259,1200263,1200284,1200315,1200343,1200420,1200442,1200475,1200502,1200567,1200569,1200571,1200572,1200599,1200600,1200608,1200611,1200619,1200692,1200762,1200763,1200806,1200807,1200808,1200809,1200810,1200812,1200815,1200816,1200820,1200822,1200824,1200825,1200827,1200828,1200829,1200830,1200845,1200882,1200925,1201050,1201160,1201171,1201177,1201193,1201196,1201218,1201222,1201228,1201251,150300
CVE References: CVE-2021-26341,CVE-2021-33061,CVE-2021-4204,CVE-2021-44879,CVE-2021-45402,CVE-2022-0264,CVE-2022-0494,CVE-2022-0617,CVE-2022-1012,CVE-2022-1016,CVE-2022-1184,CVE-2022-1198,CVE-2022-1205,CVE-2022-1508,CVE-2022-1651,CVE-2022-1652,CVE-2022-1671,CVE-2022-1679,CVE-2022-1729,CVE-2022-1734,CVE-2022-1789,CVE-2022-1852,CVE-2022-1966,CVE-2022-1972,CVE-2022-1974,CVE-2022-1998,CVE-2022-20132,CVE-2022-20154,CVE-2022-21123,CVE-2022-21125,CVE-2022-21127,CVE-2022-21166,CVE-2022-21180,CVE-2022-21499,CVE-2022-2318,CVE-2022-23222,CVE-2022-26365,CVE-2022-26490,CVE-2022-29582,CVE-2022-29900,CVE-2022-29901,CVE-2022-30594,CVE-2022-33740,CVE-2022-33741,CVE-2022-33742,CVE-2022-33743,CVE-2022-33981,CVE-2022-34918
JIRA References: SLE-13513,SLE-13521,SLE-15442,SLE-17855,SLE-18194,SLE-18234,SLE-18375,SLE-18377,SLE-18378,SLE-18382,SLE-18385,SLE-18901,SLE-18938,SLE-18978,SLE-19001,SLE-19026,SLE-19242,SLE-19249,SLE-19253,SLE-19924,SLE-21315,SLE-23643,SLE-24072,SLE-24093,SLE-24350,SLE-24549
Sources used:
openSUSE Leap 15.4 (src):    kernel-azure-5.14.21-150400.14.7.1, kernel-source-azure-5.14.21-150400.14.7.1, kernel-syms-azure-5.14.21-150400.14.7.1
SUSE Linux Enterprise Module for Public Cloud 15-SP4 (src):    kernel-azure-5.14.21-150400.14.7.1, kernel-source-azure-5.14.21-150400.14.7.1, kernel-syms-azure-5.14.21-150400.14.7.1

NOTE: This line indicates an update has been released for the listed product(s). At times this might be only a partial fix. If you have questions please reach out to maintenance coordination.