Bug 1219073 - [mdadm] system crashed during md raid device attach/detach operations
Summary: [mdadm] system crashed during md raid device attach/detach operations
Status: NEW
Alias: None
Product: PUBLIC SUSE Linux Enterprise Server 15 SP4
Classification: openSUSE
Component: Other
Version: unspecified
Hardware: x86-64 Other
Importance: P5 - None : Normal
Target Milestone: ---
Assignee: Coly Li
QA Contact:
URL: https://openqa.suse.de/tests/13308075...
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2024-01-23 07:34 UTC by Richard Fan
Modified: 2024-03-26 15:25 UTC
8 users

See Also:
Found By: openQA
Services Priority:
Business Priority:
Blocker: Yes
Marketing QA Status: ---
IT Deployment: ---
santiago.zarate: SHIP_STOPPER? (volkan.oztuzun)


Attachments
system.map (4.83 MB, text/plain), 2024-01-23 07:45 UTC, Richard Fan
vmlinux (12.00 MB, application/gzip), 2024-01-23 07:49 UTC, Richard Fan
dmesg (50.00 KB, text/plain), 2024-01-23 07:51 UTC, Richard Fan
readme (193 bytes, text/plain), 2024-01-23 07:52 UTC, Richard Fan
test script (9.69 KB, application/x-shellscript), 2024-01-23 07:54 UTC, Richard Fan

Description Richard Fan 2024-01-23 07:34:33 UTC
## Description

The issue is sporadic, but it is critical: the system crashes during the mdadm tests.

Please see the attached files for the crash dump logs, and let me know if any further information is required.

Some md raid operations can be seen below:

# cat /proc/mdstat
Personalities : [raid0] [raid1] 
md1054 : active raid1 loop43[2] loop42[1] loop41[0]
      522240 blocks super 1.2 [3/3] [UUU]
      [=======>.............]  resync = 38.5% (201984/522240) finish=0.0min speed=201984K/sec
      
unused devices: <none>
# fdisk -l /dev/md1054
Disk /dev/md1054: 510 MiB, 534773760 bytes, 1044480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
# mkfs.ext4 /dev/md1054
mke2fs 1.46.4 (18-Aug-2021)
Discarding device blocks:      0/522240             done                            
Creating filesystem with 522240 1k blocks and 130560 inodes
Filesystem UUID: 989c4afc-61d7-4062-8e1d-69ef9eab3a5b
Superblock backups stored on blocks: 
	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Allocating group tables:  0/64     done                            
Writing inode tables:  0/64     done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information:  0/64     done
# mount /dev/md1054 /var/tmp/mdadm_test/13261/mnt
# dd if=/dev/urandom of=random_data.raw bs=100M count=1
1+0 records in
1+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.340053 s, 308 MB/s
4539918c216c62bcaa768a4ad0553766  random_data.raw
Copying random file 1 ...
# cp random_data.raw /var/tmp/mdadm_test/13261/mnt/random_1.raw
Copying random file 2 ...
# cp random_data.raw /var/tmp/mdadm_test/13261/mnt/random_2.raw
Copying random file 3 ...
# cp random_data.raw /var/tmp/mdadm_test/13261/mnt/random_3.raw
Copying random file 4 ...
# cp random_data.raw /var/tmp/mdadm_test/13261/mnt/random_4.raw
# md5sum /var/tmp/mdadm_test/13261/mnt/random_1.raw
4539918c216c62bcaa768a4ad0553766  /var/tmp/mdadm_test/13261/mnt/random_1.raw
# md5sum /var/tmp/mdadm_test/13261/mnt/random_2.raw
4539918c216c62bcaa768a4ad0553766  /var/tmp/mdadm_test/13261/mnt/random_2.raw
# md5sum /var/tmp/mdadm_test/13261/mnt/random_3.raw
4539918c216c62bcaa768a4ad0553766  /var/tmp/mdadm_test/13261/mnt/random_3.raw
# md5sum /var/tmp/mdadm_test/13261/mnt/random_4.raw
4539918c216c62bcaa768a4ad0553766  /var/tmp/mdadm_test/13261/mnt/random_4.raw
# mdadm /dev/md1054 --fail /dev/loop42
mdadm: set /dev/loop42 faulty in /dev/md1054
             State : active, degraded, resyncing 
# cat /proc/mdstat
Personalities : [raid0] [raid1] 
md1054 : active raid1 loop43[2] loop42[1](F) loop41[0]
      522240 blocks super 1.2 [3/2] [U_U]
      [=============>.......]  resync = 65.9% (344640/522240) finish=0.0min speed=86160K/sec
      
unused devices: <none>
# md5sum /var/tmp/mdadm_test/13261/mnt/random_1.raw [system crashed here....]
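
The failure sequence above hinges on the array state reported by /proc/mdstat after failing a member: `[3/2] [U_U]` means one of three members is missing. A minimal offline sketch (not the attached test script) that parses the mdstat snippets shown above and flags degraded arrays:

```python
import re

def parse_mdstat(text):
    """Map each mdN array in mdstat-style text to its member counts
    and whether it is degraded (active members < total members)."""
    arrays = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"(md\d+)\s*:", line)
        if m:
            current = m.group(1)
            continue
        if current:
            # Status line, e.g. "522240 blocks super 1.2 [3/2] [U_U]"
            m = re.search(r"\[(\d+)/(\d+)\]\s*\[([U_]+)\]", line)
            if m:
                total, active = int(m.group(1)), int(m.group(2))
                arrays[current] = {
                    "total": total,
                    "active": active,
                    "degraded": active < total,
                }
                current = None
    return arrays

sample = """Personalities : [raid0] [raid1]
md1054 : active raid1 loop43[2] loop42[1](F) loop41[0]
      522240 blocks super 1.2 [3/2] [U_U]

unused devices: <none>
"""

print(parse_mdstat(sample))
# → {'md1054': {'total': 3, 'active': 2, 'degraded': True}}
```

Run against the degraded snippet from the log, it confirms md1054 lost one of its three members after `mdadm --fail /dev/loop42`.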

#########################################################################

## Observation

openQA test in scenario sle-15-SP4-Server-DVD-Incidents-TERADATA-x86_64-mau-extratests2@64bit fails in
[mdadm](https://openqa.suse.de/tests/13308075/modules/mdadm/steps/6)

## Test suite description
Testsuite maintained at https://gitlab.suse.de/qa-maintenance/qam-openqa-yml. Run console tests against aggregated test repo


## Reproducible

Fails since (at least) Build [:31365:tomcat](https://openqa.suse.de/tests/13307913)


## Expected result

Last good: [:32173:MozillaFirefox](https://openqa.suse.de/tests/13307450) (or more recent)


## Further details

Always latest result in this scenario: [latest](https://openqa.suse.de/tests/latest?arch=x86_64&distri=sle&flavor=Server-DVD-Incidents-TERADATA&machine=64bit&test=mau-extratests2&version=15-SP4)
Comment 1 Richard Fan 2024-01-23 07:45:38 UTC
Created attachment 872080 [details]
system.map
Comment 2 Richard Fan 2024-01-23 07:49:05 UTC
Created attachment 872081 [details]
vmlinux
Comment 3 Richard Fan 2024-01-23 07:51:41 UTC
Created attachment 872082 [details]
dmesg
Comment 4 Richard Fan 2024-01-23 07:52:13 UTC
Created attachment 872083 [details]
readme
Comment 5 Richard Fan 2024-01-23 07:52:58 UTC
The core file is >20 MB, so please ping me if you need it.
Comment 6 Richard Fan 2024-01-23 07:54:22 UTC
Created attachment 872084 [details]
test script
Comment 7 Richard Fan 2024-01-23 08:12:57 UTC
Here are some serial logs:

[  745.956569][T17364] md: resync of RAID array md1054
[  747.201224][T17389] EXT4-fs (md1054): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
[  750.972135][T17420] md/raid1:md1054: Disk failure on loop42, disabling device.
[  750.972135][T17420] md/raid1:md1054: Operation continuing on 2 devices.
[  750.984157][T17364] md: md1054: resync interrupted.
[  751.170168][T17439] md: resync of RAID array md1054
[  751.401757][    C0] ------------[ cut here ]------------
[  751.402663][    C0] kernel BUG at ../mm/filemap.c:1596!
[  751.403515][    C0] invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
[  751.404482][    C0] CPU: 0 PID: 12 Comm: ksoftirqd/0 Kdump: loaded Not tainted 5.14.21-150400.24.103-default #1 SLE15-SP4 3bc766336f6bbcb4e05d29c2ae85689428be8050
[  751.406809][    C0] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552c-rebuilt.opensuse.org 04/01/2014
[  751.408758][    C0] RIP: 0010:end_page_writeback+0xd5/0xe0
[  751.409653][    C0] Code: 48 8b 07 48 c1 e8 33 83 e0 07 83 f8 04 75 e5 48 8b 47 08 8b 40 68 83 e8 01 83 f8 01 77 d6 5b e9 41 0c 01 00 5b e9 ab 0b 01 00 <0f> 0b 66 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 40 84 f6 41 54 55
[  751.412765][    C0] RSP: 0018:ffff98780006bd48 EFLAGS: 00010246
[  751.413723][    C0] RAX: 0000000000000000 RBX: ffffe2b0c183d0c0 RCX: 0000000000000000
[  751.414984][    C0] RDX: 0000000000000000 RSI: 0000000000000206 RDI: ffff895481079000
[  751.416255][    C0] RBP: 0000000016ff9000 R08: ffffffffc03df6f0 R09: 0000000000000001
[  751.417516][    C0] R10: ffff8954ea16a838 R11: 0000000000000001 R12: 0000000000001000
[  751.418777][    C0] R13: ffff895487fb4000 R14: 0000000000000000 R15: ffff8954b45aa438
[  751.420046][    C0] FS:  0000000000000000(0000) GS:ffff8954ffc00000(0000) knlGS:0000000000000000
[  751.421459][    C0] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  751.422501][    C0] CR2: 00007f621df1d008 CR3: 0000000002e0c000 CR4: 00000000000006f0
[  751.423763][    C0] Call Trace:
[  751.424300][    C0]  <TASK>
[  751.424764][    C0]  end_bio_extent_writepage+0xe0/0x1c0 [btrfs 318d31fd497a3dd1771871cc9102e5c0179f91f7]
[  751.426336][    C0]  raid_end_bio_io+0x28/0x90 [raid1 d253085f8743a1a13536ea408e79b7e7cbaab0c7]
[  751.427738][    C0]  raid1_end_write_request+0x147/0x3a0 [raid1 d253085f8743a1a13536ea408e79b7e7cbaab0c7]
[  751.429282][    C0]  blk_update_request+0xb8/0x4a0
[  751.430063][    C0]  blk_mq_end_request+0x1a/0x110
[  751.430843][    C0]  blk_complete_reqs+0x35/0x50
[  751.431596][    C0]  __do_softirq+0xd5/0x2c0
[  751.432299][    C0]  run_ksoftirqd+0x2a/0x40
[  751.432999][    C0]  smpboot_thread_fn+0x110/0x1d0
[  751.433779][    C0]  ? sort_range+0x20/0x20
[  751.434463][    C0]  kthread+0x156/0x180
[  751.435106][    C0]  ? set_kthread_struct+0x50/0x50
[  751.435910][    C0]  ret_from_fork+0x22/0x30
[  751.436611][    C0]  </TASK>
[  751.437087][    C0] Modules linked in: raid1 ext4 crc16 mbcache jbd2 raid0 md_mod loop st sd_mod t10_pi lp parport_pc msr xfrm_user xfrm_algo xsk_diag tcp_diag udp_diag raw_diag inet_diag unix_diag af_packet_diag netlink_diag binfmt_misc isofs af_packet nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_tables ebtable_nat ebtable_broute ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_mangle iptable_raw iptable_security ip_set nfnetlink iscsi_ibft iscsi_boot_sysfs ebtable_filter ebtables rfkill ip6table_filter ip6_tables iptable_filter bpfilter snd_hda_codec_generic ledtrig_audio xfs snd_hda_intel snd_intel_dspcfg snd_intel_sdw_acpi ppdev snd_hda_codec hid_generic snd_hda_core snd_hwdep snd_pcm joydev pcspkr snd_timer usbhid virtio_net net_failover failover snd i2c_piix4 parport soundcore button fuse configfs ip_tables
[  751.437132][    C0]  x_tables bochs_drm drm_vram_helper drm_kms_helper xhci_pci xhci_pci_renesas syscopyarea sysfillrect sysimgblt ata_generic fb_sys_fops xhci_hcd cec rc_core drm_ttm_helper ata_piix ttm sr_mod cdrom ahci libahci drm libata usbcore serio_raw virtio_blk virtio_scsi floppy qemu_fw_cfg btrfs blake2b_generic libcrc32c xor raid6_pq sg dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_dh_alua scsi_mod virtio_rng [last unloaded: ppa]
[  751.457161][    C0] Supported: Yes
[    0.000000][    T0] Linux version 5.14.21-150400.24.103-default (geeko@buildhost) (gcc (SUSE Linux) 7.5.0, GNU ld (GNU Binutils; SUSE Linux Enterprise 15) 2.41.0.20230908-150100.7.46) #1 SMP PREEMPT_DYNAMIC Wed Jan 10 13:40:49 UTC 2024 (8afebed)
[    0.000000][    T0] Command line: elfcorehdr=0x71000000  video=1024x768 plymouth.ignore-serial-consoles console=ttyS0 console=tty kernel.softlockup_panic=1 security=apparmor mitigations=auto elevator=deadline sysrq=yes reset_devices acpi_no_memhotplug cgroup_disable=memory nokaslr numa=off irqpoll nr_cpus=1 root=kdump rootflags=bind rd.udev.children-max=8 disable_cpu_apicid=0   panic=1
Comment 8 Richard Fan 2024-01-24 02:31:43 UTC
The issue can be seen on sle15sp5 as well : http://openqa.suse.de/tests/13327359#step/mdadm/6
Comment 10 Marcus Meissner 2024-01-24 16:54:14 UTC
rpm-5.14.21-150400.24.100--sle15-sp4-updates..rpm-5.14.21-150400.24.103--sle15-sp4-ltss-updates

has a massive RAID rewrite that I think caused this regression.
Comment 11 Coly Li 2024-01-25 05:51:13 UTC
(In reply to Marcus Meissner from comment #10)
> rpm-5.14.21-150400.24.100--sle15-sp4-updates..rpm-5.14.21-150400.24.103--
> sle15-sp4-ltss-updates
> 
> has a massive RAID rewrite causing this regression I think.

It is quite probable; we also took several fixes from upstream recently. Let me take a look.
Comment 12 Coly Li 2024-01-25 05:53:05 UTC
Since this is SLE15-SP4-LTSS, I don't plan to add more fixes on top of the problematic patches. Let me check and drop the suspicious git-fixes.
Comment 17 Coly Li 2024-01-25 15:24:22 UTC
Richard,

Is it possible to run similar testing with a test kernel on that many test instances? I tested all the backports on my machine and detected no errors. If I drop the suspicious patches and the panic is no longer observed in our QA testing environment, I will be much more confident.

So far this very probably seems to be an upstream issue. Before figuring out the exact fix, I plan to drop the suspicious patches first.

Thanks.

Coly Li
Comment 18 Coly Li 2024-01-25 15:40:22 UTC
From my intuition (really just intuition, no evidence), I am not very confident in the following 2 git-fixes:
      patches.suse/md-raid1-free-the-r1bio-before-waiting-for-blocked-r-992d.patch
      patches.suse/md-Set-MD_BROKEN-for-RAID1-and-RAID10-9631.patch

If it is possible to run similar testing, I'd like to generate a branch without the above 2 patches and see how lucky we are.

Coly Li
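
Dropping backports from a kernel branch ultimately means removing their entries from the quilt-style patch series. The sketch below is purely illustrative (the series contents and helper are assumptions, not the actual SUSE kernel-source tooling):

```python
# Illustrative only: filter the two suspect git-fixes out of a
# quilt-style patch series list (the series contents are invented).
SUSPECTS = {
    "patches.suse/md-raid1-free-the-r1bio-before-waiting-for-blocked-r-992d.patch",
    "patches.suse/md-Set-MD_BROKEN-for-RAID1-and-RAID10-9631.patch",
}

def drop_suspects(series_lines, suspects=SUSPECTS):
    """Return the series with the suspect entries removed, order preserved."""
    return [line for line in series_lines if line.strip() not in suspects]

series = [
    "patches.suse/md-Set-MD_BROKEN-for-RAID1-and-RAID10-9631.patch",
    "patches.suse/md-raid1-free-the-r1bio-before-waiting-for-blocked-r-992d.patch",
    "patches.suse/nfsd-fix-RELEASE_LOCKOWNER.patch",
]

print(drop_suspects(series))
# → ['patches.suse/nfsd-fix-RELEASE_LOCKOWNER.patch']
```

Preserving the remaining order matters because quilt applies patches in series order.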
Comment 31 Richard Fan 2024-03-01 06:31:20 UTC
The panic issue can be reproduced in the openQA test, 
e.g., https://openqa.suse.de/tests/13655014#step/mdadm/

Can you please help check if the patch is checked in?
Comment 32 Coly Li 2024-03-01 17:48:55 UTC
(In reply to Richard Fan from comment #31)
> The panic issue can be reproduce from openQA test, 
> e.g., https://openqa.suse.de/tests/13655014#step/mdadm/
> 
> Can you please help check if the patch is checked in?

My change to drop the patches was merged into SLE15-SP4-LTSS on Feb 28.

The above testing kernel is 5.14.21-150400.24.108-default, from the git log I see,
==== commit log start ====
tag rpm-5.14.21-150400.24.108
Tagger: Kernel Build Daemon <kbuild@suse.de>
Date:   Thu Feb 15 16:27:48 2024 +0100

Released kernel-5.14.21-150400.24.108 for products
sle15-sp4-ltss-updates

commit d77a474ed7d89ef63b1f2026afa2a66c797659ac (tag: rpm-5.14.21-150400.24.108--sle15-sp4-ltss-updates, tag: rpm-5.14.21-150400.24.108)
Author: NeilBrown <neilb@suse.de>
Date:   Fri Feb 9 11:04:55 2024 +1100

    Refresh patches.suse/nfsd-fix-RELEASE_LOCKOWNER.patch.

    Accidentally removed nfs4_get_stateowner
==== commit log end ====

So the SLES-15-SP4 Build20240229-1 uses the kernel source tagged on Feb 15; my change was not yet merged at that time.

Coly Li
Comment 35 Maintenance Automation 2024-03-14 20:30:09 UTC
SUSE-SU-2024:0900-1: An update that solves 49 vulnerabilities and has five security fixes can now be installed.

Category: security (important)
Bug References: 1211515, 1213456, 1214064, 1218195, 1218216, 1218562, 1218915, 1219073, 1219126, 1219127, 1219146, 1219295, 1219633, 1219653, 1219827, 1219835, 1220009, 1220140, 1220187, 1220238, 1220240, 1220241, 1220243, 1220250, 1220251, 1220253, 1220254, 1220255, 1220257, 1220326, 1220328, 1220330, 1220335, 1220344, 1220350, 1220364, 1220398, 1220409, 1220433, 1220444, 1220457, 1220459, 1220469, 1220649, 1220735, 1220736, 1220796, 1220797, 1220825, 1220845, 1220917, 1220930, 1220931, 1220933
CVE References: CVE-2019-25162, CVE-2021-46923, CVE-2021-46924, CVE-2021-46932, CVE-2021-46934, CVE-2021-47083, CVE-2022-48627, CVE-2023-28746, CVE-2023-5197, CVE-2023-52340, CVE-2023-52429, CVE-2023-52439, CVE-2023-52443, CVE-2023-52445, CVE-2023-52447, CVE-2023-52448, CVE-2023-52449, CVE-2023-52451, CVE-2023-52452, CVE-2023-52456, CVE-2023-52457, CVE-2023-52463, CVE-2023-52464, CVE-2023-52467, CVE-2023-52475, CVE-2023-52478, CVE-2023-52482, CVE-2023-52484, CVE-2023-52530, CVE-2023-52531, CVE-2023-52559, CVE-2023-6270, CVE-2023-6817, CVE-2024-0607, CVE-2024-1151, CVE-2024-23849, CVE-2024-23850, CVE-2024-23851, CVE-2024-26585, CVE-2024-26586, CVE-2024-26589, CVE-2024-26591, CVE-2024-26593, CVE-2024-26595, CVE-2024-26598, CVE-2024-26602, CVE-2024-26603, CVE-2024-26607, CVE-2024-26622
Sources used:
openSUSE Leap 15.4 (src): kernel-syms-5.14.21-150400.24.111.1, kernel-source-5.14.21-150400.24.111.1, kernel-obs-build-5.14.21-150400.24.111.1, kernel-livepatch-SLE15-SP4_Update_24-1-150400.9.3.1, kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1, kernel-obs-qa-5.14.21-150400.24.111.1
openSUSE Leap Micro 5.3 (src): kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1
openSUSE Leap Micro 5.4 (src): kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1
SUSE Linux Enterprise Micro for Rancher 5.3 (src): kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1
SUSE Linux Enterprise Micro 5.3 (src): kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1
SUSE Linux Enterprise Micro for Rancher 5.4 (src): kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1
SUSE Linux Enterprise Micro 5.4 (src): kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1
SUSE Linux Enterprise Live Patching 15-SP4 (src): kernel-livepatch-SLE15-SP4_Update_24-1-150400.9.3.1
SUSE Linux Enterprise High Performance Computing ESPOS 15 SP4 (src): kernel-source-5.14.21-150400.24.111.1, kernel-obs-build-5.14.21-150400.24.111.1, kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1, kernel-syms-5.14.21-150400.24.111.1
SUSE Linux Enterprise High Performance Computing LTSS 15 SP4 (src): kernel-source-5.14.21-150400.24.111.1, kernel-obs-build-5.14.21-150400.24.111.1, kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1, kernel-syms-5.14.21-150400.24.111.1
SUSE Linux Enterprise Desktop 15 SP4 LTSS 15-SP4 (src): kernel-source-5.14.21-150400.24.111.1, kernel-obs-build-5.14.21-150400.24.111.1, kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1, kernel-syms-5.14.21-150400.24.111.1
SUSE Linux Enterprise Server 15 SP4 LTSS 15-SP4 (src): kernel-source-5.14.21-150400.24.111.1, kernel-obs-build-5.14.21-150400.24.111.1, kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1, kernel-syms-5.14.21-150400.24.111.1
SUSE Linux Enterprise Server for SAP Applications 15 SP4 (src): kernel-source-5.14.21-150400.24.111.1, kernel-obs-build-5.14.21-150400.24.111.1, kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1, kernel-syms-5.14.21-150400.24.111.1
SUSE Manager Proxy 4.3 (src): kernel-source-5.14.21-150400.24.111.1, kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1
SUSE Manager Retail Branch Server 4.3 (src): kernel-source-5.14.21-150400.24.111.1, kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1
SUSE Manager Server 4.3 (src): kernel-source-5.14.21-150400.24.111.1, kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1

NOTE: This line indicates an update has been released for the listed product(s). At times this might be only a partial fix. If you have questions please reach out to maintenance coordination.
Comment 36 Maintenance Automation 2024-03-15 16:30:17 UTC
SUSE-SU-2024:0900-2: An update that solves 49 vulnerabilities and has five security fixes can now be installed.

Category: security (important)
Bug References: 1211515, 1213456, 1214064, 1218195, 1218216, 1218562, 1218915, 1219073, 1219126, 1219127, 1219146, 1219295, 1219633, 1219653, 1219827, 1219835, 1220009, 1220140, 1220187, 1220238, 1220240, 1220241, 1220243, 1220250, 1220251, 1220253, 1220254, 1220255, 1220257, 1220326, 1220328, 1220330, 1220335, 1220344, 1220350, 1220364, 1220398, 1220409, 1220433, 1220444, 1220457, 1220459, 1220469, 1220649, 1220735, 1220736, 1220796, 1220797, 1220825, 1220845, 1220917, 1220930, 1220931, 1220933
CVE References: CVE-2019-25162, CVE-2021-46923, CVE-2021-46924, CVE-2021-46932, CVE-2021-46934, CVE-2021-47083, CVE-2022-48627, CVE-2023-28746, CVE-2023-5197, CVE-2023-52340, CVE-2023-52429, CVE-2023-52439, CVE-2023-52443, CVE-2023-52445, CVE-2023-52447, CVE-2023-52448, CVE-2023-52449, CVE-2023-52451, CVE-2023-52452, CVE-2023-52456, CVE-2023-52457, CVE-2023-52463, CVE-2023-52464, CVE-2023-52467, CVE-2023-52475, CVE-2023-52478, CVE-2023-52482, CVE-2023-52484, CVE-2023-52530, CVE-2023-52531, CVE-2023-52559, CVE-2023-6270, CVE-2023-6817, CVE-2024-0607, CVE-2024-1151, CVE-2024-23849, CVE-2024-23850, CVE-2024-23851, CVE-2024-26585, CVE-2024-26586, CVE-2024-26589, CVE-2024-26591, CVE-2024-26593, CVE-2024-26595, CVE-2024-26598, CVE-2024-26602, CVE-2024-26603, CVE-2024-26607, CVE-2024-26622
Sources used:
SUSE Manager Proxy 4.3 (src): kernel-source-5.14.21-150400.24.111.1, kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1, kernel-syms-5.14.21-150400.24.111.1
SUSE Manager Server 4.3 (src): kernel-source-5.14.21-150400.24.111.1, kernel-default-base-5.14.21-150400.24.111.2.150400.24.52.1, kernel-syms-5.14.21-150400.24.111.1

NOTE: This line indicates an update has been released for the listed product(s). At times this might be only a partial fix. If you have questions please reach out to maintenance coordination.
Comment 39 Maintenance Automation 2024-03-22 16:30:05 UTC
SUSE-SU-2024:0977-1: An update that solves 49 vulnerabilities and has five security fixes can now be installed.

Category: security (important)
Bug References: 1211515, 1213456, 1214064, 1218195, 1218216, 1218562, 1218915, 1219073, 1219126, 1219127, 1219146, 1219295, 1219633, 1219653, 1219827, 1219835, 1220009, 1220140, 1220187, 1220238, 1220240, 1220241, 1220243, 1220250, 1220251, 1220253, 1220254, 1220255, 1220257, 1220326, 1220328, 1220330, 1220335, 1220344, 1220350, 1220364, 1220398, 1220409, 1220433, 1220444, 1220457, 1220459, 1220469, 1220649, 1220735, 1220736, 1220796, 1220797, 1220825, 1220845, 1220917, 1220930, 1220931, 1220933
CVE References: CVE-2019-25162, CVE-2021-46923, CVE-2021-46924, CVE-2021-46932, CVE-2021-46934, CVE-2021-47083, CVE-2022-48627, CVE-2023-28746, CVE-2023-5197, CVE-2023-52340, CVE-2023-52429, CVE-2023-52439, CVE-2023-52443, CVE-2023-52445, CVE-2023-52447, CVE-2023-52448, CVE-2023-52449, CVE-2023-52451, CVE-2023-52452, CVE-2023-52456, CVE-2023-52457, CVE-2023-52463, CVE-2023-52464, CVE-2023-52467, CVE-2023-52475, CVE-2023-52478, CVE-2023-52482, CVE-2023-52484, CVE-2023-52530, CVE-2023-52531, CVE-2023-52559, CVE-2023-6270, CVE-2023-6817, CVE-2024-0607, CVE-2024-1151, CVE-2024-23849, CVE-2024-23850, CVE-2024-23851, CVE-2024-26585, CVE-2024-26586, CVE-2024-26589, CVE-2024-26591, CVE-2024-26593, CVE-2024-26595, CVE-2024-26598, CVE-2024-26602, CVE-2024-26603, CVE-2024-26607, CVE-2024-26622
Maintenance Incident: [SUSE:Maintenance:33016](https://smelt.suse.de/incident/33016/)
Sources used:
SUSE Linux Enterprise Micro for Rancher 5.3 (src):
 kernel-source-rt-5.14.21-150400.15.71.1
SUSE Linux Enterprise Micro 5.3 (src):
 kernel-source-rt-5.14.21-150400.15.71.1
SUSE Linux Enterprise Micro for Rancher 5.4 (src):
 kernel-source-rt-5.14.21-150400.15.71.1
SUSE Linux Enterprise Micro 5.4 (src):
 kernel-source-rt-5.14.21-150400.15.71.1
SUSE Linux Enterprise Live Patching 15-SP4 (src):
 kernel-livepatch-SLE15-SP4-RT_Update_19-1-150400.1.3.1

NOTE: This line indicates an update has been released for the listed product(s). At times this might be only a partial fix. If you have questions please reach out to maintenance coordination.
Comment 40 Coly Li 2024-03-26 10:00:51 UTC
Can we close this bug report?
Comment 41 Richard Fan 2024-03-26 10:10:35 UTC
(In reply to Coly Li from comment #40)
> Can we close this bug report?

Based on the openQA result, the issue is rarely seen now. Thanks, Coly, for the kind help!