Bug 253158 - Panic: snapacct_ufs2: bad block - Non-suJ mksnap_ffs(8) crash
Summary: Panic: snapacct_ufs2: bad block - Non-suJ mksnap_ffs(8) crash
Status: Closed FIXED
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: Unspecified
Hardware: Any Any
Importance: --- Affects Only Me
Assignee: freebsd-fs (Nobody)
URL:
Keywords: crash
Depends on:
Blocks:
 
Reported: 2021-02-01 12:55 UTC by Harald Schmalzbauer
Modified: 2022-10-12 00:50 UTC (History)
7 users

See Also:


Attachments
Proposed fix for bug. (666 bytes, patch)
2021-02-01 23:30 UTC, Kirk McKusick
no flags
correct fix for bug (10.75 KB, patch)
2021-02-11 07:27 UTC, Kirk McKusick
no flags
kgdb output with volatile struct uio short_uio1 (11.26 KB, text/plain)
2021-02-14 17:52 UTC, Harald Schmalzbauer
no flags

Description Harald Schmalzbauer 2021-02-01 12:55:50 UTC
Hello,

sorry for the missing patch and/or helpful code/debug links, but I want at least to record my long-standing problem, which I haven't found time to tackle before...

I've been using "factory" ufs-snapshots since FreeBSD 8-ish for almost all my deployments.
Starting with 10, I've been observing crashes which I could never reproduce (below is a one-liner doing exactly what _should_ trigger the crash, but doesn't).

But since the crashes have been guaranteed to show up for several years now, I salvaged such a filesystem, which makes it possible to reproduce the crash.
It's an md(4) image of a 36GB root memory disk, compressed to 10G in size:
http://www.omnilan.de/downloads/fs_slash.gz
As soon as I utilize mksnap_ffs(8) on that md(4),
  (gunzip -c fs_slash.gz  > /tmp/fs_slash
   mdconfig -a -t vnode -f /tmp/fs_slash
   mount /dev/md0 /mnt # to be adapted
   mksnap_ffs /mnt/.snap/.test
  )
I get the following kernel panic:
panic: snapacct_ufs2: bad block
cpuid = 3
time = 1612183847
KDB: stack backtrace:
#0 0xffffffff80c515c5 at kdb_backtrace+0x65
#1 0xffffffff80c04251 at vpanic+0x181
#2 0xffffffff80c040c3 at panic+0x43
#3 0xffffffff80e86ab4 at snapacct_ufs2+0x164
#4 0xffffffff80e8998c at indiracct_ufs2+0x21c
#5 0xffffffff80e86283 at expunge_ufs2+0x3a3
#6 0xffffffff80e84c51 at ffs_snapshot+0x2061
#7 0xffffffff80eac94c at ffs_mount+0x128c
#8 0xffffffff80cd4c3d at vfs_domount+0x8bd
#9 0xffffffff80cd3b87 at vfs_donmount+0x8e7
#10 0xffffffff80cd3269 at sys_nmount+0x69
#11 0xffffffff81038b3c at amd64_syscall+0x10c
#12 0xffffffff8100f1fe at fast_syscall_common+0xf8


--------
Here's a description/test of what I essentially do at setup time, which leads to filesystems causing the mksnap_ffs(8) crash when e.g. FreeBSD boots up and runs background fsck(8) - which requires a snapshot, which crashes the machine...

1. Create a file of known size to fit the content (this example exactly corresponds to the size of the known-bad fs_slash linked above;
   actually I'm using dd if=/dev/null instead of truncate(1) as used in the test loop)
2. newfs without SoftUpdates and/or Journaling, just use a label
3. Mount md(4) and copy content for pre-calculated size
4. do snapshot of the filesystem, move the snap inode, create hard link to it as well as a symlink to the hardlink
5. umount - as done with real setup
6. additional test: re-mount, make another snapshot - that's where the real-world crash happens on roughly every 10th-20th rollout
7. cleanup
==
sh <<'EOF'
count=0; oldpwd="$(pwd)"
while [ $count -lt 50 ]; do
    mdun=`truncate -s 37312k /tmp/snaptest && mdconfig -n -a -t vnode -f /tmp/snaptest`
    newfs -L mksnapFFS -n -E /dev/md${mdun} &&
        mount /dev/md${mdun} /mnt &&
        find -x / \! -type d -flags nosnapshot | cpio -pmdu /mnt &&
        mksnap_ffs /mnt/.snap/.test &&
        cd /mnt &&
        { [ -d usr ] || mkdir usr ; } &&
        mv .snap/.test usr &&
        ln usr/.test .snap/.test &&
        ln -s .test .snap/$(stat -f"%i %B %v test" /mnt/.snap/.test | sha1) &&
        cd "${oldpwd}" &&
        echo Successful: $((count += 1))
    umount /mnt
    mdconfig -d -u ${mdun}
done
EOF
Comment 1 Harald Schmalzbauer 2021-02-01 13:19:51 UTC
Please note that I confused MB and GB in the size of the download link and of the md image!
It's 10MB for download, 36MB uncompressed!

Sorry,
-harry
Comment 2 Kirk McKusick freebsd_committer freebsd_triage 2021-02-01 23:30:07 UTC
Created attachment 222084 [details]
Proposed fix for bug.
Comment 3 Kirk McKusick freebsd_committer freebsd_triage 2021-02-01 23:31:56 UTC
Thanks for your report. It is very helpful when you provide a way to reproduce it. Please try my proposed fix and let me know if it helps.
Comment 4 Harald Schmalzbauer 2021-02-02 14:21:16 UTC
(In reply to Kirk McKusick from comment #3)

Happy to confirm that creating a snapshot with the "known-problematic" FFS md(4) image doesn't panic anymore with stable/13 and your patch applied (FreeBSD 13.0-ALPHA3 #0 stable/13-f19a4e97d-dirty).
I can also confirm that the now successfully creatable snapshot seems to work as well as the already-present snapshot of the problematic FFS image, at least for ls(1) and fsck(8).

No idea about the root cause, or why this problem only appeared once out of dozens of new filesystems.
Any hints are highly appreciated, in case it's possible to explain in a few sentences to someone not familiar with FFS internals.

Thank you very much!
-harry
Comment 5 Kirk McKusick freebsd_committer freebsd_triage 2021-02-02 19:26:51 UTC
(In reply to Harald Schmalzbauer from comment #4)
Thanks for testing. Once I figured it out, the fix seemed obvious. But that is not always the case, so glad it worked out. It is especially helpful to have a reliable way to identify problems and confirm their solution.

The panic arose when the inode allocated for the snapshot came from an inode block not previously used for snapshots.

The filesystem tries to cluster the inodes in a directory to be close together so that directory-wide operations that stat every file (like `ls -l') or read many files (like `grep *.c') only need to read a minimal number of inode blocks (each 32K disk block holds 128 inodes).  If all of the snapshots are taken in /.snap, then they will typically all be allocated from the same inode block.

You filled your filesystem full enough that there were no more locally available inodes (`ls -i /mnt/.snap' shows .factory as inode 4114 while the new snapshot got inode 2830 which is more than 128 inodes away). Writing that new inode caused .factory to notice and make a copy of the old inode block before allowing it to be changed.

To make the copy, .factory needed to allocate a block in which to put it. That new block was allocated between the time the new snapshot allocation froze the filesystem and the time it finished its pass over the block maps to create its own frozen image. The panic occurred because the code assumed that no new blocks could be allocated during that window. That was true except in the rare case just cited, so the fix was to accept that there are legitimate cases where other snapshots can allocate blocks while the filesystem is frozen.

Not sure if that qualifies as a simple explanation.
Comment 6 Harald Schmalzbauer 2021-02-03 05:03:44 UTC
(In reply to Kirk McKusick from comment #5)
It qualifies for another extra award of the highest degree!
A free operating system, free support, and a free lesson in excellence.
Highly appreciated, perfectly clear explanation, as far as my knowledge allows me to follow; thanks a lot for taking the extra time!
-harry
Comment 7 Kirk McKusick freebsd_committer freebsd_triage 2021-02-11 07:27:46 UTC
Created attachment 222359 [details]
correct fix for bug

The previous fix masked the problem but had other unintended side effects.

The reported panic arises because the /mnt/.snap/.factory snapshot allocated the last block in the filesystem. The snapshot code allocates the last block in the filesystem as a way of setting its length to be the size of the filesystem. Part of taking a snapshot is to remove all the earlier snapshots from the image of the newest snapshot so that newer snapshots will not claim the blocks of the earlier snapshots. The panic occurs when the new snapshot finds that both it and an earlier snapshot claim the same block.

The fix is to set the size of the snapshot to be one block after the last block in the filesystem. This block can never be allocated since it is not a valid block in the filesystem. This extra block is used as a place to store the initial list of blocks that the snapshot has already copied and is used to avoid a deadlock in and speed up the copyonwrite() function.
Comment 8 commit-hook freebsd_committer freebsd_triage 2021-02-12 05:35:28 UTC
A commit in branch main references this bug:

URL: https://cgit.FreeBSD.org/src/commit/?id=8563de2f2799b2cb6f2f06e3c9dddd48dca2a986

commit 8563de2f2799b2cb6f2f06e3c9dddd48dca2a986
Author:     Kirk McKusick <mckusick@FreeBSD.org>
AuthorDate: 2021-02-12 05:31:16 +0000
Commit:     Kirk McKusick <mckusick@FreeBSD.org>
CommitDate: 2021-02-12 05:31:16 +0000

    Fix bug 253158 - Panic: snapacct_ufs2: bad block - mksnap_ffs(8) crash

    The panic reported in 253158 arises because the /mnt/.snap/.factory
    snapshot allocated the last block in the filesystem. The snapshot
    code allocates the last block in the filesystem as a way of setting
    its length to be the size of the filesystem. Part of taking a
    snapshot is to remove all the earlier snapshots from the image of
    the newest snapshot so that newer snapshots will not claim the blocks
    of the earlier snapshots. The panic occurs when the new snapshot
    finds that both it and an earlier snapshot claim the same block.

    The fix is to set the size of the snapshot to be one block after
    the last block in the filesystem. This block can never be allocated
    since it is not a valid block in the filesystem. This extra block
    is used as a place to store the initial list of blocks that the
    snapshot has already copied and is used to avoid a deadlock in and
    speed up the ffs_copyonwrite() function.

    Reported by:  Harald Schmalzbauer
    Tested by:    Peter Holm
    PR:           253158
    Sponsored by: Netflix

 sys/ufs/ffs/ffs_snapshot.c | 137 +++++++++++++++++++++++----------------------
 1 file changed, 70 insertions(+), 67 deletions(-)
Comment 9 Harald Schmalzbauer 2021-02-12 09:00:27 UTC
Thank you very much for further analysis and fix.
Unfortunately, this seems to introduce a new problem:

/sbin/mksnap_ffs /.snap/.test2

/usr/sbin/fstyp /.snap/.test2

Reproducible panic:
Fatal trap 12: page fault while in kernel mode
current process: fstyp

  #7 ... uiomove_fromphys+
 #8 ... vn_io_fault_uiomove+
 #9 ... ffs_read+
#10 ... VOP_READ_APV+
#11 ... vn_read+
#12 ... vn_io_doio+
#13 ... vn_io_fault1+
#14 ... vn_io_fault+
#15 ... dofileread+
#16 ... sys_read+
#17 ... amd64_syscall+

Unfortunately I can't test in a debug environment currently; the crash happens during production deployment tests without a serial console, so I just transcribed a stack trace snippet.

Thanks,
-harry
Comment 10 Kirk McKusick freebsd_committer freebsd_triage 2021-02-12 18:30:01 UTC
(In reply to Harald Schmalzbauer from comment #9)
Thanks for your report. I tried various versions of taking snapshots and running /usr/sbin/fstyp on them using the disk image that you originally posted. None of them trigger a panic.

If you are able to create a disk image or a script that demonstrates the problem it would be most helpful.
Comment 11 Harald Schmalzbauer 2021-02-12 18:40:14 UTC
(In reply to Kirk McKusick from comment #10)

Hmmm, I applied your diff to stable/13.  I'm not aware of any ffs-related differences between main and stable/13.
Do you have a stable/13 machine for testing?

Here, it's just a matter of:
mdmfs -s 100m 3 /mnt
mksnap_ffs /mnt/.snap/test2
fstyp /mnt/.snap/test2

This panics a stable/13 kernel built today with 8563de2f2799 from main.

Will double check diffs...

Sorry for the hassle
Comment 12 Harald Schmalzbauer 2021-02-12 19:19:24 UTC
(In reply to Harald Schmalzbauer from comment #11)

To be precise, I didn't apply the commit (8563de2f2799) but the attachment here, which results in the following diff between the main patch and the PR patch:
+++ /opt/deploy-tools/FreeBSD-src/stable_13/sys/ufs/ffs/ffs_snapshot.c  2021-02-12 20:05:44.560632000 +0100
@@ -532,7 +532,7 @@
         * in the direct blocks, but we add the slop for them in case
         * they do not end up there. The snapshot list size may get
         * expanded by one because of an update of an inode block for
-        * an unlinked but still open files when it is expunged.
+        * an unlinked but still open file when it is expunged.
         *
         * Because the direct block pointers are always copied, they
         * are not added to the list. Instead ffs_copyonwrite()
@@ -723,7 +723,7 @@
                free(copy_fs, M_UFSMNT);
                copy_fs = NULL;
        }
-       KASSERT(error != 0 || (error == 0 && sn != NULL && copy_fs != NULL),
+       KASSERT(error != 0 || (sn != NULL && copy_fs != NULL),
                ("missing snapshot setup parameters"));
        /*
         * Resume operation on filesystem.

To my limited knowledge, this can't be the root cause of my panic.
stable/13 is missing two commits from kib@ for ffs_snapshot.c, but I'm quite sure that these don't affect the changes here.
Compiling a kernel with verified sources and the actual 8563de2f2799 applied...
Meanwhile trying to find something I could run main on...
Comment 13 Harald Schmalzbauer 2021-02-12 19:35:01 UTC
(In reply to Harald Schmalzbauer from comment #12)
FreeBSD bosco.local 13.0-STABLE FreeBSD 13.0-STABLE #0 stable/13-18097ee2f-dirty: Fri Feb 12 20:16:32 CET 2021     hes@preed.egn.mo1.omnilan.net:/usr/local/share/deploy-tools/obj/CoffeeLake-stable_13/amd64.amd64/sys/CoffeeLake.bosco  amd64
bosco:/usr/home/admin/#:3 mdmfs -s 100m 3 /mnt
bosco:/usr/home/admin/#:4 mksnap_ffs /mnt/.snap/test2
bosco:/usr/home/admin/#:5 fstyp /mnt/.snap/test2
ssh dead due to crash...

Will compile main for that machine, but this takes a few hours.
Can you confirm that your machine doesn't panic with the 3 commands above? (Note that I'm not running fstyp(8) against /dev/mdX but against /mnt/.snap/test2 itself - this was a feasible way of determining the validity of an auto-detected snapshot in my deploy/update chain.)

Thanks,
-harry
Comment 14 Cy Schubert freebsd_committer freebsd_triage 2021-02-12 20:19:44 UTC
I've been able to reproduce this in a very recent 14-CURRENT in a VM. No dump though, the VM doesn't have the extra disk (yet).
Comment 15 Kirk McKusick freebsd_committer freebsd_triage 2021-02-12 22:25:30 UTC
(In reply to Cy Schubert from comment #14)
I just tried this test and cannot reproduce it:

Script started on Fri Feb 12 14:23:53 2021

guest_13 # uname -a
FreeBSD guest_13 14.0-CURRENT FreeBSD 14.0-CURRENT #0 main-n244791-62374dfa0f0d: Fri Feb 12 14:11:34 UTC 2021     root@guest_13:/usr/src/git-freebsd/src/sys/amd64/compile/GENERIC  amd64

guest_13 # mdmfs -s 100m 3 /mnt

guest_13 # mksnap_ffs /mnt/.snap/test2

guest_13 # fstyp /mnt/.snap/test2
ufs

guest_13 # df
Filesystem      1K-blocks     Used    Avail Capacity  Mounted on
/dev/gpt/rootfs  96446284 21825260 66905324    25%    /
devfs                   1        1        0   100%    /dev
/dev/md3            98716      584    90236     1%    /mnt
guest_13 # exit

Script done on Fri Feb 12 14:25:30 2021

Can you see what you are doing differently from me to reproduce it?
Comment 16 Konstantin Belousov freebsd_committer freebsd_triage 2021-02-12 22:34:52 UTC
Well, configure crash dumps and get a vmcore, then put your kernel with debug
symbols (kernel.full) and the vmcore somewhere. That might be an easier way to
extract the information.
Comment 17 Cy Schubert freebsd_committer freebsd_triage 2021-02-12 22:59:20 UTC
I'll do that after $JOB tonight.
Comment 18 Cy Schubert freebsd_committer freebsd_triage 2021-02-13 00:38:05 UTC
slippy# kgdb /alt/vm64/root/usr/lib/debug/boot/kernel/kernel.debug vmcore.0
GNU gdb (GDB) 10.1 [GDB v10.1 for FreeBSD]
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-portbld-freebsd14.0".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /alt/vm64/root/usr/lib/debug/boot/kernel/kernel.debug...

Unread portion of the kernel message buffer:


Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address	= 0x30
fault code		= supervisor read data, page not present
instruction pointer	= 0x20:0xffffffff809feb04
stack pointer	        = 0x28:0xfffffe0097a84580
frame pointer	        = 0x28:0xfffffe0097a845c0
code segment		= base rx0, limit 0xfffff, type 0x1b
			= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags	= interrupt enabled, resume, IOPL = 0
current process		= 1154 (fstyp)
trap number		= 12
panic: page fault
cpuid = 0
time = 1613173364
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe0097a84230
vpanic() at vpanic+0x181/frame 0xfffffe0097a84280
panic() at panic+0x43/frame 0xfffffe0097a842e0
trap_fatal() at trap_fatal+0x387/frame 0xfffffe0097a84340
trap_pfault() at trap_pfault+0x4f/frame 0xfffffe0097a843a0
trap() at trap+0x27d/frame 0xfffffe0097a844b0
calltrap() at calltrap+0x8/frame 0xfffffe0097a844b0
--- trap 0xc, rip = 0xffffffff809feb04, rsp = 0xfffffe0097a84580, rbp = 0xfffffe0097a845c0 ---
pmap_map_io_transient() at pmap_map_io_transient+0x44/frame 0xfffffe0097a845c0
pmap_copy_pages() at pmap_copy_pages+0xa7/frame 0xfffffe0097a84650
vn_io_fault_pgmove() at vn_io_fault_pgmove+0x99/frame 0xfffffe0097a84680
ffs_read() at ffs_read+0x2e7/frame 0xfffffe0097a84710
VOP_READ_APV() at VOP_READ_APV+0x1f/frame 0xfffffe0097a84730
vn_read() at vn_read+0x1ed/frame 0xfffffe0097a847b0
vn_io_fault_doio() at vn_io_fault_doio+0x43/frame 0xfffffe0097a84810
vn_io_fault1() at vn_io_fault1+0x2c4/frame 0xfffffe0097a84960
vn_io_fault() at vn_io_fault+0x1a4/frame 0xfffffe0097a849e0
dofileread() at dofileread+0x81/frame 0xfffffe0097a84a30
kern_preadv() at kern_preadv+0x62/frame 0xfffffe0097a84a70
sys_pread() at sys_pread+0x8a/frame 0xfffffe0097a84ac0
amd64_syscall() at amd64_syscall+0x10c/frame 0xfffffe0097a84bf0
fast_syscall_common() at fast_syscall_common+0xf8/frame 0xfffffe0097a84bf0
--- syscall (475, FreeBSD ELF64, sys_pread), rip = 0x2c0408fa, rsp = 0x7fffffffe258, rbp = 0x7fffffffe280 ---
Uptime: 2m31s
Dumping 163 out of 480 MB:..10%..20%..30%..40%..49%..59%..69%..79%..89%..98%

__curthread () at /opt/src/git-src/sys/amd64/include/pcpu_aux.h:55
55		__asm("movq %%gs:%P1,%0" : "=r" (td) : "n" (offsetof(struct pcpu,
(kgdb) bt
#0  __curthread () at /opt/src/git-src/sys/amd64/include/pcpu_aux.h:55
#1  doadump (textdump=textdump@entry=1) at /opt/src/git-src/sys/kern/kern_shutdown.c:399
#2  0xffffffff806b7b4b in kern_reboot (howto=260) at /opt/src/git-src/sys/kern/kern_shutdown.c:486
#3  0xffffffff806b7fd0 in vpanic (fmt=<optimized out>, ap=<optimized out>)
    at /opt/src/git-src/sys/kern/kern_shutdown.c:919
#4  0xffffffff806b7dd3 in panic (fmt=<unavailable>) at /opt/src/git-src/sys/kern/kern_shutdown.c:843
#5  0xffffffff80a0f6d7 in trap_fatal (frame=0xfffffe0097a844c0, eva=48)
    at /opt/src/git-src/sys/amd64/amd64/trap.c:915
#6  0xffffffff80a0f72f in trap_pfault (frame=frame@entry=0xfffffe0097a844c0, usermode=false, 
    signo=<optimized out>, signo@entry=0x0, ucode=<optimized out>, ucode@entry=0x0)
    at /opt/src/git-src/sys/amd64/amd64/trap.c:732
#7  0xffffffff80a0ed8d in trap (frame=0xfffffe0097a844c0)
    at /opt/src/git-src/sys/amd64/amd64/trap.c:398
#8  <signal handler called>
#9  0xffffffff809feb04 in pmap_map_io_transient (page=page@entry=0xfffffe0097a845d0, 
    vaddr=vaddr@entry=0xfffffe0097a84610, count=count@entry=2, can_fault=can_fault@entry=0)
    at /opt/src/git-src/sys/amd64/amd64/pmap.c:9979
#10 0xffffffff809fea17 in pmap_copy_pages (ma=0xfffffe0001312f00, a_offset=0, 
    mb=0xfffffe0097a7ca60, b_offset=0, xfersize=xfersize@entry=32768)
    at /opt/src/git-src/sys/amd64/amd64/pmap.c:7825
#11 0xffffffff807b4109 in vn_io_fault_pgmove (ma=0x0, offset=<optimized out>, offset@entry=0, 
    xfersize=xfersize@entry=32768, uio=uio@entry=0xfffffe0097a848b0)
    at /opt/src/git-src/sys/kern/vfs_vnops.c:1513
#12 0xffffffff80937497 in ffs_read (ap=<optimized out>)
    at /opt/src/git-src/sys/ufs/ffs/ffs_vnops.c:789
#13 0xffffffff80a5264f in VOP_READ_APV (vop=0xffffffff80ce8588 <ffs_vnodeops2>, 
    a=a@entry=0xfffffe0097a84760) at vnode_if.c:1050
#14 0xffffffff807b7c8d in VOP_READ (vp=0xfffff800065f65b8, uio=0xfffffe0097a848b0, ioflag=0, 
    cred=<optimized out>) at ./vnode_if.h:542
#15 vn_read (fp=0xfffff8000004d000, uio=0xfffffe0097a848b0, active_cred=0xfffff80007a66b00, 
    flags=<optimized out>, td=<optimized out>) at /opt/src/git-src/sys/kern/vfs_vnops.c:1027
#16 0xffffffff807b79f3 in vn_io_fault_doio (args=args@entry=0xfffffe0097a84970, 
    uio=uio@entry=0xfffffe0097a848b0, td=0xfffffe009781b100)
    at /opt/src/git-src/sys/kern/vfs_vnops.c:1174
#17 0xffffffff807b3a54 in vn_io_fault1 (vp=<optimized out>, uio=uio@entry=0xfffffe0097a84a80, 
    args=args@entry=0xfffffe0097a84970, td=td@entry=0xfffffe009781b100)
    at /opt/src/git-src/sys/kern/vfs_vnops.c:1342
#18 0xffffffff807b0fc4 in vn_io_fault (fp=<optimized out>, uio=0xfffffe0097a84a80, 
    active_cred=<optimized out>, flags=<optimized out>, td=<optimized out>)
    at /opt/src/git-src/sys/kern/vfs_vnops.c:1414
#19 0xffffffff807281f1 in fo_read (fp=0xfffff8000004d000, uio=0xfffffe0097a84a80, active_cred=0x2, 
    flags=-1750577984, td=0xfffffe009781b100) at /opt/src/git-src/sys/sys/file.h:330
#20 dofileread (td=td@entry=0xfffffe009781b100, fd=fd@entry=3, fp=0xfffff8000004d000, 
    auio=auio@entry=0xfffffe0097a84a80, offset=<optimized out>, offset@entry=20676608, 
    flags=<optimized out>, flags@entry=1) at /opt/src/git-src/sys/kern/sys_generic.c:369
#21 0xffffffff80727fc2 in kern_preadv (td=0xfffffe009781b100, fd=3, 
    auio=auio@entry=0xfffffe0097a84a80, offset=20676608)
    at /opt/src/git-src/sys/kern/sys_generic.c:335
#22 0xffffffff80727eca in kern_pread (td=<optimized out>, fd=-1750579696, buf=<optimized out>, 
    nbyte=18446741877230685376, offset=0) at /opt/src/git-src/sys/kern/sys_generic.c:244
#23 sys_pread (td=0xfffffe0097a845d0, uap=<optimized out>)
    at /opt/src/git-src/sys/kern/sys_generic.c:226
#24 0xffffffff80a0ffdc in syscallenter (td=0xfffffe009781b100)
    at /opt/src/git-src/sys/amd64/amd64/../../kern/subr_syscall.c:189
#25 amd64_syscall (td=0xfffffe009781b100, traced=0) at /opt/src/git-src/sys/amd64/amd64/trap.c:1156
#26 <signal handler called>
#27 0x000000002c0408fa in ?? ()
Backtrace stopped: Cannot access memory at address 0x7fffffffe258
(kgdb) 
(kgdb) p page[i]
value has been optimized out
(kgdb) p i
$16 = <optimized out>
(kgdb) p count
$17 = 2
(kgdb) p page[0]
$18 = (vm_page_t) 0xfffffe00005807e8
(kgdb) p page[1]
$19 = (vm_page_t) 0x0
(kgdb) 
(kgdb) up
#10 0xffffffff809fea17 in pmap_copy_pages (ma=0xfffffe0001312f00, a_offset=0, 
    mb=0xfffffe0097a7ca60, b_offset=0, xfersize=xfersize@entry=32768)
    at /opt/src/git-src/sys/amd64/amd64/pmap.c:7825
7825			mapped = pmap_map_io_transient(pages, vaddr, 2, FALSE);
(kgdb) p pages
$20 = {0xfffffe00005807e8, 0x0}
(kgdb) p vaddr
$21 = {18446735277843750912, 18446735277723444664}
(kgdb) 
(kgdb) l
7820			pages[0] = ma[a_offset >> PAGE_SHIFT];
7821			b_pg_offset = b_offset & PAGE_MASK;
7822			pages[1] = mb[b_offset >> PAGE_SHIFT];
7823			cnt = min(xfersize, PAGE_SIZE - a_pg_offset);
7824			cnt = min(cnt, PAGE_SIZE - b_pg_offset);
7825			mapped = pmap_map_io_transient(pages, vaddr, 2, FALSE);
7826			a_cp = (char *)vaddr[0] + a_pg_offset;
7827			b_cp = (char *)vaddr[1] + b_pg_offset;
7828			bcopy(a_cp, b_cp, cnt);
7829			if (__predict_false(mapped))
(kgdb) p pages[0]
$22 = (vm_page_t) 0xfffffe00005807e8
(kgdb) p pages[1]
$23 = (vm_page_t) 0x0
(kgdb) p b_offset
$24 = 0
(kgdb) p a_offset
$25 = 0
(kgdb) p ma[0]
$26 = (vm_page_t) 0xfffffe00005807e8
(kgdb) p mb[0]
$27 = (vm_page_t) 0x0
(kgdb) 

Break time is over.
Comment 19 Konstantin Belousov freebsd_committer freebsd_triage 2021-02-13 00:51:17 UTC
(In reply to Cy Schubert from comment #18)
For start, show locals from the frame 11.
Comment 20 Cy Schubert freebsd_committer freebsd_triage 2021-02-13 01:57:11 UTC
(kgdb) frame 11
#11 0xffffffff807b4109 in vn_io_fault_pgmove (ma=0x0, offset=<optimized out>, offset@entry=0, 
    xfersize=xfersize@entry=32768, uio=uio@entry=0xfffffe0097a848b0)
    at /opt/src/git-src/sys/kern/vfs_vnops.c:1513
1513			pmap_copy_pages(ma, offset, td->td_ma, iov_base & PAGE_MASK,
(kgdb) info args
ma = 0x0
offset = <optimized out>
xfersize = 32768
uio = 0xfffffe0097a848b0
(kgdb) info locals
td = 0xfffffe009781b100
cnt = 32768
iov_base = 732655616
pgadv = <optimized out>
(kgdb) 

I'll build a GENERIC kernel for this VM.
Comment 21 Konstantin Belousov freebsd_committer freebsd_triage 2021-02-13 02:16:51 UTC
(In reply to Cy Schubert from comment #20)
And please show locals from the frame #12 (ffs_read).
I hope that there are alive vars 'fs', 'ip', and 'bp'.  For each of them
I want to see their objects printed, i.e.
p *fs
p *ip
p *bp
Comment 22 Cy Schubert freebsd_committer freebsd_triage 2021-02-13 04:31:32 UTC
No joy.

(kgdb) frame 12
#12 0xffffffff80f22e57 in ffs_read (ap=<optimized out>) at /opt/src/vm64/sys/ufs/ffs/ffs_vnops.c:789
789				error = vn_io_fault_pgmove(bp->b_pages, blkoffset,
(kgdb) info args
ap = <optimized out>
(kgdb) info locals
vp = <optimized out>
uio = 0xfffffe00882928b0
ioflag = 0
ip = <optimized out>
seqcount = 0
orig_resid = <optimized out>
fs = <optimized out>
bp = <optimized out>
error = 0
bflag = 72
bytesinfile = <optimized out>
lbn = <optimized out>
nextlbn = <optimized out>
size = 32768
blkoffset = 0
xfersize = 32768
(kgdb) p *fs
value has been optimized out
(kgdb) p *ip
value has been optimized out
(kgdb) p *bp
value has been optimized out
(kgdb) 

We'll need to build a -O0 kernel.
Comment 23 Cy Schubert freebsd_committer freebsd_triage 2021-02-13 04:33:51 UTC
Next data point:

The sample script only produces an error on a newly formatted filesystem. If newfs is performed followed by mksnap_ffs, then a reboot followed by fstyp, there is no panic. The panic occurs only when fstyp is executed after mksnap_ffs but prior to a reboot.
Comment 24 Konstantin Belousov freebsd_committer freebsd_triage 2021-02-13 04:53:07 UTC
(In reply to Cy Schubert from comment #23)
I think it is a combination of the filesystem size and something in the ffs
snapshot code that I do not currently quite understand, which makes UFS_BALLOC()
return a buffer of full block size but with only the first page populated
instead of all 8 pages.

The buffer should be pointed to by bp, and belongs to the vnode carrying the
ip inode.

So I want/need the data I asked for.  Maybe Kirk has an idea from the
description above.
Comment 25 Cy Schubert freebsd_committer freebsd_triage 2021-02-13 07:21:30 UTC
With -O0 -g.

#12 0xffffffff8166a738 in ffs_read (ap=0xfffffe0087e5a598)
    at /opt/src/vm64/sys/ufs/ffs/ffs_vnops.c:789
789				error = vn_io_fault_pgmove(bp->b_pages, blkoffset,
(kgdb) info args
ap = 0xfffffe0087e5a598
(kgdb) info locals
vp = 0xfffff8001cb0c1e8
ip = 0xfffff8000797f0a0
uio = 0xfffffe0087e5a728
fs = 0xfffffe0088551000
bp = 0xfffffe000132b480
lbn = 128
nextlbn = 129
bytesinfile = 17175707648
size = 32768
xfersize = 32768
blkoffset = 0
orig_resid = 16486400
bflag = 72
error = 0
ioflag = 0
seqcount = 0
(kgdb) p *fs
$1 = {fs_firstfield = 0, fs_unused_1 = 0, fs_sblkno = 24, fs_cblkno = 32, fs_iblkno = 40, 
  fs_dblkno = 5048, fs_old_cgoffset = 0, fs_old_cgmask = 0, fs_old_time = 0, fs_old_size = 0, 
  fs_old_dsize = 0, fs_ncg = 27, fs_bsize = 32768, fs_fsize = 4096, fs_frag = 8, fs_minfree = 8, 
  fs_old_rotdelay = 0, fs_old_rps = 0, fs_bmask = -32768, fs_fmask = -4096, fs_bshift = 15, 
  fs_fshift = 12, fs_maxcontig = 32, fs_maxbpg = 4096, fs_fragshift = 3, fs_fsbtodb = 3, 
  fs_sbsize = 4096, fs_spare1 = {0, 0}, fs_nindir = 4096, fs_inopb = 128, fs_old_nspf = 0, 
  fs_optim = 0, fs_old_npsect = 0, fs_old_interleave = 0, fs_old_trackskew = 0, fs_id = {1613200563, 
    1688935920}, fs_old_csaddr = 0, fs_cssize = 4096, fs_cgsize = 32768, fs_spare2 = 0, 
  fs_old_nsect = 0, fs_old_spc = 0, fs_old_ncyl = 0, fs_old_cpg = 0, fs_ipg = 80128, 
  fs_fpg = 160056, fs_old_cstotal = {cs_ndir = 0, cs_nbfree = 0, cs_nifree = 0, cs_nffree = 0}, 
  fs_fmod = 1 '\001', fs_clean = 0 '\000', fs_ronly = 0 '\000', fs_old_flags = -128 '\200', 
  fs_fsmnt = "/mnt", '\000' <repeats 463 times>, fs_volname = '\000' <repeats 31 times>, 
  fs_swuid = 0, fs_pad = 0, fs_cgrotor = 18, fs_ocsp = {0x0 <repeats 15 times>}, 
  fs_si = 0xfffff800042d7320, fs_old_cpc = 0, fs_maxbsize = 32768, fs_unrefs = 0, 
  fs_providersize = 4194304, fs_metaspace = 6400, fs_sparecon64 = {0 <repeats 13 times>}, 
  fs_sblockactualloc = 65536, fs_sblockloc = 65536, fs_cstotal = {cs_ndir = 2, cs_nbfree = 507157, 
    cs_nifree = 2163451, cs_nffree = 21, cs_numclusters = 0, cs_spare = {0, 0, 0}}, 
  fs_time = 1613200585, fs_size = 4194304, fs_dsize = 4058631, fs_csaddr = 5048, 
  fs_pendingblocks = 0, fs_pendinginodes = 0, fs_snapinum = {4, 0 <repeats 19 times>}, 
  fs_avgfilesize = 16384, fs_avgfpdir = 64, fs_save_cgsize = 0, fs_mtime = 1613200569, 
  fs_sujfree = 0, fs_sparecon32 = {0 <repeats 21 times>}, fs_ckhash = 2732082773, fs_metackhash = 7, 
  fs_flags = 512, fs_contigsumsize = 16, fs_maxsymlinklen = 120, fs_old_inodefmt = 0, 
  fs_maxfilesize = 2252349704110079, fs_qbmask = 32767, fs_qfmask = 4095, fs_state = 0, 
  fs_old_postblformat = 0, fs_old_nrpos = 0, fs_spare5 = {0, 0}, fs_magic = 424935705}
(kgdb) p *ip
$2 = {i_nextsnap = {tqe_next = 0x0, tqe_prev = 0xfffff80007220b10}, i_vnode = 0xfffff8001cb0c1e8, 
  i_ump = 0xfffff800063a0800, i_dquot = {0x0, 0x0}, i_un = {dirhash = 0x0, snapblklist = 0x0}, 
  dinode_u = {din1 = 0xfffff80006521600, din2 = 0xfffff80006521600}, i_number = 4, i_flag = 1024, 
  i_effnlink = 1, i_count = 0, i_endoff = 0, i_diroff = 0, i_offset = 0, i_nextclustercg = -1, 
  i_ea_area = 0x0, i_ea_len = 0, i_ea_error = 0, i_ea_refs = 0, i_size = 17179901952, 
  i_gen = 466217878, i_flags = 2097152, i_uid = 0, i_gid = 5, i_mode = 33056, i_nlink = 1}
(kgdb) p *bp
$3 = {b_bufobj = 0xfffff8001cb0c2d0, b_bcount = 32768, b_caller1 = 0x0, 
  b_data = 0xfffffe00450d4000 <error: Cannot access memory at address 0xfffffe00450d4000>, 
  b_error = 0, b_iocmd = 1, b_ioflags = 16, b_iooffset = 4194304, b_resid = 0, b_iodone = 0x0, 
  b_ckhashcalc = 0x0, b_ckhash = 0, b_blkno = 8192, b_offset = 4194304, b_bobufs = {
    tqe_next = 0xfffffe0001320480, tqe_prev = 0xfffffe000132ba60}, b_vflags = 0, 
  b_qindex = 0 '\000', b_domain = 0 '\000', b_subqueue = 65535, b_flags = 805306912, b_xflags = 2, 
  b_lock = {lock_object = {lo_name = 0xffffffff81b9a2a3 "bufwait", lo_flags = 645070848, 
      lo_data = 0, lo_witness = 0xfffff8001fd6b580}, lk_lock = 18446741876973481472, 
    lk_exslpfail = 0, lk_timo = 0, lk_pri = 96}, b_bufsize = 32768, b_runningbufspace = 0, 
  b_kvasize = 0, b_dirtyoff = 0, b_dirtyend = 0, 
  b_kvabase = 0xfffffe00450d4000 <error: Cannot access memory at address 0xfffffe00450d4000>, 
  b_lblkno = 128, b_vp = 0xfffff8001cb0c1e8, b_rcred = 0x0, b_wcred = 0x0, {b_freelist = {
      tqe_next = 0xffffffffffffffff, tqe_prev = 0xffffffffffffffff}, {
      b_pgiodone = 0xffffffffffffffff, b_pgbefore = -1, b_pgafter = -1}}, b_cluster = {
    cluster_head = {tqh_first = 0x0, tqh_last = 0x0}, cluster_entry = {tqe_next = 0x0, 
      tqe_prev = 0x0}}, b_npages = 8, b_dep = {lh_first = 0x0}, b_fsprivate1 = 0x0, 
  b_fsprivate2 = 0x0, b_fsprivate3 = 0x0, b_io_tracking = {0xffffffff81af1ffc "getblkx", 
    0xffffffff81a62d1e "g_vfs_strategy", 0xffffffff81af0ca8 "g_io_request", 
    0xffffffff81b81378 "g_io_check", 0xffffffff81ad975d "g_disk_start", 
    0xffffffff81aaa544 "biodone", 0xffffffff81b990e4 "g_io_deliver", 0xffffffff81aaa544 "biodone", 
    0xffffffff81adac85 "bufdone", 0x0 <repeats 23 times>}, b_io_tcnt = 9, 
  b_pages = 0xfffffe000132b6c0}
(kgdb)
Comment 26 Harald Schmalzbauer 2021-02-13 09:11:49 UTC
(In reply to Cy Schubert from comment #23)

Side note on the reboot-healing:  Ever since the OpenZFS import, destroying snapshots without rebooting after creation no longer works; a reboot is required before OpenZFS snapshots can be destroyed.
I'm aware that (Open)ZFS and FFS snapshots are completely different things, but maybe this FFS-snapshot fix revealed an upper-layer bug which also causes the strange (Open)ZFS snapshot behaviour - most likely not, but I wanted to mention it.  I will file a separate PR for the OpenZFS snapshot issue later.

Thanks for all your attention!
Comment 27 Konstantin Belousov freebsd_committer freebsd_triage 2021-02-13 09:19:22 UTC
(In reply to Cy Schubert from comment #25)
This is not what I expected.  But still, please show
p bp->b_pages[0]
...
p bp->b_pages[7]
Comment 28 Konstantin Belousov freebsd_committer freebsd_triage 2021-02-13 09:45:07 UTC
I also want to see td->td_ma and td->td_ma_cnt.  It would be useful to
show td->td_ma[0] ... td->td_ma[td->td_ma_cnt - 1] as well.
Comment 29 Cy Schubert freebsd_committer freebsd_triage 2021-02-13 15:04:31 UTC
(In reply to Harald Schmalzbauer from comment #26)
Two points:

1. Please don't confuse one problem with another. ZFS snapshots are a totally different problem and should be a different PR.

2. I have no such problem with ZFS snapshots on 14-CURRENT.
Comment 30 Cy Schubert freebsd_committer freebsd_triage 2021-02-13 15:08:34 UTC
(In reply to Konstantin Belousov from comment #27)
After rebuilding with -O0 -g, the panic now occurs in the bcopy() two statements before the reported panic.

Again at frame 12:

(kgdb) frame 12
#12 0xffffffff8166a738 in ffs_read (ap=0xfffffe0087e5a598)
    at /opt/src/vm64/sys/ufs/ffs/ffs_vnops.c:789
789				error = vn_io_fault_pgmove(bp->b_pages, blkoffset,
(kgdb) p bp->b_pages[0]
Cannot access memory at address 0xfffffdfff7529e58
(kgdb) p bp->b_pages[1]
Cannot access memory at address 0xfffffdfff7529e60
(kgdb) p bp->b_pages[2]
Cannot access memory at address 0xfffffdfff7529e68
(kgdb) p bp->b_pages[3]
Cannot access memory at address 0xfffffdfff7529e70
(kgdb) p bp->b_pages[4]
Cannot access memory at address 0xfffffdfff7529e78
(kgdb) p bp->b_pages[5]
Cannot access memory at address 0xfffffdfff7529e80
(kgdb) p bp->b_pages[6]
Cannot access memory at address 0xfffffdfff7529e88
(kgdb) p bp->b_pages[7]
Cannot access memory at address 0xfffffdfff7529e90
(kgdb)
Comment 31 Cy Schubert freebsd_committer freebsd_triage 2021-02-13 15:16:54 UTC
(In reply to Konstantin Belousov from comment #28)
 
Also at frame 12:

(kgdb) down
#11 0xffffffff812edca4 in vn_io_fault_pgmove (ma=0xfffffe000132b6c0, offset=0, xfersize=32768, 
    uio=0xfffffe0087e5a728) at /opt/src/vm64/sys/kern/vfs_vnops.c:1513
1513			pmap_cop

(kgdb) p td->td_ma_cnt
$3 = 4025
(kgdb) set $end=td->td_ma_cnt-1
(kgdb) set $i=0
(kgdb) while ($i<$end)
 >p td->td_ma[$i]
 >set $i=$i+1
 >end
$4 = (struct vm_page *) 0xfffffe0087e529f0
$5 = (struct vm_page *) 0xffffffff81207cc9 <isitmychild+41>
$6 = (struct vm_page *) 0xfffff8001fd65700
$7 = (struct vm_page *) 0xfffff8001fd6b580
$8 = (struct vm_page *) 0xfffffe0087e52a20
$9 = (struct vm_page *) 0x32010178eb
$10 = (struct vm_page *) 0x30c00000000002e
$11 = (struct vm_page *) 0x330101edbd
$12 = (struct vm_page *) 0x104000100000032
$13 = (struct vm_page *) 0x350101edbd
$14 = (struct vm_page *) 0x30c00010000002e
$15 = (struct vm_page *) 0xffffffff81a9edbd
$16 = (struct vm_page *) 0x11fd65900
$17 = (struct vm_page *) 0xfffff8001fd65a80
$18 = (struct vm_page *) 0xfffff8001fd65700
$19 = (struct vm_page *) 0x181207cc9
$20 = (struct vm_page *) 0xfffffe0087e52a70
$21 = (struct vm_page *) 0xffffffff81207cc9 <isitmychild+41>
$22 = (struct vm_page *) 0xfffff8001fd65a80
$23 = (struct vm_page *) 0xfffff8001fd65700
$24 = (struct vm_page *) 0xfffffe0087e52aa0
$25 = (struct vm_page *) 0xffffffff812078eb <witness_lock_order_check+91>
$26 = (struct vm_page *) 0x0
$27 = (struct vm_page *) 0xfffff8001fd65a80
$28 = (struct vm_page *) 0xfffff8001fd65700
$29 = (struct vm_page *) 0x100000000
$30 = (struct vm_page *) 0xfffffe0087e52e10
$31 = (struct vm_page *) 0xffffffff81206941 <witness_checkorder+1457>
$32 = (struct vm_page *) 0xfffff800045b8200
$33 = (struct vm_page *) 0x0
$34 = (struct vm_page *) 0x4d887e52ae0
$35 = (struct vm_page *) 0xffffffff81aaa07a
$36 = (struct vm_page *) 0x900000000
$37 = (struct vm_page *) 0xfffffe008853a700
$38 = (struct vm_page *) 0xfffffe0087e52b50
$39 = (struct vm_page *) 0xffffffff8111b784 <__mtx_assert+260>
$40 = (struct vm_page *) 0xfffffe0087e52b10
$41 = (struct vm_page *) 0xfffffe008853a700
$42 = (struct vm_page *) 0x286
$43 = (struct vm_page *) 0x3201010286
$44 = (struct vm_page *) 0x104fe0000000058
$45 = (struct vm_page *) 0xffffffff81a9edbd
$46 = (struct vm_page *) 0x10000002e
$47 = (struct vm_page *) 0xfffff8001fd65900
$48 = (struct vm_page *) 0xfffff8001fd66c00
$49 = (struct vm_page *) 0x101015780
$50 = (struct vm_page *) 0xfffffe0087e52b60
$51 = (struct vm_page *) 0xffffffff81207cc9 <isitmychild+41>
$52 = (struct vm_page *) 0xfffff8001fd65900
$53 = (struct vm_page *) 0xfffff8001fd66c00
$54 = (struct vm_page *) 0xfffffe0087e52b90
$55 = (struct vm_page *) 0xffffffff812078eb <witness_lock_order_check+91>
$56 = (struct vm_page *) 0x0
$57 = (struct vm_page *) 0xfffff8001fd65900
$58 = (struct vm_page *) 0xfffff8001fd66c00
$59 = (struct vm_page *) 0x100000000
$60 = (struct vm_page *) 0xfffffe0087e52f00
$61 = (struct vm_page *) 0xffffffff81206941 <witness_checkorder+1457>
--Type <RET> for more, q to quit, c to continue without paging--c
$62 = (struct vm_page *) 0x0
$63 = (struct vm_page *) 0xfffff8001fd66300
$64 = (struct vm_page *) 0xfffff8001fd66c00
$65 = (struct vm_page *) 0x100000000
$66 = (struct vm_page *) 0xfffffe0087e52f30
$67 = (struct vm_page *) 0xffffffff81206941 <witness_checkorder+1457>
$68 = (struct vm_page *) 0x4
$69 = (struct vm_page *) 0xfffffe008853a700
$70 = (struct vm_page *) 0x286
$71 = (struct vm_page *) 0x286
$72 = (struct vm_page *) 0xfffffe0087e52c10
$73 = (struct vm_page *) 0xffffffff81208f05 <intr_restore+21>
$74 = (struct vm_page *) 0x100013287e52c78
$75 = (struct vm_page *) 0xfffffe008853a700
$76 = (struct vm_page *) 0xfffffe0087e52f80
$77 = (struct vm_page *) 0xffffffff8120642b <witness_checkorder+155>
$78 = (struct vm_page *) 0xfffffe0087e52c40
$79 = (struct vm_page *) 0xffffffff81208f05 <intr_restore+21>
$80 = (struct vm_page *) 0xffffffff8275d850 <w_locklistdata+258944>
$81 = (struct vm_page *) 0x286
$82 = (struct vm_page *) 0xfffffe0087e52d30
$83 = (struct vm_page *) 0xffffffff81208e5b <witness_unlock+1179>
$84 = (struct vm_page *) 0xfffffe0087e52d50
$85 = (struct vm_page *) 0x570101a700
$86 = (struct vm_page *) 0x104f8000000002e
$87 = (struct vm_page *) 0xffffffff81a9edbd
$88 = (struct vm_page *) 0x187e52fe0
$89 = (struct vm_page *) 0xfffff8001fd66b80
$90 = (struct vm_page *) 0xfffff8001fd65700
$91 = (struct vm_page *) 0x4d0101d858
$92 = (struct vm_page *) 0x30cfe000000002e
$93 = (struct vm_page *) 0xffffffff81a9edbd
$94 = (struct vm_page *) 0x11fd66b80
$95 = (struct vm_page *) 0xfffffe008853a700
$96 = (struct vm_page *) 0xfffffe0087e53020
$97 = (struct vm_page *) 0xffffffff8120642b <witness_checkorder+155>
$98 = (struct vm_page *) 0xfffffe0087e52ce0
$99 = (struct vm_page *) 0xffffffff81207cc9 <isitmychild+41>
$100 = (struct vm_page *) 0xfffff8001fd66680
$101 = (struct vm_page *) 0xfffff8001fd65700
$102 = (struct vm_page *) 0xfffffe0087e52d10
$103 = (struct vm_page *) 0xffffffff812078eb <witness_lock_order_check+91>
$104 = (struct vm_page *) 0x0
$105 = (struct vm_page *) 0xfffffe008853a700
$106 = (struct vm_page *) 0xfffffe0087e53070
$107 = (struct vm_page *) 0xffffffff8120642b <witness_checkorder+155>
$108 = (struct vm_page *) 0xfffffe0087e53080
$109 = (struct vm_page *) 0xffffffff81206941 <witness_checkorder+1457>
$110 = (struct vm_page *) 0x30c00020000002e
$111 = (struct vm_page *) 0xffffffff81a9edbd
$112 = (struct vm_page *) 0x18275d850
$113 = (struct vm_page *) 0xfffff8001fd66680
$114 = (struct vm_page *) 0xfffff8001fd65700
$115 = (struct vm_page *) 0x181a9edbd
$116 = (struct vm_page *) 0xfffffe0087e52d70
$117 = (struct vm_page *) 0xffffffff81207cc9 <isitmychild+41>
$118 = (struct vm_page *) 0xfffff8001fd66680
$119 = (struct vm_page *) 0xfffff8001fd65700
$120 = (struct vm_page *) 0xfffffe0087e52da0
$121 = (struct vm_page *) 0xffffffff812078eb <witness_lock_order_check+91>
$122 = (struct vm_page *) 0x0
$123 = (struct vm_page *) 0xfffff8001fd66680
$124 = (struct vm_page *) 0xfffff8001fd65700
$125 = (struct vm_page *) 0x4d01010000
$126 = (struct vm_page *) 0x104fe0000000032
$127 = (struct vm_page *) 0xffffffff81a9edbd
$128 = (struct vm_page *) 0x187e52de0
$129 = (struct vm_page *) 0xfffff8001fd66680
$130 = (struct vm_page *) 0xfffff8001fd65900
$131 = (struct vm_page *) 0x11fd65900
$132 = (struct vm_page *) 0xfffffe0087e52df0
$133 = (struct vm_page *) 0xffffffff81207cc9 <isitmychild+41>
$134 = (struct vm_page *) 0xfffff8001fd66680
$135 = (struct vm_page *) 0xfffff8001fd65900
$136 = (struct vm_page *) 0xfffffe0087e52e20
$137 = (struct vm_page *) 0xffffffff812078eb <witness_lock_order_check+91>
$138 = (struct vm_page *) 0x0
$139 = (struct vm_page *) 0xfffff8001fd66680
$140 = (struct vm_page *) 0xfffff8001fd65900
$141 = (struct vm_page *) 0x100000000
$142 = (struct vm_page *) 0xfffffe0087e53190
$143 = (struct vm_page *) 0xffffffff81206941 <witness_checkorder+1457>
$144 = (struct vm_page *) 0xfffffe0087e52e50
$145 = (struct vm_page *) 0xfffffe008853a700
$146 = (struct vm_page *) 0xfffff8001fd66c00
$147 = (struct vm_page *) 0x4d0101a700
$148 = (struct vm_page *) 0x30cfe000000002f
$149 = (struct vm_page *) 0xffffffff81a9edbd
$150 = (struct vm_page *) 0x18275d858
$151 = (struct vm_page *) 0xfffff8001fd66680
$152 = (struct vm_page *) 0xfffff8001fd65780
$153 = (struct vm_page *) 0x182814ac0
$154 = (struct vm_page *) 0xfffffe0087e52ea0
$155 = (struct vm_page *) 0xffffffff81207cc9 <isitmychild+41>
$156 = (struct vm_page *) 0xfffff8001fd66680
$157 = (struct vm_page *) 0xfffff8001fd65780
$158 = (struct vm_page *) 0xfffffe0087e52ed0
$159 = (struct vm_page *) 0xffffffff812078eb <witness_lock_order_check+91>
$160 = (struct vm_page *) 0x0
$161 = (struct vm_page *) 0xfffff8001fd66680
$162 = (struct vm_page *) 0xfffff8001fd65780
$163 = (struct vm_page *) 0x100000000
$164 = (struct vm_page *) 0xfffffe0087e53240
$165 = (struct vm_page *) 0xffffffff81206941 <witness_checkorder+1457>
$166 = (struct vm_page *) 0xfffffe0087e52f10
$167 = (struct vm_page *) 0xffffffff812078eb <witness_lock_order_check+91>
$168 = (struct vm_page *) 0x0
$169 = (struct vm_page *) 0xfffff8001fd65a00
$170 = (struct vm_page *) 0xfffff8001fd65780
$171 = (struct vm_page *) 0x100000000
$172 = (struct vm_page *) 0xfffffe0087e53280
$173 = (struct vm_page *) 0xffffffff81206941 <witness_checkorder+1457>
$174 = (struct vm_page *) 0x30cfe000000002f
$175 = (struct vm_page *) 0xffffffff81a9edbd
$176 = (struct vm_page *) 0x187e532a0
$177 = (struct vm_page *) 0xfffff8001fd66680
$178 = (struct vm_page *) 0xfffff8001fd65780
$179 = (struct vm_page *) 0x187e52f60
$180 = (struct vm_page *) 0xfffffe0087e52f70
$181 = (struct vm_page *) 0xffffffff81207cc9 <isitmychild+41>
$182 = (struct vm_page *) 0xfffff8001fd66680
$183 = (struct vm_page *) 0xfffff8001fd65780
$184 = (struct vm_page *) 0xfffffe0087e52fa0
$185 = (struct vm_page *) 0xffffffff812078eb <witness_lock_order_check+91>
$186 = (struct vm_page *) 0x0
$187 = (struct vm_page *) 0xfffff8001fd66680
$188 = (struct vm_page *) 0xfffff8001fd65780
$189 = (struct vm_page *) 0x100000000
$190 = (struct vm_page *) 0xfffffe0087e53310
$191 = (struct vm_page *) 0xffffffff81206941 <witness_checkorder+1457>
$192 = (struct vm_page *) 0x30c1b540000002e
$193 = (struct vm_page *) 0xffffffff81a9edbd
$194 = (struct vm_page *) 0x18853a700
$195 = (struct vm_page *) 0xfffff8001fd65900
$196 = (struct vm_page *) 0xfffff8001fd65700
$197 = (struct vm_page *) 0x330101a700
$198 = (struct vm_page *) 0x30cfe000000002f
$199 = (struct vm_page *) 0xffffffff81a9edbd
$200 = (struct vm_page *) 0x11fd65900
$201 = (struct vm_page *) 0xfffff8001fd65980
$202 = (struct vm_page *) 0xfffff8001fd65780
$203 = (struct vm_page *) 0x181a9edbd
$204 = (struct vm_page *) 0xfffffe0087e53030
$205 = (struct vm_page *) 0xffffffff81207cc9 <isitmychild+41>
$206 = (struct vm_page *) 0xfffff8001fd65980
$207 = (struct vm_page *) 0xfffff8001fd65780
$208 = (struct vm_page *) 0xfffffe0087e53060
$209 = (struct vm_page *) 0xffffffff812078eb <witness_lock_order_check+91>
$210 = (struct vm_page *) 0x0
$211 = (struct vm_page *) 0xfffff8001fd65980
$212 = (struct vm_page *) 0xfffff8001fd65780
$213 = (struct vm_page *) 0x3301010000
$214 = (struct vm_page *) 0x30cfe000000002f
$215 = (struct vm_page *) 0xffffffff81a9edbd
$216 = (struct vm_page *) 0x187e530a0
$217 = (struct vm_page *) 0xfffff8001fd65980
$218 = (struct vm_page *) 0xfffff8001fd65780
$219 = (struct vm_page *) 0x181a9edbd
$220 = (struct vm_page *) 0xfffffe0087e530b0
$221 = (struct vm_page *) 0xffffffff81207cc9 <isitmychild+41>
$222 = (struct vm_page *) 0xfffff8001fd65980
$223 = (struct vm_page *) 0xfffff8001fd65780
$224 = (struct vm_page *) 0xfffffe0087e530e0
$225 = (struct vm_page *) 0xffffffff812078eb <witness_lock_order_check+91>
$226 = (struct vm_page *) 0x0
$227 = (struct vm_page *) 0xfffff8001fd65980
$228 = (struct vm_page *) 0xfffff8001fd65780
$229 = (struct vm_page *) 0xfffffe008853a700
$230 = (struct vm_page *) 0xfffffe0087e53450
$231 = (struct vm_page *) 0xffffffff8120642b <witness_checkorder+155>
$232 = (struct vm_page *) 0xfffff8001fd65780
$233 = (struct vm_page *) 0x100000000
$234 = (struct vm_page *) 0xfffffe0087e53470
$235 = (struct vm_page *) 0xffffffff81206941 <witness_checkorder+1457>
$236 = (struct vm_page *) 0x286
$237 = (struct vm_page *) 0x0
$238 = (struct vm_page *) 0x4d887e53140
$239 = (struct vm_page *) 0xffffffff81aaa07a
$240 = (struct vm_page *) 0x906073000
$241 = (struct vm_page *) 0xfffffe008853a700
$242 = (struct vm_page *) 0xfffffe0087e531b0
$243 = (struct vm_page *) 0xffffffff8111b784 <__mtx_assert+260>
$244 = (struct vm_page *) 0xfffffe0087e53170
$245 = (struct vm_page *) 0xfffffe008853a700
$246 = (struct vm_page *) 0x286
$247 = (struct vm_page *) 0x2b01010286
$248 = (struct vm_page *) 0x30cfe00000001c7
$249 = (struct vm_page *) 0xffffffff81a9edbd
$250 = (struct vm_page *) 0x100000000
$251 = (struct vm_page *) 0xfa01015580
$252 = (struct vm_page *) 0x104f80000000009
$253 = (struct vm_page *) 0xffffffff81a9edbd
$254 = (struct vm_page *) 0x187e531c0
$255 = (struct vm_page *) 0xfffff8001fd6bd00
$256 = (struct vm_page *) 0xfffff8001fd64480
$257 = (struct vm_page *) 0x11fd72380
$258 = (struct vm_page *) 0xfffffe0087e531e0
$259 = (struct vm_page *) 0xffffffff81207cc9 <isitmychild+41>
$260 = (struct vm_page *) 0xfffff8001fd6bd00
$261 = (struct vm_page *) 0xfffff8001fd64480
$262 = (struct vm_page *) 0xfffffe0087e53210
$263 = (struct vm_page *) 0xffffffff812078eb <witness_lock_order_check+91>
$264 = (struct vm_page *) 0x0
$265 = (struct vm_page *) 0xfffff8001fd6bd00
$266 = (struct vm_page *) 0xfffff8001fd64480
$267 = (struct vm_page *) 0x100000000
$268 = (struct vm_page *) 0xfffffe0087e53580
$269 = (struct vm_page *) 0xffffffff81206941 <witness_checkorder+1457>
$270 = (struct vm_page *) 0xfffffe0087e53590
$271 = (struct vm_page *) 0xffffffff81206b52 <witness_checkorder+1986>
$272 = (struct vm_page *) 0x0
$273 = (struct vm_page *) 0x0
$274 = (struct vm_page *) 0x0
$275 = (struct vm_page *) 0xfffffe008853a700
$276 = (struct vm_page *) 0xfffffe0087e535c0
$277 = (struct vm_page *) 0xffffffff8120642b <witness_checkorder+155>
$278 = (struct vm_page *) 0x28271e4b8
$279 = (struct vm_page *) 0xffffffff8275d888 <w_locklistdata+259000>
$280 = (struct vm_page *) 0xffffffff8275d850 <w_locklistdata+258944>
$281 = (struct vm_page *) 0xfffffe0001002b40
$282 = (struct vm_page *) 0xffffffff8275d850 <w_locklistdata+258944>
$283 = (struct vm_page *) 0xffffffff8275d888 <w_locklistdata+259000>
$284 = (struct vm_page *) 0xfffffe0087e53390
$285 = (struct vm_page *) 0xffffffff81209bb9 <witness_assert+153>
$286 = (struct vm_page *) 0x0
$287 = (struct vm_page *) 0xfffffe008853a700
$288 = (struct vm_page *) 0xfffffe0087e53620
$289 = (struct vm_page *) 0xffffffff8120642b <witness_checkorder+155>
$290 = (struct vm_page *) 0xfffffe0087e532f0
$291 = (struct vm_page *) 0xffffffff816a3821 <slab_item_index+33>
$292 = (struct vm_page *) 0xfffff8000793e000
$293 = (struct vm_page *) 0xfffff8000793ef30
$294 = (struct vm_page *) 0x100000286
$295 = (struct vm_page *) 0xffffffff8275d870 <w_locklistdata+258976>
$296 = (struct vm_page *) 0xffffffff8275d850 <w_locklistdata+258944>
$297 = (struct vm_page *) 0xfffffe0001003940
$298 = (struct vm_page *) 0xffffffff8275d850 <w_locklistdata+258944>
$299 = (struct vm_page *) 0x0
$300 = (struct vm_page *) 0x4d887e53410
$301 = (struct vm_page *) 0xffffffff81aaa07a
$302 = (struct vm_page *) 0x90361ffd8
$303 = (struct vm_page *) 0xfffffe008853a700
$304 = (struct vm_page *) 0xfffffe0087e533a0
$305 = (struct vm_page *) 0xffffffff8111b784 <__mtx_assert+260>
$306 = (struct vm_page *) 0xfffffe0087e53360
$307 = (struct vm_page *) 0xfffffe008853a700
$308 = (struct vm_page *) 0x286
$309 = (struct vm_page *) 0x286
$310 = (struct vm_page *) 0xfffffe0087e53380
$311 = (struct vm_page *) 0xffffffff818ba265 <intr_restore+21>
$312 = (struct vm_page *) 0x0
$313 = (struct vm_page *) 0x286
$314 = (struct vm_page *) 0xfffffe0087e533a0
$315 = (struct vm_page *) 0xfa0101a1c2
$316 = (struct vm_page *) 0x104000000000009
$317 = (struct vm_page *) 0xffffffff81a9edbd
$318 = (struct vm_page *) 0x187e53410
$319 = (struct vm_page *) 0xfffff8001fd6bd00
$320 = (struct vm_page *) 0xfffff8001fd64480
$321 = (struct vm_page *) 0x100000000
$322 = (struct vm_page *) 0xfffffe0087e533e0
$323 = (struct vm_page *) 0xffffffff81207cc9 <isitmychild+41>
$324 = (struct vm_page *) 0xfffff8001fd6bd00
$325 = (struct vm_page *) 0xfffff8001fd64480
$326 = (struct vm_page *) 0xfffffe0087e53410
$327 = (struct vm_page *) 0xffffffff812078eb <witness_lock_order_check+91>
$328 = (struct vm_page *) 0x0
$329 = (struct vm_page *) 0xfffff8001fd6bd00
$330 = (struct vm_page *) 0xfffff8001fd64480
$331 = (struct vm_page *) 0x100000000
$332 = (struct vm_page *) 0xfffffe0087e53780
$333 = (struct vm_page *) 0xffffffff81206941 <witness_checkorder+1457>
$334 = (struct vm_page *) 0xfffffe0087e53790
$335 = (struct vm_page *) 0xffffffff8120642b <witness_checkorder+155>
$336 = (struct vm_page *) 0x282
$337 = (struct vm_page *) 0xfffffe008853a700
$338 = (struct vm_page *) 0xffffffff8231b6f8 <lock_class_lockmgr>
$339 = (struct vm_page *) 0xffffffff8275d858 <w_locklistdata+258952>
$340 = (struct vm_page *) 0xffffffff8275d850 <w_locklistdata+258944>
$341 = (struct vm_page *) 0xfffffe008853a850
$342 = (struct vm_page *) 0x4f987e53480
$343 = (struct vm_page *) 0xffffffff81ba5917
$344 = (struct vm_page *) 0x286
$345 = (struct vm_page *) 0x286
$346 = (struct vm_page *) 0xfffffe0087e534a0
$347 = (struct vm_page *) 0xffffffff81208f05 <intr_restore+21>
$348 = (struct vm_page *) 0xfffffe0087e534b0
$349 = (struct vm_page *) 0x286
$350 = (struct vm_page *) 0xfffffe0087e53590
$351 = (struct vm_page *) 0xffffffff81208e5b <witness_unlock+1179>
$352 = (struct vm_page *) 0x100000000000000
$353 = (struct vm_page *) 0xfffffe008853a700
$354 = (struct vm_page *) 0xfffff8001fd64480
$355 = (struct vm_page *) 0xfffff8001fd6bd00
$356 = (struct vm_page *) 0xfffffe0087e53580
$357 = (struct vm_page *) 0xffffffff823204e8 <lock_class_mtx_sleep>
$358 = (struct vm_page *) 0xffffffff8275d870 <w_locklistdata+258976>
$359 = (struct vm_page *) 0xffffffff8275d858 <w_locklistdata+258952>
$360 = (struct vm_page *) 0x0
$361 = (struct vm_page *) 0xfffff80015b1eca8
$362 = (struct vm_page *) 0xffffffff8275d850 <w_locklistdata+258944>
$363 = (struct vm_page *) 0x0
$364 = (struct vm_page *) 0xfffffe0087e53580
$365 = (struct vm_page *) 0xfffffe008853a700
$366 = (struct vm_page *) 0xfffffe0087e53590
$367 = (struct vm_page *) 0xffffffff8111b784 <__mtx_assert+260>
$368 = (struct vm_page *) 0xfffff80015b15b78
$369 = (struct vm_page *) 0xfffffe008853a700
$370 = (struct vm_page *) 0x4
$371 = (struct vm_page *) 0xfffffe008853a700
$372 = (struct vm_page *) 0xfffff80015b1eca8
$373 = (struct vm_page *) 0xfffffe008853a700
$374 = (struct vm_page *) 0xfffffe0087e535d0
$375 = (struct vm_page *) 0xffffffff82e1f530
$376 = (struct vm_page *) 0x2387e535e0
$377 = (struct vm_page *) 0xffffffff82e1f530
$378 = (struct vm_page *) 0xfffffe0087e535d0
$379 = (struct vm_page *) 0xffffffff83010300
$380 = (struct vm_page *) 0xffffffff83010384
$381 = (struct vm_page *) 0xffffffff83010384
$382 = (struct vm_page *) 0xffffffff83010300
$383 = (struct vm_page *) 0xffffffff83010300
$384 = (struct vm_page *) 0xfffffe0088542120
$385 = (struct vm_page *) 0xfffffe008853ae00
$386 = (struct vm_page *) 0xffffffff83010384
$387 = (struct vm_page *) 0xffffffff83010300
$388 = (struct vm_page *) 0xcc9d0e7c19f5a875
$389 = (struct vm_page *) 0xfffffe0087e53628
$390 = (struct vm_page *) 0xffffffff818d9d7d <pmap_activate_sw_nopcid_pti+125>
$391 = (struct vm_page *) 0x1cb1a000
$392 = (struct vm_page *) 0x1cb19000
$393 = (struct vm_page *) 0x82e1f0d8
$394 = (struct vm_page *) 0xfffffe0088542120
$395 = (struct vm_page *) 0xfffffe008853ae00
$396 = (struct vm_page *) 0x1cb1a000
$397 = (struct vm_page *) 0x1
$398 = (struct vm_page *) 0xffffffff826b3df0 <vmspace0+368>
$399 = (struct vm_page *) 0xfffffe0087e53698
$400 = (struct vm_page *) 0xffffffff818d9f3e <pmap_activate_sw+302>
$401 = (struct vm_page *) 0x1
$402 = (struct vm_page *) 0xfffffe0088542170
$403 = (struct vm_page *) 0x0
$404 = (struct vm_page *) 0x0
$405 = (struct vm_page *) 0xffffffff826b3da0 <vmspace0+288>
$406 = (struct vm_page *) 0xffffffff826b3da0 <vmspace0+288>
$407 = (struct vm_page *) 0x82e1f0c0
$408 = (struct vm_page *) 0xfffffe0088542120
$409 = (struct vm_page *) 0xffffffff826b3da0 <vmspace0+288>
$410 = (struct vm_page *) 0xfffffe008853ae00
$411 = (struct vm_page *) 0xffffffff826b3da0 <vmspace0+288>
$412 = (struct vm_page *) 0xcc9d0e7c19f5a875
$413 = (struct vm_page *) 0xfffffe0087e53770
$414 = (struct vm_page *) 0xffffffff818a6d61 <cpu_switch+241>
$415 = (struct vm_page *) 0xffffffff811a488f <sched_throw+767>
$416 = (struct vm_page *) 0x87e53720
$417 = (struct vm_page *) 0xffffffff00000b68
$418 = (struct vm_page *) 0xffffffff82e1f0c0
$419 = (struct vm_page *) 0xffffffff82e1f0c0
$420 = (struct vm_page *) 0xffffffff82e1f0c0
$421 = (struct vm_page *) 0xffffffff8271e4a0 <w_mtx>
$422 = (struct vm_page *) 0xffffffff8271e4b8 <w_mtx+24>
$423 = (struct vm_page *) 0xffffffff8271e4a0 <w_mtx>
$424 = (struct vm_page *) 0xffffffff8271e4b8 <w_mtx+24>
$425 = (struct vm_page *) 0xffffffff8271e4a0 <w_mtx>
$426 = (struct vm_page *) 0x8b687e53770
$427 = (struct vm_page *) 0xffffffff81aaa07a
$428 = (struct vm_page *) 0x15b15670
$429 = (struct vm_page *) 0xffffffff8271e4b8 <w_mtx+24>
$430 = (struct vm_page *) 0xffffffff82e1f0c0
$431 = (struct vm_page *) 0xfffffe008853ae00
$432 = (struct vm_page *) 0xfffffe008853a700
$433 = (struct vm_page *) 0xffffffff82e1f0c0
$434 = (struct vm_page *) 0xfffffe0087e53770
$435 = (struct vm_page *) 0xffffffff81209011 <witness_thread_exit+257>
$436 = (struct vm_page *) 0x3b18853a920
$437 = (struct vm_page *) 0xffffffff81a6f73a
$438 = (struct vm_page *) 0xffffffff8275d850 <w_locklistdata+258944>
$439 = (struct vm_page *) 0xcc9d0e7c19f5a875
$440 = (struct vm_page *) 0xfffffe0087e53820
$441 = (struct vm_page *) 0xffffffff8117b79c <thread_exit+1452>
$442 = (struct vm_page *) 0x0
$443 = (struct vm_page *) 0xffffffff81a6f73a
$444 = (struct vm_page *) 0x39f00000000
$445 = (struct vm_page *) 0xffffffff818ba17d <spinlock_exit+13>
$446 = (struct vm_page *) 0x286
$447 = (struct vm_page *) 0xfffffe008853a700
$448 = (struct vm_page *) 0x7fff267f7fff267f
$449 = (struct vm_page *) 0x1f62434487
$450 = (struct vm_page *) 0x1f62434487
$451 = (struct vm_page *) 0x1f62302e93
$452 = (struct vm_page *) 0x1f62302e93
$453 = (struct vm_page *) 0x1f62302e93
$454 = (struct vm_page *) 0xfffff80015b03c00
$455 = (struct vm_page *) 0xfffff80015b15670
$456 = (struct vm_page *) 0xfffff80015b15688
$457 = (struct vm_page *) 0xfffff80015b15528
$458 = (struct vm_page *) 0x2aa15b15828
$459 = (struct vm_page *) 0xfffffe008853a700
$460 = (struct vm_page *) 0x1f62434487
$461 = (struct vm_page *) 0x1315f4
$462 = (struct vm_page *) 0xfffffe0087e53a70
$463 = (struct vm_page *) 0xffffffff810da63d
$464 = (struct vm_page *) 0x1542b10
$465 = (struct vm_page *) 0x0
$466 = (struct vm_page *) 0xfffff80015b03c00
$467 = (struct vm_page *) 0x0
$468 = (struct vm_page *) 0x0
$469 = (struct vm_page *) 0x0
$470 = (struct vm_page *) 0x27400000000
$471 = (struct vm_page *) 0xffffffff00000000
$472 = (struct vm_page *) 0x0
$473 = (struct vm_page *) 0x8853a700
$474 = (struct vm_page *) 0x0
$475 = (struct vm_page *) 0xffffffff81a86df6
$476 = (struct vm_page *) 0x26000000000
$477 = (struct vm_page *) 0x23e00000000
$478 = (struct vm_page *) 0x0
$479 = (struct vm_page *) 0xffffffff81b45f39
$480 = (struct vm_page *) 0x0
$481 = (struct vm_page *) 0x2
$482 = (struct vm_page *) 0x0
$483 = (struct vm_page *) 0xfffffe008853a700
$484 = (struct vm_page *) 0xfffffe0087e53960
$485 = (struct vm_page *) 0xffffffff811f56c5 <userret+1077>
$486 = (struct vm_page *) 0x282
$487 = (struct vm_page *) 0xfffffe008853a700
$488 = (struct vm_page *) 0xfffffe0087e53900
$489 = (struct vm_page *) 0x19f5a875
$490 = (struct vm_page *) 0xffffffff81a86df6
$491 = (struct vm_page *) 0x0
$492 = (struct vm_page *) 0xffffffff81a86df6
$493 = (struct vm_page *) 0xffffebb8
$494 = (struct vm_page *) 0x0
$495 = (struct vm_page *) 0xfff80015b15528
$496 = (struct vm_page *) 0x0
$497 = (struct vm_page *) 0x0
$498 = (struct vm_page *) 0xfffffe000000016b
$499 = (struct vm_page *) 0xfffff80015b15528
$500 = (struct vm_page *) 0xfffffe0087e53c00
$501 = (struct vm_page *) 0xfffffe008853a700
$502 = (struct vm_page *) 0xfffffe0087e53bd0
$503 = (struct vm_page *) 0x818ed218
$504 = (struct vm_page *) 0x15b
$505 = (struct vm_page *) 0x155
$506 = (struct vm_page *) 0xffffffff81a86df6
$507 = (struct vm_page *) 0x8853a700
$508 = (struct vm_page *) 0x155
$509 = (struct vm_page *) 0x155
$510 = (struct vm_page *) 0x0
$511 = (struct vm_page *) 0xffffffff81b45f39
$512 = (struct vm_page *) 0x0
$513 = (struct vm_page *) 0x0
$514 = (struct vm_page *) 0x12c00000000
$515 = (struct vm_page *) 0xfffffe008853a700
$516 = (struct vm_page *) 0xfffffe0087e53a60
$517 = (struct vm_page *) 0x0
$518 = (struct vm_page *) 0x1fffe0087e53aa0
$519 = (struct vm_page *) 0xffffffff810daa97 <sys_wait4+183>
$520 = (struct vm_page *) 0x100
$521 = (struct vm_page *) 0xfffff80015b15530
$522 = (struct vm_page *) 0xfffff80015b15528
$523 = (struct vm_page *) 0xfffff8000606ac00
$524 = (struct vm_page *) 0x0
$525 = (struct vm_page *) 0xfffff80004067b80
$526 = (struct vm_page *) 0x100000000
$527 = (struct vm_page *) 0x388540958
$528 = (struct vm_page *) 0xfffffe0088540860
$529 = (struct vm_page *) 0x88540860
$530 = (struct vm_page *) 0xfffffe0087e53a80
$531 = (struct vm_page *) 0x0
$532 = (struct vm_page *) 0xfffffe0087e53c00
$533 = (struct vm_page *) 0xfffff80015b15528
$534 = (struct vm_page *) 0x0
$535 = (struct vm_page *) 0xfffffe008853a700
$536 = (struct vm_page *) 0xfffffe0087e53aa0
$537 = (struct vm_page *) 0xffffffff810d8d86
$538 = (struct vm_page *) 0xfffffe0087e53c00
$539 = (struct vm_page *) 0x15b15528
$540 = (struct vm_page *) 0xfffffe008853aae8
$541 = (struct vm_page *) 0xfffffe008853a700
$542 = (struct vm_page *) 0xfffffe0087e53b40
$543 = (struct vm_page *) 0xffffffff818ef396 <syscallenter+1590>
$544 = (struct vm_page *) 0x202
$545 = (struct vm_page *) 0x100000000000202
$546 = (struct vm_page *) 0x0
$547 = (struct vm_page *) 0xffffffff818ba265 <intr_restore+21>
$548 = (struct vm_page *) 0x0
$549 = (struct vm_page *) 0x202
$550 = (struct vm_page *) 0xfefe000000000c
$551 = (struct vm_page *) 0x15b15528
$552 = (struct vm_page *) 0x0
$553 = (struct vm_page *) 0x80035ef0d
$554 = (struct vm_page *) 0xfffff80015b15528
$555 = (struct vm_page *) 0xfffffe008853a700
$556 = (struct vm_page *) 0x100000000000000
$557 = (struct vm_page *) 0x0
$558 = (struct vm_page *) 0xffffffff82310c50 <sysent+32>
$559 = (struct vm_page *) 0xfffffe008853aad8
$560 = (struct vm_page *) 0xfffff80015b15528
$561 = (struct vm_page *) 0xfffffe008853a700
$562 = (struct vm_page *) 0xfffffe0087e53bf0
$563 = (struct vm_page *) 0xffffffff818eea9b <amd64_syscall+27>
$564 = (struct vm_page *) 0x43882e1f0c0
$565 = (struct vm_page *) 0xffffffff81b51bb6
$566 = (struct vm_page *) 0xfffffe0087e53cc0
$567 = (struct vm_page *) 0xffffffff82202798 <Giant+24>
$568 = (struct vm_page *) 0xfffffe0087e53bf0
$569 = (struct vm_page *) 0xffffffff810e2ad9 <fork_exit+409>
$570 = (struct vm_page *) 0xfffffe0087e53c00
$571 = (struct vm_page *) 0x42500000000
$572 = (struct vm_page *) 0xcc9d0e7c19f5a875
$573 = (struct vm_page *) 0x0
$574 = (struct vm_page *) 0xcc9d0e7c19f5a875
$575 = (struct vm_page *) 0x3
$576 = (struct vm_page *) 0x7fffffffec98
$577 = (struct vm_page *) 0x0
$578 = (struct vm_page *) 0x7fffffffec78
$579 = (struct vm_page *) 0x3
$580 = (struct vm_page *) 0xfffffe0087e53bf0
$581 = (struct vm_page *) 0xffffffff818ee188 <trap_check+72>
$582 = (struct vm_page *) 0x0
$583 = (struct vm_page *) 0xfffffe008853a700
$584 = (struct vm_page *) 0x7fffffffeba0
$585 = (struct vm_page *) 0xffffffff818ade0e <fast_syscall_common+248>
$586 = (struct vm_page *) 0x0
$587 = (struct vm_page *) 0x1
$588 = (struct vm_page *) 0x2251ff
$589 = (struct vm_page *) 0x21a032
$590 = (struct vm_page *) 0xffffffffffffdf70
$591 = (struct vm_page *) 0x80024f120
$592 = (struct vm_page *) 0x1
$593 = (struct vm_page *) 0x7fffffffeb18
$594 = (struct vm_page *) 0x7fffffffeba0
$595 = (struct vm_page *) 0x1
$596 = (struct vm_page *) 0x1
$597 = (struct vm_page *) 0x7fffffffec98
$598 = (struct vm_page *) 0x0
$599 = (struct vm_page *) 0x0
$600 = (struct vm_page *) 0x3
$601 = (struct vm_page *) 0x1b00130000000c
$602 = (struct vm_page *) 0x7fffffffebb8
$603 = (struct vm_page *) 0x3b003b00000001
$604 = (struct vm_page *) 0x2
$605 = (struct vm_page *) 0x8003e13fa
$606 = (struct vm_page *) 0x43
$607 = (struct vm_page *) 0x206
$608 = (struct vm_page *) 0x7fffffffeaa8
$609 = (struct vm_page *) 0x3b
$610 = (struct vm_page *) 0x37f
$611 = (struct vm_page *) 0x0
$612 = (struct vm_page *) 0x0
$613 = (struct vm_page *) 0xffff00001f80
$614 = (struct vm_page *) 0x0
$615 = (struct vm_page *) 0x0
$616 = (struct vm_page *) 0x0
$617 = (struct vm_page *) 0x0
$618 = (struct vm_page *) 0x0
$619 = (struct vm_page *) 0x0
$620 = (struct vm_page *) 0x0
$621 = (struct vm_page *) 0x0
$622 = (struct vm_page *) 0x0
$623 = (struct vm_page *) 0x0
$624 = (struct vm_page *) 0x0
$625 = (struct vm_page *) 0x0
$626 = (struct vm_page *) 0x0
$627 = (struct vm_page *) 0x0
$628 = (struct vm_page *) 0x0
$629 = (struct vm_page *) 0x0
$630 = (struct vm_page *) 0x0
$631 = (struct vm_page *) 0x0
$632 = (struct vm_page *) 0x7fffffffd8ef
$633 = (struct vm_page *) 0x7fffffffd8ee
$634 = (struct vm_page *) 0x0
$635 = (struct vm_page *) 0x0
$636 = (struct vm_page *) 0x0
$637 = (struct vm_page *) 0x0
$638 = (struct vm_page *) 0xc0c0c0c0
$639 = (struct vm_page *) 0x0
$640 = (struct vm_page *) 0x0
$641 = (struct vm_page *) 0x0
$642 = (struct vm_page *) 0x0
$643 = (struct vm_page *) 0x0
$644 = (struct vm_page *) 0x0
$645 = (struct vm_page *) 0x0
$646 = (struct vm_page *) 0x0
$647 = (struct vm_page *) 0x0
$648 = (struct vm_page *) 0x0
$649 = (struct vm_page *) 0x0
$650 = (struct vm_page *) 0x0
$651 = (struct vm_page *) 0x0
$652 = (struct vm_page *) 0x0
$653 = (struct vm_page *) 0x0
$654 = (struct vm_page *) 0x0
$655 = (struct vm_page *) 0x0
$656 = (struct vm_page *) 0x0
$657 = (struct vm_page *) 0x0
$658 = (struct vm_page *) 0x0
$659 = (struct vm_page *) 0x0
$660 = (struct vm_page *) 0x0
$661 = (struct vm_page *) 0x0
$662 = (struct vm_page *) 0x0
$663 = (struct vm_page *) 0x0
$664 = (struct vm_page *) 0x0
$665 = (struct vm_page *) 0x0
$666 = (struct vm_page *) 0x0
$667 = (struct vm_page *) 0x0
$668 = (struct vm_page *) 0x0
$669 = (struct vm_page *) 0x0
$670 = (struct vm_page *) 0x0
$671 = (struct vm_page *) 0x0
$672 = (struct vm_page *) 0x0
$673 = (struct vm_page *) 0x0
$674 = (struct vm_page *) 0x3
$675 = (struct vm_page *) 0x0
$676 = (struct vm_page *) 0x0
$677 = (struct vm_page *) 0x0
$678 = (struct vm_page *) 0x0
$679 = (struct vm_page *) 0x0
$680 = (struct vm_page *) 0x0
$681 = (struct vm_page *) 0x0
$682 = (struct vm_page *) 0x0
$683 = (struct vm_page *) 0x0
$684 = (struct vm_page *) 0x0
$685 = (struct vm_page *) 0x0
$686 = (struct vm_page *) 0x0
$687 = (struct vm_page *) 0x0
$688 = (struct vm_page *) 0x0
$689 = (struct vm_page *) 0x0
$690 = (struct vm_page *) 0x0
$691 = (struct vm_page *) 0x0
$692 = (struct vm_page *) 0x0
$693 = (struct vm_page *) 0x0
$694 = (struct vm_page *) 0x0
$695 = (struct vm_page *) 0x0
$696 = (struct vm_page *) 0x0
$697 = (struct vm_page *) 0x0
$698 = (struct vm_page *) 0x0
$699 = (struct vm_page *) 0x0
$700 = (struct vm_page *) 0x0
$701 = (struct vm_page *) 0x0
$702 = (struct vm_page *) 0x0
$703 = (struct vm_page *) 0x0
$704 = (struct vm_page *) 0x0
$705 = (struct vm_page *) 0x0
$706 = (struct vm_page *) 0x0
$707 = (struct vm_page *) 0x0
$708 = (struct vm_page *) 0x0
$709 = (struct vm_page *) 0x0
$710 = (struct vm_page *) 0x0
$711 = (struct vm_page *) 0x0
$712 = (struct vm_page *) 0x0
$713 = (struct vm_page *) 0x0
Cannot access memory at address 0xfffffe0087e54000
(kgdb)
Comment 32 Konstantin Belousov freebsd_committer freebsd_triage 2021-02-13 19:49:31 UTC
(In reply to Cy Schubert from comment #30)
You do understand that this is not useful.

I give up, unless somebody either provides the image where
fstyp on the snap panics the system, or kernel.debug+vmcore
(with vfs_vnops.c and ffs_vnops.c compiled with -O0).
Comment 33 Cy Schubert freebsd_committer freebsd_triage 2021-02-14 02:45:49 UTC
It's on freefall in ~cy/pr253158.tar.xz.
Comment 34 Konstantin Belousov freebsd_committer freebsd_triage 2021-02-14 09:56:54 UTC
(In reply to Cy Schubert from comment #33)
So everything in the dump looks fine, except in the vn_io_fault1() frame,
the short_uio offset and resid are corrupted, which ultimately causes the
panic when ffs_read() tries to actually move bytes around.

Please apply the following debugging patch, compile the same way as you did,
and provide me with the kernel.full+vmcore, again.  Thanks.

diff --git a/sys/kern/vfs_vnops.c b/sys/kern/vfs_vnops.c
index f8943b3c07e7..72357d3ab2af 100644
--- a/sys/kern/vfs_vnops.c
+++ b/sys/kern/vfs_vnops.c
@@ -1339,6 +1339,8 @@ vn_io_fault1(struct vnode *vp, struct uio *uio, struct vn_io_fault_args *args,
 		td->td_ma = ma;
 		td->td_ma_cnt = cnt;
 
+volatile struct uio short_uio1;
+short_uio1 = short_uio;
 		error = vn_io_fault_doio(args, &short_uio, td);
 		vm_page_unhold_pages(ma, cnt);
 		adv = len - short_uio.uio_resid;
Comment 35 Harald Schmalzbauer 2021-02-14 17:52:23 UTC
Created attachment 222440 [details]
kgdb output with volatile struct uio short_uio1

(In reply to Konstantin Belousov from comment #34)

Please find attached the output after adding your diff - I could reproduce it on main-14 as well.
Nothing obviously (to me) differs in my debug info, so it might be useless too?
Comment 36 Cy Schubert freebsd_committer freebsd_triage 2021-02-14 19:34:22 UTC
Panic. See freefall:~cy/pr253158-2.tar.xz
Comment 37 Konstantin Belousov freebsd_committer freebsd_triage 2021-02-15 03:39:01 UTC
(In reply to Cy Schubert from comment #36)
Ok, I can (partially) understand it.

Below are two patches.  I believe that either one of them should fix
the problem.  Can you please check? [Both are needed for correctness]

commit 83a450af9edfd1b5ca705e8101870109225fdc7d
Author: Konstantin Belousov <kib@FreeBSD.org>
Date:   Mon Feb 15 05:36:02 2021 +0200

    UFS snapshots: properly set the vm object size.
    
    PR:     253158

diff --git a/sys/ufs/ffs/ffs_snapshot.c b/sys/ufs/ffs/ffs_snapshot.c
index 8f0adde6f5e4..6da84fb46bb0 100644
--- a/sys/ufs/ffs/ffs_snapshot.c
+++ b/sys/ufs/ffs/ffs_snapshot.c
@@ -59,6 +59,9 @@ __FBSDID("$FreeBSD$");
 #include <sys/rwlock.h>
 #include <sys/vnode.h>
 
+#include <vm/vm.h>
+#include <vm/vm_extern.h>
+
 #include <geom/geom.h>
 
 #include <ufs/ufs/extattr.h>
@@ -328,6 +331,7 @@ ffs_snapshot(mp, snapfile)
 		goto out;
 	bawrite(bp);
 	ip->i_size = lblktosize(fs, (off_t)(numblks + 1));
+	vnode_pager_setsize(vp, ip->i_size);
 	DIP_SET(ip, i_size, ip->i_size);
 	UFS_INODE_SET_FLAG(ip, IN_SIZEMOD | IN_CHANGE | IN_UPDATE);
 	/*


commit 7b34e5b278f9f2af69f5d39f7999507a17238293
Author: Konstantin Belousov <kib@FreeBSD.org>
Date:   Mon Feb 15 05:34:06 2021 +0200

    pgcache read: protect against reads past end of the vm object size
    
    If uio_offset is past end of the object size, calculated resid is negative.
    Delegate handling this case to the locked read, as any other non-trivial
    situation.
    
    PR:     253158

diff --git a/sys/kern/vfs_vnops.c b/sys/kern/vfs_vnops.c
index 46b333b2261f..b13eb442e436 100644
--- a/sys/kern/vfs_vnops.c
+++ b/sys/kern/vfs_vnops.c
@@ -967,6 +967,8 @@ vn_read_from_obj(struct vnode *vp, struct uio *uio)
 #else
 	vsz = atomic_load_64(&obj->un_pager.vnp.vnp_size);
 #endif
+	if (uio->uio_offset >= vsz)
+		goto out;
 	if (uio->uio_offset + resid > vsz)
 		resid = vsz - uio->uio_offset;
Comment 38 Cy Schubert freebsd_committer freebsd_triage 2021-02-15 04:32:09 UTC
That fixes it.

beastie# newfs /dev/ada3
/dev/ada3: 16384.0MB (33554432 sectors) block size 32768, fragment size 4096
	using 27 cylinder groups of 625.22MB, 20007 blks, 80128 inodes.
super-block backups (for fsck_ffs -b #) at:
 192, 1280640, 2561088, 3841536, 5121984, 6402432, 7682880, 8963328, 10243776,
 11524224, 12804672, 14085120, 15365568, 16646016, 17926464, 19206912,
 20487360, 21767808, 23048256, 24328704, 25609152, 26889600, 28170048,
 29450496, 30730944, 32011392, 33291840
beastie# mount /dev/ada3 /mnt
beastie# mksnap_ffs /mnt/.snap/test
beastie# fstyp /mnt/.snap/test
ufs
beastie# 


Note: when committing, I didn't report this problem. I only tested it.
Comment 39 Konstantin Belousov freebsd_committer freebsd_triage 2021-02-15 04:44:22 UTC
(In reply to Cy Schubert from comment #38)
Did you test each patch alone, or just apply both?

I am interested to see whether each patch alone fixes it as well.
Comment 40 Kirk McKusick freebsd_committer freebsd_triage 2021-02-15 07:05:09 UTC
(In reply to Konstantin Belousov from comment #37)
Doing the vnode_pager_setsize() after setting the size is clearly the correct fix.

The previous code did not call vnode_pager_setsize() but worked because later in ffs_snapshot() it does a UFS_WRITE() to output the snaplist. Previously the UFS_WRITE() allocated the extra block at the end of the file which caused it to do the needed vnode_pager_setsize(). But the new code had already allocated the extra block, so UFS_WRITE() did not extend the size and thus did not do the vnode_pager_setsize().
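Kirk's explanation can be illustrated with a toy model (not kernel code; the struct and function names here are invented for illustration): the pager's notion of the file size is only refreshed when a write actually extends the file, so pre-allocating the extra block leaves the pager size stale unless vnode_pager_setsize() is called explicitly.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of the stale pager-size scenario.  i_size is the inode's
 * idea of the file size; pager_size is the vm object's idea, normally
 * updated via vnode_pager_setsize().
 */
struct toy_vnode {
	uint64_t i_size;
	uint64_t pager_size;
};

/*
 * Like UFS_WRITE() in the old flow: the pager size is only pushed
 * when the write extends the file.
 */
static void
toy_write(struct toy_vnode *vp, uint64_t end)
{
	if (end > vp->i_size) {
		vp->i_size = end;
		vp->pager_size = end;	/* implicit vnode_pager_setsize() */
	}
}
```

In the new snapshot code the extra block is already allocated and i_size already bumped before the write, so toy_write() no longer extends the file and the pager size stays stale; the fix sets it explicitly.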
Comment 41 Konstantin Belousov freebsd_committer freebsd_triage 2021-02-15 07:28:21 UTC
(In reply to Kirk McKusick from comment #40)
There are actually two bugs, fixed by two patches.  One is the wrong size of
the vnode vm object.  BTW, I opted for additional vnode_pager_setsize()
instead of setting fs_size + fs_blksize in initial vnode_create_vobject(),
but I might reconsider this.  It is somewhat simpler to see consequences
of the fix/no fix when testing this variant of the patch alone.

Second bug is that page cache read path in vfs_vnops.c is confused when
uio_offset is past the end of file as recorded by vnode_pager_setsize().
It results in negative resid corrupting the state of the io request.

Either of changes should fix the problem, which I want to get confirmations for.
But both bugs should be fixed.
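The negative-resid arithmetic in the second bug can be sketched outside the kernel (a toy model, not the actual vn_read_from_obj() code; the function names are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of the resid clamping in vn_read_from_obj().  vsz is the
 * vm object size, off the uio_offset, resid the requested byte count.
 */
static int64_t
clamp_resid_old(uint64_t vsz, uint64_t off, int64_t resid)
{
	/* When off is already past vsz, vsz - off wraps negative. */
	if (off + (uint64_t)resid > vsz)
		resid = (int64_t)(vsz - off);
	return (resid);
}

static int64_t
clamp_resid_fixed(uint64_t vsz, uint64_t off, int64_t resid)
{
	/* Guard added by the fix: reads past EOF fall back to the
	 * locked read path (stand-in for the EJUSTRETURN case). */
	if (off >= vsz)
		return (-1);
	if (off + (uint64_t)resid > vsz)
		resid = (int64_t)(vsz - off);
	return (resid);
}
```

With the stale pager size from the first bug, uio_offset lands past the recorded object size and the old clamp hands a negative resid to the rest of the read path.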
Comment 42 Harald Schmalzbauer 2021-02-15 08:19:51 UTC
(In reply to Konstantin Belousov from comment #41)

I tried both diffs on their own on main-14 post 8563de2f2799b2cb6f2f06e3c9dddd48dca2a986 and you are right that both fix the panic with my simple test (ffs_snapshot.c=>fstyp-success; patch -R=>fstyp-panic; vfs_vnops.c=>fstyp-"filesystem not recognized" but no panic).

As you might have noticed, the code path is far beyond my skills.
But I think I understand your explanation.
What I do not understand is why it wasn't a problem for Kirk McKusick, for example.
Does it depend on cache size?

Here's the full output of the vfs_vnops.c-only-patched kernel and fstyp(8) result:
fstyp: fread: Operation not permitted
fstyp: /.snap/.test2: filesystem not recognized

Thanks,
-harry
Comment 43 Cy Schubert freebsd_committer freebsd_triage 2021-02-15 08:54:33 UTC
(In reply to Konstantin Belousov from comment #39)

Each patch by itself avoids the panic. The sys/ufs/ffs/ffs_snapshot.c patch resolves the problem without causing any other problems.

However, the sys/kern/vfs_vnops.c patch by itself causes other problems, which result in the following errors during boot:

mountroot: waiting for device /dev/ada0s1a...
input in flex scanner failed
read: read error: Unknown error: 66047
warning: total configured swap (524276 pages) exceeds maximum recommended amount
 (457736 pages).
[...]
Starting dhclient.
input in flex scanner failed
/etc/rc.d/dhclient: WARNING: failed to start dhclient
vtnet2: link state changed to UP
pid 417 (sleep), jid 0, uid 0: exited on signal 3
Quit

The system hung during boot because the network interface never obtained an IP address through dhclient.

beastie# newfs /dev/ada3
/dev/ada3: 16384.0MB (33554432 sectors) block size 32768, fragment size 4096
        using 27 cylinder groups of 625.22MB, 20007 blks, 80128 inodes.
super-block backups (for fsck_ffs -b #) at:
 192, 1280640, 2561088, 3841536, 5121984, 6402432, 7682880, 8963328, 10243776,
 11524224, 12804672, 14085120, 15365568, 16646016, 17926464, 19206912,
 20487360, 21767808, 23048256, 24328704, 25609152, 26889600, 28170048,
 29450496, 30730944, 32011392, 33291840
beastie# mount /dev/ada3 /mnt
beastie# mksnap_ffs /mnt/.snap/test
beastie# fstyp /mnt/.snap/test
fstyp: /mnt/.snap/test: filesystem not recognized.
beastie#

It's getting late here. I cannot continue testing sys/kern/vfs_vnops.c by itself tonight. I can continue tomorrow.

Repeat: sys/ufs/ffs/ffs_snapshot.c does fix the problem without any negative effects.
Comment 44 Cy Schubert freebsd_committer freebsd_triage 2021-02-15 09:03:26 UTC
(In reply to Harald Schmalzbauer from comment #42)

The sys/kern/vfs_vnops.c patch by itself introduces other problems. Together, both patches, or the sys/ufs/ffs/ffs_snapshot.c patch by itself, resolve the panic without any regressions.
Comment 45 Konstantin Belousov freebsd_committer freebsd_triage 2021-02-15 09:47:44 UTC
(In reply to Cy Schubert from comment #44)
Yes, the vfs_vnops.c patch might leave error uninitialized.  Fixed commit below.

commit 04822fadd7b1d7d20373cf3fa8e7fdd5a26e7da9
Author: Konstantin Belousov <kib@FreeBSD.org>
Date:   Mon Feb 15 05:34:06 2021 +0200

    pgcache read: protect against reads past end of the vm object size
    
    If uio_offset is past end of the object size, calculated resid is negative.
    Delegate handling this case to the locked read, as any other non-trivial
    situation.
    
    PR:     253158

diff --git a/sys/kern/vfs_vnops.c b/sys/kern/vfs_vnops.c
index 46b333b2261f..3e6abb01bfd7 100644
--- a/sys/kern/vfs_vnops.c
+++ b/sys/kern/vfs_vnops.c
@@ -967,6 +967,10 @@ vn_read_from_obj(struct vnode *vp, struct uio *uio)
 #else
 	vsz = atomic_load_64(&obj->un_pager.vnp.vnp_size);
 #endif
+	if (uio->uio_offset >= vsz) {
+		error = EJUSTRETURN;
+		goto out;
+	}
 	if (uio->uio_offset + resid > vsz)
 		resid = vsz - uio->uio_offset;
Comment 46 Kirk McKusick freebsd_committer freebsd_triage 2021-02-15 22:02:18 UTC
(In reply to Konstantin Belousov from comment #41)
You should definitely push the fix for ffs_snapshot.c as it fixes the snapshot breakage. As mentioned earlier, I would like to see the snapshot fixes pushed into 13.0 if that is possible.
Comment 47 Kirk McKusick freebsd_committer freebsd_triage 2021-02-15 22:06:14 UTC
(In reply to Harald Schmalzbauer from comment #42)
Thanks for your ongoing help in resolving this problem.

I suggest that you include the patch to ffs_snapshot.c, as it solves that problem. For now I would not include the change to vfs_vnops.c, as it is not needed to fix the snapshot problem and does cause other regressions.
Comment 48 Cy Schubert freebsd_committer freebsd_triage 2021-02-15 22:44:41 UTC
(In reply to Kirk McKusick from comment #47)
kib's second patch to vfs_vnops.c fixes the regression. Agreed, however, that it can wait.
Comment 49 commit-hook freebsd_committer freebsd_triage 2021-02-16 05:15:57 UTC
A commit in branch main references this bug:

URL: https://cgit.FreeBSD.org/src/commit/?id=c61fae1475f1864dc4bba667b642f279afd44855

commit c61fae1475f1864dc4bba667b642f279afd44855
Author:     Konstantin Belousov <kib@FreeBSD.org>
AuthorDate: 2021-02-15 03:34:06 +0000
Commit:     Konstantin Belousov <kib@FreeBSD.org>
CommitDate: 2021-02-16 05:09:37 +0000

    pgcache read: protect against reads past end of the vm object size

    If uio_offset is past end of the object size, calculated resid is negative.
    Delegate handling this case to the locked read, as any other non-trivial
    situation.

    PR:     253158
    Reported by:    Harald Schmalzbauer <bugzilla.freebsd@omnilan.de>
    Tested by:      cy
    Sponsored by:   The FreeBSD Foundation
    MFC after:      1 week

 sys/kern/vfs_vnops.c | 4 ++++
 1 file changed, 4 insertions(+)
Comment 50 commit-hook freebsd_committer freebsd_triage 2021-02-16 05:15:58 UTC
A commit in branch main references this bug:

URL: https://cgit.FreeBSD.org/src/commit/?id=c31480a1f66537e59b02e935a547bcfc76715278

commit c31480a1f66537e59b02e935a547bcfc76715278
Author:     Konstantin Belousov <kib@FreeBSD.org>
AuthorDate: 2021-02-15 03:36:02 +0000
Commit:     Konstantin Belousov <kib@FreeBSD.org>
CommitDate: 2021-02-16 05:11:52 +0000

    UFS snapshots: properly set the vm object size.

    Citing Kirk:
    The previous code [before 8563de2f2799b2cb -- kib] did not call
    vnode_pager_setsize() but worked because later in ffs_snapshot() it
    does a UFS_WRITE() to output the snaplist. Previously the UFS_WRITE()
    allocated the extra block at the end of the file which caused it to do
    the needed vnode_pager_setsize(). But the new code had already allocated
    the extra block, so UFS_WRITE() did not extend the size and thus did not
    do the vnode_pager_setsize().

    PR:     253158
    Reported by:    Harald Schmalzbauer <bugzilla.freebsd@omnilan.de>
    Reviewed by:    mckusick
    Tested by:      cy
    Sponsored by:   The FreeBSD Foundation
    MFC after:      1 week

 sys/ufs/ffs/ffs_snapshot.c | 4 ++++
 1 file changed, 4 insertions(+)
Comment 51 commit-hook freebsd_committer freebsd_triage 2021-02-23 09:10:46 UTC
A commit in branch stable/13 references this bug:

URL: https://cgit.FreeBSD.org/src/commit/?id=1c23f70aeb299e672a4fc483c45c721548ae1047

commit 1c23f70aeb299e672a4fc483c45c721548ae1047
Author:     Konstantin Belousov <kib@FreeBSD.org>
AuthorDate: 2021-02-15 03:34:06 +0000
Commit:     Konstantin Belousov <kib@FreeBSD.org>
CommitDate: 2021-02-23 08:50:22 +0000

    pgcache read: protect against reads past end of the vm object size

    PR:     253158

    (cherry picked from commit c61fae1475f1864dc4bba667b642f279afd44855)

 sys/kern/vfs_vnops.c | 4 ++++
 1 file changed, 4 insertions(+)
Comment 52 commit-hook freebsd_committer freebsd_triage 2021-02-23 11:26:09 UTC
A commit in branch releng/13.0 references this bug:

URL: https://cgit.FreeBSD.org/src/commit/?id=4b737a9c58cac69008f189cc44e7d1a81a0b601c

commit 4b737a9c58cac69008f189cc44e7d1a81a0b601c
Author:     Konstantin Belousov <kib@FreeBSD.org>
AuthorDate: 2021-02-15 03:34:06 +0000
Commit:     Konstantin Belousov <kib@FreeBSD.org>
CommitDate: 2021-02-23 11:21:00 +0000

    pgcache read: protect against reads past end of the vm object size

    PR:     253158
    Approved by:    re (gjb)

    (cherry picked from commit c61fae1475f1864dc4bba667b642f279afd44855)

 sys/kern/vfs_vnops.c | 4 ++++
 1 file changed, 4 insertions(+)
Comment 53 commit-hook freebsd_committer freebsd_triage 2021-02-25 12:59:34 UTC
A commit in branch stable/13 references this bug:

URL: https://cgit.FreeBSD.org/src/commit/?id=66308a13dddcf4282521c044ee668c15a638cdd6

commit 66308a13dddcf4282521c044ee668c15a638cdd6
Author:     Kirk McKusick <mckusick@FreeBSD.org>
AuthorDate: 2021-02-12 05:31:16 +0000
Commit:     Konstantin Belousov <kib@FreeBSD.org>
CommitDate: 2021-02-25 12:56:20 +0000

    Fix bug 253158 - Panic: snapacct_ufs2: bad block - mksnap_ffs(8) crash

    PR:           253158

    (cherry picked from commit 8563de2f2799b2cb6f2f06e3c9dddd48dca2a986)
    (cherry picked from commit c31480a1f66537e59b02e935a547bcfc76715278)

 sys/ufs/ffs/ffs_snapshot.c | 141 ++++++++++++++++++++++++---------------------
 1 file changed, 74 insertions(+), 67 deletions(-)
Comment 54 commit-hook freebsd_committer freebsd_triage 2021-02-25 20:55:11 UTC
A commit in branch releng/13.0 references this bug:

URL: https://cgit.FreeBSD.org/src/commit/?id=f3a1daebaff5181c69ba1086d63de694bf298c64

commit f3a1daebaff5181c69ba1086d63de694bf298c64
Author:     Kirk McKusick <mckusick@FreeBSD.org>
AuthorDate: 2021-02-12 05:31:16 +0000
Commit:     Konstantin Belousov <kib@FreeBSD.org>
CommitDate: 2021-02-25 20:51:10 +0000

    Fix bug 253158 - Panic: snapacct_ufs2: bad block - mksnap_ffs(8) crash

    PR:     253158
    Approved by:    re (delphij, gjb)

    (cherry picked from commit 8563de2f2799b2cb6f2f06e3c9dddd48dca2a986)
    (cherry picked from commit c31480a1f66537e59b02e935a547bcfc76715278)

 sys/ufs/ffs/ffs_snapshot.c | 141 ++++++++++++++++++++++++---------------------
 1 file changed, 74 insertions(+), 67 deletions(-)
Comment 55 commit-hook freebsd_committer freebsd_triage 2021-03-16 00:07:52 UTC
A commit in branch stable/12 references this bug:

URL: https://cgit.FreeBSD.org/src/commit/?id=cf0310dfefee8672680fb45b7ee25722e7630227

commit cf0310dfefee8672680fb45b7ee25722e7630227
Author:     Kirk McKusick <mckusick@FreeBSD.org>
AuthorDate: 2021-02-12 05:31:16 +0000
Commit:     Kirk McKusick <mckusick@FreeBSD.org>
CommitDate: 2021-03-16 00:11:29 +0000

    Fix bug 253158 - Panic: snapacct_ufs2: bad block - mksnap_ffs(8) crash

    PR:           253158

    (cherry picked from commit 8563de2f2799b2cb6f2f06e3c9dddd48dca2a986)
    (cherry picked from commit c31480a1f66537e59b02e935a547bcfc76715278)

 sys/ufs/ffs/ffs_snapshot.c | 145 ++++++++++++++++++++++++---------------------
 1 file changed, 78 insertions(+), 67 deletions(-)