I have made an attempt at upgrading from FreeBSD 11.0-RELEASE to 11.1-RELEASE. Unfortunately, the 11.1-RELEASE kernel panics with the following error:

panic: Solaris(panic): blkptr at 0xfffffe00033e5f80 has invalid CHECKSUM 0

The backtrace is as follows:

kdb_backtrace+0x67
vpanic+0x186
panic+0x43
vcmn_err+0xc2
zfs_panic_recover+0x5a
zfs_blkptr_verify+0x8b
zio_read+0x2c
spa_load_verify_cb+0x14a
traverse_visitbp+0x1f8
traverse_visitbp+0x400
traverse_visitbp+0x400
traverse_visitbp+0x400
traverse_visitbp+0x400
traverse_dnode+0xc7
traverse_visitbp+0x753
traverse_impl+0x22b
traverse_pool+0x16d
spa_load+0x1bce

The root file system is part of this ZFS pool, which means I cannot even boot my system. Attempting to import the pool from the shell prompt after booting from the 11.1-RELEASE installation media also results in a kernel panic. The 11.0-RELEASE kernel boots from this ZFS pool without trouble. Scrubbing does not find any errors, and the pool is in perfect health according to "zpool status" (on 11.0-RELEASE).
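For reference, the checks and the import attempt described above roughly correspond to the following commands; "tank" stands in for the actual pool name:

# On 11.0, where the pool works fine:
zpool scrub tank
zpool status -v tank    # reports no errors, pool healthy

# From the 11.1 installation media shell, the equivalent of what panics:
zpool import -f -R /mnt tank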
Could you please try to obtain a crash dump with 11.1? You can set the dumpdev parameter in /boot/loader.conf to a suitable value if you have a dedicated dump partition or a swap partition, e.g.:

dumpdev="gpt/7660D.swap"
dumpdev="ada0p2"

Then you can boot back to 11.0 to get the crash dump extracted and saved.

As to the issue itself... It is possible that you have some bad data with a correct checksum on disk (no guesses as to how it came to be). The older ZFS code is not as thorough in validating the data, and the scrub does not detect the problem because the checksum is correct.
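Something along these lines; the device name ada0p2 and the dump directory /var/crash below are just examples, adjust them to your layout:

# /boot/loader.conf on the 11.1 system (example device name):
dumpdev="ada0p2"

# After the panic, boot back into 11.0 and extract the dump:
savecore -v /var/crash /dev/ada0p2
ls /var/crash    # should now contain vmcore.N and info.N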
Additional debugging confirmed the bad on-disk data theory.
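For anyone investigating a similar panic, zdb can be used to inspect the on-disk block pointers directly from the working (11.0) system; a rough sketch, where the pool/dataset name and the object number are placeholders:

# Dump a dnode including its indirect block pointer tree
# (five -d flags print the block pointers):
zdb -ddddd tank/somefs 1234

# Read a raw block by vdev:offset:size (hex, taken from a blkptr
# printed above) to look at the suspect indirect block itself:
zdb -R tank 0:200000:20000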
"The older ZFS code is not as thorough in validating the data." Andriy, this is interesting (and quite scary ?). What are the differences between 11.0 and 11.1 in terms of ZFS data strength ? Many thanks ! Ben
(In reply to Ben RUBSON from comment #3)

Not sure what's scary... (at least, scarier than before). But let me try to clarify the problem first.

A bit flip happened in RAM (non-ECC) and some corrupted (meta-)data got written to disk. It happened to be a block pointer within an indirect block. To ZFS the indirect block looked totally valid, as its checksum was calculated after the bit flip, so ZFS had no reason to distrust the block pointers in the block. Still, the newer ZFS does some additional validation (sanity checking) of those block pointers, while the older ZFS fully trusted them to be correct. A corrupt block pointer would typically result in a crash later on, and such a crash is harder to debug; that's why the extra checks were added. In some cases the corruption would be almost benign, so things would appear to be okay. In this case, the block pointer was actually a hole block pointer and the corruption was of the almost benign variety.

So, really, the culprit here was faulty RAM. If your data gets corrupted in memory, you have corrupted data and there is no way ZFS can help with that. If your metadata gets corrupted in memory, then ZFS might be able to detect that and bail out early, or it can fail to detect the problem and crash later on, or it can even try to read a wrong block, in which case a checksum error is the most likely outcome.

The usual advice applies: use ECC memory and have backups. Even on a system with ECC memory some hardware can corrupt memory by writing to a wrong location via DMA; even on a system with reliable hardware there can still be a kernel (driver) bug that corrupts memory contents, etc.