Created attachment 158088 [details]
Picture of the last second before reboot, while the pool could still be mounted

I have a 6-disk RAIDZ2 pool that lists one dataset as 16E in size. There are 4 files that crash the server on any access (open, move, delete, rename, etc.). Even trying to destroy the dataset crashes the machine: zfs destroy -r <pool>/<dataset> causes an instant reboot with the same assert.

On all of the OSes listed below, the assert is as follows:

panic: solaris assert: rs == NULL, file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c, line: 186
cpuid = 0

This pool was created with FreeNAS 9.0.x and migrated to NAS4Free 9.3, then to NAS4Free 10.1 when the stability problem was noticed. The problem has been reproduced by importing the same pool on a FreeBSD 10.1 live USB, a FreeBSD 11 live USB, Debian Linux with ZFS on Linux, and Ubuntu Linux with ZoL, in attempts to rescue data from the disks.

The last rescue attempt, made while zdb -b -AAA was also running, hit the same assert and rebooted the machine. Since then, even importing the pool, or booting an OS that expects the pool to exist, causes an instant reboot with the same assert.

Below are links to some screenshots and a video (too large to upload them all):

https://youtu.be/pKp3PyZLISQ
https://www.dropbox.com/sc/gulc0vaecn3m5zh/AADEhZeOgdUKPVT8DYpBTxSfa
https://www.dropbox.com/sc/tssj647exm3pa8c/AAA7N1kY_tp3PCd1qkg2ST2ia
https://www.dropbox.com/s/ghekuaywxd6ixbm/Capture1.PNG?dl=0
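For reference, a minimal sketch of the operations that triggered the panic at various points, assuming the pool is hpool2 and the affected dataset is hpool2/jails (the mountpoint and file name shown are hypothetical placeholders, and the exact command lines may have differed slightly):

  zpool import hpool2                    # importing the pool now panics immediately
  ls /mnt/hpool2/jails/<affected-file>   # any access to one of the 4 files (while the pool could still be mounted)
  zfs destroy -r hpool2/jails            # destroying the affected dataset
  zdb -b -AAA hpool2                     # block traversal with zdb assertions relaxed

Every one of these ends in the same range_tree.c line 186 assert and an instant reboot.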
Created attachment 158089 [details]
Picture showing dataset sizes in the pool; hpool2/jails is the one in question, and the files/folders on the right are the ones that crash the system.