After a crash (triggered by starting powerd, if that matters) the zpool imports fine, but the kernel panics when trying to mount one particular filesystem inside the pool. All other filesystems are fine, and the properties of the broken filesystem can still be read. A picture of the crash and a ddb backtrace can be found here: <http://www.pmp.uni-hannover.de/test/Mitarbeiter/g_kuehn/data/zfs-panic2.jpg> It looks like there is a problem replaying the ZIL.

Some more information about the hardware and setup: four 2.5" 400GB drives (WD4000BEVT) in a RAID-Z1 configuration on a Supermicro AOC-USAS-L8i controller (LSI chip, mpt driver) in a VIA VB8001 board (VIA Nano 1.6GHz) with 4GB of memory. The system boots from a UFS-formatted CF card and uses ZFS for data, /var and /tmp.

Fix: Unknown to me. However, in my opinion ZFS should not panic the kernel even if the ZIL is corrupted. If such cases cannot be avoided completely, something like a --discard-zil switch would be very helpful from a user's point of view.

How-To-Repeat: I had a similar issue before, when the system had crashed once for a different reason, so the situation is probably easy to trigger here. I have not yet tried to recreate the pool and trigger it again, so I cannot yet give further feedback on the problem at hand.
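[Editor's note: a minimal sketch of how a pool matching the description above might have been created. The device names (da0-da3) and dataset names are assumptions for illustration, not taken from the report.]

    # Label the four drives, build a RAID-Z1 pool on the labels,
    # and create the datasets used for /var and /tmp.
    glabel label disk0 da0
    glabel label disk1 da1
    glabel label disk2 da2
    glabel label disk3 da3
    zpool create tank raidz label/disk0 label/disk1 label/disk2 label/disk3
    zfs create tank/var
    zfs create tank/tmp
    zfs set mountpoint=/var tank/var
    zfs set mountpoint=/tmp tank/tmp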
Responsible Changed From-To: freebsd-bugs->freebsd-fs Over to maintainer(s). To submitter: can you also please give the output of "zdb -C"?
Output of "zdb -C" as requested: tank version=13 name='tank' state=0 txg=32618 pool_guid=17523106262699816181 hostname='' vdev_tree type='root' id=0 guid=17523106262699816181 children[0] type='raidz' id=0 guid=2668789775933362751 nparity=1 metaslab_array=14 metaslab_shift=33 ashift=9 asize=1600334594048 is_log=0 children[0] type='disk' id=0 guid=4872680480919708890 path='/dev/label/disk0' whole_disk=0 DTL=63 children[1] type='disk' id=1 guid=14727435584907659484 path='/dev/label/disk1' whole_disk=0 DTL=60 children[2] type='disk' id=2 guid=1501397252321623055 path='/dev/label/disk2' whole_disk=0 DTL=62 children[3] type='disk' id=3 guid=15105917771654568537 path='/dev/label/disk3' whole_disk=0 DTL=61
For the record: I have fixed my pool by booting OpenSolaris dev build 131 and simply importing and exporting the pool; it now works fine again under FreeBSD. Since build 128 OpenSolaris also has a -F (recovery) option for importing corrupted pools. This would be a very useful feature in FreeBSD, too. cu Gerrit
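[Editor's note: a sketch of the recovery procedure described above, using the pool name from the zdb output; the exact behaviour of -F in that OpenSolaris build may differ.]

    # Under OpenSolaris (build 128 or later):
    zpool import -F tank   # recovery import, may discard the last few transactions
    zpool export tank      # export cleanly so FreeBSD can pick the pool up again

    # Back under FreeBSD:
    zpool import tank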
Responsible Changed From-To: freebsd-fs->mm I'll take it.
There are two new patches in 8-STABLE that fix ZIL replay crashes. Could you try the latest 8-STABLE? Or can this PR be closed?
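[Editor's note: a simplified sketch of one way to move to the latest 8-STABLE sources and rebuild, assuming a source tree in /usr/src, the GENERIC kernel configuration, and a stable-supfile tracking RELENG_8; the usual mergemaster steps are omitted.]

    # Fetch the latest 8-STABLE sources.
    csup -L 2 /usr/share/examples/cvsup/stable-supfile

    # Rebuild and install world and kernel, then reboot into the updated system.
    cd /usr/src
    make buildworld
    make buildkernel KERNCONF=GENERIC
    make installkernel KERNCONF=GENERIC
    make installworld
    shutdown -r now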
State Changed From-To: open->closed Closing on feedback timeout.