Hi,

I get strange behaviour with ZFS zpool status/scrub, I think. Almost every time I launch zpool scrub, an old pool that has not existed on this system for quite a long time (2 months) reappears. Another odd thing is that the 'oldfs' pool was never on these disks; it was created on other disks that have since been removed from the system. Of course zpool destroy helps, but only until the next zpool scrub.

# zpool status
  pool: basefs
 state: ONLINE
 scrub: scrub in progress for 0h0m, 0.00% done, 1572h56m to go
config:

        NAME        STATE     READ WRITE CKSUM
        basefs      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada0s3  ONLINE       0     0     0
            ada1s3  ONLINE       0     0     0
            ada2s3  ONLINE       0     0     0

# zpool scrub basefs
# zpool status
  pool: basefs
 state: ONLINE
 scrub: scrub in progress for 0h0m, 0.00% done, 1572h56m to go
config:

        NAME        STATE     READ WRITE CKSUM
        basefs      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada0s3  ONLINE       0     0     0
            ada1s3  ONLINE       0     0     0
            ada2s3  ONLINE       0     0     0

errors: No known data errors

  pool: oldfs
 state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oldfs       UNAVAIL      0     0     0  insufficient replicas
          ada3s3    UNAVAIL      0     0     0  cannot open

# zpool destroy oldfs
# zpool status
  pool: basefs
 state: ONLINE
 scrub: scrub in progress for 0h6m, 2.61% done, 4h9m to go
config:

        NAME        STATE     READ WRITE CKSUM
        basefs      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada0s3  ONLINE       0     0     0
            ada1s3  ONLINE       0     0     0
            ada2s3  ONLINE       0     0     0

errors: No known data errors

Regards,
vermaden

How-To-Repeat:
# zpool status
# zpool scrub ${EXISTING_POOL}
# zpool status
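Since the phantom pool keeps coming back after a destroy, a useful first check is whether the stale oldfs configuration lives in the pool cache file rather than on the disks themselves. A minimal sketch, assuming a FreeBSD system with the stock /boot/zfs/zpool.cache path; using zdb -C with no pool argument to dump the cached configurations is my assumption about the installed ZFS version:

        # Dump pool configurations recorded in /boot/zfs/zpool.cache;
        # an 'oldfs' entry here would explain the pool reappearing even
        # though no ada3 disk is attached.
        zdb -C

        # Pools visible on disk but not currently imported show up here.
        zpool import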
Responsible Changed From-To: freebsd-bugs->freebsd-fs
Over to maintainer(s).
State Changed From-To: open->feedback
Could you provide output of:

# zdb -l /dev/ada3s3
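For reference, zdb -l prints the vdev labels that ZFS keeps in four copies on each member device (two at the start, two at the end), so leftover oldfs metadata on any attached disk would show up there. A sketch of sweeping all the slices named in the report; the loop itself is illustrative, not part of the maintainer's request:

        # Dump vdev labels from each candidate slice; a label naming
        # 'oldfs' would point at stale on-disk metadata rather than a
        # stale cache file.
        for dev in /dev/ada0s3 /dev/ada1s3 /dev/ada2s3 /dev/ada3s3; do
                echo "== ${dev} =="
                zdb -l "${dev}"
        done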
Responsible Changed From-To: freebsd-fs->pjd
I'll take this one.

Date: 14 May 2010 06:38:45 +0200
State Changed From-To: feedback->closed
In your report oldfs was reported with ada3s3:

        NAME        STATE     READ WRITE CKSUM
        oldfs       UNAVAIL      0     0     0  insufficient replicas
          ada3s3    UNAVAIL      0     0     0  cannot open

If there was no ada3 in your system that could contain incomplete ZFS metadata, the only other possibility is that there was some info about oldfs in your /boot/zfs/zpool.cache file. 'zpool export oldfs' instead of 'zpool destroy oldfs' should be enough to fix it in the future.
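Following that advice, a minimal cleanup sketch; the grep-based verification step is my addition, not part of the maintainer's reply:

        # 'zpool export' removes the pool's entry from the cache file
        # (/boot/zfs/zpool.cache), so it should not reappear on the
        # next scrub.
        zpool export oldfs

        # Optionally confirm that nothing about oldfs is still cached.
        zdb -C | grep -i oldfs || echo "no cached config for oldfs"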