I have two Sun E450 machines, both connected to four StorEdge A1000 arrays. The first machine has 4 GB of RAM and 4 CPUs; the second (where the problem appears) has 1.5 GB of RAM and 3 CPUs.

On the first machine I created a raidz pool named "store" with 32 disks. Everything works fine there, including export and import. I then exported the pool on the first machine (works) and tried to import it on the second machine (fails):

----------->8------------
sunslave# zpool import store
internal error: Value too large to be stored in data type
Abort (core dumped)
-----------8<------------

The backtrace:

----------->8------------
sunslave# gdb /usr/obj/usr/src/cddl/sbin/zpool/zpool
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "sparc64-marcel-freebsd"...
(gdb) run import -f store
Starting program: /usr/obj/usr/src/cddl/sbin/zpool/zpool import -f store
internal error: Value too large to be stored in data type

Program received signal SIGABRT, Aborted.
0x0000000040e063e8 in kill () from /lib/libc.so.7
(gdb) bt
#0  0x0000000040e063e8 in kill () from /lib/libc.so.7
#1  0x0000000040d4e660 in abort () from /lib/libc.so.7
#2  0x000000004046d04c in zfs_verror (hdl=0x305000, error=2047, fmt=0x40480a30 "%s", ap=0x7fdffff92a8)
    at /usr/src/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/common/libzfs_util.c:211
#3  0x000000004046d8dc in zpool_standard_error_fmt (hdl=0x305000, error=84, fmt=0x40480a30 "%s")
    at /usr/src/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/common/libzfs_util.c:387
#4  0x000000004046d610 in zpool_standard_error (hdl=0x305000, error=84, msg=0x7fdffff9398 "cannot import 'store'")
    at /usr/src/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/common/libzfs_util.c:329
#5  0x0000000040463e34 in zpool_import (hdl=0x305000, config=0x3809e8, newname=0x0, altroot=0x0)
    at /usr/src/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c:700
#6  0x0000000000106164 in do_import (config=0x3809e8, newname=0x0, mntopts=0x0, altroot=0x0, force=1, argc=3, argv=0x7fdffffebd8)
    at /usr/src/cddl/sbin/zpool/../../../cddl/contrib/opensolaris/cmd/zpool/zpool_main.c:1231
#7  0x0000000000106b88 in zpool_do_import (argc=1, argv=0x7fdffffebe8)
    at /usr/src/cddl/sbin/zpool/../../../cddl/contrib/opensolaris/cmd/zpool/zpool_main.c:1471
#8  0x000000000010cc10 in main (argc=4, argv=0x7fdffffebd0)
    at /usr/src/cddl/sbin/zpool/../../../cddl/contrib/opensolaris/cmd/zpool/zpool_main.c:3570
(gdb)
-----------8<------------

Some sysctl variables:

sunslave# sysctl -a | grep kmem
vm.kmem_size_scale: 3
vm.kmem_size_max: 805306368
vm.kmem_size_min: 0
vm.kmem_size: 805306368
sunslave# sysctl -a | grep vnodes
kern.maxvnodes: 120000
kern.minvnodes: 0
vfs.freevnodes: 174
vfs.wantfreevnodes: 13483
vfs.numvnodes: 683
sunslave# sysctl vfs.zfs
vfs.zfs.arc_min: 25165824
vfs.zfs.arc_max: 603979776
vfs.zfs.mdcomp_disable: 0
vfs.zfs.prefetch_disable: 1
vfs.zfs.zio.taskq_threads: 0
vfs.zfs.recover: 0
vfs.zfs.vdev.cache.size: 10485760
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_disable: 0
vfs.zfs.debug: 0
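For context on the failing machine's memory limits, the kmem and ARC byte values above (taken verbatim from the sysctl output) convert to megabytes as follows:

```shell
# Convert the reported sysctl byte values to MB for readability.
echo "vm.kmem_size:    $((805306368 / 1024 / 1024)) MB"
echo "vfs.zfs.arc_max: $((603979776 / 1024 / 1024)) MB"
echo "vfs.zfs.arc_min: $((25165824 / 1024 / 1024)) MB"
```

So the 1.5 GB machine caps kernel memory for ZFS at 768 MB, with the ARC limited to 576 MB. If memory limits turn out to be involved, these values could presumably be raised at boot via /boot/loader.conf tunables such as vm.kmem_size and vfs.zfs.arc_max; that is a general tuning suggestion, not something verified on this hardware.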
Responsible Changed From-To: freebsd-sparc64->pjd
Assign to maintainer of ZFS.

State Changed From-To: open->feedback
Does this happen with version 13?

Responsible Changed From-To: pjd->kmacy
Does this happen with version 13? http://svn.freebsd.org/base/user/kmacy/ZFS_MFC/

Responsible Changed From-To: kmacy->freebsd-fs
kmacy has asked for all of his PRs to be reassigned back to the pool. To submitter: feedback on this was requested. If you did provide feedback, it doesn't appear to have made it into the PR trail - could you please resend it?

State Changed From-To: feedback->closed
Feedback timeout.