Bug 123566 - [zfs] zpool import issue: EOVERFLOW
Summary: [zfs] zpool import issue: EOVERFLOW
Status: Closed FIXED
Alias: None
Product: Base System
Classification: Unclassified
Component: sparc64
Version: 7.0-STABLE
Hardware: Any Any
Importance: Normal (Affects Only Me)
Assignee: freebsd-fs (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2008-05-10 12:20 UTC by Marian Tietz
Modified: 2012-03-31 20:12 UTC

See Also:


Description Marian Tietz 2008-05-10 12:20:02 UTC
I have two SUN E450 workstations, both connected to 4 StorEDGE A1000 arrays.
The first machine has 4 GB of RAM and 4 CPUs; the second (where the problem
appears) has 1.5 GB of RAM and 3 CPUs.

On the first workstation I created a zpool (a raidz named "store") with 32 disks.
Everything works fine there, including export and import. I then exported the
pool on the first workstation (works) and tried to import it on the
second workstation (fails):

----------->8------------
sunslave# zpool import store
internal error: Value too large to be stored in data type
Abort (core dumped)
-----------8<------------

The backtrace:

----------->8------------
sunslave# gdb /usr/obj/usr/src/cddl/sbin/zpool/zpool
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "sparc64-marcel-freebsd"...
(gdb) run import -f store
Starting program: /usr/obj/usr/src/cddl/sbin/zpool/zpool import -f store
internal error: Value too large to be stored in data type

Program received signal SIGABRT, Aborted.
0x0000000040e063e8 in kill () from /lib/libc.so.7
(gdb) bt
#0  0x0000000040e063e8 in kill () from /lib/libc.so.7
#1  0x0000000040d4e660 in abort () from /lib/libc.so.7
#2  0x000000004046d04c in zfs_verror (hdl=0x305000, error=2047, fmt=0x40480a30 "%s", ap=0x7fdffff92a8)
    at /usr/src/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/common/libzfs_util.c:211
#3  0x000000004046d8dc in zpool_standard_error_fmt (hdl=0x305000, error=84, fmt=0x40480a30 "%s")
        at /usr/src/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/common/libzfs_util.c:387
#4  0x000000004046d610 in zpool_standard_error (hdl=0x305000, error=84, msg=0x7fdffff9398 "cannot import 'store'")
        at /usr/src/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/common/libzfs_util.c:329
#5  0x0000000040463e34 in zpool_import (hdl=0x305000, config=0x3809e8, newname=0x0, altroot=0x0)
        at /usr/src/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c:700
#6  0x0000000000106164 in do_import (config=0x3809e8, newname=0x0, mntopts=0x0, altroot=0x0, force=1, argc=3, 
        argv=0x7fdffffebd8) at /usr/src/cddl/sbin/zpool/../../../cddl/contrib/opensolaris/cmd/zpool/zpool_main.c:1231
#7  0x0000000000106b88 in zpool_do_import (argc=1, argv=0x7fdffffebe8)
        at /usr/src/cddl/sbin/zpool/../../../cddl/contrib/opensolaris/cmd/zpool/zpool_main.c:1471
#8  0x000000000010cc10 in main (argc=4, argv=0x7fdffffebd0)
        at /usr/src/cddl/sbin/zpool/../../../cddl/contrib/opensolaris/cmd/zpool/zpool_main.c:3570
(gdb) 
-----------8<------------

Some sysctl-vars:

sunslave# sysctl -a | grep kmem
vm.kmem_size_scale: 3
vm.kmem_size_max: 805306368
vm.kmem_size_min: 0
vm.kmem_size: 805306368

sunslave# sysctl -a | grep vnodes
kern.maxvnodes: 120000
kern.minvnodes: 0
vfs.freevnodes: 174
vfs.wantfreevnodes: 13483
vfs.numvnodes: 683

sunslave# sysctl vfs.zfs
vfs.zfs.arc_min: 25165824
vfs.zfs.arc_max: 603979776
vfs.zfs.mdcomp_disable: 0
vfs.zfs.prefetch_disable: 1
vfs.zfs.zio.taskq_threads: 0
vfs.zfs.recover: 0
vfs.zfs.vdev.cache.size: 10485760
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_disable: 0
vfs.zfs.debug: 0
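No root cause was recorded in this PR, but the sysctl output above shows vm.kmem_size capped at 768 MB with vfs.zfs.arc_max at 576 MB on a 1.5 GB machine, and on FreeBSD 7.x the usual first step for ZFS trouble on low-memory systems was tuning those limits in /boot/loader.conf. A hypothetical sketch of that era's tuning, with illustrative values that are an assumption and not a confirmed fix for this bug:

```shell
# Untested sketch: typical FreeBSD 7.x ZFS memory tuning for a low-RAM
# machine. Values below are illustrative, not taken from this PR.
vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="128M"
vfs.zfs.prefetch_disable="1"
```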
Comment 1 Marius Strobl freebsd_committer freebsd_triage 2008-05-10 21:40:55 UTC
Responsible Changed
From-To: freebsd-sparc64->pjd


Assign to maintainer of ZFS.
Comment 2 K. Macy freebsd_committer freebsd_triage 2009-05-17 06:58:28 UTC
State Changed
From-To: open->feedback


Does this happen with ZFS version 13?
Comment 3 K. Macy freebsd_committer freebsd_triage 2009-05-17 06:58:28 UTC
Responsible Changed
From-To: pjd->kmacy


Does this happen with ZFS version 13?

http://svn.freebsd.org/base/user/kmacy/ZFS_MFC/
Comment 4 Gavin Atkinson freebsd_committer freebsd_triage 2011-05-29 23:09:04 UTC
Responsible Changed
From-To: kmacy->freebsd-fs

kmacy has asked for all of his PRs to be reassigned back to the pool. 

To submitter: feedback on this was requested. If you did provide feedback,
it does not appear to have made it into the PR trail - could you please
resend it?
Comment 5 Jaakko Heinonen freebsd_committer freebsd_triage 2012-03-31 20:12:00 UTC
State Changed
From-To: feedback->closed

Feedback timeout.