Bug 106030 - [ufs] [panic] panic in ufs from geom when a dead disk is invalidated
Summary: [ufs] [panic] panic in ufs from geom when a dead disk is invalidated
Status: Closed FIXED
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: 7.0-CURRENT
Hardware: Any Any
Importance: Normal Affects Only Me
Assignee: freebsd-fs (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2006-11-29 20:20 UTC by Matt Jacob
Modified: 2011-06-19 09:38 UTC

See Also:


Description Matt Jacob 2006-11-29 20:20:12 UTC
I had a mounted UFS disk that went away. I rebooted so as to avoid a
panic. Too bad. GEOM panicked on me anyway:

Syncing disks, vnodes remaining...2 (da8:isp1:0:6:2): Invalidating pack
g_vfs_done():da8a[WRITE(offset=81920, length=4096)]error = 6
panic: bundirty: buffer 0xc6d76f70 still on queue 1
cpuid = 0
KDB: enter: panic
[thread pid 3 tid 100000 ]
Stopped at      kdb_enter+0x2b: nop
db> bt
Tracing pid 3 tid 100000 td 0xc1e98000
kdb_enter(c0936604) at kdb_enter+0x2b
panic(c093f33c,c6d76f70,1,c6d76f70,cba0ec48,...) at panic+0x127
bundirty(c6d76f70) at bundirty+0x35
brelse(c6d76f70) at brelse+0x82f
bufdone_finish(c6d76f70) at bufdone_finish+0x34c
bufdone(c6d76f70) at bufdone+0xaa
ffs_backgroundwritedone(c6d76f70) at ffs_backgroundwritedone+0xca
bufdone(c6d76f70) at bufdone+0x8f
g_vfs_done(c21475ac) at g_vfs_done+0x8a
biodone(c21475ac) at biodone+0x58
g_io_schedule_up(c1e98000) at g_io_schedule_up+0xe6
g_up_procbody(0,cba0ed38) at g_up_procbody+0x5a
fork_exit(c067d58c,0,cba0ed38) at fork_exit+0xac
fork_trampoline() at fork_trampoline+0x8
--- trap 0x1, eip = 0, esp = 0xcba0ed6c, ebp = 0 ---


It's unclear to me where this should be fixed. Since device invalidation
is an inherently asynchronous process that could happen at any time, it
seems to me that GEOM should be a bit more tolerant here.
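
For reference, the "error = 6" in the g_vfs_done() line above is ENXIO
("Device not configured" on FreeBSD), which is what I/O against the
invalidated provider returns. A trivial userland check of that errno
mapping (illustrative only, not part of the kernel path):

#include <errno.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
        /* errno 6 on FreeBSD is ENXIO; strerror() says "Device not configured". */
        printf("error = %d is ENXIO: %s\n", ENXIO, strerror(ENXIO));
        return (0);
}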

How-To-Repeat: 
Turn off a disk that has a mounted filesystem, then reboot.
Comment 1 Robert Watson 2006-11-29 21:51:32 UTC
> It's unclear to me where this should be fixed. Since device invalidation is 
> an inherently asynchronous process that could happen at any time, it seems 
> to me that GEOM should be a bit more tolerant here.

That looks a lot like a UFS/buffer cache panic, not a GEOM panic?

Robert N M Watson
Computer Laboratory
University of Cambridge
Comment 2 Matt Jacob 2006-11-29 22:17:15 UTC
> That looks a lot like a UFS/buffer cache panic, not a GEOM panic?

Good point.
Comment 3 Robert Watson 2006-11-29 22:45:53 UTC
On Wed, 29 Nov 2006, mjacob@freebsd.org wrote:

> A panic should be the last resort. If I/O is returned indicating the device 
> has gone, a binval on all cached data and a forced close of the file table 
> entry and notification of all user processes is the reasonable thing to do. 
> Most real Unixes that were hardened from the original v7 product learned to 
> do this. FreeBSD hasn't.

This is a panic on shutdown in the file system.  All user processes have 
exited, and UFS is unable to sync cached data to disk, so there is no way to 
report the error to a user process.

> As I've repeatedly said, mostly to deaf ears in FreeBSD, a device error 
> should never be the cause for panic *unless* there is absolutely no way to 
> notify user processes of the error *and* data corruption may have silently 
> occurred. Inconvenience to an existing design is not really a good argument.

The context of your panic note appears to be system shutdown, during the final 
syncing of vnode data before unmount -- is this not the case?

> A read error to a device that has disappeared shouldn't cause a panic, even 
> with a filesystem mounted. A write error to same shouldn't cause a panic - 
> the error propagates back up the stack to the actual I/O invocation. If it 
> was writebehind or dirty paging activity that can no longer be associated 
> with any thread, then a panic is a policy decision that only the invoker of 
> the I/O can make. Not the device driver. Not the volume manager (which is 
> what GEOM is).

There are certainly situations where FreeBSD panics rather than tolerating 
invalid file system data, but I believe those problems are entirely at the 
file system layer.  There is a kernel printf from GEOM, but the panic occurs 
in the buffer cache code, presumably when UFS discovers life sucks more than 
it thought.  I'd like to see UFS grow more tolerant of this sort of thing, and 
simply lose the data rather than panicking.
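
For illustration of "lose the data rather than panicking": a hypothetical 
write-completion handler could, on ENXIO from a vanished provider, mark the 
buffer invalid so it is discarded on release rather than requeued as dirty. 
This is a sketch under assumptions, not the actual FreeBSD code path; 
example_write_done() is a made-up name and the ENXIO branch is invented for 
illustration, although b_ioflags, BIO_ERROR, b_error, B_INVAL, B_NOCACHE and 
bufdone() are the real buffer-cache interfaces.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/bio.h>
#include <sys/buf.h>

/*
 * Hypothetical write-completion handler: if the provider has gone away
 * (ENXIO), mark the buffer invalid so the buffer cache discards it on
 * release instead of holding dirty data that can never be written back.
 * Sketch only -- not the committed FreeBSD code.
 */
static void
example_write_done(struct buf *bp)
{
        if ((bp->b_ioflags & BIO_ERROR) != 0 && bp->b_error == ENXIO) {
                /* The data is lost either way; don't requeue it as dirty. */
                bp->b_flags |= B_INVAL | B_NOCACHE;
        }
        bufdone(bp);            /* complete and release the buffer */
}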

That said, I think the more pressing issue is actually with FAT, since 
reliable server configurations frequently run UFS over RAID, but most FAT 
devices are not only not reliable, but also removable, which we currently 
fail to tolerate at all when the FAT file system is mounted.  A practice run 
on tolerating device removal for FAT would probably prepare us to address the 
UFS issues more competently, as well as shake out issues in VM, etc., that 
might arise.  For example, I believe we currently fail rather poorly when 
paging in data from a failing swap device.  Certainly there's no good way to 
get out of the situation, but I think we perform one of the less good bad 
ways.

Robert N M Watson
Computer Laboratory
University of Cambridge
Comment 4 Matt Jacob 2006-11-29 23:08:54 UTC
> This is a panic on shutdown in the file system.  All user processes have 
> exited, and UFS is unable to sync cached data to disk, so there is no way to 
> report the error to a user process.

Yes, but it is also true that this could happen at a time other than 
reboot. In fact, I rebooted rather than try to run with a dead disk 
mounted, and much to my annoyance I *still* couldn't avoid a panic. My 
only other choice would have been to do a 'reboot -n'. Bad in either 
case.

>
> There are certainly situations where FreeBSD panics rather than tolerating 
> invalid file system data, but I believe those problems are entirely at the 
> file system layer.  There is a kernel printf from GEOM, but the panic occurs 
> in the buffer cache code, presumably when UFS discovers life sucks more than 
> it thought.  I'd like to see UFS grow more tolerant of this sort of thing, 
> and simply lose the data rather than panicking.

Yes.

> That said, I think the more pressing issue is actually with FAT, since 
> reliable server configurations frequently run UFS over RAID, but most FAT 
> devices are not only not reliable, but also removable, which we currently 
> fail to tolerate at all when the FAT file system is mounted.  A practice run 
> on tolerating device removal for FAT would probably prepare us to address the 
> UFS issues more competently, as well as shake out issues in VM, etc., that 
> might arise.  For example, I believe we currently fail rather poorly when 
> paging in data from a failing swap device.  Certainly there's no good way to 
> get out of the situation, but I think we perform one of the less good bad 
> ways.

Uhh- this conversation just took a rather bizarre twist. It's not just a 
question of making UFS more fault-tolerant- UFS is sort of a dead horse 
by now, and RAID may not help when it's a channel failure (e.g., Fibre 
Channel or iSCSI). I'd rather see efforts put into ZFS (and fixing the 
XFS port to actually work)- but that is beside the point. It's more a 
case of making sure that we don't panic when we don't have to. Right now 
we do that too much.

But these are very good points- thanks for the review of my somewhat 
botched bug report.
Comment 5 Mark Linimon 2009-05-18 03:59:31 UTC
Responsible Changed
From-To: freebsd-bugs->freebsd-fs

Analysis showed that this is either a UFS or buffer cache problem.
Comment 6 Jaakko Heinonen 2011-05-14 18:10:55 UTC
State Changed
From-To: open->feedback

Can you still reproduce this on a supported release?
Comment 7 Jaakko Heinonen 2011-06-19 09:38:39 UTC
State Changed
From-To: feedback->closed

Feedback timeout.