Bug 191348 - [mps] LSI2308 with WD3000FYYZ drives disappear after hotswapping
Summary: [mps] LSI2308 with WD3000FYYZ drives disappear after hotswapping
Status: Open
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: 10.0-RELEASE
Hardware: Any Any
Importance: --- Affects Only Me
Assignee: freebsd-bugs (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-06-24 21:35 UTC by Michał Margula
Modified: 2019-05-31 06:07 UTC (History)
CC List: 8 users

See Also:


Attachments
Debug log after removing drive (3.97 KB, text/plain)
2014-06-26 07:21 UTC, Michał Margula
no flags Details
Debug log after reinserting same drive in same bay (8.81 KB, text/plain)
2014-06-26 07:21 UTC, Michał Margula
no flags Details
msleep patch to mps_sas_lsi.c (FreeBSD 10.0) (244 bytes, patch)
2014-08-08 22:56 UTC, Stephen McConnell
no flags Details | Diff
Spinup wait sysctl change (2.98 KB, patch)
2014-10-09 22:03 UTC, Stephen McConnell
no flags Details | Diff
mps_debug.log (24.21 KB, text/plain)
2015-09-29 09:26 UTC, Karli Sjöberg
no flags Details
debug logs 0x1F (7.84 KB, application/x-gzip)
2015-10-08 09:04 UTC, Karli Sjöberg
no flags Details
kernel-critical log (64.87 KB, application/octet-stream)
2015-10-14 13:45 UTC, Karli Sjöberg
no flags Details
kern.log with dev.mps.0.debug_level=4, drive removed and reinserted but no detect (41.43 KB, text/plain)
2016-02-10 21:49 UTC, Graham Allan
no flags Details

Description Michał Margula 2014-06-24 21:35:11 UTC
We have a Supermicro SSG-6047R-E1R36L server with motherboard
Super X9DRD-7LN4F-JBOD equipped with LSI2308 SAS controller.
We use Western Digital 3TB RE4 disks (WD3000FYYZ). 

Everything works fine until we remove a disk and then reinsert
it. It is no longer detected; in the logs we get:

mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x47
mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x804b
mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x47
mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x804b
mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x47
_mapping_get_dev_info: failed to compute the hashed SAS Address for SATA device with handle 0x000d
mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x47
mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x804b
mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x47
mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x804b
failure at /usr/src/sys/dev/mps/mps_sas_lsi.c:667/mpssas_add_device()! Could not get ID for device with handle 0x000d
mpssas_fw_work: failed to add device with handle 0xd

We tried upgrading the LSI firmware and it did not help. There is no
new firmware for our disks. According to WD, this disk is compatible
with the LSI2308.

According to Supermicro reseller support, the mps driver has trouble
parsing the ioctl return data. We also tried the mpslsi driver from
the LSI site, with no luck.

It also only happens with those WD disks. We have 11 of them and
every one has the same problem. If we try another brand, an SSD,
or even a WD 1TB RE3 disk, there is no trouble at all. Also, when
using FreeBSD 9.2 we get:

mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x47
da3 at mps0 bus 0 scbus0 target 9 lun 0
da3: <ATA WDC WD3000FYYZ-0 1K02> Fixed Direct Access SCSI-6 device 
da3: 600.000MB/s transfers
da3: Command Queueing enabled
da3: 2861588MB (5860533168 512 byte sectors: 255H 63S/T 364801C)
ses0: da3,pass3: Element descriptor: 'Slot 02'
ses0: da3,pass3: SAS Device Slot Element: 1 Phys at Slot 1
ses0:  phy 0: SATA device
ses0:  phy 0: parent 5003048000e517bf addr 5003048000e5178d

So as you can see there is still an error, but the mps driver
in FreeBSD 9.2 seems to be more forgiving.

To sum it up:
- FreeBSD 10.0 + WD3000FYYZ + mps driver - FAIL
- FreeBSD 10.0 + WD3000FYYZ + mpslsi driver - FAIL
- FreeBSD 10.0 + any other drive + mps driver - OK
- FreeBSD 9.2 + WD3000FYYZ + mps driver - OK
- FreeBSD 9.2 + any other drive + mps driver - OK

If there is anything else you would like to know please let me know.

Thank you!
Comment 1 Michał Margula 2014-06-26 07:21:03 UTC
Created attachment 144141 [details]
Debug log after removing drive
Comment 2 Michał Margula 2014-06-26 07:21:25 UTC
Created attachment 144142 [details]
Debug log after reinserting same drive in same bay
Comment 3 Michał Margula 2014-07-29 16:26:24 UTC
Unfortunately the same issue is present in FreeBSD 9.3-RELEASE:

# uname -a 
FreeBSD  9.3-RELEASE FreeBSD 9.3-RELEASE #0 r260512: Thu Jul 10 23:44:39 UTC 2014     root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

# dmesg | grep mps
mps0: <LSI SAS2308> port 0xe000-0xe0ff mem 0xfba40000-0xfba4ffff,0xfba00000-0xfba3ffff irq 34 at device 0.0 on pci3
mps0: Firmware: 19.00.00.00, Driver: 16:00.00.00-fbsd
mps0: IOCCapabilities: 5285c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,HostDisc>

Unaffected 9.2-RELEASE:
# uname -a
FreeBSD boromir.uznam.net.pl 9.2-RELEASE-p10 FreeBSD 9.2-RELEASE-p10 #0: Tue Jul  8 10:48:24 UTC 2014     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64
# dmesg | grep mps
mps0: <LSI SAS2308> port 0xe000-0xe0ff mem 0xfba40000-0xfba4ffff,0xfba00000-0xfba3ffff irq 34 at device 0.0 on pci3
mps0: Firmware: 19.00.00.00, Driver: 14.00.00.01-fbsd
mps0: IOCCapabilities: 5285c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,HostDisc>

So the only difference I see is that 14.00.00.01-fbsd is fine and 16:00.00.00-fbsd is broken.
Comment 4 Stephen McConnell freebsd_committer freebsd_triage 2014-07-30 22:05:15 UTC
Hi Michal, this is Stephen McConnell from Avago Tech (formerly LSI Corp.).  I looked at this, but I'm not sure what I can do next.  I was not able to reproduce the problem on my system.

I used the 16.00.00.00-fbsd driver and v10 of FreeBSD and I was able to find a WD 3TB SATA 6Gb/s drive to test with and everything was OK.  I also tested with my latest driver.  I used a 2208 instead of a 2308.  Honestly, I'm not sure offhand what the difference is, but I bet that doesn't matter.

The IOCStatus that is coming back is a 0x47, which is a SCSI Protocol Error.  According to the MPI spec, this means "An unrecoverable protocol error has caused termination of the SCSI I/O."  I'm not exactly sure why that would happen, but it sounds like a disk problem.  The driver will try 5 times and then fail.  You can see in the 9.2 driver, the same error happens but only once.  The next retry works.

It seems to be a timing issue or something.  Maybe the enclosure is causing a problem with this particular drive due to the time it takes to get ready or spin up?  If that's the case we could try putting a delay of a few seconds between the retries.  That shouldn't cause any problems since retries should be rare.

Does this sound reasonable?  If so, can you make the change and test it or do you need my help?

Stephen McConnell
Comment 5 Michał Margula 2014-07-31 07:54:36 UTC
Hello,

Thank you for your reply and time. It is possible that the chassis causes trouble
because it has 36 disk bays (it came completely assembled from Supermicro).

I would love to try extended timeouts but I have no idea how to do that.
Could you explain how to make that change? I am new to the FreeBSD environment
(I come from the Linux world).

Does the change go into the source code, or is it some kind of tunable parameter?

Thank you!
Comment 6 Stephen McConnell freebsd_committer freebsd_triage 2014-08-08 22:56:00 UTC
Created attachment 145541 [details]
msleep patch to mps_sas_lsi.c (FreeBSD 10.0)
Comment 7 Stephen McConnell freebsd_committer freebsd_triage 2014-08-08 22:59:50 UTC
Can you apply the attached msleep patch (in a tar file)?  This will cause the driver to sleep for 3 seconds if there is a non-zero IOC Status in the get_sata_identify function.  You will probably need to rebuild the kernel so that the patched driver will be loaded when you boot.  You should see a message that the driver is sleeping for 3 seconds and then finished sleeping.  This will only happen if the IOC Status is non-zero, as happens in your system.  If you need help with this, let me know.
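
Conceptually, the change works roughly like the sketch below.  This is a simplified illustration rather than the literal patch: the call signature of mpssas_get_sata_identify(), the retry count, and the wait-channel handling are assumptions made for readability, and the driver mutex (sc->mps_mtx) is assumed to be held on this path.

/*
 * Sketch only: retry the SATA IDENTIFY a few times, sleeping between
 * attempts so a slow drive has time to spin up.  Written as if it
 * lived in mps_sas_lsi.c, so it relies on the driver's own headers.
 */
static int
get_sata_id_with_retry(struct mps_softc *sc, uint16_t handle)
{
	int try, error, chan;

	for (try = 0; try < 5; try++) {
		/* Issue the SATA PASSTHROUGH (IDENTIFY) request. */
		error = mpssas_get_sata_identify(sc, handle);
		if (error == 0)
			return (0);		/* drive answered */
		/*
		 * Non-zero IOC status: wait ~3 seconds before the next
		 * attempt.  msleep() drops the driver mutex while
		 * sleeping and reacquires it before returning.
		 */
		printf("mps: Sleeping 3 seconds after SATA ID error\n");
		msleep(&chan, &sc->mps_mtx, 0, "mpsid", 3 * hz);
		printf("mps: finished Sleeping\n");
	}
	return (error);
}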
Comment 8 Michał Margula 2014-08-18 11:36:21 UTC
Hello,

First of all - thank you for your help and the time spent on this issue. Also, I am sorry that it took so long for me to respond, but I had to upgrade that box to FreeBSD 10.0 (it was on 9.2 because of that issue) and then recompile the kernel.

It seems that the issue is fixed by the 3-second delay. I see the "sleeping" message every time I disconnect and reconnect a drive, and now the drive is detected properly after replacement.

Currently I have some issues with "zpool status" showing those drives as faulted, but that may be my own fault from trying to online them. I will get back to you as soon as the drives have resilvered and I can do some more testing.

BTW - could such a delay be introduced in the driver by default
(or maybe as a module option), or will I have to recompile the kernel every time?

Thank you once more!
Comment 9 Michał Margula 2014-08-19 06:49:31 UTC
I can confirm that this patch fixes the issue. Will it be incorporated into the driver in some way?

Thank you once more!
Comment 10 Stephen McConnell freebsd_committer freebsd_triage 2014-10-09 22:03:22 UTC
Created attachment 148149 [details]
Spinup wait sysctl change

Michal, sorry about the delay in getting back to you on this.  I have made a new patch for this that I'd like you to try before I get it reviewed and commit the fix. It's the spinup_wait.patch file.  There is a new sysctl variable that you can use to set the spinup wait time (default is 3 seconds), and you should be able to see the wait message if you set the XINFO bit in the debug level (assuming a wait is required).  After you let me know your results I'll get it reviewed by a couple of other people before I commit it.
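
In rough terms, the patch exposes the wait time as a per-adapter value and then uses it in the sleep, along the lines of the fragments below.  The sysctl name matches the patch, but the softc field name, the sysctl context/tree variables, and the CTLFLAG_RD flag are simplified assumptions; mps_dprint()/MPS_XINFO follow the driver's debug macros as I understand them.

/* In the per-adapter softc (field name assumed): */
int	spinup_wait_time;	/* seconds to wait after a SATA ID error */

/* At attach time, once the adapter's sysctl tree exists: */
sc->spinup_wait_time = 3;	/* default of 3 seconds */
SYSCTL_ADD_INT(sysctl_ctx, SYSCTL_CHILDREN(sysctl_tree), OID_AUTO,
    "spinup_wait_time", CTLFLAG_RD, &sc->spinup_wait_time, 0,
    "seconds to wait for a SATA disk to spin up before retrying");

/* In the SATA ID error path (chan is a dummy local wait channel): */
int chan;
mps_dprint(sc, MPS_XINFO,
    "Sleeping %d seconds after SATA ID error to wait for spinup\n",
    sc->spinup_wait_time);
msleep(&chan, &sc->mps_mtx, 0, "mpsid", sc->spinup_wait_time * hz);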

Thanks,
Steve
Comment 11 Michał Margula 2014-10-15 10:45:42 UTC
Hello,

I really appreciate your effort, thank you. Today I applied the patch to 10.0-RELEASE-p9. It still works flawlessly - I can remove disks and reinsert them freely - but a few things:

- to apply that patch I had to change the version inside the patch (mpsvar.h file) because yours contains version 19.00.00.00-fbsd and mine in that FreeBSD release is 16.00.00.00-fbsd

- the sysctl does not work properly - I set hw.mps.spinup_wait_time=5 in /boot/loader.conf and the value shows up correctly in sysctl:

root@boromir:~ # sysctl dev.mps.0.spinup_wait_time
dev.mps.0.spinup_wait_time: 5

But in dmesg I see (after a reboot of course):

Oct 15 12:01:47 boromir kernel: Sleeping 3 seconds
Oct 15 12:01:50 boromir kernel: finished Sleeping 3 seconds

Also, I can't change that setting while the system is running (but that may be the intended behaviour; I don't know FreeBSD that well):

root@boromir:~ # sysctl dev.mps.0.spinup_wait_time=5
sysctl: oid 'dev.mps.0.spinup_wait_time' is read only
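
(For context, and not specific to this patch: a FreeBSD driver commonly seeds a knob like this from a loader tunable once, at attach time, and then registers it as a read-only sysctl, which would explain why it can only be set from loader.conf and why changing it at runtime is rejected. A generic sketch of that pattern, with assumed variable and field names:)

/* Generic boot-time tunable pattern - a sketch, not the actual mps code. */
sc->spinup_wait_time = 3;	/* built-in default */
TUNABLE_INT_FETCH("hw.mps.spinup_wait_time", &sc->spinup_wait_time);
/*
 * The value is read once here, at attach time, from the loader
 * environment (loader.conf).  Registering the sysctl OID with
 * CTLFLAG_RD is what makes a later
 * "sysctl dev.mps.0.spinup_wait_time=5" fail with "oid is read only".
 */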

If you need anything else, please let me know. Once again - big thanks.
Comment 12 Royce Williams 2015-01-13 06:05:03 UTC
Will this patch be applied for 10.2-RELEASE?  I do not yet see it in head or stable.
Comment 13 Stephen McConnell freebsd_committer freebsd_triage 2015-01-15 22:12:50 UTC
Royce, I got a question from you about this bug in my email, but I don't see it in this report for some reason.  The question was:

"Will this patch be applied for 10.2-RELEASE?  I do not yet see it in head or stable."

I do plan on getting this into head and putting back to 9 as well.  I'll be working on getting this reviewed and committed in the next few days.
Comment 14 Karli Sjöberg 2015-02-19 07:02:55 UTC
Chipping in a 'me too' here:

mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x47
mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x804b
mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x47
mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x804b
mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x47
mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x804b
failure at /usr/src/sys/dev/mps/mps_sas_lsi.c:670/mpssas_add_device()! Could not get ID for device with handle 0x0020
mpssas_fw_work: failed to add device with handle 0x20

And you can´t replace a drive because any replacement drive never shows up in the OS.

FreeBSD 10.1-STABLE r278568M

Any estimate when this´ll be in 10-STABLE?

Best Regards
Karli Sjöberg
Comment 15 Stephen McConnell freebsd_committer freebsd_triage 2015-02-19 23:33:35 UTC
I have sent the changes to my mentors for review and hope to get this committed next week.
Comment 16 commit-hook freebsd_committer freebsd_triage 2015-02-24 22:08:45 UTC
A commit references this bug:

Author: slm
Date: Tue Feb 24 22:07:48 UTC 2015
New revision: 279253
URL: https://svnweb.freebsd.org/changeset/base/279253

Log:
  - Updated all files with 2015 Avago copyright, and updated LSI's copyright
    dates.

  - Changed all of the PCI device strings from LSI to Avago Technologies (LSI).

  - Added a sysctl variable to control how StartStopUnit behavior works. User can
    select to spin down disks based on if disk is SSD or HDD.

  - Inquiry data is required to tell if a disk will support SSU at shutdown or
    not. Due to the addition of mpssas_async, which gets Advanced Info but not
    Inquiry data, the setting of supports_SSU was moved to the
    mpssas_scsiio_complete function, which snoops for any Inquiry commands. And,
    since disks are shutdown as a target and not a LUN, this process was
    simplified by basing it on targets and not LUNs.

  - Added a sysctl variable that sets the amount of time to retry after sending a
    failed SATA ID command. This helps with some bad disks and large disks that
    require a lot of time to spin up. Part of this change was to add a callout to
    handle timeouts with the SATA ID command. The callout function is called
    mpssas_ata_id_timeout(). (Fixes PR 191348)

  - Changed the way resets work by allowing I/O to continue to devices that are
    not currently under a reset condition. This uses devq's instead of simq's and
    makes use of the MPSSAS_TARGET_INRESET flag. This change also adds a function
    called mpssas_prepare_tm().

  - Some changes were made to reduce code duplication when getting a SAS address
    for a SATA disk.

  - Fixed some formatting and whitespace.

  - Bump version of mps driver to 20.00.00.00-fbsd

  PR:		191348
  Reviewed by:	ken, scottl
  Approved by:	ken, scottl
  MFC after:	2 weeks

Changes:
  head/sys/dev/mps/mpi/mpi2.h
  head/sys/dev/mps/mpi/mpi2_cnfg.h
  head/sys/dev/mps/mpi/mpi2_hbd.h
  head/sys/dev/mps/mpi/mpi2_history.txt
  head/sys/dev/mps/mpi/mpi2_init.h
  head/sys/dev/mps/mpi/mpi2_ioc.h
  head/sys/dev/mps/mpi/mpi2_ra.h
  head/sys/dev/mps/mpi/mpi2_raid.h
  head/sys/dev/mps/mpi/mpi2_sas.h
  head/sys/dev/mps/mpi/mpi2_targ.h
  head/sys/dev/mps/mpi/mpi2_tool.h
  head/sys/dev/mps/mpi/mpi2_type.h
  head/sys/dev/mps/mps.c
  head/sys/dev/mps/mps_config.c
  head/sys/dev/mps/mps_ioctl.h
  head/sys/dev/mps/mps_mapping.c
  head/sys/dev/mps/mps_mapping.h
  head/sys/dev/mps/mps_pci.c
  head/sys/dev/mps/mps_sas.c
  head/sys/dev/mps/mps_sas.h
  head/sys/dev/mps/mps_sas_lsi.c
  head/sys/dev/mps/mps_user.c
  head/sys/dev/mps/mpsvar.h
Comment 17 Karli Sjöberg 2015-04-08 07:01:39 UTC
Hi!

Any chance to see this MFC'd soon?

From the svnweb link it says to MFC after two weeks and it´s been six weeks (2015-04-08) already so...

Best Regards
Karli Sjöberg
Comment 18 Stephen McConnell freebsd_committer freebsd_triage 2015-04-15 21:49:54 UTC
Done for stable/10.  Sorry it took so long.
Comment 19 commit-hook freebsd_committer freebsd_triage 2015-05-28 18:24:38 UTC
A commit references this bug:

Author: slm
Date: Thu May 28 18:24:28 UTC 2015
New revision: 283661
URL: https://svnweb.freebsd.org/changeset/base/283661

Log:
  The wrong commit message was given with r283632.  This is the correct message.

  - Updated all files with 2015 Avago copyright, and updated LSI's copyright
    dates.

  - Changed all of the PCI device strings from LSI to Avago Technologies (LSI).

  - Added a sysctl variable to control how StartStopUnit behavior works. User can
    select to spin down disks based on if disk is SSD or HDD.

  - Inquiry data is required to tell if a disk will support SSU at shutdown or
    not. Due to the addition of mpssas_async, which gets Advanced Info but not
    Inquiry data, the setting of supports_SSU was moved to the
    mpssas_scsiio_complete function, which snoops for any Inquiry commands. And,
    since disks are shutdown as a target and not a LUN, this process was
    simplified by basing it on targets and not LUNs.

  - Added a sysctl variable that sets the amount of time to retry after sending a
    failed SATA ID command. This helps with some bad disks and large disks that
    require a lot of time to spin up. Part of this change was to add a callout to
    handle timeouts with the SATA ID command. The callout function is called
    mpssas_ata_id_timeout(). (Fixes PR 191348)

  - Changed the way resets work by allowing I/O to continue to devices that are
    not currently under a reset condition. This uses devq's instead of simq's and
    makes use of the MPSSAS_TARGET_INRESET flag. This change also adds a function
    called mpssas_prepare_tm().

  - Some changes were made to reduce code duplication when getting a SAS address
    for a SATA disk.

  - Fixed some formatting and whitespace.

  - Bump version of mps driver to 9.255.01.00-fbsd

  PR:		191348
  Reviewed by:	ken, scottl
  Approved by:	ken, scottl
  MFC after:	1 week

Changes:
  head/sys/dev/mpr/mpi/mpi2.h
  head/sys/dev/mpr/mpi/mpi2_cnfg.h
  head/sys/dev/mpr/mpi/mpi2_hbd.h
  head/sys/dev/mpr/mpi/mpi2_history.txt
  head/sys/dev/mpr/mpi/mpi2_init.h
  head/sys/dev/mpr/mpi/mpi2_ioc.h
  head/sys/dev/mpr/mpi/mpi2_ra.h
  head/sys/dev/mpr/mpi/mpi2_raid.h
  head/sys/dev/mpr/mpi/mpi2_sas.h
  head/sys/dev/mpr/mpi/mpi2_targ.h
  head/sys/dev/mpr/mpi/mpi2_tool.h
  head/sys/dev/mpr/mpi/mpi2_type.h
  head/sys/dev/mpr/mpr.c
  head/sys/dev/mpr/mpr_config.c
  head/sys/dev/mpr/mpr_ioctl.h
  head/sys/dev/mpr/mpr_mapping.c
  head/sys/dev/mpr/mpr_mapping.h
  head/sys/dev/mpr/mpr_pci.c
  head/sys/dev/mpr/mpr_sas.c
  head/sys/dev/mpr/mpr_sas.h
  head/sys/dev/mpr/mpr_sas_lsi.c
  head/sys/dev/mpr/mpr_user.c
  head/sys/dev/mpr/mprvar.h
Comment 20 commit-hook freebsd_committer freebsd_triage 2015-06-04 16:28:03 UTC
A commit references this bug:

Author: slm
Date: Thu Jun  4 16:27:24 UTC 2015
New revision: 283990
URL: https://svnweb.freebsd.org/changeset/base/283990

Log:
  MFC: r283661
  - Updated all files with 2015 Avago copyright, and updated LSI's copyright
    dates.

  - Changed all of the PCI device strings from LSI to Avago Technologies (LSI).

  - Added a sysctl variable to control how StartStopUnit behavior works. User can
    select to spin down disks based on if disk is SSD or HDD.

  - Inquiry data is required to tell if a disk will support SSU at shutdown or
    not. Due to the addition of mprssas_async, which gets Advanced Info but not
    Inquiry data, the setting of supports_SSU was moved to the
    mprsas_scsiio_complete function, which snoops for any Inquiry commands. And,
    since disks are shutdown as a target and not a LUN, this process was
    simplified by basing it on targets and not LUNs.

  - Added a sysctl variable that sets the amount of time to retry after sending a
    failed SATA ID command. This helps with some bad disks and large disks that
    require a lot of time to spin up. Part of this change was to add a callout to
    handle timeouts with the SATA ID command. The callout function is called
    mprsas_ata_id_timeout(). (Fixes PR 191348)

  - Changed the way resets work by allowing I/O to continue to devices that are
    not currently under a reset condition. This uses devq's instead of simq's and
    makes use of the MPSSAS_TARGET_INRESET flag. This change also adds a function
    called mprsas_prepare_tm().

  - Some changes were made to reduce code duplication when getting a SAS address
    for a SATA disk.

  - Fixed some formatting and whitespace.

  - Bump version of mpr driver to 9.255.01.00-fbsd

  PR:		191348

Changes:
_U  stable/10/
  stable/10/sys/dev/mpr/mpi/mpi2.h
  stable/10/sys/dev/mpr/mpi/mpi2_cnfg.h
  stable/10/sys/dev/mpr/mpi/mpi2_hbd.h
  stable/10/sys/dev/mpr/mpi/mpi2_history.txt
  stable/10/sys/dev/mpr/mpi/mpi2_init.h
  stable/10/sys/dev/mpr/mpi/mpi2_ioc.h
  stable/10/sys/dev/mpr/mpi/mpi2_ra.h
  stable/10/sys/dev/mpr/mpi/mpi2_raid.h
  stable/10/sys/dev/mpr/mpi/mpi2_sas.h
  stable/10/sys/dev/mpr/mpi/mpi2_targ.h
  stable/10/sys/dev/mpr/mpi/mpi2_tool.h
  stable/10/sys/dev/mpr/mpi/mpi2_type.h
  stable/10/sys/dev/mpr/mpr.c
  stable/10/sys/dev/mpr/mpr_config.c
  stable/10/sys/dev/mpr/mpr_ioctl.h
  stable/10/sys/dev/mpr/mpr_mapping.c
  stable/10/sys/dev/mpr/mpr_mapping.h
  stable/10/sys/dev/mpr/mpr_pci.c
  stable/10/sys/dev/mpr/mpr_sas.c
  stable/10/sys/dev/mpr/mpr_sas.h
  stable/10/sys/dev/mpr/mpr_sas_lsi.c
  stable/10/sys/dev/mpr/mpr_user.c
  stable/10/sys/dev/mpr/mprvar.h
Comment 21 John Baldwin freebsd_committer freebsd_triage 2015-08-07 19:17:13 UTC
Can this be closed now?
Comment 22 Karli Sjöberg 2015-09-29 06:51:43 UTC
No it can´t, it´s not really fixed.

We have upgraded several of our systems to have this driver and also flashed the firmware of our HBA's to P19. Tried to flash with firmware 20.00.04.00 to match the 20.00.00.00 driver as well, but then ZFS went nuts displaying checksum errors all over. Reverting to P19 fixed that.

I have captured what happened the last time a drive (WD40EZRX) went bye-bye:

Sep 27 01:39:18 zfs1-1 kernel: (da9:mps0:0:16:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00 length 0 SMID 368 command timeout cm 0xfffffe0000cb8300 ccb 0xfffff80302ab4800
Sep 27 01:39:18 zfs1-1 kernel: (noperiph:mps0:0:4294967295:0): SMID 1 Aborting command 0xfffffe0000cb8300
Sep 27 01:39:18 zfs1-1 kernel: mps0: Sending reset from mpssas_send_abort for target ID 16
Sep 27 01:39:18 zfs1-1 kernel: (da9:mps0:0:16:0): WRITE(10). CDB: 2a 00 1e 5f 3f d8 00 00 08 00 length 4096 SMID 411 command timeout cm 0xfffffe0000cbbb70 ccb 0xfffff80302355800
Sep 27 01:39:18 zfs1-1 kernel: (da9:mps0:0:16:0): READ(10). CDB: 28 00 13 0e 4c d8 00 00 80 00 length 65536 SMID 378 command timeout cm 0xfffffe0000cb9020 ccb 0xfffff802e6b06000
Sep 27 01:39:18 zfs1-1 kernel: (da9:mps0:0:16:0): READ(10). CDB: 28 00 13 0e 4d d8 00 00 80 00 length 65536 SMID 404 command timeout cm 0xfffffe0000cbb240 ccb 0xfffff800670db000
Sep 27 01:39:18 zfs1-1 kernel: (da9:mps0:0:16:0): READ(10). CDB: 28 00 13 0e 4d 58 00 00 80 00 length 65536 SMID 885 command timeout cm 0xfffffe0000ce2990 ccb 0xfffff801eefbb000
Sep 27 01:39:18 zfs1-1 kernel: (da9:mps0:0:16:0): WRITE(10). CDB: 2a 00 1e 5f 8d 38 00 00 08 00 length 4096 SMID 234 command timeout cm 0xfffffe0000cad320 ccb 0xfffff8022997c000
Sep 27 01:39:20 zfs1-1 kernel: (da9:mps0:0:16:0): WRITE(10). CDB: 2a 00 1e 5f 8e 50 00 01 00 00 length 131072 SMID 79 command timeout cm 0xfffffe0000ca07b0 ccb 0xfffff801ee18f800
Sep 27 01:39:20 zfs1-1 kernel: (da9:mps0:0:16:0): WRITE(10). CDB: 2a 00 1e 5f 8d 50 00 01 00 00 length 131072 SMID 218 command timeout cm 0xfffffe0000cabe20 ccb 0xfffff80067d7c800
Sep 27 01:39:21 zfs1-1 kernel: mps0: mpssas_prepare_remove: Sending reset for target ID 16
Sep 27 01:39:21 zfs1-1 kernel: da9 at mps0 bus 0 scbus0 target 16 lun 0
Sep 27 01:39:21 zfs1-1 kernel: da9: <ATA WDC WD40EZRX-00S 0A80> s/n      WD-WCC4E4YSXNYH detached
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): WRITE(10). CDB: 2a 00 1e 5f 8e 50 00 01 00 00 length 131072 SMID 79 terminated ioc 804b scsi 0 state c xfer 0
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): WRITE(10). CDB: 2a 00 1e 5f 8d 50 00 01 00 00 length 131072 SMID 218 terminated ioc 804b scsi 0 state c xf(da9:mps0:0:16:0): WRITE(10). CDB: 2a 00 1e 5f 8e 50 00 01 00 00 
Sep 27 01:39:22 zfs1-1 kernel: er 0
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): CAM status: Unconditionally Re-queue Request
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): WRITE(10). CDB: 2a 00 1e 5f 8d 38 00 00 08 00 length 4096 SMID 234 terminated ioc 804b scsi 0 state c xfer(da9: 0
Sep 27 01:39:22 zfs1-1 kernel: mps0:0:  (da9:mps0:0:16:0): READ(10). CDB: 28 00 13 0e 4d 58 00 00 80 00 length 65536 SMID 885 terminated ioc 804b scsi 0 state c xfer16: 0
Sep 27 01:39:22 zfs1-1 kernel: 0):      (da9:mps0:0:16:0): READ(10). CDB: 28 00 13 0e 4c d8 00 00 80 00 length 65536 SMID 378 terminated ioc 804b scsi 0 state c xferError 5, Periph was invalidated
Sep 27 01:39:22 zfs1-1 kernel: 0
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): WRITE(10). CDB: 2a 00 1e 5f 8d 50 00 01 00 00 
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): READ(10). CDB: 28 00 13 0e 4d d8 00 00 80 00 length 65536 SMID 404 terminated ioc 804b scsi 0 state c xfer(da9:mps0:0:16:0): CAM status: Unconditionally Re-queue Request
Sep 27 01:39:22 zfs1-1 kernel: 0
Sep 27 01:39:22 zfs1-1 kernel: (da9:    (da9:mps0:0:16:0): WRITE(10). CDB: 2a 00 1e 5f 3f d8 00 00 08 00 length 4096 SMID 411 terminated ioc 804b scsi 0 state c xfermps0:0: 0
Sep 27 01:39:22 zfs1-1 kernel: 16:mps0: 0): IOCStatus = 0x4b while resetting device 0x14
Sep 27 01:39:22 zfs1-1 kernel: Error 5, Periph was invalidated
Sep 27 01:39:22 zfs1-1 kernel: mps0: (da9:mps0:0:16:0): WRITE(10). CDB: 2a 00 1e 5f 8d 38 00 00 08 00 
Sep 27 01:39:22 zfs1-1 kernel: Unfreezing devq for target ID 16
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): CAM status: Unconditionally Re-queue Request
Sep 27 01:39:22 zfs1-1 kernel: mps0: (da9:Unfreezing devq for target ID 16
Sep 27 01:39:22 zfs1-1 kernel: mps0:0:16:0): Error 5, Periph was invalidated
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): READ(10). CDB: 28 00 13 0e 4d 58 00 00 80 00 
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): CAM status: Unconditionally Re-queue Request
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): Error 5, Periph was invalidated
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): READ(10). CDB: 28 00 13 0e 4c d8 00 00 80 00 
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): CAM status: Unconditionally Re-queue Request
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): Error 5, Periph was invalidated
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): READ(10). CDB: 28 00 13 0e 4d d8 00 00 80 00 
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): CAM status: Unconditionally Re-queue Request
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): Error 5, Periph was invalidated
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): WRITE(10). CDB: 2a 00 1e 5f 3f d8 00 00 08 00 
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): CAM status: Unconditionally Re-queue Request
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): Error 5, Periph was invalidated
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00 
Sep 27 01:39:22 zfs1-1 kernel: ctl_datamove: tag 0x183baa00 on (0:9:0:0) aborted
Sep 27 01:39:22 zfs1-1 kernel: ctl_datamove: tag 0x433baa00 on (0:9:0:0) aborted
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): CAM status: Command timeout
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): Error 5, Periph was invalidated
Sep 27 01:39:22 zfs1-1 kernel: (da9:mps0:0:16:0): Periph destroyed
Sep 27 01:39:21 zfs1-1 devd: Executing 'logger -p kern.notice -t ZFS 'vdev is removed, pool_guid=11769113696885915207 vdev_guid=10111278074591061297''
Sep 27 01:39:21 zfs1-1 ZFS: vdev is removed, pool_guid=11769113696885915207 vdev_guid=10111278074591061297

This server is running 10.1-STABLE r281643, close to 10.2-RELEASE. When reinserting a new SATA drive that has never previously been in the system, nothing prints in the logs and that bay is "blocked" until you reboot the server. We have also added 'dev.mps.0.spinup_wait_time="5"' to loader.conf and it hasn´t made any difference. The number of drives in this system is only 14, so I don´t think extending the timeout makes any difference in this case.

Oddly enough, I had the opportunity to test inserting a SAS drive and it successfully showed up in the OS, so whatever happens is only affecting SATA.

Any way of further debugging this serious problem would be greatly appreciated!

/K
Comment 23 Karli Sjöberg 2015-09-29 09:26:41 UTC
Created attachment 161521 [details]
mps_debug.log
Comment 24 Karli Sjöberg 2015-09-29 09:27:51 UTC
Attaching a capture of trying to reseat a drive in a currently "blocked" port, taken with "sysctl dev.mps.0.debug_level=4". The attachment is called "mps_debug.log".

/K
Comment 25 Stephen McConnell freebsd_committer freebsd_triage 2015-10-06 19:38:58 UTC
The last few entries in the bug log look like a different issue to me than the SATA spinup issue. But, in response to that latest problem:

The log entries show that the AttachedDevHandle for Phy 10 is 0.  That's odd.  It's possible that this is just a bug in the output of the driver, but if the DevHandle is really 0, I'm not exactly sure what would happen.  The DevHandle is created by Firmware, and I really doubt if it's creating a DevHandle of 0, so I'm not sure what's going on there.

Also, the LinkRate value of 5 means "SMP reset in progress", which also seems strange.

And, the fact that a SAS drive will be discovered, but not a SATA drive doesn't make a lot of sense to me.

If you can set the debug_level to 0x1F and then attach logs of what happens when you attach a SATA drive AND a SAS drive, maybe that will give me a clue as to what could be going on.  A full log from boot to after the disk insertions would be good.

Thanks.
Comment 26 Karli Sjöberg 2015-10-08 09:04:24 UTC
Created attachment 161820 [details]
debug logs 0x1F
Comment 27 Karli Sjöberg 2015-10-08 09:10:52 UTC
Set "sysctl dev.mps.0.debug_level=0x1F".

Reinserted SATA drive, captured in file "reinsert_sata.out". No disk was recognized.

Then reinserted SAS drive, captured in file "reinsert_sas.out". It was recognized as da9. Never mind the errors, we knew that drive was bad, but for the purpose of this test, it didn´t matter.

Then we reinserted the same SATA drive back in and rebooted, captured in "/var/run/dmesg.boot".

All files were put in the archive "debug logs 0x1F.tar.gz"

Hope you can make some sense out of all this!

Best Regards
Karli Sjöberg
Comment 28 Stephen McConnell freebsd_committer freebsd_triage 2015-10-12 17:32:50 UTC
Hi Karli, I don't see anything in the logs that makes me think this is the same issue as the original issue.  Can you explain why you think so?  The original problem was a spinup delay that the driver needed to account for.  There were messages regarding SATA Identify in the original logs.

For your current issue, I don't see anything in the logs that makes me think this is a driver problem.  What is happening is that the driver is getting SAS Topology Change List events when you insert the SATA disk.  But, in order for the driver to enumerate that disk, the PhyStatus must be "Target Added".  In your case, the PhyStatus is always "PHY Link Status Change" and never "Target Added".  So, this tells me that the Firmware is never getting enough information from that disk to add it.  All I can think of is that something is wrong with your disk.  Do you have this issue with any other disk?  Do you know that this disk is operating normally in other cases?

Right now, I don't see any problem with the driver for this.
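
To illustrate the PhyStatus point, here is a simplified sketch of how the topology event is dispatched.  The MPI2 constant and structure names follow the MPI2 headers shipped with the driver; the helper calls and their signatures are abbreviated assumptions, not the literal mps code.

static void
handle_sas_topo_change(struct mps_softc *sc,
    MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST *ev)
{
	uint16_t handle;
	uint8_t reason;
	int i;

	for (i = 0; i < ev->NumEntries; i++) {
		reason = ev->PHY[i].PhyStatus & MPI2_EVENT_SAS_TOPO_RC_MASK;
		handle = le16toh(ev->PHY[i].AttachedDevHandle);

		switch (reason) {
		case MPI2_EVENT_SAS_TOPO_RC_TARG_ADDED:
			/* Only this reason code leads to enumeration. */
			mpssas_add_device(sc, handle);
			break;
		case MPI2_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING:
			/* The device went away: schedule its removal. */
			mpssas_prepare_remove(sc, handle);
			break;
		case MPI2_EVENT_SAS_TOPO_RC_PHY_CHANGED:
		default:
			/*
			 * A bare link status change: nothing gets added,
			 * which is what the attached logs show for the
			 * "blocked" bay.
			 */
			break;
		}
	}
}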
Comment 29 Karli Sjöberg 2015-10-13 09:41:47 UTC
Look, if you would rather I open up a new bug report, I can do that, but the end result is the same as in this report; a disk gets kicked for whatever reason, you reinsert a new drive, but it never shows up.

OS timeout is:
dev.mps.[0-9].spinup_wait_time=5

We experience this problem on very different hardware. There are a couple of different SuperMicro motherboards, X9SRH-F, X8SIL-F, X9DBU-F, an HP DL180 G6, and a Sun Fire X4140. Most have one or several SuperMicro SC837E26-RJBOD1 but the HP just has the internal bays and the X4140 has two standard Sun JBOD's, can't remember their names right now.

So very varying in nature, except for the HBA's, that are all of the same maker; LSI models 9200, 9201, 9211 and several SuperMicro AOC-USAS2-L8i (LSI2008).

The hard drives have also been varying. It´s happened with Samsung, Western Digital and Seagate.

And yes, if you insert the "kicked out" drive somewhere else, it works. It shows up and reports no SMART errors.

Why does the SAS drive show up while any other SATA drive doesn´t, if it´s not firmware/driver related? Who do I blame?:)

/K
Comment 30 Stephen McConnell freebsd_committer freebsd_triage 2015-10-13 15:35:43 UTC
> --- Comment #29 from Karli.Sjoberg@slu.se --- Look, if you rather want me to
> open up a new bug report, I can do that, but the end result is the same as in
> this report; a disk gets kicked for whatever reason, you reinsert a new drive,
> but it never shows up.

I don't care if you open another bug or not.  I'm just saying that this appears to be a different issue and it's best for both of us to be on the same page I think.

> 
> OS timeout is:
> dev.mps.[0-9].spinup_wait_time=5

This is a driver variable.  It's not actually an OS timeout.  This is a wait time between checking if a drive is finished spinning up.  This was necessary in order to work around the problem where disks took a long time to spin up (the actual original issue here).  By the time they spun up, the driver had given up waiting.  This is exactly what I mean by saying that we should try to focus on the correct problem, and not mix it in with this original problem, which seems quite different to me.  It confuses things.

> 
> We experience this problem on very different hardware. There are a couple
> of different SuperMicro motherboards, X9SRH-F, X8SIL-F, X9DBU-F, an HP
> DL180 G6, and a Sun Fire X4140. Most have one or several SuperMicro
> SC837E26-RJBOD1 but the HP just have the internal bays and the X4140 has
> two standard Sun JBOD's, can't remember their names right now.
> 
> So very varying in nature, except for the HBA's, that are all of the same
> maker; LSI models 9200, 9201, 9211 and several SuperMicro AOC-USAS2-L8i
> (LSI2008).
> 
> The hard drives have also been varying. It´s happened with Samsung,
> Western Digital and Seagate.
> 
> And yes, if you insert the "kicked out" drive somewhere else, it works. It
> shows up and reports no SMART errors.
> 
> Why does the SAS drive show up while any other SATA drive doesn´t, if it´s
> not firmware/driver related? Who do I blame?:)

I'm not sure why the SAS drive shows up or who to blame yet.  But, from what I've seen in the logs, it doesn't look like it's the driver, although I'm not completely sure yet. (In fact, if it was the driver I don't think a SAS drive would show up either.)  I hope you can get from my previous explanation why I don't think it's the driver.  The driver can only react to the firmware events that it receives.  If the driver does not get a proper event to add a disk, it can't do it.  It looks like the driver is not getting the events that it needs.  It's possible that it's a Firmware problem, but I think we would have seen a lot more complaints from customers if that were true.  If you can give me some exact, simple reproduction steps I can try to recreate it here.  The fewer the drives and the smaller the topology, the better for me, since I don't have a lot of equipment at hand.

> 
> /K
Comment 31 Karli Sjöberg 2015-10-13 16:42:33 UTC
(In reply to Stephen McConnell from comment #30)

OK, yeah we´re on the same page. The drive doesn´t show up, but that is where the similarities end.

OK, "driver timeout" is more appropriate; what I meant is that it lives at the OS level, as opposed to a setting in the HBA or HDD firmware.

There are no simple reproduction steps I can provide because I can´t really make it happen at will, but it has happened quite frequently as of late. What I can tell you is how the systems are set up.

You start with some hardware, add an HBA of model 9200, 9201, 9211 or AOC-USAS2-L8(i|e) flashed with IT firmware, connect the drives through some kind of expander and add at least 14 of them. That´s the smallest number of drives with which I´ve seen this issue occur. Install the latest FreeBSD release, opt for "Auto ZFS", apply some pressure (preferably external, over NFS, iSCSI, SMB or something) for a few days and then watch the magic happen.

Lots of concurrent IO seems to trigger it more frequently.

/K
Comment 32 Stephen McConnell freebsd_committer freebsd_triage 2015-10-13 16:50:41 UTC
OK. I might be able to set something up and run some I/O to see what happens.  In the meantime, can you run a test where you set the debug_level to 0x1F and then send me the complete log?  I don't know if that will be too much debug info in the log, but what I want to see is what happens before the beginning of the initial command timeout all the way through the re-insertion of the disk, including system time.
Comment 33 Karli Sjöberg 2015-10-13 18:59:17 UTC
> --- Comment #32 from Stephen McConnell <slm@freebsd.org> ---
> OK. I might be able to set something up and run some I/O to see what happens. 
> In the meantime, can you run a test where you set the debug_level to 0x1F and
> then send me the complete log?  I don't know if that will be too much debug
> info in the log, but what I want to see is what happens before the beginning of
> the initial command timeout all the way through the re-insertion of the disk,
> including system time.

I think I can swing that. I'll update as soon as I have something.

/K
Comment 34 Karli Sjöberg 2015-10-14 13:44:26 UTC
I did it! I don´t think I can provide you with much better data than this!

Today we installed a new SC837E26-RJBOD1 around 13:20 in the log and inserted and pulled the external SAS cable a couple of times for good measure. Then at around 13:34 we tried inserting the first disk (WD40EZRX) to test out the internal cabling, and then it happened:

mpssas_get_sata_identify: error reading SATA PASSTHRU; iocstatus = 0x804b
mps3: Sleeping 5 seconds after SATA ID error to wait for spinup

This just kept repeating over and over again. Reinserting it in bay 1 didn´t help, so we tried pulling it out of bay 1 and inserting it into bay 2, but with the same result.

So we pulled it out and inserted a Seagate ST2000DM001 into bay 1. It didn´t show up; the bay was "blocked". Reinserting it into bay 2 made it appear at 13:45. Putting it in bay 3 also worked.

We pulled that one out and inserted a Samsung HD103SI, first in bay 1 - it didn´t work, "blocked". Trying bays 2 and 3 made it appear.

Then we tried another WD40EZRX in bay 1 - it didn´t work. Bays 2 and 3 worked, however. We went back to the first "bad" WD40EZRX and tried bays 1, 2 and 3; it still didn´t work.

There´s also a trace here of me pulling out a WD RE SAS WD4001FYYG from another SC837E26-RJBOD1 to test with in the "blocked" bay 1, at 14:09.

Here I hypothesized that this first bad WD40EZRX permanently taints every bay it´s inserted in, so we tried reinserting a drive that had previously shown up, at least in bays 2 and 3: the ST2000DM001. Inserting it in bay 1 did nothing, as before. I would have expected it not to show up in bay 2 or 3 either, since we had tried inserting the "bad" drive there, but the ST2000DM001 actually did show up in bays 2 and 3. Bay 1 is, however, "blocked" for any of the SATA drives.

As a last test we took the WD RE SAS WD4001FYYG drive and inserted it into both bays 1 and 2, which worked as expected, at 13:13.

Full log uploaded named "kernel-critical log"

Best Regards
Karli Sjöberg
Comment 35 Karli Sjöberg 2015-10-14 13:45:33 UTC
Created attachment 162037 [details]
kernel-critical log
Comment 36 Adam Stylinski 2015-12-17 05:43:12 UTC
Chiming in here with a carefully qualified "me too".  

I have an M1015 crossflashed to 9211 IT firmware (without an expander, but using adapters to break out to a separate JBOD), and during heavy IO I'm seeing errors on write commands of a similar nature to this:
length 131072 SMID 79 terminated ioc 804b scsi 0 state c xfer 0.  

These drives are 2TB RE4's.  

I was initially at P17 but tried updating to P19 to see if it'd make a difference with this issue (it hasn't).  While it hasn't seemed to cause any failures, it is a bit worrisome.  I hadn't seen any such issue with reads (such as with a weekly scheduled scrub).  

I only seem to see the issue on the IBM-branded controller; the others are true 9211s flashed to P17 IT firmware. However, the other drives mainly consist of WD Blacks. I'm half tempted to buy or swap controllers around to see if I can find a pattern, but I'm a little bit afraid to roll the dice with my data like that.
Comment 37 Adam Stylinski 2015-12-19 18:06:46 UTC
OK, disregard my previous comment.  For one, my issue probably isn't even the same - my drives aren't disappearing.  For two, the issue I had, which emitted similar / the same write command timeouts, was actually the same as this guy's:
http://christopher-technicalmusings.blogspot.com/2012/03/freebsd-scsi-sense-errors-did-you-check.html (it was a cabling issue).  Sorry for adding noise; this clearly isn't the same issue.
Comment 38 Graham Allan 2016-02-10 21:46:51 UTC
I'm not sure if I'm still seeing this same issue here (FreeBSD 10.2-p7). I have a server with 4 attached SC847-JBOS1 drive chassis; the HBA is a 9207-8e with 20.00.04.00 firmware.

I set dev.mps.0.spinup_wait_time="5" a while back, and had to remove/replace 42 drives from one chassis. The system detected all but one drive - I can swap drives between that and another slot, and it's that particular slot (rather than the drive) which is persistently not detected.

I'm sure it will be detected after a reboot, but before doing that, I set dev.mps.0.debug_level=4 and pulled/replaced the drive. I'll attach the output (along with some earlier bits containing the hardware config etc) in case it reveals anything - I have no idea if it's really related to this issue.

Graham
Comment 39 Graham Allan 2016-02-10 21:49:39 UTC
Created attachment 166854 [details]
kern.log with dev.mps.0.debug_level=4, drive removed and reinserted but no detect
Comment 40 Eitan Adler freebsd_committer freebsd_triage 2018-05-20 23:51:03 UTC
For bugs matching the following conditions:
- Status == In Progress
- Assignee == "bugs@FreeBSD.org"
- Last Modified Year <= 2017

Do
- Set Status to "Open"
Comment 41 Scott Aitken 2019-05-31 05:49:16 UTC
(In reply to Graham Allan from comment #39)

Graham, did anything progress on this?  I have exactly the same issue.  I think I'll post to -questions and raise another ticket if required.

Thanks
Comment 42 Kubilay Kocak freebsd_committer freebsd_triage 2019-05-31 06:07:36 UTC
(In reply to Scott Aitken from comment #41)

Hi Scott,

Please don't create a separate issue if this one describes the same issue you've observed. Instead, please provide additional information regarding your system environment, including:

- Exact FreeBSD version (uname -a)
- Information about how long and for what versions this has been an issue (if applicable)
- Exact symptoms and steps to reproduce
- /var/run/dmesg.boot information (as an attachment)
- pciconf -lv output (as an attachment)