Bug 270409 - NVMe: ABORTED - BY REQUEST for TOSHIBA THNSN51T02DUK
Summary: NVMe: ABORTED - BY REQUEST for TOSHIBA THNSN51T02DUK
Status: Open
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: CURRENT
Hardware: amd64 Any
Importance: --- Affects Only Me
Assignee: freebsd-bugs (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2023-03-22 19:44 UTC by Thierry Thomas
Modified: 2023-11-19 14:27 UTC (History)
4 users

See Also:


Attachments
Verbose dmesg after a successful boot (64.77 KB, text/plain)
2023-04-01 14:35 UTC, Thierry Thomas
no flags Details
Verbose dmesg after an unsuccessful boot (64.87 KB, text/plain)
2023-04-01 14:37 UTC, Thierry Thomas
no flags Details

Description Thierry Thomas freebsd_committer freebsd_triage 2023-03-22 19:44:56 UTC
I just got a second-hand laptop. It came with Windows 10 installed, and one hard disk and two NVMe SSDs were detected.

The two SSDs are:
- THNSN51T02DUK NVMe TOSHIBA
	Bus nbr 1, Target Id 0, LUN 0

- THNSN51T02DUK NVMe TOSHIBA
	Bus nbr 3, Target Id 0, LUN 0

I then replaced Windows with FreeBSD -CURRENT, but these NVMe devices cannot be used:

# grep -i nvme /var/run/dmesg.boot
nvme0: <Generic NVMe Device> mem 0xdd600000-0xdd603fff irq 16 at device 0.0 on pci3
nvme1: <Generic NVMe Device> mem 0xdd400000-0xdd403fff irq 16 at device 0.0 on pci6

# cat /boot/loader.conf
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
cryptodev_load="YES"
zfs_load="YES"
nvme_load="YES"
nvd_load="YES"

# nvmecontrol devlist
nvme0: IDENTIFY (06) sqid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000
nvme0: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0
nvme1: IDENTIFY (06) sqid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000
nvme1: ABORTED - BY REQUEST (00/07) crd:0 m:0 dnr:0 sqid:0 cid:0 cdw0:0

# ls -l /dev/nvme*
crw-------  1 root  wheel  0x36 Mar 22 18:49 /dev/nvme0
crw-------  1 root  wheel  0x37 Mar 22 18:49 /dev/nvme1

# ls -l /dev/nda*
ls: /dev/nda*: No such file or directory

Full dmesg and devinfo outputs are on https://wiki.freebsd.org/Laptops/Dell_Alienware_17R4.

Any tips to make them usable?
Comment 1 Thierry Thomas freebsd_committer freebsd_triage 2023-04-01 14:30:07 UTC
I have tried many things: BIOS tweaking, with or without
hw.nvme.use_nvd, with vmd_load, etc., but with no satisfying result so far.

Note: initially, a device /dev/ntfs appeared and kept one NVMe device locked. It was caused by KDE, and it was freed when I booted the machine without starting KDE.

This is very strange: at one point /dev/nvd0 and /dev/nvd1 were both
present, nvmecontrol identify was OK for the two devices, gpart destroy
of the Windows partitions, gpart create, and gpart add -t freebsd-zfs
were all successful, and I created two zpools on them.

But they are not persistent! After a reboot I sometimes see only one of
them, and sometimes none…
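The partitioning steps described above could be reproduced roughly as follows (a sketch only: the device node nvd0, the label ssd0, and the pool name tank0 are assumptions, and gpart destroy -F is destructive):

```shell
# FreeBSD, run as root; nvd0 is assumed to be one of the Toshiba SSDs.
gpart destroy -F nvd0                  # wipe the existing (Windows) partition table
gpart create -s gpt nvd0               # create a fresh GPT scheme
gpart add -t freebsd-zfs -l ssd0 nvd0  # one whole-disk ZFS partition
zpool create tank0 /dev/gpt/ssd0       # create a pool on the new partition
```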

Remark 1: when a zpool is visible, it is always the same one. And yes, when
the machine ran Windows, both SSDs were always active.

Remark 2: the chance of getting at least one NVMe device seems greater after a cold boot, and the chance of seeing none of them is greater after a warm reboot.
Comment 2 Thierry Thomas freebsd_committer freebsd_triage 2023-04-01 14:35:45 UTC
Created attachment 241246 [details]
Verbose dmesg after a successful boot

After this boot, both NVMe devices were usable.
Comment 3 Thierry Thomas freebsd_committer freebsd_triage 2023-04-01 14:37:25 UTC
Created attachment 241247 [details]
Verbose dmesg after an unsuccessful boot

After this boot, neither NVMe device was visible. Same configuration as for the previous file.
Comment 4 Thierry Thomas freebsd_committer freebsd_triage 2023-04-01 14:39:49 UTC
(In reply to Thierry Thomas from comment #1)

For completeness, here is the /boot/loader.conf that gives the best results (as described in comment #1):

kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
cryptodev_load="YES"
zfs_load="YES"
nvme_load="YES"
nvd_load="YES"
#vmd_load="YES"
hw.vga.textmode=1
nvidia-modeset="YES"
# No success with nda(4)
#hw.nvme.use_nvd=0
# See PR 264172
#hw.pci.enable_pcie_hp=1
Comment 5 Zhenlei Huang freebsd_committer freebsd_triage 2023-04-13 17:26:00 UTC
(In reply to Thierry Thomas from comment #4)
I'm not sure, but since you're not using `nda(4)`, can you comment out `nvme_load="YES"` in your /boot/loader.conf?

I have an Intel Optane M.2 SSD using the nvd(4) driver, and it works great (on 13.1).
Comment 6 Thierry Thomas freebsd_committer freebsd_triage 2023-04-13 18:19:24 UTC
(In reply to Zhenlei Huang from comment #5)
You are right: that line is not necessary, because the nvme module is already built into the kernel.

However, with or without it, the result does not change.
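One way to confirm that the nvme driver is compiled into the kernel itself, making nvme_load="YES" redundant, is something like the following (a hedged sketch; the exact output depends on the kernel build):

```shell
# List modules linked into the kernel file itself; an "nvme"
# entry here means the driver is built in, not loaded from /boot.
kldstat -v -n kernel | grep -i nvme
```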
Comment 7 Thierry Thomas freebsd_committer freebsd_triage 2023-07-23 17:15:53 UTC
Marcus Oliveira reported a similar problem in
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=262969#c15

But the question remains: with the same PSU, why are these NVMe devices usable under Windows but not under FreeBSD?
Comment 8 Marcus Oliveira 2023-07-23 19:47:45 UTC
I might add that the problem also arises on Linux... I've tried every distribution and hypervisor you can think of. It only works on Windows.

Marcus Oliveira