Created attachment 224121 [details]
Photograph of the bug.

FreeBSD-13.0-RELEASE-amd64-memstick.img
HP ProBook 440 G7 <https://support.hp.com/gb-en/document/c06474914>

Comparative test results:
<https://gist.github.com/grahamperrin/5eca8231fa7e6a94a1f55991bcd7f3c4#freebsd-130-release-amd64-memstickimg>

> below BIOS DRIVE D: is disk1, a flickering cursor.
> HP ProBook 440 G7

Adjacent bug 255073 covers the same hardware not booting in UEFI mode.
I have an analogous case.

FreeBSD-13.0-RELEASE-i386-mini-memstick.img
HP EliteBook 2570p
BIOS Mode: Legacy

For a moment these strings:

Consoles: internal video/keyboard
BIOS Drive C: disk0
BIOS Drive D: disk1

are displayed, and then the notebook reboots.

FreeBSD-12.2-RELEASE-i386-mini-memstick.img - the same behavior (except that the displayed picture has an additional string at the start, about the BTX loader).

FreeBSD-11.2-RELEASE-i386-mini-memstick.img loads successfully.
Thank you,

(In reply to spell from comment #2)

> …
> FreeBSD-13.0-RELEASE-i386-mini-memstick.img
> HP EliteBook 2570p
> BIOS Mode Legacy
> For a moment these strings:
> …
> are displayed and then the notebook reboots.

In the moment(s) before the reboot, is a flickering cursor visible?

----

With the HP ProBook 440 G7, which does not automatically reboot, the flickering is _very_ rapid and barely perceptible.

<https://h20195.www2.hp.com/v2/getpdf.aspx/c06424517.pdf> QuickSpecs
<https://support.hp.com/gb-en/document/c06474914> specifications
<https://support.hp.com/gb-en/product/hp-probook-440-g7-notebook-pc/29090063>

----

HP EliteBook 2570p

<https://support.hp.com/gb-en/document/c03412731> specifications
<https://support.hp.com/gb-en/product/hp-elitebook-2570p-notebook-pc/5259393/>

> Intel HD Graphics 4000
(In reply to Graham Perrin from comment #3)

> In the moment(s) before the reboot, is a flickering cursor visible?

Not at all. It is a very short moment; I barely managed to read those three strings. My camera also can't catch it.

Thank you too for the PR.
(In reply to spell from comment #4) Latest BIOS?
(In reply to Toomas Soome from comment #5)

> Latest BIOS?

I believe so; I had it updated recently at a service center. Do you need more info about the version?
(In reply to spell from comment #6)

We got disk names, meaning the biosdisk.c probe functions did OK (more or less). After biosdisk, the zfs probe is run, and most likely it is what causes the system to hang, because there are no other messages. The BIOS version does not help too much (I do not have the hardware anyhow). Normally in such a case we start with elimination and inserting diagnostic printouts.
(In reply to Toomas Soome from comment #7)

What we should do next is to investigate why exactly we get stuck, and this will require building a debug loader. I can do this for you. Before that, I'd like you to test whether you can get the boot: prompt -- when the system is starting, at the very first spinner, press space. You should get the boot: prompt; there you can enter status or ?/ or ?/boot to list directory contents. You can also enter a file name, like /boot/loader, to start the next boot phase.
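For reference, such a session at the boot: prompt might look like this (illustrative only; these are just the commands named above):

boot: status
boot: ?/boot
boot: /boot/loader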
(In reply to Toomas Soome from comment #8)

I don't see any spinner, but if I tap Space at the right moment I do enter the boot: prompt and can run /boot/loader or whatever. Please build a loader with eliminations/debug printouts.
(In reply to Graham Perrin from comment #0)

> … Photograph of the bug. …

Compare with the photograph at bug 257722 comment 3.
(In reply to Graham Perrin from bug 260735 comment 4, where there was some CSM)

> … There's a little more, which I should keep separate …

As keyword 'uefi' applies to bug 255073, should any keyword apply to this bug 255072 for legacy boot?

For what it's worth: in this case I treat legacy as distinct from CSM.
(In reply to Toomas Soome from comment #7)

In my case the image boots successfully when I choose IDE instead of AHCI in the BIOS settings. This works for both 13.0-RELEASE and 12.3-RELEASE. Can you please investigate in this direction? Thank you.
(In reply to spell from comment #12)

It has turned out that the same loader (from 12.3-RELEASE), installed on the HDD, loads successfully in AHCI mode.
(In reply to Toomas Soome from comment #7)

> We got disk names, meaning the biosdisk.c probe functions did ok (more or less).

Eventually it turned out that they did not. Trying to boot in all possible combinations has shown that the boot process crashes exactly when all three of these conditions are met:

1) The flash drive is inserted into a USB port.
2) AHCI mode is chosen in the BIOS settings.
3) The loader sees the flash device (as drive D); this occurs when "USB legacy support" is chosen in the BIOS settings.

In all other cases the loader boots successfully, whether it runs from the flash drive or from the HDD.
(In reply to spell from comment #14)

OK, does the same happen with UEFI boot (assuming this system does support UEFI)? Otherwise, we would need to build a boot loader with debug printouts to see what exactly is going on there.
(In reply to Toomas Soome from comment #15)

When UEFI:

A flash drive with the 12.3-RELEASE-i386 image does not appear in the BIOS boot menu at all, so no boot occurs.

A flash drive with the 12.3-RELEASE-amd64 image is visible in the BIOS boot menu and does not crash when it boots (but after the loader's menu the video becomes corrupt; a mess of dots is displayed).
(In reply to spell from comment #16)

OK, the i386 image is 32-bit (I guess), so it won't work with 64-bit UEFI. The GFX mixup after the kernel is loaded and started is another issue, perhaps fixed in 13/current, but that needs to be tested.

But this did prove the problem is only related to the BIOS (legacy) version of the loader - it does smell like we get some bad value while attempting to identify the properties (sector and device size) of that USB flash stick. You could test whether you have the latest BIOS version, too - it may fix it. Otherwise - when exactly does it crash: can you get to the boot: prompt (press space when you see the first spinner), or do you get the crash before you even get to the loader itself?
(In reply to Toomas Soome from comment #17)

> it does smell like we do get some bad value while attempting to identify properties (sector and device size) of that usb flash stick

Yes, but only in AHCI mode. So, who fails here - the BIOS or the loader?

> You could test if you have latest BIOS version, too - it may fix it.

My BIOS version seems to be the latest one. What exactly can I test?

> when exactly does it crash - can you get to boot: prompt (press space when you see first spinner) or you do get crash before you even get to loader itself?

When I enter the boot2 prompt and choose the default loader, I get:

Consoles: internal video/keyboard
BIOS Drive C: disk0
BIOS Drive D: disk1

and right here the notebook reboots. If I choose another loader (I've added a /boot11 directory with the loader from 11.2-RELEASE to this flash drive), then no crash occurs.
(In reply to spell from comment #18)

Who fails depends on the nature of the actual error. Assuming that the better part of machines can boot, it points towards the BIOS, but without knowing the exact error mechanics, we cannot exclude some corner case in the loader code.

The disk list you see is produced in bd_init() from stand/i386/libi386/biosdisk.c, so the crash has to happen in bd_int13probe(), and that usually means something bad happened either in bd_get_diskinfo_ext() or in bd_get_diskinfo_std(). In any case, adding a few printf() calls there would allow us to identify where exactly, and what the values are that cause the crash. Unfortunately, this has to be done on your system, where the crash is happening.
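To illustrate the kind of printouts meant here, a minimal sketch (the bd_int13probe() signature and the bdinfo fields vary between releases, so treat the names below as placeholders from 12.x/13.x-era code, not a verbatim patch):

/* Sketch of diagnostic printouts in stand/i386/libi386/biosdisk.c. */
static bool
bd_int13probe(bdinfo_t *bd)
{
	printf("bd_int13probe: unit 0x%x\n", bd->bd_unit);

	if (!bd_get_diskinfo_ext(bd)) {		/* EDD (LBA) probe */
		printf("  EDD probe failed, trying CHS\n");
		if (!bd_get_diskinfo_std(bd)) {	/* legacy CHS probe */
			printf("  CHS probe failed\n");
			return (false);
		}
	}
	printf("  sectors=%ju sectorsize=%u\n",
	    (uintmax_t)bd->bd_sectors, bd->bd_sectorsize);
	return (true);
}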
(In reply to Toomas Soome from comment #19)

> adding a few printf() calls there would allow us to identify where exactly, and what the values are that cause the crash.

I've added tons of printf() calls and breakpoints throughout that whole stack of functions and finally got as far as bd_edd_io(). That is exactly what fails. I've added a printf() with all arguments at the beginning of bd_edd_io() and don't see obvious differences between argument sets that work and argument sets that crash the function. Please help me further.

Another thing I've observed: though the crashes look similar every time, they do not always occur after exactly the Nth invocation of bd_edd_io() - more precisely, not after an exact value of the bcache_ops variable (which is incremented in bcache_strategy()). Two adjacent boots with no code or hardware modifications can give different (but close) bcache_ops values right before the crashes.
So is it a read or a write? And is this the first such I/O, or not? And can you force it to use bd_chs_io() instead, to see if that helps (though if the geometry isn't quite right, CHS mode will be an epic fail later in the boot process)?
(In reply to Warner Losh from comment #21)

> So is it a read or a write?

It is always a read. Replacing bd_edd_io() with bd_chs_io() didn't help.

> is this the first such I/O or not?

No. I've added my own counters to bd_edd_io() and bd_chs_io(), and I see that the crash may occur e.g. upon the 10th or the 26th invocation of either of these two functions (counting only reads of the flash drive, not the HDD).
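The counters were along these lines (a sketch; these names are mine, not from the tree):

/* Hypothetical debug counters at file scope in biosdisk.c. */
static int edd_calls, chs_calls;

	/* first statement of bd_edd_io(): */
	printf("bd_edd_io: call #%d\n", ++edd_calls);

	/* first statement of bd_chs_io(): */
	printf("bd_chs_io: call #%d\n", ++chs_calls);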
(In reply to spell from comment #22)

Could you post the disk properties -- the actual ones you see from an OS tool like gpart, and what you get from the probing in biosdisk.c (sector size, number of sectors; I guess it is detecting EDD).

The disk IO in the early loader is about detecting the partition type and reading the partition table - what type of partitioning is used on that disk? In the case of GPT, we read the disk start *and* the disk end to be sure there is no corruption. Secondly, disk IO comes from the time we attempt to discover zfs pools; that will read every candidate partition's start and end (the pool config has 4 copies). After that, we have hopefully established our boot file system and will start to read the loader files.

Usually, when there is a problem with disk IO, we see the failure while detecting partitioning or while probing for zfs pools. So, what to look for: certainly the sector number of each read - whether we fit inside the disk. Reading past the disk end can crash many BIOS systems. A second possible issue is the disk read transferring more than we have buffer space for - memory corruption. A possible way to test this guess would be to read 1 sector at a time. We use a low-memory buffer for the realmode INT13 calls and that memory area is 16k, so a single-sector read will (hopefully) not trash past that buffer's end...
(In reply to Toomas Soome from comment #23)

gpart show /dev/da0

=>      1  2002941  da0  MBR  (978M)
        1     1600    1  efi  (800K)
     1601   803216    2  freebsd  [active]  (392M)
   804817  1198125       - free -  (585M)

This is the 12.3-RELEASE-amd64 image. disk_ioctl() returns the same 2002941 sectors and a sector size of 512. According to my printf() info, probing the disks appears OK.

The crash occurs at the zfs probing stage, in the last iteration of the loop:

for (i = 0; devsw[i] != NULL; i++)

in the loader's main.c, when i is 5 and devsw[i]->dv_name is zfs. This is my printout with printf()'s in this loop:

BTX loader 1.00 BTX version is 1.02
Consoles: internal video/keyboard
main.c: dv_name: fd dv_type=5
main.c: dv_name: cd dv_type=3
main.c: dv_name: disk dv_type=1
BIOS drive C: is disk0
BIOS drive D: is disk1
main.c: dv_name: net dv_type=2
main.c: dv_name: vdisk dv_type=1
main.c: dv_name: zfs dv_type=4

Zfs probing first probes the HDD, and there it is always OK; then it probes the flash drive and crashes on it (if AHCI mode is set).

> Possible way to test this guess would be to read 1 sector at a time.

How do I do this?
(In reply to spell from comment #24)

For 1-sector reads: bd_realstrategy() is allocating the bounce buffer with:

bio_size = min(BIO_BUFFER_SIZE, size);

Use 512 for BIO_BUFFER_SIZE. It would be good to get the sector number and size for the last read, however.

The curious thing is, you have MBR, with a freebsd partition (the zfs probe does check it), but after the freebsd partition there is still free space, so we should not get past the disk end - unless the zfs probe tries the "whole disk" first and we got a wrong disk size from INT 13.
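Spelled out, the test change is one line in bd_realstrategy() (a sketch against the 12.x/13.x stand/i386/libi386/biosdisk.c; diagnostic only, not a fix):

	/* Cap the bounce buffer at one 512-byte sector, so each INT13
	 * transfer moves exactly one sector. */
	bio_size = min(512, size);	/* was: min(BIO_BUFFER_SIZE, size) */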
(In reply to spell from comment #20)

> though the crashes look similar every time, they do not always occur after exactly the Nth invocation of bd_edd_io()

Occasionally, a reason for this behavior revealed itself. The exact moment of the crash depends on how quickly I walk through all my breakpoints. If I do it slowly enough (one Enter press per second, giving one bd_edd_io() per second), I can even pass the whole zfs probe stage and proceed further. Otherwise the crash occurs earlier or later. So the matter is not geometry or layout, right?
I can't repeat that experiment, so it was probably a temporary coincidence. So the question still stands.

(In reply to Toomas Soome from comment #25)

> bio_size = min(BIO_BUFFER_SIZE, size);
> use 512 for BIO_BUFFER_SIZE.

This helps. (1024 does not.)
(In reply to Toomas Soome from comment #25)

> It would be good to get the sector number and size for the last read, however.

They differ because the crash occurs at different moments. The two last crashes occurred on sector numbers (the dblk variable) 1953 and 2001640. The read size in both cases was 4096.

> bio_size = min(BIO_BUFFER_SIZE, size);
> use 512 for BIO_BUFFER_SIZE.

Thank you for the hint; it led me to discover that the buffer pointer somehow matters. I've replaced BIO_BUFFER_SIZE with V86_IO_BUFFER_SIZE, commented out the bio_alloc() and bio_free() calls, and used a dumb "bbuf = bio_buffer;" instead (since no LIFO queue of bio_alloc()/bio_free() calls is present here). Such a loader still crashes as usual, but when I simply replace "bbuf = bio_buffer;" with "bbuf = PTOV(V86_IO_BUFFER);", the crash does not occur. Please suggest what to do next.
This may be useful, it seems. The bio_buffer variable on my notebook has address 0x5a6b4, and PTOV(V86_IO_BUFFER) equals 0xffffe000. The loader's smap also gives:

SMAP type=01 base=0000000000000000 len=000000000009dc00
SMAP type=02 base=00000000ffb00000 len=0000000000500000

So bio_buffer resides in a usable memory block (type=01), and PTOV(V86_IO_BUFFER) is in a reserved (type=02) memory block.
(In reply to spell from comment #28)

Just remind me, what version of FreeBSD is this - current?

The bbuf assignment test suggests we do get some sort of buffer overrun there. OK, V86_IO_BUFFER is at 0x8000 with size 0x1000 (4KB); BIO_BUFFER_SIZE is 0x4000 (16KB), and that buffer is allocated from the bss segment (see bio_buffer[BIO_BUFFER_SIZE] in bio.c). So both areas should be safe - in low memory and therefore usable by BIOS INT calls.

Now the catch is, the btx (our V86-mode "kernel") is at 0x9000, and the loader is at 0xA000 (code start, followed by the data and bss segments and then the stack). So, if the INT writes past 0x8000 + 0x1000, it will corrupt BTX; if the INT writes past the end of bio_buffer, it will corrupt the next variable in BSS.

So, if you are using an IO size of 512, then both buffer spaces should be just fine. If the INT call actually uses more of that memory, then we may be in trouble. I guess the only way to detect how much buffer memory was actually used is to store a known value into the entire buffer and test how large the area is where the buffer was changed. With no buffer overrun, we would expect exactly the IO size to be changed...
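A minimal, standalone sketch of that known-value test (the 16 KB buffer and 4 KB read size stand in for bio_buffer and bio_size; the simulated overrun is there only to show what the check would report):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define POISON 0xA5

/* Fill the bounce buffer with a known pattern before the INT13 read. */
static void
poison_buffer(uint8_t *buf, size_t len)
{
	memset(buf, POISON, len);
}

/*
 * Return how far into the buffer something wrote, by counting back
 * over trailing poison bytes.  Caveat: a read whose real data happens
 * to end in 0xA5 bytes will under-count slightly.
 */
static size_t
bytes_touched(const uint8_t *buf, size_t len)
{
	while (len > 0 && buf[len - 1] == POISON)
		len--;
	return (len);
}

int
main(void)
{
	uint8_t buf[16384];		/* stands in for bio_buffer */
	size_t io_size = 4096;		/* the requested read size */

	poison_buffer(buf, sizeof(buf));
	memset(buf, 0x00, io_size + 100);	/* simulate a 100-byte overrun */
	printf("requested %zu, touched %zu\n", io_size,
	    bytes_touched(buf, sizeof(buf)));
	return (0);
}

In the loader itself, the poisoning would go right before the INT13 call in bd_edd_io() and the check right after it - with the caveat, reported later in this thread, that bd_edd_io() may not return at all when the crash hits, in which case the check never runs.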
(In reply to spell from comment #29)

PTOV and VTOP translate a physical address to a virtual one and vice versa; physical 0xA000 is virtual 0x0. So virtual 0xffffe000 is physical 0x00008000.
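Spelled out as arithmetic (illustrative macros; the real PTOV/VTOP live in the i386 loader headers): with the loader linked at virtual 0 but loaded at physical 0xA000, the translation is a fixed offset that wraps modulo 2^32 in the 32-bit loader.

#include <stdint.h>
#include <stdio.h>

#define LOADER_PBASE 0xA000u				/* physical load address */
#define PTOV_X(pa) ((uint32_t)(pa) - LOADER_PBASE)	/* phys -> virt */
#define VTOP_X(va) ((uint32_t)(va) + LOADER_PBASE)	/* virt -> phys */

int
main(void)
{
	printf("PTOV(0x8000)     = 0x%08x\n", PTOV_X(0x8000u));	/* 0xffffe000 */
	printf("VTOP(0xffffe000) = 0x%08x\n", VTOP_X(0xffffe000u));	/* 0x00008000 */
	return (0);
}

This is why 0xffffe000 only looked like it fell into the reserved SMAP block: it is a virtual address, not a physical one.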
(In reply to Toomas Soome from comment #30)

> just remind me, what version of freebsd is this, current?

12.3. Initially I started with 13.0 and noticed that visually its loader crashes the same way as 12.2's does (and, later, as 12.3's does), so I've stuck with 12.3.

> So virtual 0xffffe000 is physical 0x00008000

Got it, thank you.

> So, if the INT will write past 0x8000 + 0x1000, it will corrupt BTX;

This never happens in my experiments (or goes seamlessly). Using V86_IO_BUFFER is always successful.

> if INT will write past end of bio_buffer, it will corrupt next variable in BSS.

If there is no buffer overrun when using V86_IO_BUFFER (which is 4K large), how can one happen when using bio_buffer (which is 16K large) if all other conditions are the same?

I am also trying to decipher the symptom that the crash occurs at a different point in each loader run. It seems that either the bio_buffer area is somehow used by the BIOS concurrently with its use by v86int() (just to remind: the loader crashes only when AHCI mode is set in the BIOS settings), or the INT runs somehow differently depending on IDE/AHCI mode.
(In reply to spell from comment #32)

Hm. So, enforcing an IO size of 1 sector (512B) does not help, but using the buffer at 0x8000 does? That is interesting.

By the way, did you see the comment in bd_io()? It is about a ProLiant and a large disk, but it *may* explain the randomness factor... I still wonder if we could determine the size of the corruption - note, we can increase the buffer area in BSS for test purposes.
(In reply to Toomas Soome from comment #33)

> So, enforcing IO size to 1 sector (512B) does not help, but using buffer at 0x8000 does?

Enforcing an IO size of 512 bytes does help. That is why I started paying attention to the buffer location variations in the first place.

> I still wonder if we could determine the size of corruption - note, we can increase the buffer area in BSS for test purposes.

Increasing the bio_buffer size to BIO_BUFFER_SIZE*4 didn't help. I am trying to work through bd_io_workaround() and the comment about it, and to detect a possible buffer overrun...
(In reply to Toomas Soome from comment #30)

> If the INT call will actually use more of that memory, then we may be in
> trouble. I guess the only way to detect how much buffer memory was actually
> used is to store a known value into the entire buffer and test how large
> the area is where the buffer was changed.

I can't implement this test because bd_edd_io() does not return (the crash occurs inside of it), so I can't check the buffer state after the crashing INT. Is there any way to look inside the INT itself?

> did you see comment in bd_io()?
> It is about proliant and large disk

Can you please explain what buffer overrun happens on that ProLiant, how bd_io_workaround() solves the problem, and whether it just alleviates it (as the comment says) or totally excludes the buffer overrun?
It seems I've caught it. The crash occurs inside bd_edd_io(), which calls the BTX-owned INT 31h, which in turn calls the BIOS-owned INT 13h, and it turns out this last INT is the one that fails. The reason it is so difficult to catch is that it crashes randomly: with no obvious differences in environment it may succeed or crash, approximately 99/1. The 11.2 loader crashes too, though very rarely.

The rule is: the more INT 13h calls during a loader run, the higher the chance of a crash. By default, the 11.2 loader does not enter the zfs probing stage and so issues only two or three INT 13h calls per disk. With zfs probing (which is on by default in 12.3), the count of these requests is about a hundred, so the chance of a crash is much bigger.

I didn't try all the functions of INT 13h, but at least CMD_READ_LBA, CMD_READ_CHS and one of CMD_CHECK_EDD, CMD_EXT_PARAM lead to the crash.

These are my tests supporting this statement:

12.3 loader: I've added a for(i=0; i<100; i++) loop with identical bd_edd_io() calls right after the original bd_edd_io() call, and the crash occurs inside this bunch of calls (every time at a different i value).

11.2 loader: The same with bd_int13probe(). The loader crashes every time at this or that i value.
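For concreteness, the 12.3 stress test was along these lines (a sketch; the bd_edd_io() argument list differs between releases, so the call here is a placeholder repeating the arguments of the preceding, successful call):

	/* Hypothetical repro loop, inserted right after a known-good
	 * bd_edd_io() call: repeat the exact same transfer 100 times.
	 * On healthy firmware this is a no-op; on the affected BIOS
	 * the machine resets at a random iteration. */
	int i;

	for (i = 0; i < 100; i++) {
		printf("stress: bd_edd_io call %d\n", i);
		bd_edd_io(bd, dblk, blks, dest, BD_RD);
	}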
I too was hit by this strange boot bug, in UEFI mode. I've spent hours troubleshooting it and thought it was something to do with the video being connected to the PC. I use PiKVM to see what's going on. As soon as I disconnect the video, it boots normally. I am using pfSense with FreeBSD 14.0-CURRENT. It happened with the previous version of FreeBSD too.

It turns out my fix was to update the BIOS on the Dell OptiPlex 7070 to 1.21.0. The previous version was 1.8.4; as you can see, it's over two years old. I generally don't update the BIOS unless there is a serious security concern or a bug. Apparently it was a bug that bit me.

Netgate pfSense Plus 23.05-RELEASE (amd64)
built on Mon May 22 15:04:36 UTC 2023
FreeBSD 14.0-CURRENT

Vendor: Dell Inc.
Version: 1.21.0
Release Date: Fri Apr 7 2023

I was not able to use legacy BIOS boot for NVMe; it only works in UEFI. Hope this helps those using Dell machines.
(In reply to NoahD from comment #37) > FreeBSD 14.0-CURRENT Which version, exactly?
(In reply to Graham Perrin from comment #38)

FreeBSD 14.0-CURRENT amd64

That's all it says when I use uname -mrs.
(In reply to NoahD from comment #39) uname -aKU
(In reply to Graham Perrin from comment #40) FreeBSD pfsense.darkkdomain.lan 14.0-CURRENT FreeBSD 14.0-CURRENT #1 plus-RELENG_23_05-n256102-7cd3d043045: Mon May 22 15:33:52 UTC 2023 root@freebsd:/var/jenkins/workspace/pfSense-Plus-snapshots-23_05-main/obj/amd64/LkEyii3W/var/jenkins/workspace/pfSense-Plus-snapshots-23_05-main/sources/FreeBSD-src-plus-RELENG_23_05/amd64.amd64/sys/pfSense amd64 1400085 1400085
(In reply to NoahD from comment #37)

> … not able to use the legacy BIOS boot for nvme. …

(In reply to NoahD from comment #41)

> … 14.0-CURRENT #1 plus-RELENG_23_05-n256102-7cd3d043045: Mon
> May 22 15:33:52 UTC 2023 …

Can you try the most recent 14.0-CURRENT?

<https://github.com/freebsd/freebsd-src/commit/bdc81eeda05d3af80254f6aac95759b07f13f2b7> (2023-06-13) was eye-catching, although I can't tell whether it's relevant to a case such as yours.

----

bdc81eeda05d3af80254f6aac95759b07f13f2b7 is not yet in <https://github.com/pfsense/FreeBSD-src/commits/devel-main>.
Booting from the image FreeBSD-14.0-ALPHA4-amd64-20230901-4c3f144478d4-265026-mini-memstick.img gives the same effect on my HP EliteBook as I've reported previously - it reboots while AHCI mode is set in the BIOS settings and boots normally while IDE mode is set.

Graham, Toomas: while I started participating in this bug report with rather superficial notes, I finally investigated the bug to the extent that I believe I have found the core issue - a bug in the BIOS code (the INT 13h piece). I have tried to show how I ended up with this conclusion, though I have probably missed some reasons. As for a workaround, I can't imagine whether one is possible, since this issue is a very opaque race condition that is hardly catchable and seems to fire upon some asynchronous events on the bus from other devices. It's a pity that it is present in HP (and, as reported, Dell) products :( (though they may share the code and consequently the issue).
I just received one of these (an 840 G5). The 15-CURRENT memstick.img boots nicely in EFI mode. In legacy mode, attempting to boot a clone of my Acer laptop's disk, I believe I see the same as in the photograph of the bug (it flashes a couple of lines before it boots again). I did update the firmware to the 2024 (1.28.0) version, which made no difference. Certainly, anyone who wants to use this hardware must use UEFI secure boot mode. Though I would prefer to simply clone my Acer SSD to an SSD I could install in the new machine, saving me a lot of setup work.*

* I've usually cloned one machine to the next over the years. It served me well in the Solaris and OSF/1 days, and has since I started using FreeBSD. It's a huge time-saving approach to clone a drive, pop it into a new machine, and have it work with only a few minor tweaks.

This is certainly a legacy-mode issue.
(In reply to Cy Schubert from comment #44)

Unfortunately, with those legacy cases, we really cannot fix it just by reading and changing the code -- we need to understand under which conditions the problem appears, and then figure out how to fix it. Meaning, debug printouts need to be inserted, and it is a long cycle of tests... and unfortunately, those tests need to be performed on that specific hardware. At the end of the day, it is a question of whether we want to invest time debugging BIOS code while the UEFI version is working :)
(In reply to Toomas Soome from comment #45)

As a developer, I totally get it. I did manage to capture a photo of seven "disk0: Read NNN sectors..." messages before they disappeared.
Each one of these is likely a silent crash, probably caused by the boot loader getting too big for the specific machine in question, but maybe some other regression that's not hitting other machines. I suspect that these will never be fixed, since the common solution is "upgrade to UEFI". If you want or need it fixed, though, your best bet is to bisect the boot loader from the last known working one to the first commit that breaks it. That will suggest what to put in by way of debugging. It's something that can be done asynchronously to developer involvement, and it will narrow things down so we can figure it out. Tracing it to a release isn't helpful enough; there are too many commits between working and not working. Since these messages are from /boot/loader, recovery isn't terrible.
The other thing to try is to boot /boot/loader_4th or /boot/loader_simp from the boot2 boot blocks (you have to interrupt at the right place). Those are smaller and might help determine whether it's a size issue or a code bug.
(In reply to Warner Losh from comment #48)

You were right, partially. I tested using my disaster recovery USB disk (with MBR), which contains a fully installed FreeBSD 15-CURRENT on UFS2, with a ZFS pool containing backups for use in case something goes horribly wrong. It booted with loader_4th. It did not boot with loader_simp, which had the same problem as loader_lua.
(In reply to Toomas Soome from comment #45)

> Unfortunately, with those legacy cases, we really can not fix it just by
> reading and changing the code -- we need to understand under which conditions
> the problem is appearing and then figure how to fix it. Meaning, debug
> printouts need to be inserted and it is long cycle of tests.... and
> unfortunately, those tests need to be performed on that specific hardware.

BTW, I had already done this investigation, with all those printouts and the long cycle of tests, and finally located the exact point and conditions where boot fails. The issue is clearly in the BIOS code and has stochastic behavior, as it occurs due to (it seems) some race condition.

My comprehensive tests showed that in no way does the FreeBSD loader code cause the failure (I've tested 11.2-RELEASE and 12.3-RELEASE). The failure occurs exclusively inside bd_edd_io() or bd_int13probe(), though far from on every call of these functions. This conclusion is based on the results of debug printf()'s placed right before and right after these calls: sometimes (randomly) these functions do not return (no exit debug message), approximately once per several tens of calls (proved by printf()'s with an incrementing number after each such call; the number is always a random one), without any regularity, so it looks like a race condition. bd_edd_io() and bd_int13probe() call BIOS INT 13H, so that appears to be the cause of the failure.

The reason the 11.2 loader seems not to have this issue (it boots normally) is simply that it calls INT 13H only a few times (about 6 or so), so the race condition almost never occurs and boot goes normally (proved by increasing the count of bd_int13probe() calls - boot then fails). Contrarily, the 12.3 loader, due to zfs probing, makes over a hundred INT 13H calls, so sooner or later INT 13H runs into the race condition and boot fails.

Just to recall, the failure occurs when three conditions are met:

1) Legacy boot
2) AHCI mode
3) A USB flash drive inserted

Under these conditions, boot fails regardless of the device from which boot is performed, whether it is the USB drive or the HDD. So the conclusion is: the BIOS INT 13H handler incorrectly handles the HDD+FlashDrive combination in AHCI mode.