[The ". . ."s are omitted lines.] (only tested on rpi2 with -mcpu=cortex-a7 in use) kyua report --results-filter broken --results-file /usr/tests --verbose reports for sys/geom: ===> sys/geom/class/eli/integrity_copy_test:main Result: broken: Test case timed out Duration: 1200.089s Metadata: allowed_architectures is empty allowed_platforms is empty description is empty has_cleanup = false required_configs is empty required_disk_space = 0 required_files is empty required_memory = 0 required_programs is empty required_user = root timeout = 1200 Standard output: 1..5520 ok 1 - small 1 aalgo=hmac/md5 ealgo=aes keylen=0 sec=512 . . . ok 4442 - small 2 aalgo=hmac/md5 ealgo=blowfish-cbc keylen=448 sec=512 ok 4443 - big 1 aalgo=hmac/md5 ealgo=blowfish-cbc keylen=448 sec=512 ok 4444 - big 2 aalgo=hmac/md5 ealgo=blowfish-cbc keylen=448 sec=512 Standard error: Subprocess timed out; sending KILL signal... ===> sys/geom/class/eli/onetime_a_test:main Result: broken: Test case timed out Duration: 600.041s Metadata: allowed_architectures is empty allowed_platforms is empty description is empty has_cleanup = false required_configs is empty required_disk_space = 0 required_files is empty required_memory = 0 required_programs is empty required_user = root timeout = 600 Standard output: 1..1380 ok 1 - aalgo=hmac/md5 ealgo=aes keylen=0 sec=512 . . . ok 794 - aalgo=hmac/ripemd160 ealgo=blowfish-cbc keylen=0 sec=4096 Standard error: Subprocess timed out; sending KILL signal...
(In reply to Mark Millard from comment #0)

An 11.0 -r302331 test run had two more sys/geom tests time out (so these timeouts can sometimes occur):

sys/geom/class/eli/integrity_data_test:main -> broken: Test case timed out [600.142s]
sys/geom/class/eli/integrity_hmac_test:main -> broken: Test case timed out [600.100s]
What was the frequency on the rpi2's processor again?
(In reply to Ngie Cooper from comment #2)

This is a Raspberry Pi 2B context. I'm not sure what FreeBSD is doing for frequency control. As it sits at the moment it reports:

hw.cpufreq.turbo: 1
hw.cpufreq.sdram_freq: 450000000
hw.cpufreq.core_freq: 250000000
hw.cpufreq.arm_freq: 900000000
. . .
dev.bcm2835_cpufreq.0.freq_settings: 900/-1 600/-1
dev.bcm2835_cpufreq.0.%parent: cpu0
dev.bcm2835_cpufreq.0.%pnpinfo:
dev.bcm2835_cpufreq.0.%location:
dev.bcm2835_cpufreq.0.%driver: bcm2835_cpufreq
dev.bcm2835_cpufreq.0.%desc: CPU Frequency Control
dev.bcm2835_cpufreq.%parent:
dev.cpu.0.freq_levels: 900/-1 600/-1
dev.cpu.0.freq: 900

I will note that I have the root file system on a USB SSD, although the kernel is from the SD card. The USB SSD combination is operationally much faster than the SD card; if there were significant I/O to the SD card, things would take even longer. While the USB SSD is technically a fast USB3 SSD on a USB3-capable hub, the rpi2 only has USB2 and does not fully utilize even USB2 capacity from what I've seen. (But I've done no formal benchmarks so far.)
(In reply to Ngie Cooper from comment #2)

For the USB SSD that I normally use as the root file system on the rpi2, here is what benchmarks/bonnie reports:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         4096 12024 99.8 23417 23.4  8551 10.9 12825 100.0 16128 11.9 522.9 7.9

which is for "/dev/ufs/RPI2rootfs on /" below:

# mount
/dev/ufs/RPI2rootfs on / (ufs, NFS exported, local, noatime, soft-updates)
devfs on /dev (devfs, local)
/dev/mmcsd0s1 on /boot/msdos (msdosfs, local, noatime)
/dev/ufs/RPI2Arootfs on /mnt (ufs, local, noatime, soft-updates)

# df -m
Filesystem           1M-blocks  Used  Avail Capacity  Mounted on
/dev/ufs/RPI2rootfs     440365 40251 364885    10%    /
devfs                        0     0      0   100%    /dev
/dev/mmcsd0s1               49     7     42    15%    /boot/msdos
/dev/ufs/RPI2Arootfs     26763 13215  11406    54%    /mnt

I will report what bonnie reports about the SD card after it finishes ("/dev/ufs/RPI2Arootfs on /mnt").
(In reply to Mark Millard from comment #1)

For the SD card that I normally use to boot the kernel on the rpi2, here is what benchmarks/bonnie reports:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         4096  7383 60.2  9573  7.8  5896  5.9 13077 100.0 17884  8.8 457.6 5.3

It is the write speeds that are notably slower than when using the USB SSD.
The timeout would likely need to be increased for a number of long-running tests, at the cost of genuinely hung tests taking much longer to be flagged on faster platforms. Not having a means of dynamically detecting platform performance baked into ATF/Kyua is the first item that should probably be tackled. I have no interest in doing this myself, though, because upstreaming changes to ATF/Kyua is annoyingly bureaucratic.
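For reference, the per-test-case timeout is declared in the test's own metadata, so raising it for a slow platform means editing the test itself (or its build glue), not Kyua's configuration. A minimal atf-sh sketch is below; the test-case name, description, and the 3600-second value are illustrative choices of mine, not values from the real geli tests (whose reported defaults above are 1200 and 600 seconds):

```shell
#! /usr/bin/env atf-sh
# Illustrative atf-sh stub: the "head" function declares metadata that
# Kyua reads, including the timeout (in seconds) it enforces.
atf_test_case integrity_copy
integrity_copy_head() {
    atf_set "descr" "geli integrity copy test (illustrative stub)"
    atf_set "require.user" "root"   # matches required_user in the report above
    atf_set "timeout" "3600"        # hypothetical bump from the 1200s default
}
integrity_copy_body() {
    atf_skip "illustrative stub only"
}

atf_init_test_cases() {
    atf_tp_add_tc integrity_copy
}
```

Whether a blanket increase like this is acceptable, versus the dynamic scaling idea above, is exactly the open question in this thread.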
(In reply to Ngie Cooper from comment #6) Is it worth just forking it?