pydf reported values are 8 times larger than those produced by df and di.

df -hT
Filesystem    Type   Size   Used  Avail  Capacity  Mounted on
/dev/nda1p2   ufs     98G    14G    76G       16%  /
/dev/ada0p4   ufs    3.9G   311M   3.3G        9%  /var

di -g
Filesystem    Mount  Gibis  Used  Avail  %Used  fs Type
/dev/nda1p2   /       97.8  14.3   75.7    23%  ufs
/dev/ada0p4   /var     3.9   0.3    3.3    16%  ufs

pydf
Filesystem    Size   Used   Avail  Use%                        Mounted on
/dev/nda1p2   783G   114G    606G  14.6 [####..............]   /
/dev/ada0p4    31G  2486M     26G   7.9 [##................]   /var

Is python processing those numbers in a non-standard way?
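A quick sanity check of the claimed 8x factor, using the df and pydf sizes quoted above (the mount-point dictionaries below are just those numbers retyped for the check):

```python
# Verify that pydf's reported sizes are ~8x those reported by df.
df_sizes = {"/": 98, "/var": 3.9}      # GiB, from df -hT above
pydf_sizes = {"/": 783, "/var": 31}    # GiB, from pydf above

for mount in df_sizes:
    ratio = pydf_sizes[mount] / df_sizes[mount]
    print(f"{mount}: pydf/df ratio = {ratio:.2f}")
```

Both ratios come out very close to 8, so the discrepancy is a clean power-of-two factor rather than rounding noise.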
Played a bit with the command options: with --block-size=8589934592 (which is 8*1024^3), pydf shows correct values in GiB, but only for UFS; the values reported for ZFS must be divided by 32 to get the correct numbers. On Linux, pydf reports correct values without any --block-size option for all filesystem types: ufs, zfs, ext4, btrfs, ntfs, etc.
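A plausible explanation for these exact factors (an assumption about pydf's internals, not confirmed from its source): statvfs(2) reports block counts in units of f_frsize (the fragment size), but a Linux-centric tool may multiply by f_bsize (the preferred I/O size), since the two are usually equal on Linux. On FreeBSD UFS the defaults are a 32 KiB block and 4 KiB fragment (ratio 8), and a 128 KiB ZFS recordsize over a 4 KiB fragment gives ratio 32 — matching both observed errors. A minimal sketch of the difference:

```python
import os

# statvfs(2) expresses f_blocks/f_bfree/f_bavail in units of
# f_frsize, the fundamental fragment size.  f_bsize is the
# preferred I/O transfer size and may be larger; multiplying
# block counts by f_bsize inflates sizes by f_bsize / f_frsize.
st = os.statvfs("/")

total_correct = st.f_blocks * st.f_frsize  # POSIX-correct total bytes
total_inflated = st.f_blocks * st.f_bsize  # wrong when bsize != frsize

print(f"f_bsize={st.f_bsize} f_frsize={st.f_frsize}")
print(f"correct total:  {total_correct / 2**30:.1f} GiB")
print(f"using f_bsize:  {total_inflated / 2**30:.1f} GiB")
```

On a typical Linux filesystem both lines print the same figure; on FreeBSD UFS with default newfs parameters the second would be 8x larger, which is consistent with the --block-size=8589934592 workaround above.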
^Triage: Thank you for the report. This appears to be an issue that might be better reported upstream, along with the relevant FreeBSD / filesystem version information.
Hmm, upstream development seems to have stalled since 2015. The README.txt says: "pydf was written for linux, using specific linux features. The fact it runs on other systems is pure coincidence..." So there is little chance of fixes to make it work on BSD. Probably better to give up using it.