| Summary: | net/glusterfs: glusterfs volume status not showing online | | |
|---|---|---|---|
| Product: | Ports & Packages | Reporter: | markhamb |
| Component: | Individual Port(s) | Assignee: | freebsd-ports-bugs (Nobody) <ports-bugs> |
| Status: | Closed FIXED | | |
| Severity: | Affects Only Me | CC: | bgorbutt, craig001, daniel, flo, freebsd, gnoma_86, kunishima, mefystofel, r00t |
| Priority: | --- | Keywords: | needs-patch, needs-qa |
| Version: | Latest | Flags: | linimon: maintainer-feedback? (craig001) |
| Hardware: | amd64 | | |
| OS: | Any | | |
Description
markhamb
2017-11-14 20:25:21 UTC
Hello Folks

Thanks for bringing this to my attention; I will look into it and report back shortly. There are a couple of issues with GlusterFS that need upstreaming, and we need more hands on to correct them. Hopefully I will have something worthwhile soon.

Kind Regards
Craig Butler

---

Any updates on this? Anything I can do to help move this forward?

Best,
-Markham

---

I have the same issue. The volume is started but it still shows as offline. Is there any progress?

---

+1 Same here under FreeBSD 11.1-RELEASE-p8 with glusterfs-3.11.1_4 installed.

---

The reason you see N/As is that 'gluster volume status' relies on RDMA (libibverbs in particular, which as far as I understand doesn't exist in FreeBSD). If you compile glusterfs manually you'll notice:

```
...
checking for ibv_get_device_list in -libverbs... no
checking for rdma_create_id in -lrdmacm... no
...
```

which results in:

```
GlusterFS configure summary
===========================
...
Infiniband verbs    : no
...
```

and consequently produces the following in the logs:

```
E [rpc-transport.c:283:rpc_transport_load] 0-rpc-transport: Cannot open "/usr/local/lib/glusterfs/3.13.2/rpc-transport/rdma.so"
W [rpc-transport.c:287:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
```

I'm not sure about the state of userspace RDMA support in FreeBSD. According to https://wiki.freebsd.org/InfiniBand you can try adding WITH_OFED=yes to /etc/src.conf and rebuilding/installing world.

---

I am also experiencing this issue on FreeBSD 11.1-RELEASE-p11 running GlusterFS 3.11.1. When I first ran into it, I thought it was just a minor UI inconvenience, because the cluster still functioned properly for read/write operations. However, it turns out that "gluster volume heal" commands fail instantly, because the heal command checks the "online" status of each node before it is issued. This makes it impossible to run certain administrative commands on the cluster at all.

Also, the Gluster version available in FreeBSD is significantly outdated and contains a known memory-leak bug that has already been fixed upstream. If needed, I can open a second issue about that here.

---

This issue needs isolation and a patch (if relevant) to progress. Since the port is out of date, I would also encourage attempting to update it to a later version and then trying to reproduce the issue, to identify whether it has been fixed upstream.

---

Hello,

Any update on this? I'm running glusterfs-3.11.1_6 and still have the same issue. Not being able to run 'gluster volume heal' is a huge problem, since in the event of data corruption you would not be able to repair it.

Thanks

---

The quick fix for this problem is to make sure you have procfs mounted, as per https://www.freebsd.org/doc/en_US.ISO8859-1/articles/linux-users/procfs.html. After doing so, the brick shows as online. This was tested on Gluster 7.6 and probably works for 8.0 (https://reviews.freebsd.org/D25037).

```
root@moon:~ # mount proc
root@moon:~ # df -h
Filesystem    Size    Used   Avail Capacity  Mounted on
/dev/da0p2     18G    2.4G     15G    14%    /
devfs         1.0K    1.0K      0B   100%    /dev
gluster        19G    461M     18G     2%    /gluster
/dev/fuse      19G    655M     18G     3%    /mnt/replicated
procfs        4.0K    4.0K      0B   100%    /proc
root@moon:~ # service glusterd restart
Stopping glusterd.
Waiting for PIDS: 490, 490.
Starting glusterd.
root@moon:~ # gluster volume status
Status of volume: replicated
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick sun:/gluster/replicated               N/A       N/A        N       N/A
Brick earth:/gluster/replicated             N/A       N/A        N       N/A
Brick moon:/gluster/replicated              49152     0          Y       831
Self-heal Daemon on localhost               N/A       N/A        Y       877
Self-heal Daemon on earth                   N/A       N/A        N       N/A
Self-heal Daemon on sun.gluster.morante.com N/A       N/A        N       N/A

Task Status of Volume replicated
------------------------------------------------------------------------------
There are no active volume tasks
```
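To keep this workaround across reboots, procfs can be mounted from /etc/fstab. A minimal sketch of the entry, following the standard fstab(5)/procfs(5) convention (the stock mount point is /proc):

```
# /etc/fstab: mount procfs at boot so glusterd can check brick PIDs
proc    /proc    procfs    rw    0    0
```

With this entry in place, the filesystem is mounted automatically at boot, and a bare `mount /proc` (or, presumably, the `mount proc` used above) resolves against fstab.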
---

The core of the problem appears to be in the gf_is_pid_running function in https://github.com/gluster/glusterfs/blob/v7.6/libglusterfs/src/common-utils.c#L4098. It needs to be patched with a FreeBSD equivalent that doesn't use /proc.

I've made patches to remove the procfs requirement: https://github.com/tuaris/freebsd-glusterfs7/tree/master/net/glusterfs7/files

---

Thank you very much for the update and for testing it on such recent versions of GlusterFS. However, the FreeBSD ports and pkg repositories still carry GlusterFS 3.11, so your update doesn't really help yet:

```
root@hal9000:/usr/ports/net/glusterfs # cat distinfo
TIMESTAMP = 1499632037
SHA256 (glusterfs-3.11.1.tar.gz) = c7e0502631c9bc9da05795b666b74ef40a30a0344f5a2e205e65bd2faefe1442
SIZE (glusterfs-3.11.1.tar.gz) = 9155001
root@hal9000:/usr/ports/net/glusterfs # pkg search gluster
glusterfs-3.11.1_7             GlusterFS distributed file system
root@hal9000:/usr/ports/net/glusterfs #
```

So unless there are instructions on how to build and install such later versions on FreeBSD, your fix is not usable for me. Don't get me wrong: thank you so much for the effort, but I would like to be able to install and use it on FreeBSD as a real user.

Thank you

---

The port is currently in the process of being updated to 8.0. In the meantime you can use the GitHub repo I linked to above to test version 7.6.

---

A commit references this bug:

```
Author: flo
Date: Wed Jul 29 20:34:02 UTC 2020
New revision: 543674
URL: https://svnweb.freebsd.org/changeset/ports/543674

Log:
  Update to 8.0, this is a collaborative effort between Daniel Morante and
  myself.

  - update to 8.0
  - make it possible to mount gluster volumes on boot [1]
  - reset maintainer [1], I would have set it to ports@ but Daniel
    volunteered to maintain the port
  - add pkg-message to point out that procfs is required for some operations
    like "gluster volume status", which is also required for self-healing [2]

  This version works, although I still see the same memory leak as with the
  3.X series.

  PR:                     236112 [1], 223671 [2]
  Submitted by:           Daniel Morante <daniel@morante.net>, flo
  Obtained from:          https://github.com/tuaris/freebsd-glusterfs7
  Differential Revision:  D25037

Changes:
  head/net/glusterfs/Makefile
  head/net/glusterfs/distinfo
  head/net/glusterfs/files/glusterd.in
  head/net/glusterfs/files/patch-configure
  head/net/glusterfs/files/patch-configure.ac
  head/net/glusterfs/files/patch-contrib_fuse-lib_mount.c
  head/net/glusterfs/files/patch-extras_Makefile.in
  head/net/glusterfs/files/patch-libglusterfs_src_common-utils.c
  head/net/glusterfs/files/patch-libglusterfs_src_syscall.c
  head/net/glusterfs/files/patch-xlators_mgmt_glusterd_src_Makefile.am
  head/net/glusterfs/pkg-message
  head/net/glusterfs/pkg-plist
```

---

Should be fixed.
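The committed files/patch-libglusterfs_src_common-utils.c listed above is the authoritative fix. Purely to illustrate the technique discussed earlier in the thread (replacing the /proc lookup in gf_is_pid_running with a native kernel query), here is a minimal sketch; the function name gf_is_pid_running_freebsd and the exact checks are assumptions for illustration, not the port's actual patch:

```c
/*
 * Sketch of a procfs-free PID liveness check for FreeBSD.
 * NOT the actual port patch; it only illustrates querying the
 * kernel via sysctl(3) instead of reading /proc/<pid>.
 */
#include <sys/param.h>
#include <sys/sysctl.h>
#include <sys/user.h>   /* struct kinfo_proc */
#include <stdbool.h>

static bool
gf_is_pid_running_freebsd(pid_t pid)   /* hypothetical name */
{
    struct kinfo_proc kp;
    size_t len = sizeof(kp);
    int mib[4] = { CTL_KERN, KERN_PROC, KERN_PROC_PID, (int)pid };

    if (pid <= 0)
        return false;

    /* KERN_PROC_PID returns the kinfo_proc entry for exactly one
     * process; the call fails once the process is gone. */
    if (sysctl(mib, 4, &kp, &len, NULL, 0) != 0)
        return false;

    return len == sizeof(kp) && kp.ki_pid == pid;
}
```

Since glusterd runs as root, an even simpler probe would be kill(pid, 0) with ESRCH meaning "not running"; the sysctl route is shown because it exposes the same per-process information the Linux code reads from /proc.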