Bug 251047 - Multicast: Not possible to add MFC entries when MAXVIFS = 64
Summary: Multicast: Not possible to add MFC entries when MAXVIFS = 64
Status: Open
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: Unspecified
Hardware: Any Any
Importance: --- Affects Some People
Assignee: freebsd-net (Nobody)
URL:
Keywords: IntelNetworking, needs-qa
Depends on:
Blocks:
 
Reported: 2020-11-11 13:12 UTC by Louis
Modified: 2020-11-17 02:44 UTC
CC: 3 users

See Also:


Attachments

Description Louis 2020-11-11 13:12:07 UTC
*Problem description*
When using MAXVIFS = 32, multicast is OK. However(!) it fails with MAXVIFS = 64.
The problem becomes visible when trying to add MFC entries.
The call ^setsockopt(socket, IPPROTO_IP, MRT_ADD_MFC, (char *)&mc, sizeof(mc))^
results in ^Invalid argument^ for an unknown reason.

Given that people need more than 32 VIFs nowadays, that is a problem.
Assuming MAXVIFS > 32 is supported, this is a bug.

I discovered the problem on FreeBSD 12.1 and am now using 12.2-RELEASE.
I did not test this on previous OS versions.



** Successful tests (for reference!) **
- Tests by ^troglobit^ with pimd (version 3 beta) on FreeBSD 12.2-RELEASE with MAXVIFS = 32, on a VM, were successful
- Tests on pfSense version 2.5, based on FreeBSD 12.2 compiled with MAXVIFS = 32, were successful
- 20201111: after the test I contacted Netgate (the pfSense developer), who reverted the number of VIFs back to 32 ==> problem gone !!



** Failing test **
- pfSense router running version 2.5, based on FreeBSD 12.2 compiled with MAXVIFS = 64
 


>>>> Here are the test results <<<<

*Test setup details*
- all tests are based on the pimd repository as of 20201105 and FreeBSD 12.2-RELEASE/STABLE
- there is no use testing with the official pimd 2, since it is NOT compatible with FreeBSD 12
- pimd uses the mfcctl structs from ip_mroute.h
- the tests that failed were performed with MAXVIFS = 64 builds, the successful tests with MAXVIFS = 32 builds
- the pfSense and pimd setups were identical during the tests
- extra tests were executed with debug statements added to pimd
- my pfSense setup has an Intel X520 for 2x 10G, 2x Intel 1G as a 1G lagg, and em0 as the internet connection
- all traffic uses VLANs



*Test result summary (most important events, os calls and debug info)*

# pimd version 3.0-beta1 starting ...


# Open the IGMP-socket
IGMP socket created. Id 3: Device not configured (igmp_socket = socket(AF_INET, SOCK_RAW, IPPROTO_IGMP);)
==> OK


# Installing VIFS (vlan interfaces)
VIF #4: Installing ix1.116 (192.168.116.1 on subnet 192.168.116) rate 0
VIF #3: Installing ix0.14 (192.168.14.1 on subnet 192.168.14) rate 0
VIF #2: Installing lagg0.16 (192.168.1.1 on subnet 192.168.1) rate 0
==> OK


# More actions (for info)
Local Cand-RP address 192.168.14.1, priority 0, interval 60 sec
Local static RP: 169.254.0.1, group 232.0.0.0/8
Local static RP: 192.168.14.15, group 239.0.0.0/8
==> OK 


# Setting up MRT_Table etc 
v = 1
/* Try to enable or disable multicast forwarding in the kernel  */
setsockopt(socket, IPPROTO_IP, MRT_INIT, (char *)&v, sizeof(int))
setsockopt(socket, IPPROTO_IP, MRT_PIM, (char *)&v, sizeof(int))
MRT_TABLE set for socketid 3, sizeof^int^ 4
==> Probably OK


#  /* Tell kernel to add, start this vif */
k_add_vif(igmp_socket, vifi, &uvifs[vifi]); 
logit(LOG_INFO, 0, "VIF #%u: now in service, interface %s UP", vifi, v->uv_name);
VIF #3: now in service, interface ix0.14 UP
VIF #2: now in service, interface lagg0.16 UP
==> OK


# Here the problem occurs, when adding MFC entries
>>> setsockopt(socket, IPPROTO_IP, MRT_ADD_MFC, (char *)&mc, sizeof(mc)) <<<
sizeof_mc = 44 inputstr = ^lagg0.16^
socketid = 3 uvifs = 2272400 oifs = 7615154 no_of_vifs = 6
==> The Problem occurs here

>>>>
Failed adding MFC entry src 192.168.1.36 grp 239.255.255.250 from lagg0.16 to lagg0.26, ix0.14, ix1.116, register_vif0: Invalid argument
<<<<

Failed adding MFC entry vifi_t 2

All attempts to add MFC entries failed this way!
Comment 1 Andrey V. Elsukov freebsd_committer 2020-11-13 13:46:25 UTC
If you get an EINVAL error, it means that your kernel interface doesn't match what the application expects.
You need to rebuild the application with the correct ip_mroute.h, where MAXVIFS is defined. It is also possible that your application has its own definition of this constant, so check the application sources for that case.
Comment 2 Louis 2020-11-15 07:41:50 UTC
Thanks for your reaction, which made me realize that I had to make more than one change to get a modified MAXVIFS working.

I think I managed that. Not yet in the proper way, and surely not the way it IMHO should work.

The problem is that the actual package has to be recompiled for every machine with a different number of VIFs, and you need to know the number of VIFs on the (unknown) target machine at compile time.
Whereas ...... IMHO the package should be machine independent !

So what I would like to achieve is:
- start the program
- make an OS call to read/obtain the MAXVIFS number
- allocate structures based on the obtained MAXVIFS
- start the actions the program is designed for

I do not know if this is possible, but compiling the package dependent on the MAXVIFS number of each OS instance ..... is bad !!

So it would help me to have an OS call/method providing the MAXVIFS number on the actual OS instance .....

Of course this is just a question. As far as I can see now, the problem is not in the OS.
Comment 3 Andrey V. Elsukov freebsd_committer 2020-11-16 11:07:57 UTC
(In reply to Louis from comment #2)

It seems there is no easy way, but the kernel has a sysctl variable whose size depends on MAXVIFS:

# sysctl -o net.inet.ip.viftable
net.inet.ip.viftable: Format:S,vif[MAXVIFS] Length:1792 Dump:0x00d068f601f8ffff0000000000000000...

The size of struct vif is known, so you can determine MAXVIFS.
Comment 4 Louis 2020-11-16 12:54:17 UTC
Thanks,

It does not win the beauty prize, but it could work.
A pity that there is no proper call for that.

Louis
Comment 5 Kubilay Kocak freebsd_committer freebsd_triage 2020-11-17 02:44:32 UTC
Is there a suitable enhancement proposal that can be made here?

Also, is this issue limited or scoped only to Intel network hardware?