Created attachment 183643 [details]
Patch to enable configuring --with-tuning=large

BIND 9.10 added a build-time option, --with-tuning=large. This option allows operators to tune BIND for better performance on high-memory machines by setting various constants and defaults to values more appropriate for such usage. Note that except for the MAXSOCKETS control (which can be set with "named -S"), these settings are only available at compile time.
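Not part of the patch itself, but for illustration, a minimal sketch of how the option is used when building from an upstream source tarball (the version number and install prefix below are placeholders, not taken from this report):

```shell
# Build BIND 9.10 from source with the large-memory tuning profile.
# (Version and prefix are examples only.)
tar xzf bind-9.10.1-P1.tar.gz
cd bind-9.10.1-P1
./configure --prefix=/usr/local --with-tuning=large
make && make install

# MAXSOCKETS is the one tunable that can still be overridden at run
# time; e.g. raise the socket limit when starting named:
named -S 21000 -c /usr/local/etc/namedb/named.conf
```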
Could you expand on what "large" means? How much memory are we talking about? 1G? 10G? 100G?
Here is the KB article from the ISC BIND developers: https://kb.isc.org/article/AA-01314/0/-with-tuninglarge-about-using-this-build-time-option.html This option changes some BIND internals and allows BIND to use more system resources.
Well, yeah, I saw all that, but it does not answer my question: what does "high memory" mean? With this option, does it need 1G of RAM? 10G? More?
I also can't find an exact answer on how much additional memory this optimization needs.
It would be nice to know exactly when it is useful; would you mind asking the ISC about it?
The KB article has the answer: "This option allows operators to tune BIND for better performance in high-memory machines."
I am not adding an option saying something vague like this.

"High memory" does not mean anything, and it means a different thing every three to six months. When I got my first computer at home, it had very high memory: it was an Atari 1040 STF, it had a whole megabyte of RAM, and I could copy an entire 720k floppy disk into a ramdisk. Then a few years later, I had a 386SX16 with 2MB of RAM, and I couldn't find anything that would use all of it, and then...
This option helps improve performance at the cost of increased memory usage. How much depends on the BIND configuration (number of listeners, cache size, etc.). In the article, every changed parameter has an explanation.
(In reply to Mathieu Arnold from comment #7)
> I am not adding an option saying something vague like this.
>
> "high memory" does not mean anything. And it means a new different thing
> every three to six months.
>
> When I got my first computer at home, it had very high memory, it was an
> Atari 1040 STF, it has a whole megabyte of ram, I could copy en entire 720k
> floppy disk in a ramdisk.
> Then a few years later, I had a 386SX16 with 2MB of RAM, and I couldn't find
> anything that would use all of it, and then...

Investigation on the 9.11 branch indicates that the threshold for "high/large memory" begins at 12 GB:

  Certain compiled-in constants and default settings can be increased to
  values better suited to large servers with abundant memory resources
  (e.g., 64-bit servers with 12G or more of memory) by specifying
  "--with-tuning=large" on the configure command line. This can improve
  performance on big servers, but will consume more memory and may degrade
  performance on smaller systems.

This information is quoted from the Announcements and Changelogs, again from the 9.11 branch.

HTH

--Chris
Further reading/investigation seems to indicate that enabling/allowing this on 9.10 (fixed in 9.10.2rc1) may not be such a good idea[1]:

  o Large-system tuning (configure --with-tuning=large) caused problems on
    some platforms by setting a socket receive buffer size that was too
    large. This is now detected and corrected at run time. [RT #37187]

1) https://ftp.isc.org/isc/bind9/9.10.2rc1/doc/arm/notes.html

--Chris
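For context on the receive-buffer issue above: on FreeBSD the kernel's ceiling on socket buffer sizes is a standard sysctl knob, and a quick sketch of inspecting (and, if needed, raising) it looks like this. The 16 MB value is only an example, not a recommendation from this report:

```shell
# The 9.10.2rc1 fix detects and corrects an oversized socket receive
# buffer at run time. On FreeBSD, the kernel ceiling named runs into is:
sysctl kern.ipc.maxsockbuf

# If an older build hit the oversized-buffer problem, the ceiling can
# alternatively be raised, e.g. to 16 MB (example value only):
sysctl kern.ipc.maxsockbuf=16777216
```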