Created attachment 189360 [details]
The BIND port does not have an option for large tuning.
The patch below adds this option. I've used it several times on amd64 with 16 cores, and the difference when processing millions of zones is huge.
So I hope it could be added.
I've also made the same addition to bind912 (but its EOL is much sooner).
I have already had a request for this in #220150 which ended up as a feedback timeout.
As soon as you define "high memory" I'll be glad to move this further.
(In reply to Mathieu Arnold from comment #1)
> I have already had a request for this in #220150 which ended up as a
> feedback timeout.
> As soon as you define "high memory" I'll be glad to move this further.
Investigation on the 9.11 branch indicates that the
lower threshold for "high/large memory" is 12 GB:
Certain compiled-in constants and default settings can be
increased to values better suited to large servers with abundant
memory resources (e.g., 64-bit servers with 12G or more of memory)
by specifying "--with-tuning=large" on the configure command
line. This can improve performance on big servers, but will
consume more memory and may degrade performance on smaller systems.
This information is quoted from the announcements and changelogs,
again from the 9.11 branch.
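As an aside, for anyone checking an existing install: `named -V` prints the configure arguments the binary was built with, so the flag can be detected after the fact. A minimal sketch; the `build_info` string below is a stand-in for real `named -V` output so the example is self-contained:

```shell
# Sketch: detect whether a named binary was built with large tuning.
# build_info stands in for the output of `named -V` (an assumption made
# for illustration); on a real system you would use: build_info=$(named -V)
build_info="built by make with '--with-tuning=large' '--with-libxml2'"
case "$build_info" in
  *--with-tuning=large*) tuning=large ;;
  *)                     tuning=default ;;
esac
echo "tuning: $tuning"
```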
A training, given in part by ISC and The Spamhaus Project,
seems to indicate that 8 GB of RAM may be the lower
threshold for the definition of "high memory". Although the
training is primarily geared toward RPZ, its "suggested"
configuration contains --with-tuning=large and recommends:
o 8-core CPU with at least a 2.4 GHz clock speed
o 8 GB of RAM
o Servers should be bare metal - not virtualized
It also indicates that the ISC intended the tuning
knob for busy _recursive_ servers.
> I have already had a request for this
Then evidently I'm not the first requesting it :)
> As soon as you define "high memory" I'll be glad to move this further.
Already did that: "memory required to run millions of zones".
So not the average ISP, but I can imagine several other kinds of DNS operators who would benefit significantly.
Like the other extreme: large zones occupying around 15 GB (without any DNSSEC yet), and a transaction event occurring that causes the memory in use to double, ...while you have only 32 GB of DDR to do the job... and a swap file is no viable option.
As I'm around this point myself, it's a no-brainer that this option very likely pushes the limit that makes the difference between a crash (or a forced hardware upgrade to 64 GB of DDR) and a system that does its job swiftly and soundly.
And I've already clearly noticed (but didn't measure exactly) a significant improvement in the countless XFRs, thanks to this option, though in combination with other proper configuration, so it's not entirely clear which tweak was doing what.
As you've mentioned reading the article, you must have noticed that the improvement is not in memory consumption only. Therefore I find it awkward that you keep hammering on that.
> "high memory" does not mean anything. And it means a new different thing every three to six months.
Undeniably correct - but still not really (or really not) an argument.
- 'high definition' (in HD video) is, paradoxically, also not defined; that doesn't keep people - maybe even you - from using that, eh, "standard".
- I have yet to meet the first person who can tell me the difference between cloud and virtualization. Still, we have jails.
- The pronunciation of SQL also differs from person to person. We still have SQL ports, and this doesn't keep these camps from even cooperating with each other.
> would you mind asking the ISC about it ?
Not a problem, I will point them to this and the other URL.
Conversely, maybe you could get in touch with ISC's software engineer Francis Dupont from Rennes?
Although I'm not sure he's the right person for these matters, I presume he can elaborate and convince (even comfortably in your - beautiful - native language, with much better nuance than in English).
> Atari 1040 STF
BIND 9.11 wasn't written for that platform. But from the experience you illustrate, I'd say you understand that scaling up hardware is a never-ending game; IMHO that's all the more reason to prefer optimization.
I'm grateful for your contributions, time and effort; you're a valuable expert. But man, why are you so often so obstructive?
Sure, I realize being conservative is often smart - not floating along with every marketing buzzword or package deal. But on this matter I'm seriously trying to comprehend your perspective.
If you could explain, then I might agree - or oppose - but at least I could respect your point, which I now fail to see; as it stands, I have the feeling you're making a problem out of what is a solution.
What is a problem are dropped packets, causing your zone to be undesirably delayed and forcing you to scale down simultaneous mutations (and to re-queue and re-submit calls, while meanwhile validating data integrity).
...an unneeded bottleneck, as an improvement was available but avoided for, IMHO, no valid reason.
Remember: it's just an OPTION (even in the Makefile in capitals); label it "experimental", or mark it "use at your own risk" (also in capitals, if you wish). Such labels aren't unique in port option descriptions.
> I am not adding an option saying something vague like this.
The 5 bullet points in the referenced article weren't vague at all, but 5 hard arguments (memory not even being one of them), including hard numbers.
Correct me if I'm wrong, but I observe that you neglect those. I can't believe you've missed them, and so your positioning agitates me. Maybe rightly so, but without an explanation I don't understand it.
If you nevertheless decline to add it, then I - and evidently others - will have to insert it the less convenient way.
Not a huge problem to do so, far less effort than writing this much too extensive reply, and, as I state my port-option preferences in /etc/make.conf anyway, it's 3 lines instead of a one-liner.
I guess something like this would simply do the same job:
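For illustration, a sketch of the make.conf approach I mean (hypothetical; it assumes the bind911 port passes extra CONFIGURE_ARGS through to configure unchanged):

```make
# /etc/make.conf -- hypothetical sketch: force the flag without a port option
.if ${.CURDIR:M*/dns/bind911}
CONFIGURE_ARGS+=	--with-tuning=large
.endif
```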
Leo, your last message is pretty rude.
Now, what I said was that I am not going to add an option saying "enable if you have large memory" because large is vague, and you don't know if it means 2G, 16G, or 128G.
I went back to the ISC knowledge base article you cite; it does contain 5 points, yes, but all of them describe **what** it does. It never describes why it is needed or when to enable it.
Now, here, in the comments, I have seen a couple of numbers, 8GB and 12GB, which is it ?
(In reply to Mathieu Arnold from comment #5)
> . . .
> Now, here, in the comments, I have seen a couple of numbers, 8GB and 12GB,
> which is it ?
IMHO, based on all the information above and my own experience,
I'd argue for 12 GB. Honestly, I haven't purchased or built anything
with less than 16 GB in the last 10 years, and recent FreeBSD doesn't
run optimally under 16 GB. Anyway, given that ISC indicates it in their
changelogs and announcements, I don't think setting 12 GB as the
low-water mark would be at all unreasonable. I humbly suggest that the
numbers they indicate reasonably satisfy your requirement. No? :-)
All the best.
A commit references this bug:
Date: Fri Jan 12 12:58:52 UTC 2018
New revision: 458822
Add a TUNING_LARGE option.
Tunes certain compiled-in constants and default settings to
values better suited to large servers with 12/16GB+ of memory.
This can improve performance on such servers, but will consume
more memory and may degrade performance on smaller systems.
Sponsored by: Absolight
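With the option committed, enabling it persistently should reduce to the usual per-port one-liner (a hedged sketch; `dns_bind911` is the expected OPTIONS name for this port):

```make
# /etc/make.conf -- enable the new TUNING_LARGE option for dns/bind911
dns_bind911_SET+=	TUNING_LARGE
```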