BSD sort has an option to configure the size of its memory buffer:

     -S size, --buffer-size=size
             Use size for the maximum size of the memory buffer.  Size
             modifiers %,b,K,M,G,T,P,E,Z,Y can be used.  If a memory limit
             is not explicitly specified, sort takes up to about 90% of
             available memory.  If the file size is too big to fit into
             the memory buffer, the temporary disk files are used to
             perform the sorting.

Looking at the source, I see no reference to the default 90% except in the manual page. It appears that sort(1) uses 50% of main memory by default, which is more reasonable for a server OS:

$ sysctl hw.physmem
hw.physmem: 68576686080
$ cat /dev/null | sort --debug
Memory to be used for sorting: 34288343040
$ expr 34288343040 \* 2
68576686080

The 50% apparently comes from free_memory / 2 in sort.c:

	available_free_memory = free_memory / 2;

However, this is not right for the percentage modifier:

$ cat /dev/null | sort -S 100% --debug
Memory to be used for sorting: 34288343040

So -S 100% uses only 32 GB (50%) of main memory, and

$ cat /dev/null | sort -S 20% --debug
Memory to be used for sorting: 6857668608

uses only about 6.4 GB (10%) of main memory.
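
Putting the observed numbers together, the percentage modifier appears to be applied to the already-halved free-memory figure rather than to total physical memory. A minimal sketch of that arithmetic (Python; it assumes free_memory happens to equal hw.physmem in this session, which the numbers above suggest):

```python
PHYSMEM = 68576686080  # hw.physmem from the session above

# sort.c halves free memory up front:
#   available_free_memory = free_memory / 2;
available = PHYSMEM // 2


def buffer_for_percent(pct):
    """Model of how -S <pct>% seems to be resolved: the percentage
    is taken of the halved figure, not of total physical memory."""
    return available * pct // 100


print(buffer_for_percent(100))  # matches the "-S 100%" output: 34288343040
print(buffer_for_percent(20))   # matches the "-S 20%" output:  6857668608
```

So under this model, -S N% yields N/2 percent of physical memory, which matches both debug outputs exactly.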