FreeBSD Bugzilla – Attachment 14938 Details for Bug 27885: tuning(7) has some misspelling
[patch] file.diff

Description: file.diff
Filename:    file.diff
MIME Type:   text/plain
Creator:     tadayuki
Created:     2001-06-05 05:10:01 UTC
Size:        7.33 KB
Flags:       patch, obsolete
--- tuning.7.orig	Mon Jun 4 22:29:50 2001
+++ tuning.7	Mon Jun 4 23:21:34 2001
@@ -16,7 +16,7 @@
 .Xr disklabel 8
 to lay out your filesystems on a hard disk it is important to remember
 that hard drives can transfer data much more quickly from outer tracks
-then they can from inner tracks. To take advantage of this you should
+than they can from inner tracks. To take advantage of this you should
 try to pack your smaller filesystems and swap closer to the outer tracks,
 follow with the larger filesystems, and end with the largest filesystems.
 It is also important to size system standard filesystems such that you
@@ -116,7 +116,7 @@
 in the partition table) will increase I/O performance in the partitions
 where you need it the most. Now it is true that you might also need I/O
 performance in the larger partitions, but they are so large that shifting
-them more towards the edge of the disk will not lead to a significnat
+them more towards the edge of the disk will not lead to a significant
 performance improvement whereas moving /var to the edge can have a huge impact.
 Finally, there are safety concerns. Having a small neat root partition that
 is essentially read-only gives it a greater chance of surviving a bad crash
@@ -159,7 +159,7 @@
 recovery times after a crash. Do not use this option
 unless you are actually storing large files on the partition, because if you
 overcompensate you can wind up with a filesystem that has lots of free
-space remaining but cannot accomodate any more files. Using
+space remaining but cannot accommodate any more files. Using
 32768, 65536, or 262144 bytes/inode is recommended. You can go higher but
 it will have only incremental effects on fsck recovery times. For
 example,
@@ -187,10 +187,10 @@
 Softupdates drastically improves meta-data performance, mainly file
 creation and deletion. We recommend turning softupdates on on all of your
 filesystems. There are two downsides to softupdates that you should be
-aware of: First, softupdates guarentees filesystem consistency in the
+aware of: First, softupdates guarantees filesystem consistency in the
 case of a crash but could very easily be several seconds (even a minute!)
 behind updating the physical disk. If you crash you may lose more work
-then otherwise. Secondly, softupdates delays the freeing of filesystem
+than otherwise. Secondly, softupdates delays the freeing of filesystem
 blocks. If you have a filesystem (such as the root filesystem) which is
 close to full, doing a major update of it, e.g.
 .Em make installworld,
@@ -209,11 +209,11 @@
 time. You should only stripe partitions that require serious I/O performance...
 typically /var, /home, or custom partitions used to hold databases and web
 pages. Choosing the proper stripe size is also
-important. Filesystems tend to store meta-data on power-of-2 boundries
-and you usually want to reduce seeking rather then increase seeking. This
+important. Filesystems tend to store meta-data on power-of-2 boundaries
+and you usually want to reduce seeking rather than increase seeking. This
 means you want to use a large off-center stripe size such as 1152 sectors
 so sequential I/O does not seek both disks and so meta-data is distributed
-across both disks rather then concentrated on a single disk. If
+across both disks rather than concentrated on a single disk. If
 you really need to get sophisticated, we recommend using a real hardware
 raid controller from the list of
 .Fx
@@ -249,7 +249,7 @@
 the VM Page Cache to cache the directories. The advantage is that all of
 memory is now available for caching directories. The disadvantage is that
 the minimum in-core memory used to cache a directory is the physical page
-size (typically 4K) rather then 512 bytes. We recommend turning this option
+size (typically 4K) rather than 512 bytes. We recommend turning this option
 on if you are running any services which manipulate large numbers of files.
 Such services can include web caches, large mail systems, and news systems.
 Turning on this option will generally not reduce performance even with the
@@ -270,7 +270,7 @@
 improve bandwidth utilization by increasing the default at the cost of
 eating up more kernel memory for each connection. We do not recommend
 increasing the defaults if you are serving hundreds or thousands of
-simultanious connections because it is possible to quickly run the system
+simultaneous connections because it is possible to quickly run the system
 out of memory due to stalled connections building up. But if you need
 high bandwidth over a fewer number of connections, especially if you have
 gigabit ethernet, increasing these defaults can make a huge difference.
@@ -280,7 +280,7 @@
 without eating too much kernel memory. Note that the route table, see
 .Xr route 8 ,
 can be used to introduce route-specific send and receive buffer size
-defaults. As an additional mangagement tool you can use pipes in your
+defaults. As an additional management tool you can use pipes in your
 firewall rules, see
 .Xr ipfw 8 ,
 to limit the bandwidth going to or from particular IP blocks or ports.
@@ -296,9 +296,9 @@
 We recommend that you turn on (set to 1) and leave on the
 .Em net.inet.tcp.always_keepalive
 control. The default is usually off. This introduces a small amount of
-additional network bandwidth but guarentees that dead tcp connections
+additional network bandwidth but guarantees that dead tcp connections
 will eventually be recognized and cleared. Dead tcp connections are a
-particular problem on systems accesed by users operating over dialups,
+particular problem on systems accessed by users operating over dialups,
 because users often disconnect their modems without properly closing active
 connections.
 .Pp
@@ -339,7 +339,7 @@
 willing to allocate. Each cluster represents approximately 2K of memory,
 so a value of 1024 represents 2M of kernel memory reserved for network
 buffers. You can do a simple calculation to figure out how many you need.
-If you have a web server which maxes out at 1000 simultanious connections,
+If you have a web server which maxes out at 1000 simultaneous connections,
 and each connection eats a 16K receive and 16K send buffer, you need
 approximate 32MB worth of network buffers to deal with it. A good rule of
 thumb is to multiply by 2, so 32MBx2 = 64MB/2K = 32768. So for this case
@@ -413,7 +413,7 @@
 .Sh CPU, MEMORY, DISK, NETWORK
 The type of tuning you do depends heavily on where your system begins to
 bottleneck as load increases. If your system runs out of cpu (idle times
-are pepetually 0%) then you need to consider upgrading the cpu or moving to
+are perpetually 0%) then you need to consider upgrading the cpu or moving to
 an SMP motherboard (multiple cpu's), or perhaps you need to revisit the
 programs that are causing the load and try to optimize them. If your system
 is paging to swap a lot you need to consider adding more memory. If your
@@ -436,7 +436,7 @@
 .Xr firewall 7
 we describe a firewall protecting internal hosts with a topology where
 the externally visible hosts are not routed through it. Use 100BaseT rather
-then 10BaseT, or use 1000BaseT rather then 100BaseT, depending on your needs.
+than 10BaseT, or use 1000BaseT rather than 100BaseT, depending on your needs.
 Most bottlenecks occur at the WAN link (e.g. modem, T1, DSL, whatever).
 If expanding the link is not an option it may be possible to use ipfw's
 .Sy DUMMYNET