Lines 53-59

allocation at interrupt time.</para>

<para>If a process attempts to access a page that does not exist in its
page table but does exist in one of the paging queues (such as the
inactive or cache queues), a relatively inexpensive page reactivation
fault occurs which causes the page to be reactivated. If the page
does not exist in system memory at all, the process must block while
Lines 61-67

<para>FreeBSD dynamically tunes its paging queues and attempts to
maintain reasonable ratios of pages in the various queues as well as
a reasonable breakdown of clean vs. dirty pages.
The amount of rebalancing that occurs depends on the system's memory
load. This rebalancing is implemented by the pageout daemon and
involves laundering dirty pages (syncing them with their backing
Lines 89-95

can be stacked on top of each other. For example, you might have a
swap-backed VM object stacked on top of a file-backed VM object in
order to implement a MAP_PRIVATE mmap()ing. This stacking is also
used to implement various sharing properties, including
copy-on-write, for forked address spaces.</para>

<para>It should be noted that a <literal>vm_page_t</literal> can only be
Lines 106-117

system's idea of clean/dirty. For example, when the VM system decides
to synchronize a physical page to its backing store, the VM system
needs to mark the page clean before the page is actually written to
its backing store. Additionally, filesystems need to be able to map
portions of a file or file metadata into KVM in order to operate on
it.</para>

<para>The entities used to manage this are known as filesystem buffers,
<literal>struct buf</literal>'s, or
<literal>bp</literal>'s. When a filesystem needs to operate on a
portion of a VM object, it typically maps part of the object into a
struct buf and then maps the pages in the struct buf into KVM. In the
Lines 127-134

to hold mappings and does not limit the ability to cache data.
Physical data caching is strictly a function of
<literal>vm_page_t</literal>'s, not filesystem buffers. However,
since filesystem buffers are used to placehold I/O, they do inherently
limit the amount of concurrent I/O possible. Still, as there are usually a
few thousand filesystem buffers available, this is not usually a
problem.</para>
</sect2>
Lines 147-159

<literal>vm_entry_t</literal> structures. Page tables are directly
synthesized from the
<literal>vm_map_t</literal>/<literal>vm_entry_t</literal>/
<literal>vm_object_t</literal> hierarchy. Recall that I mentioned
that physical pages are only directly associated with a
<literal>vm_object</literal>; that is not quite true.
<literal>vm_page_t</literal>'s are also linked into page tables that
they are actively associated with. One <literal>vm_page_t</literal>
can be linked into several <emphasis>pmaps</emphasis>, as page tables
are called. However, the hierarchical association holds, so all
references to the same page in the same object reference the same
<literal>vm_page_t</literal> and thus give us buffer cache unification
across the board.</para>
Lines 166-172

largest entity held in KVM is the filesystem buffer cache. That is,
mappings relating to <literal>struct buf</literal> entities.</para>

<para>Unlike Linux, FreeBSD does <emphasis>not</emphasis> map all of physical memory into
KVM. This means that FreeBSD can handle memory configurations up to
4G on 32 bit platforms. In fact, if the MMU were capable of it,
FreeBSD could theoretically handle memory configurations up to 8TB on
Lines 186-208

<para>A concerted effort has been made to make the FreeBSD kernel
dynamically tune itself. Typically you do not need to mess with
anything beyond the <option>maxusers</option> and
<option>NMBCLUSTERS</option> kernel config options. That is, kernel
compilation options specified in (typically)
<filename>/usr/src/sys/i386/conf/<replaceable>CONFIG_FILE</replaceable></filename>.
A description of all available kernel configuration options can be
found in <filename>/usr/src/sys/i386/conf/LINT</filename>.</para>

<para>In a large system configuration you may wish to increase
<option>maxusers</option>. Values typically range from 10 to 128.
Note that raising <option>maxusers</option> too high can cause the
system to overflow available KVM resulting in unpredictable operation.
It is better to leave <option>maxusers</option> at some reasonable number and add other
options, such as <option>NMBCLUSTERS</option>, to increase specific
resources.</para>

<para>If your system is going to use the network heavily, you may want
to increase <option>NMBCLUSTERS</option>. Typical values range from
1024 to 4096.</para>
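Putting the two options together, the relevant fragment of a kernel config file might look like this (the values are illustrative, within the ranges above, not recommendations for any particular machine):

```
# Fragment of a hypothetical i386 kernel config file.
maxusers        64                # scales many kernel tables; keep moderate
options         NMBCLUSTERS=4096  # network mbuf clusters, for heavy net load
```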

<para>The <literal>NBUF</literal> parameter is also traditionally used
Lines 232-239

<para>Run time VM and system tuning is relatively straightforward.
First, use softupdates on your UFS/FFS filesystems whenever possible.
<filename>/usr/src/sys/ufs/ffs/README.softupdates</filename> contains
instructions (and restrictions) on how to configure it.</para>
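As a sketch of what that configuration involves (consult the README above for the authoritative steps; the device name here is hypothetical): softupdates needs kernel support, and is then enabled per filesystem with tunefs(8) while the filesystem is not mounted read-write:

```
# In the kernel config file:
options         SOFTUPDATES

# Then, on an unmounted filesystem (hypothetical device name):
#   tunefs -n enable /dev/da0s1f
```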

<para>Second, configure sufficient swap. You should have a swap
partition configured on each physical disk, up to four, even on your