FreeBSD Bugzilla – Attachment 215413 Details for Bug 247131 – vm: Fix typos
[patch] Fix some typos in sys/vm/

Description: Fix some typos in sys/vm/
Filename:    0001-Issue-247131-Fix-some-typos-in-sys-vm.patch
MIME Type:   text/plain
Creator:     Sebastien Boisvert
Created:     2020-06-10 00:50:37 UTC
Size:        13.79 KB
Flags:       patch, obsolete
From 3d3f2ab71fe1eb151d4c77ce7e99f8744317f83f Mon Sep 17 00:00:00 2001
From: Sebastien Boisvert <seb@boisvert.info>
Date: Tue, 9 Jun 2020 20:48:57 -0400
Subject: [PATCH] Issue #247131: Fix some typos in sys/vm/

---
 sys/vm/swap_pager.c | 54 +++++++++++++++++++++------------------------
 sys/vm/uma_core.c   |  6 ++---
 sys/vm/vm_fault.c   | 13 +++++------
 sys/vm/vm_mmap.c    |  8 +++----
 sys/vm/vm_page.c    |  2 +-
 5 files changed, 39 insertions(+), 44 deletions(-)

diff --git a/sys/vm/swap_pager.c b/sys/vm/swap_pager.c
index eae12d5a395..61e3499fad1 100644
--- a/sys/vm/swap_pager.c
+++ b/sys/vm/swap_pager.c
@@ -50,11 +50,11 @@
 *
 * Features:
 *
- * - on the fly reallocation of swap during putpages. The new system
+ * - on-the-fly reallocation of swap during putpages. The new system
 *   does not try to keep previously allocated swap blocks for dirty
 *   pages.
 *
- * - on the fly deallocation of swap
+ * - on-the-fly deallocation of swap
 *
 * - No more garbage collection required. Unnecessarily allocated swap
 *   blocks only exist for dirty vm_page_t's now and these are already
@@ -135,7 +135,7 @@ __FBSDID("$FreeBSD$");
 
 /*
 * A swblk structure maps each page index within a
- * SWAP_META_PAGES-aligned and sized range to the address of an
+ * SWAP_META_PAGES-aligned and -sized range to the address of an
 * on-disk swap block (or SWAPBLK_NONE). The collection of these
 * mappings for an entire vm object is implemented as a pc-trie.
 */
@@ -1075,7 +1075,7 @@ swap_pager_haspage(vm_object_t object, vm_pindex_t pindex, int *before,
 }
 
 /*
- * find backwards-looking contiguous good backing store
+ * find backward-looking contiguous good backing store
 */
 if (before != NULL) {
 for (i = 1; i < SWB_NPAGES; i++) {
@@ -1112,7 +1112,7 @@ swap_pager_haspage(vm_object_t object, vm_pindex_t pindex, int *before,
 * which point any associated swap can be freed. MADV_FREE also
 * calls us in a special-case situation
 *
- * NOTE!!! If the page is clean and the swap was valid, the caller
+ * NOTE!!! If the page is clean and the swap is valid, the caller
 * should make the page dirty before calling this routine. This routine
 * does NOT change the m->dirty status of the page. Also: MADV_FREE
 * depends on it.
@@ -1128,7 +1128,7 @@ swap_pager_unswapped(vm_page_t m)
 vm_object_t obj;
 
 /*
- * Handle enqueing deferred frees first. If we do not have the
+ * Handle enqueuing deferred frees first. If we do not have the
 * object lock we wait for the page daemon to clear the space.
 */
 obj = m->object;
@@ -1204,7 +1204,7 @@ swap_pager_getpages_locked(vm_object_t object, vm_page_t *ma, int count,
 ("page count %d extends beyond swap block", reqcount));
 
 /*
- * Do not transfer any pages other than those that are xbusied
+ * Do not transfer any pages other than those that are busied
 * when running during a split or collapse operation. This
 * prevents clustering from re-creating pages which are being
 * moved into another object.
@@ -1391,7 +1391,7 @@ swap_pager_getpages_async(vm_object_t object, vm_page_t *ma, int count,
 * Assign swap (if necessary) and initiate I/O on the specified pages.
 *
 * We support both OBJT_DEFAULT and OBJT_SWAP objects. DEFAULT objects
- * are automatically converted to SWAP objects.
+ * are automatically converted to OBJT_SWAP objects.
 *
 * In a low memory situation we may block in VOP_STRATEGY(), but the new
 * vm_page reservation system coupled with properly written VFS devices
@@ -1403,7 +1403,7 @@ swap_pager_getpages_async(vm_object_t object, vm_page_t *ma, int count,
 * not set to VM_PAGER_PEND. We need to remove the rest on I/O
 * completion.
 *
- * The parent has soft-busy'd the pages it passes us and will unbusy
+ * The parent has soft-busy'd the pages it passes to us and will unbusy
 * those whose rtvals[] entry is not set to VM_PAGER_PEND on return.
 * We need to unbusy the rest on I/O completion.
 */
@@ -1580,7 +1580,7 @@ swp_pager_async_iodone(struct buf *bp)
 }
 
 /*
- * remove the mapping for kernel virtual
+ * remove the mapping for kernel virtual memory.
 */
 if (buf_mapped(bp))
 pmap_qremove((vm_offset_t)bp->b_data, bp->b_npages);
@@ -1593,7 +1593,7 @@ swp_pager_async_iodone(struct buf *bp)
 }
 
 /*
- * cleanup pages. If an error occurs writing to swap, we are in
+ * cleanup pages. If an error occurs while writing to swap, we are in
 * very serious trouble. If it happens to be a disk error, though,
 * we may be able to recover by reassigning the swap later on. So
 * in this case we remove the m->swapblk assignment for the page
@@ -1765,7 +1765,7 @@ swap_pager_swapoff_object(struct swdevt *sp, vm_object_t object)
 
 /*
 * Look for a page corresponding to the first
- * valid block and ensure that any pending paging
+ * valid block and ensure that all pending paging
 * operations on it are complete. If the page is valid,
 * mark it dirty and free the swap block. Try to batch
 * this operation since it may cause sp to be freed,
@@ -1847,7 +1847,7 @@ swap_pager_swapoff(struct swdevt *sp)
 if (object->type != OBJT_SWAP)
 continue;
 mtx_unlock(&vm_object_list_mtx);
- /* Depends on type-stability. */
+ /* Depends on type stability. */
 VM_OBJECT_WLOCK(object);
 
 /*
@@ -1937,8 +1937,8 @@ swp_pager_free_empty_swblk(vm_object_t object, struct swblk *sb)
 /*
 * SWP_PAGER_META_BUILD() - add swap block to swap meta data for object
 *
- * We first convert the object to a swap object if it is a default
- * object.
+ * We first convert the object to a swap object (OBJT_SWAP) if it is
+ * a default object (OBJT_DEFAULT).
 *
 * The specified swapblk is added to the object's swap metadata. If
 * the swapblk is not valid, it is freed instead. Any previously
@@ -2194,7 +2194,7 @@ swp_pager_meta_lookup(vm_object_t object, vm_pindex_t pindex)
 }
 
 /*
- * Returns the least page index which is greater than or equal to the
+ * Returns the lowest page index which is greater than or equal to the
 * parameter pindex and for which there is a swap block allocated.
 * Returns object's size if the object's type is not swap or if there
 * are no allocated swap blocks for the object after the requested
@@ -2268,7 +2268,7 @@ sys_swapon(struct thread *td, struct swapon_args *uap)
 
 /*
 * Swap metadata may not fit in the KVM if we have physical
- * memory of >1GB.
+ * memory of size >1GB.
 */
 if (swblk_zone == NULL) {
 error = ENOMEM;
@@ -2332,7 +2332,7 @@ swaponsomething(struct vnode *vp, void *id, u_long nblks,
 
 /*
 * nblks is in DEV_BSIZE'd chunks, convert to PAGE_SIZE'd chunks.
- * First chop nblks off to page-align it, then convert.
+ * First chop nblks off to page-align it, then convert it.
 *
 * sw->sw_nblks is in page-sized chunks now too.
 */
@@ -2364,7 +2364,7 @@ swaponsomething(struct vnode *vp, void *id, u_long nblks,
 sp->sw_blist = blist_create(nblks, M_WAITOK);
 /*
 * Do not free the first blocks in order to avoid overwriting
- * any bsd label at the front of the partition
+ * any bsd label at the front of the partition.
 */
 blist_free(sp->sw_blist, howmany(BBSIZE, PAGE_SIZE),
 nblks - howmany(BBSIZE, PAGE_SIZE));
@@ -2376,7 +2376,7 @@ swaponsomething(struct vnode *vp, void *id, u_long nblks,
 /*
 * We put one uncovered page between the devices
 * in order to definitively prevent any cross-device
- * I/O requests
+ * I/O requests.
 */
 dvbase = tsp->sw_end + 1;
 }
@@ -2660,7 +2660,7 @@ SYSCTL_NODE(_vm, OID_AUTO, swap_info, CTLFLAG_RD | CTLFLAG_MPSAFE,
 
 /*
 * Count the approximate swap usage in pages for a vmspace. The
- * shadowed or not yet copied on write swap blocks are not accounted.
+ * shadowed or not-yet-copied-on-write swap blocks are not accounted.
 * The map must be locked.
 */
 long
@@ -2709,7 +2709,6 @@ vmspace_swap_count(struct vmspace *vmspace)
 * GEOM backend
 *
 * Swapping onto disk devices.
- *
 */
 
 static g_orphan_t swapgeom_orphan;
@@ -2852,8 +2851,8 @@ swapgeom_orphan(struct g_consumer *cp)
 }
 }
 /*
- * Drop reference we were created with. Do directly since we're in a
- * special context where we don't have to queue the call to
+ * Drop reference we were created with. Do it directly since we're in
+ * a special context where we don't have to queue the call to
 * swapgeom_close_ev().
 */
 cp->index--;
@@ -2952,10 +2951,8 @@ swapongeom(struct vnode *vp)
 * VNODE backend
 *
 * This is used mainly for network filesystem (read: probably only tested
- * with NFS) swapfiles.
- *
+ * with NFS) swap files.
 */
-
 static void
 swapdev_strategy(struct buf *bp, struct swdevt *sp)
 {
@@ -2986,7 +2983,6 @@ swapdev_close(struct thread *td, struct swdevt *sp)
 vrele(sp->sw_vp);
 }
 
-
 static int
 swaponvp(struct thread *td, struct vnode *vp, u_long nblks)
 {
@@ -3036,7 +3032,7 @@ sysctl_swap_async_max(SYSCTL_HANDLER_ARGS)
 while (nsw_wcount_async_max != new) {
 /*
 * Adjust difference. If the current async count is too low,
- * we will need to sqeeze our update slowly in. Sleep with a
+ * we will need to squeeze our update slowly in. Sleep with a
 * higher priority than getpbuf() to finish faster.
 */
 n = new - nsw_wcount_async_max;
diff --git a/sys/vm/uma_core.c b/sys/vm/uma_core.c
index 6c98d6d9582..b21265e3a02 100644
--- a/sys/vm/uma_core.c
+++ b/sys/vm/uma_core.c
@@ -268,7 +268,7 @@ struct uma_bucket_zone bucket_zones[] = {
 */
 enum zfreeskip {
 SKIP_NONE = 0,
- SKIP_CNT = 0x00000001,
+ SKIP_CNT = 0x00000001,
 SKIP_DTOR = 0x00010000,
 SKIP_FINI = 0x00020000,
 };
@@ -977,9 +977,9 @@ zone_domain_update_wss(uma_zone_domain_t zdom)
 
 /*
 * Routine to perform timeout driven calculations. This expands the
- * hashes and does per cpu statistics aggregation.
+ * hashes and does per-cpu statistics aggregation.
 *
- * Returns nothing.
+ * Returns nothing.
 */
 static void
 zone_timeout(uma_zone_t zone, void *unused)
diff --git a/sys/vm/vm_fault.c b/sys/vm/vm_fault.c
index 15802af6aad..24464a03046 100644
--- a/sys/vm/vm_fault.c
+++ b/sys/vm/vm_fault.c
@@ -130,7 +130,7 @@ struct faultstate {
 boolean_t wired;
 
 /* Page reference for cow. */
- vm_page_t m_cow;
+ vm_page_t m_cow;
 
 /* Current object. */
 vm_object_t object;
@@ -898,7 +898,7 @@ vm_fault_cow(struct faultstate *fs)
 if (is_first_object_locked)
 VM_OBJECT_WUNLOCK(fs->first_object);
 /*
- * Oh, well, lets copy it.
+ * Oh, well, let's copy it.
 */
 pmap_copy_page(fs->m, fs->first_m);
 vm_page_valid(fs->first_m);
@@ -907,7 +907,7 @@ vm_fault_cow(struct faultstate *fs)
 vm_page_unwire(fs->m, PQ_INACTIVE);
 }
 /*
- * Save the cow page to be released after
+ * Save the COW page to be released after
 * pmap_enter is complete.
 */
 fs->m_cow = fs->m;
@@ -1011,7 +1011,6 @@ vm_fault_allocate(struct faultstate *fs)
 int alloc_req;
 int rv;
 
-
 if ((fs->object->flags & OBJ_SIZEVNLOCK) != 0) {
 rv = vm_fault_lock_vnode(fs, true);
 MPASS(rv == KERN_SUCCESS || rv == KERN_RESOURCE_SHORTAGE);
@@ -1306,7 +1305,7 @@ vm_fault(vm_map_t map, vm_offset_t vaddr, vm_prot_t fault_type,
 }
 
 /*
- * See if page is resident
+ * See if page is resident.
 */
 fs.m = vm_page_lookup(fs.object, fs.pindex);
 if (fs.m != NULL) {
@@ -1513,7 +1512,7 @@ vm_fault(vm_map_t map, vm_offset_t vaddr, vm_prot_t fault_type,
 fs.m = NULL;
 
 /*
- * Unlock everything, and return
+ * Unlock everything, and return.
 */
 fault_deallocate(&fs);
 if (hardfault) {
@@ -1948,7 +1947,7 @@ vm_fault_copy_entry(vm_map_t dst_map, vm_map_t src_map,
 VM_OBJECT_WUNLOCK(dst_object);
 
 /*
- * Enter it in the pmap. If a wired, copy-on-write
+ * Enter it in the pmap. If a wired, copy-on-write
 * mapping is being replaced by a write-enabled
 * mapping, then wire that new mapping.
 *
diff --git a/sys/vm/vm_mmap.c b/sys/vm/vm_mmap.c
index 332c90140dc..e5e68f3f370 100644
--- a/sys/vm/vm_mmap.c
+++ b/sys/vm/vm_mmap.c
@@ -261,7 +261,7 @@ kern_mmap_req(struct thread *td, const struct mmap_req *mrp)
 /*
 * Enforce the constraints.
 * Mapping of length 0 is only allowed for old binaries.
- * Anonymous mapping shall specify -1 as filedescriptor and
+ * Anonymous mapping shall specify -1 as file descriptor and
 * zero position for new code. Be nice to ancient a.out
 * binaries and correct pos for anonymous mapping, since old
 * ld.so sometimes issues anonymous map requests with non-zero
@@ -842,7 +842,7 @@ kern_mincore(struct thread *td, uintptr_t addr0, size_t len, char *vec)
 
 /*
 * Do this on a map entry basis so that if the pages are not
- * in the current processes address space, we can easily look
+ * in the current process's address space, we can easily look
 * up the pages elsewhere.
 */
 lastvecindex = -1;
@@ -992,7 +992,7 @@ kern_mincore(struct thread *td, uintptr_t addr0, size_t len, char *vec)
 }
 
 /*
- * Pass the page information to the user
+ * Pass the page information to the user.
 */
 error = subyte(vec + vecindex, mincoreinfo);
 if (error) {
@@ -1556,7 +1556,7 @@ vm_mmap_object(vm_map_t map, vm_offset_t *addr, vm_size_t size, vm_prot_t prot,
 }
 
 /*
- * We currently can only deal with page aligned file offsets.
+ * We currently can only deal with page-aligned file offsets.
 * The mmap() system call already enforces this by subtracting
 * the page offset from the file offset, but checking here
 * catches errors in device drivers (e.g. d_single_mmap()
diff --git a/sys/vm/vm_page.c b/sys/vm/vm_page.c
index 22eacf423b9..d7e89031305 100644
--- a/sys/vm/vm_page.c
+++ b/sys/vm/vm_page.c
@@ -419,7 +419,7 @@ sysctl_vm_page_blacklist(SYSCTL_HANDLER_ARGS)
 /*
 * Initialize a dummy page for use in scans of the specified paging queue.
 * In principle, this function only needs to set the flag PG_MARKER.
- * Nonetheless, it write busies the page as a safety precaution.
+ * Nonetheless, its write busies the page as a safety precaution.
 */
 static void
 vm_page_init_marker(vm_page_t marker, int queue, uint16_t aflags)
-- 
2.26.2