FreeBSD Bugzilla – Attachment 149303 Details for Bug 193873
[PATCH] Unify dumpsys() under generic kern_dump.c.
a.patch (text/plain), 585.00 KB, created by Conrad Meyer on 2014-11-12 00:45:25 UTC
Description: v3 of patch. Addresses Andrew's ARM change and feedback from Mark in Differential.
Filename: a.patch
MIME Type: text/plain
Creator: Conrad Meyer
Created: 2014-11-12 00:45:25 UTC
Size: 585.00 KB
Flags: patch, obsolete
From 5ec71c6300ad934e819f0773f18180dff310596a Mon Sep 17 00:00:00 2001
From: Conrad Meyer <cse.cem@gmail.com>
Date: Tue, 11 Nov 2014 11:54:24 -0500
Subject: [PATCH v3] Unify dumpsys() under generic kern_dump.c.

x86, ARM, and MIPS are all relatively similar and straightforward. Some
MD-specific methods are left in dump_machdep.c in each arch to provide
machine-dependent implementations (map a region temporarily for dumping,
unmap a region, iterate physical memory segments, flush write-back caches).

Sparc and PowerPC are weirder. PowerPC had a merged dump/minidump path
that used a different md_pa structure, pmap_md, plumbed through its MMU
interface. So, that was ripped out and replaced with the standard path.

Sparc uses its own non-ELF dump header, which makes its dumpsys
different enough that unification wasn't an improvement. However, some
logic shared with other archs (blk_dump == cb_dumpdata) was refactored
away.

Patch build-tested against:
 - ARMv6 / CHROMEBOOK
 - AMD64 / GENERIC
 - i386 / GENERIC
 - MIPS / WZR-300HP
 - MIPS64 / SWARM64_SMP
 - PPC / MPC85XX (cpu=booke)
 - PPC / GENERIC (cpu=aim)
 - PPC64 / GENERIC64 (cpu=aim64)
 - Sparc64 / GENERIC

Differential Revision:	https://reviews.freebsd.org/D904
Reviewed by:	Justin Hibbits, Mark Johnston
Sponsored by:	EMC / Isilon storage division
---
 sys/amd64/include/dump.h            |  81 ++++++++
 sys/arm/arm/dump_machdep.c          | 330 ++----------------------------
 sys/arm/include/dump.h              |  70 +++++++
 sys/conf/files                      |   1 +
 sys/i386/include/dump.h             |  81 ++++++++
 sys/kern/kern_dump.c                | 398 ++++++++++++++++++++++++++++++
 sys/kern/kern_shutdown.c            |   1 +
 sys/mips/include/dump.h             |  76 +++++++
 sys/mips/include/md_var.h           |   1 +
 sys/mips/mips/dump_machdep.c        | 329 +----------------------------
 sys/powerpc/aim/mmu_oea.c           | 154 ++++++--------
 sys/powerpc/aim/mmu_oea64.c         | 151 ++++++--------
 sys/powerpc/booke/pmap.c            | 215 +++++++++----------
 sys/powerpc/include/dump.h          |  69 +++++++
 sys/powerpc/include/pmap.h          |  12 --
 sys/powerpc/powerpc/dump_machdep.c  | 283 +------------------------
 sys/powerpc/powerpc/mmu_if.m        |  43 ++--
 sys/powerpc/powerpc/pmap_dispatch.c |  26 ++-
 sys/sparc64/include/dump.h          |  77 +++++++
 sys/sparc64/sparc64/dump_machdep.c  | 117 +++-------
 sys/sys/conf.h                      |   1 -
 sys/sys/kerneldump.h                |  23 +++
 sys/x86/x86/dump_machdep.c          | 342 +------------------------
 23 files changed, 1199 insertions(+), 1682 deletions(-)
 create mode 100644 sys/amd64/include/dump.h
 create mode 100644 sys/arm/include/dump.h
 create mode 100644 sys/i386/include/dump.h
 create mode 100644 sys/kern/kern_dump.c
 create mode 100644 sys/mips/include/dump.h
 create mode 100644 sys/powerpc/include/dump.h
 create mode 100644 sys/sparc64/include/dump.h

diff --git a/sys/amd64/include/dump.h b/sys/amd64/include/dump.h
new file mode 100644
index 0000000..90ed55f
--- /dev/null
+++ b/sys/amd64/include/dump.h
@@ -0,0 +1,81 @@
+/*-
+ * Copyright (c) 2014 EMC Corp.
+ * Copyright (c) 2014 Conrad Meyer <conrad.meyer@isilon.com>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.
+ * IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ * $FreeBSD$
+ */
+
+#ifndef _MACHINE_DUMP_H_
+#define _MACHINE_DUMP_H_
+
+#define KERNELDUMP_VERSION	KERNELDUMP_AMD64_VERSION
+#define EM_VALUE		EM_X86_64
+/* 20 phys_avail entry pairs correspond to 10 md_pa's */
+#define DUMPSYS_MD_PA_NPAIRS	10
+#define DUMPSYS_NUM_AUX_HDRS	0
+
+static inline void
+dumpsys_md_pa_init(void)
+{
+
+	dumpsys_gen_md_pa_init();
+}
+
+static inline struct dump_pa *
+dumpsys_md_pa_next(struct dump_pa *p)
+{
+
+	return (dumpsys_gen_md_pa_next(p));
+}
+
+static inline void
+dumpsys_wbinv_all(void)
+{
+
+	dumpsys_gen_wbinv_all();
+}
+
+static inline void
+dumpsys_unmap_chunk(vm_paddr_t pa, size_t s, void *va)
+{
+
+	dumpsys_gen_unmap_chunk(pa, s, va);
+}
+
+static inline int
+dumpsys_write_aux_headers(struct dumperinfo *di)
+{
+
+	return (dumpsys_gen_write_aux_headers(di));
+}
+
+static inline int
+dumpsys(struct dumperinfo *di)
+{
+
+	return (dumpsys_generic(di));
+}
+
+#endif /* !_MACHINE_DUMP_H_ */
diff --git a/sys/arm/arm/dump_machdep.c b/sys/arm/arm/dump_machdep.c
index 19c19c90..941944b 100644
--- a/sys/arm/arm/dump_machdep.c
+++ b/sys/arm/arm/dump_machdep.c
@@ -1,407 +1,105 @@
 /*-
  * Copyright (c) 2002 Marcel Moolenaar
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
  *
  * 1.
Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * > * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR > * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES > * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. > * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, > * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT > * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF > * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > */ > > #include <sys/cdefs.h> > __FBSDID("$FreeBSD$"); > > #include "opt_watchdog.h" > > #include <sys/param.h> > #include <sys/systm.h> > #include <sys/conf.h> > #include <sys/cons.h> > #include <sys/sysctl.h> > #include <sys/kernel.h> > #include <sys/proc.h> > #include <sys/kerneldump.h> > #ifdef SW_WATCHDOG > #include <sys/watchdog.h> > #endif > #include <vm/vm.h> > #include <vm/pmap.h> >+#include <machine/dump.h> > #include <machine/elf.h> > #include <machine/md_var.h> > #include <machine/pcb.h> > #include <machine/armreg.h> > >-CTASSERT(sizeof(struct kerneldumpheader) == 512); >- > int do_minidump = 1; > SYSCTL_INT(_debug, OID_AUTO, minidump, CTLFLAG_RWTUN, &do_minidump, 0, > "Enable mini crash dumps"); > >-/* >- * Don't touch the first SIZEOF_METADATA bytes on the dump device. This >- * is to protect us from metadata and to protect metadata from us. 
>- */ >-#define SIZEOF_METADATA (64*1024) >- >-#define MD_ALIGN(x) (((off_t)(x) + PAGE_MASK) & ~PAGE_MASK) >-#define DEV_ALIGN(x) (((off_t)(x) + (DEV_BSIZE-1)) & ~(DEV_BSIZE-1)) >-extern struct pcb dumppcb; >- >-struct md_pa { >- vm_paddr_t md_start; >- vm_paddr_t md_size; >-}; >- >-typedef int callback_t(struct md_pa *, int, void *); >- >-static struct kerneldumpheader kdh; >-static off_t dumplo, fileofs; >- >-/* Handle buffered writes. */ >-static char buffer[DEV_BSIZE]; >-static size_t fragsz; >- >-/* XXX: I suppose 20 should be enough. */ >-static struct md_pa dump_map[20]; >- >-static void >-md_pa_init(void) >+void >+dumpsys_wbinv_all(void) > { >- int n, idx; >- >- bzero(dump_map, sizeof(dump_map)); >- for (n = 0; n < sizeof(dump_map) / sizeof(dump_map[0]); n++) { >- idx = n * 2; >- if (dump_avail[idx] == 0 && dump_avail[idx + 1] == 0) >- break; >- dump_map[n].md_start = dump_avail[idx]; >- dump_map[n].md_size = dump_avail[idx + 1] - dump_avail[idx]; >- } >-} >- >-static struct md_pa * >-md_pa_first(void) >-{ >- >- return (&dump_map[0]); >-} >- >-static struct md_pa * >-md_pa_next(struct md_pa *mdp) >-{ >- >- mdp++; >- if (mdp->md_size == 0) >- mdp = NULL; >- return (mdp); >-} >- >-static int >-buf_write(struct dumperinfo *di, char *ptr, size_t sz) >-{ >- size_t len; >- int error; >- >- while (sz) { >- len = DEV_BSIZE - fragsz; >- if (len > sz) >- len = sz; >- bcopy(ptr, buffer + fragsz, len); >- fragsz += len; >- ptr += len; >- sz -= len; >- if (fragsz == DEV_BSIZE) { >- error = dump_write(di, buffer, 0, dumplo, >- DEV_BSIZE); >- if (error) >- return error; >- dumplo += DEV_BSIZE; >- fragsz = 0; >- } >- } >- >- return (0); >-} >- >-static int >-buf_flush(struct dumperinfo *di) >-{ >- int error; >- >- if (fragsz == 0) >- return (0); >- >- error = dump_write(di, buffer, 0, dumplo, DEV_BSIZE); >- dumplo += DEV_BSIZE; >- fragsz = 0; >- return (error); >-} >- >-extern vm_offset_t kernel_l1kva; >-extern char *pouet2; >- >-static int >-cb_dumpdata(struct md_pa *mdp, 
int seqnr, void *arg) >-{ >- struct dumperinfo *di = (struct dumperinfo*)arg; >- vm_paddr_t pa; >- uint32_t pgs; >- size_t counter, sz, chunk; >- int c, error; >- >- error = 0; /* catch case in which chunk size is 0 */ >- counter = 0; >- pgs = mdp->md_size / PAGE_SIZE; >- pa = mdp->md_start; >- >- printf(" chunk %d: %dMB (%d pages)", seqnr, pgs * PAGE_SIZE / ( >- 1024*1024), pgs); > > /* > * Make sure we write coherent data. Note that in the SMP case this > * only operates on the L1 cache of the current CPU, but all other CPUs > * have already been stopped, and their flush/invalidate was done as > * part of stopping. > */ > cpu_idcache_wbinv_all(); > cpu_l2cache_wbinv_all(); > #ifdef __XSCALE__ > xscale_cache_clean_minidata(); > #endif >- while (pgs) { >- chunk = pgs; >- if (chunk > MAXDUMPPGS) >- chunk = MAXDUMPPGS; >- sz = chunk << PAGE_SHIFT; >- counter += sz; >- if (counter >> 24) { >- printf(" %d", pgs * PAGE_SIZE); >- counter &= (1<<24) - 1; >- } >- if (pa == (pa & L1_ADDR_BITS)) { >- pmap_kenter_section(0, pa & L1_ADDR_BITS, 0); >- cpu_tlb_flushID_SE(0); >- cpu_cpwait(); >- } >-#ifdef SW_WATCHDOG >- wdog_kern_pat(WD_LASTVAL); >-#endif >- error = dump_write(di, >- (void *)(pa - (pa & L1_ADDR_BITS)),0, dumplo, sz); >- if (error) >- break; >- dumplo += sz; >- pgs -= chunk; >- pa += sz; >- >- /* Check for user abort. */ >- c = cncheckc(); >- if (c == 0x03) >- return (ECANCELED); >- if (c != -1) >- printf(" (CTRL-C to abort) "); >- } >- printf(" ... %s\n", (error) ? 
"fail" : "ok"); >- return (error); > } > >-static int >-cb_dumphdr(struct md_pa *mdp, int seqnr, void *arg) >+void >+dumpsys_map_chunk(vm_paddr_t pa, size_t chunk __unused, void **va) > { >- struct dumperinfo *di = (struct dumperinfo*)arg; >- Elf_Phdr phdr; >- uint64_t size; >- int error; >- >- size = mdp->md_size; >- bzero(&phdr, sizeof(phdr)); >- phdr.p_type = PT_LOAD; >- phdr.p_flags = PF_R; /* XXX */ >- phdr.p_offset = fileofs; >- phdr.p_vaddr = mdp->md_start; >- phdr.p_paddr = mdp->md_start; >- phdr.p_filesz = size; >- phdr.p_memsz = size; >- phdr.p_align = PAGE_SIZE; > >- error = buf_write(di, (char*)&phdr, sizeof(phdr)); >- fileofs += phdr.p_filesz; >- return (error); >+ if (pa == (pa & L1_ADDR_BITS)) { >+ pmap_kenter_section(0, pa & L1_ADDR_BITS, 0); >+ cpu_tlb_flushID_SE(0); >+ cpu_cpwait(); >+ } >+ *va = (void *)(pa - (pa & L1_ADDR_BITS)); > } > > /* > * Add a header to be used by libkvm to get the va to pa delta > */ >-static int >-dump_os_header(struct dumperinfo *di) >+int >+dumpsys_write_aux_headers(struct dumperinfo *di) > { > Elf_Phdr phdr; > int error; > > bzero(&phdr, sizeof(phdr)); > phdr.p_type = PT_DUMP_DELTA; > phdr.p_flags = PF_R; /* XXX */ > phdr.p_offset = 0; > phdr.p_vaddr = KERNVIRTADDR; > phdr.p_paddr = pmap_kextract(KERNVIRTADDR); > phdr.p_filesz = 0; > phdr.p_memsz = 0; > phdr.p_align = PAGE_SIZE; > >- error = buf_write(di, (char*)&phdr, sizeof(phdr)); >- return (error); >-} >- >-static int >-cb_size(struct md_pa *mdp, int seqnr, void *arg) >-{ >- uint32_t *sz = (uint32_t*)arg; >- >- *sz += (uint32_t)mdp->md_size; >- return (0); >-} >- >-static int >-foreach_chunk(callback_t cb, void *arg) >-{ >- struct md_pa *mdp; >- int error, seqnr; >- >- seqnr = 0; >- mdp = md_pa_first(); >- while (mdp != NULL) { >- error = (*cb)(mdp, seqnr++, arg); >- if (error) >- return (-error); >- mdp = md_pa_next(mdp); >- } >- return (seqnr); >-} >- >-int >-dumpsys(struct dumperinfo *di) >-{ >- Elf_Ehdr ehdr; >- uint32_t dumpsize; >- off_t hdrgap; >- size_t 
hdrsz; >- int error; >- >- if (do_minidump) >- return (minidumpsys(di)); >- >- bzero(&ehdr, sizeof(ehdr)); >- ehdr.e_ident[EI_MAG0] = ELFMAG0; >- ehdr.e_ident[EI_MAG1] = ELFMAG1; >- ehdr.e_ident[EI_MAG2] = ELFMAG2; >- ehdr.e_ident[EI_MAG3] = ELFMAG3; >- ehdr.e_ident[EI_CLASS] = ELF_CLASS; >-#if BYTE_ORDER == LITTLE_ENDIAN >- ehdr.e_ident[EI_DATA] = ELFDATA2LSB; >-#else >- ehdr.e_ident[EI_DATA] = ELFDATA2MSB; >-#endif >- ehdr.e_ident[EI_VERSION] = EV_CURRENT; >- ehdr.e_ident[EI_OSABI] = ELFOSABI_STANDALONE; /* XXX big picture? */ >- ehdr.e_type = ET_CORE; >- ehdr.e_machine = EM_ARM; >- ehdr.e_phoff = sizeof(ehdr); >- ehdr.e_flags = 0; >- ehdr.e_ehsize = sizeof(ehdr); >- ehdr.e_phentsize = sizeof(Elf_Phdr); >- ehdr.e_shentsize = sizeof(Elf_Shdr); >- >- md_pa_init(); >- >- /* Calculate dump size. */ >- dumpsize = 0L; >- ehdr.e_phnum = foreach_chunk(cb_size, &dumpsize) + 1; >- hdrsz = ehdr.e_phoff + ehdr.e_phnum * ehdr.e_phentsize; >- fileofs = MD_ALIGN(hdrsz); >- dumpsize += fileofs; >- hdrgap = fileofs - DEV_ALIGN(hdrsz); >- >- /* Determine dump offset on device. 
*/ >- if (di->mediasize < SIZEOF_METADATA + dumpsize + sizeof(kdh) * 2) { >- error = ENOSPC; >- goto fail; >- } >- dumplo = di->mediaoffset + di->mediasize - dumpsize; >- dumplo -= sizeof(kdh) * 2; >- >- mkdumpheader(&kdh, KERNELDUMPMAGIC, KERNELDUMP_ARM_VERSION, dumpsize, di->blocksize); >- >- printf("Dumping %llu MB (%d chunks)\n", (long long)dumpsize >> 20, >- ehdr.e_phnum - 1); >- >- /* Dump leader */ >- error = dump_write(di, &kdh, 0, dumplo, sizeof(kdh)); >- if (error) >- goto fail; >- dumplo += sizeof(kdh); >- >- /* Dump ELF header */ >- error = buf_write(di, (char*)&ehdr, sizeof(ehdr)); >- if (error) >- goto fail; >- >- /* Dump program headers */ >- error = foreach_chunk(cb_dumphdr, di); >- if (error >= 0) >- error = dump_os_header(di); >- if (error < 0) >- goto fail; >- buf_flush(di); >- >- /* >- * All headers are written using blocked I/O, so we know the >- * current offset is (still) block aligned. Skip the alignement >- * in the file to have the segment contents aligned at page >- * boundary. We cannot use MD_ALIGN on dumplo, because we don't >- * care and may very well be unaligned within the dump device. >- */ >- dumplo += hdrgap; >- >- /* Dump memory chunks (updates dumplo) */ >- error = foreach_chunk(cb_dumpdata, di); >- if (error < 0) >- goto fail; >- >- /* Dump trailer */ >- error = dump_write(di, &kdh, 0, dumplo, sizeof(kdh)); >- if (error) >- goto fail; >- >- /* Signal completion, signoff and exit stage left. */ >- dump_write(di, NULL, 0, 0, 0); >- printf("\nDump complete\n"); >- return (0); >- >- fail: >- if (error < 0) >- error = -error; >- >- if (error == ECANCELED) >- printf("\nDump aborted\n"); >- else if (error == ENOSPC) >- printf("\nDump failed. 
Partition too small.\n"); >- else >- printf("\n** DUMP FAILED (ERROR %d) **\n", error); >+ error = dumpsys_buf_write(di, (char*)&phdr, sizeof(phdr)); > return (error); > } >diff --git a/sys/arm/include/dump.h b/sys/arm/include/dump.h >new file mode 100644 >index 0000000..1ce546c >--- /dev/null >+++ b/sys/arm/include/dump.h >@@ -0,0 +1,70 @@ >+/*- >+ * Copyright (c) 2014 EMC Corp. >+ * Copyright (c) 2014 Conrad Meyer <conrad.meyer@isilon.com> >+ * All rights reserved. >+ * >+ * Redistribution and use in source and binary forms, with or without >+ * modification, are permitted provided that the following conditions >+ * are met: >+ * 1. Redistributions of source code must retain the above copyright >+ * notice, this list of conditions and the following disclaimer. >+ * 2. Redistributions in binary form must reproduce the above copyright >+ * notice, this list of conditions and the following disclaimer in the >+ * documentation and/or other materials provided with the distribution. >+ * >+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND >+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE >+ * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE >+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL >+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS >+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) >+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT >+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY >+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF >+ * SUCH DAMAGE. >+ * >+ * $FreeBSD$ >+ */ >+ >+#ifndef _MACHINE_DUMP_H_ >+#define _MACHINE_DUMP_H_ >+ >+#define KERNELDUMP_VERSION KERNELDUMP_ARM_VERSION >+#define EM_VALUE EM_ARM >+/* XXX: I suppose 20 should be enough. 
*/ >+#define DUMPSYS_MD_PA_NPAIRS 20 >+#define DUMPSYS_NUM_AUX_HDRS 1 >+ >+void dumpsys_wbinv_all(void); >+int dumpsys_write_aux_headers(struct dumperinfo *di); >+ >+static inline void >+dumpsys_md_pa_init(void) >+{ >+ >+ dumpsys_gen_md_pa_init(); >+} >+ >+static inline struct dump_pa * >+dumpsys_md_pa_next(struct dump_pa *p) >+{ >+ >+ return (dumpsys_gen_md_pa_next(p)); >+} >+ >+static inline void >+dumpsys_unmap_chunk(vm_paddr_t pa, size_t s, void *va) >+{ >+ >+ dumpsys_gen_unmap_chunk(pa, s, va); >+} >+ >+static inline int >+dumpsys(struct dumperinfo *di) >+{ >+ >+ return (dumpsys_generic(di)); >+} >+ >+#endif /* !_MACHINE_DUMP_H_ */ >diff --git a/sys/conf/files b/sys/conf/files >index f7a4310..8d671b1 100644 >--- a/sys/conf/files >+++ b/sys/conf/files >@@ -1,4003 +1,4004 @@ > # $FreeBSD$ > # > # The long compile-with and dependency lines are required because of > # limitations in config: backslash-newline doesn't work in strings, and > # dependency lines other than the first are silently ignored. 
> # > acpi_quirks.h optional acpi \ > dependency "$S/tools/acpi_quirks2h.awk $S/dev/acpica/acpi_quirks" \ > compile-with "${AWK} -f $S/tools/acpi_quirks2h.awk $S/dev/acpica/acpi_quirks" \ > no-obj no-implicit-rule before-depend \ > clean "acpi_quirks.h" > # > # The 'fdt_dtb_file' target covers an actual DTB file name, which is derived > # from the specified source (DTS) file: <platform>.dts -> <platform>.dtb > # > fdt_dtb_file optional fdt fdt_dtb_static \ > compile-with "sh -c 'MACHINE=${MACHINE} $S/tools/fdt/make_dtb.sh $S ${FDT_DTS_FILE} ${.CURDIR}'" \ > no-obj no-implicit-rule before-depend \ > clean "${FDT_DTS_FILE:R}.dtb" > fdt_static_dtb.h optional fdt fdt_dtb_static \ > compile-with "sh -c 'MACHINE=${MACHINE} $S/tools/fdt/make_dtbh.sh ${FDT_DTS_FILE} ${.CURDIR}'" \ > dependency "fdt_dtb_file" \ > no-obj no-implicit-rule before-depend \ > clean "fdt_static_dtb.h" > feeder_eq_gen.h optional sound \ > dependency "$S/tools/sound/feeder_eq_mkfilter.awk" \ > compile-with "${AWK} -f $S/tools/sound/feeder_eq_mkfilter.awk -- ${FEEDER_EQ_PRESETS} > feeder_eq_gen.h" \ > no-obj no-implicit-rule before-depend \ > clean "feeder_eq_gen.h" > feeder_rate_gen.h optional sound \ > dependency "$S/tools/sound/feeder_rate_mkfilter.awk" \ > compile-with "${AWK} -f $S/tools/sound/feeder_rate_mkfilter.awk -- ${FEEDER_RATE_PRESETS} > feeder_rate_gen.h" \ > no-obj no-implicit-rule before-depend \ > clean "feeder_rate_gen.h" > snd_fxdiv_gen.h optional sound \ > dependency "$S/tools/sound/snd_fxdiv_gen.awk" \ > compile-with "${AWK} -f $S/tools/sound/snd_fxdiv_gen.awk -- > snd_fxdiv_gen.h" \ > no-obj no-implicit-rule before-depend \ > clean "snd_fxdiv_gen.h" > miidevs.h optional miibus | mii \ > dependency "$S/tools/miidevs2h.awk $S/dev/mii/miidevs" \ > compile-with "${AWK} -f $S/tools/miidevs2h.awk $S/dev/mii/miidevs" \ > no-obj no-implicit-rule before-depend \ > clean "miidevs.h" > pccarddevs.h standard \ > dependency "$S/tools/pccarddevs2h.awk $S/dev/pccard/pccarddevs" \ > 
compile-with "${AWK} -f $S/tools/pccarddevs2h.awk $S/dev/pccard/pccarddevs" \ > no-obj no-implicit-rule before-depend \ > clean "pccarddevs.h" > teken_state.h optional sc | vt \ > dependency "$S/teken/gensequences $S/teken/sequences" \ > compile-with "${AWK} -f $S/teken/gensequences $S/teken/sequences > teken_state.h" \ > no-obj no-implicit-rule before-depend \ > clean "teken_state.h" > usbdevs.h optional usb \ > dependency "$S/tools/usbdevs2h.awk $S/dev/usb/usbdevs" \ > compile-with "${AWK} -f $S/tools/usbdevs2h.awk $S/dev/usb/usbdevs -h" \ > no-obj no-implicit-rule before-depend \ > clean "usbdevs.h" > usbdevs_data.h optional usb \ > dependency "$S/tools/usbdevs2h.awk $S/dev/usb/usbdevs" \ > compile-with "${AWK} -f $S/tools/usbdevs2h.awk $S/dev/usb/usbdevs -d" \ > no-obj no-implicit-rule before-depend \ > clean "usbdevs_data.h" > cam/cam.c optional scbus > cam/cam_compat.c optional scbus > cam/cam_periph.c optional scbus > cam/cam_queue.c optional scbus > cam/cam_sim.c optional scbus > cam/cam_xpt.c optional scbus > cam/ata/ata_all.c optional scbus > cam/ata/ata_xpt.c optional scbus > cam/ata/ata_pmp.c optional scbus > cam/scsi/scsi_xpt.c optional scbus > cam/scsi/scsi_all.c optional scbus > cam/scsi/scsi_cd.c optional cd > cam/scsi/scsi_ch.c optional ch > cam/ata/ata_da.c optional ada | da > cam/ctl/ctl.c optional ctl > cam/ctl/ctl_backend.c optional ctl > cam/ctl/ctl_backend_block.c optional ctl > cam/ctl/ctl_backend_ramdisk.c optional ctl > cam/ctl/ctl_cmd_table.c optional ctl > cam/ctl/ctl_frontend.c optional ctl > cam/ctl/ctl_frontend_cam_sim.c optional ctl > cam/ctl/ctl_frontend_internal.c optional ctl > cam/ctl/ctl_frontend_iscsi.c optional ctl > cam/ctl/ctl_scsi_all.c optional ctl > cam/ctl/ctl_tpc.c optional ctl > cam/ctl/ctl_tpc_local.c optional ctl > cam/ctl/ctl_error.c optional ctl > cam/ctl/ctl_util.c optional ctl > cam/ctl/scsi_ctl.c optional ctl > cam/scsi/scsi_da.c optional da > cam/scsi/scsi_low.c optional ct | ncv | nsp | stg > 
cam/scsi/scsi_pass.c optional pass > cam/scsi/scsi_pt.c optional pt > cam/scsi/scsi_sa.c optional sa > cam/scsi/scsi_enc.c optional ses > cam/scsi/scsi_enc_ses.c optional ses > cam/scsi/scsi_enc_safte.c optional ses > cam/scsi/scsi_sg.c optional sg > cam/scsi/scsi_targ_bh.c optional targbh > cam/scsi/scsi_target.c optional targ > cam/scsi/smp_all.c optional scbus > # shared between zfs and dtrace > cddl/compat/opensolaris/kern/opensolaris.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_cmn_err.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_kmem.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_misc.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_sunddi.c optional zfs compile-with "${ZFS_C}" > # zfs specific > cddl/compat/opensolaris/kern/opensolaris_acl.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_dtrace.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_kobj.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_kstat.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_lookup.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_policy.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_string.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_sysevent.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_taskq.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_uio.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_vfs.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_vm.c optional zfs compile-with "${ZFS_C}" > cddl/compat/opensolaris/kern/opensolaris_zone.c optional zfs compile-with "${ZFS_C}" > 
cddl/contrib/opensolaris/common/acl/acl_common.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/common/avl/avl.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/common/nvpair/fnvpair.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/common/nvpair/nvpair.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/common/nvpair/nvpair_alloc_fixed.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/common/unicode/u8_textprep.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/common/zfs/zfeature_common.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/common/zfs/zfs_comutil.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/common/zfs/zfs_deleg.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/common/zfs/zfs_fletcher.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/common/zfs/zfs_ioctl_compat.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/common/zfs/zfs_namecheck.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/common/zfs/zfs_prop.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/common/zfs/zpool_prop.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/common/zfs/zprop_common.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/gfs.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/vnode.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/blkptr.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/bplist.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/bpobj.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/bptree.c optional zfs compile-with "${ZFS_C}" > 
cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/ddt.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/ddt_zap.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_diff.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_object.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_send.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_traverse.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_tx.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_zfetch.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dnode.c optional zfs compile-with "${ZFS_C}" \ > warning "kernel contains CDDL licensed ZFS filesystem" > cddl/contrib/opensolaris/uts/common/fs/zfs/dnode_sync.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_bookmark.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_dataset.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_deadlist.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_deleg.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_destroy.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_dir.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c optional zfs compile-with "${ZFS_C}" > cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_prop.c optional zfs 
compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_scan.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_userhold.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_synctask.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/gzip.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/lz4.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/lzjb.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/refcount.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/rrwlock.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/sha256.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/spa_config.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/spa_errlog.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/spa_history.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/space_map.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/space_reftree.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/trim_map.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/uberblock.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/unique.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_cache.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_file.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_geom.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_label.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_mirror.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_missing.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_queue.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_raidz.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_root.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zap.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zap_leaf.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zap_micro.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfeature.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_acl.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_byteswap.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ctldir.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_debug.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_fm.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_fuid.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_log.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_onexit.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_replay.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_rlock.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_sa.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_znode.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zil.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zio_checksum.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zio_compress.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zio_inject.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zle.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zrlock.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/fs/zfs/zvol.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/os/callb.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/os/fm.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/os/list.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/os/nvpair_alloc_system.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/zmod/adler32.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/zmod/deflate.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/zmod/inffast.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/zmod/inflate.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/zmod/inftrees.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/zmod/opensolaris_crc32.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/zmod/trees.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/zmod/zmod.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/zmod/zmod_subr.c optional zfs compile-with "${ZFS_C}"
> cddl/contrib/opensolaris/uts/common/zmod/zutil.c optional zfs compile-with "${ZFS_C}"
> compat/freebsd32/freebsd32_capability.c optional compat_freebsd32
> compat/freebsd32/freebsd32_ioctl.c optional compat_freebsd32
> compat/freebsd32/freebsd32_misc.c optional compat_freebsd32
> compat/freebsd32/freebsd32_syscalls.c optional compat_freebsd32
> compat/freebsd32/freebsd32_sysent.c optional compat_freebsd32
> contrib/altq/altq/altq_cbq.c optional altq
> contrib/altq/altq/altq_cdnr.c optional altq
> contrib/altq/altq/altq_hfsc.c optional altq
> contrib/altq/altq/altq_priq.c optional altq
> contrib/altq/altq/altq_red.c optional altq
> contrib/altq/altq/altq_rio.c optional altq
> contrib/altq/altq/altq_rmclass.c optional altq
> contrib/altq/altq/altq_subr.c optional altq
> contrib/dev/acpica/common/ahids.c optional acpi acpi_debug
> contrib/dev/acpica/common/ahuuids.c optional acpi acpi_debug
> contrib/dev/acpica/components/debugger/dbcmds.c optional acpi acpi_debug
> contrib/dev/acpica/components/debugger/dbconvert.c optional acpi acpi_debug
> contrib/dev/acpica/components/debugger/dbdisply.c optional acpi acpi_debug
> contrib/dev/acpica/components/debugger/dbexec.c optional acpi acpi_debug
> contrib/dev/acpica/components/debugger/dbfileio.c optional acpi acpi_debug
> contrib/dev/acpica/components/debugger/dbhistry.c optional acpi acpi_debug
> contrib/dev/acpica/components/debugger/dbinput.c optional acpi acpi_debug
> contrib/dev/acpica/components/debugger/dbmethod.c optional acpi acpi_debug
> contrib/dev/acpica/components/debugger/dbnames.c optional acpi acpi_debug
> contrib/dev/acpica/components/debugger/dbstats.c optional acpi acpi_debug
> contrib/dev/acpica/components/debugger/dbtest.c optional acpi acpi_debug
> contrib/dev/acpica/components/debugger/dbutils.c optional acpi acpi_debug
> contrib/dev/acpica/components/debugger/dbxface.c optional acpi acpi_debug
> contrib/dev/acpica/components/disassembler/dmbuffer.c optional acpi acpi_debug
> contrib/dev/acpica/components/disassembler/dmdeferred.c optional acpi acpi_debug
> contrib/dev/acpica/components/disassembler/dmnames.c optional acpi acpi_debug
> contrib/dev/acpica/components/disassembler/dmopcode.c optional acpi acpi_debug
> contrib/dev/acpica/components/disassembler/dmobject.c optional acpi acpi_debug
> contrib/dev/acpica/components/disassembler/dmresrc.c optional acpi acpi_debug
> contrib/dev/acpica/components/disassembler/dmresrcl.c optional acpi acpi_debug
> contrib/dev/acpica/components/disassembler/dmresrcl2.c optional acpi acpi_debug
> contrib/dev/acpica/components/disassembler/dmresrcs.c optional acpi acpi_debug
> contrib/dev/acpica/components/disassembler/dmutils.c optional acpi acpi_debug
> contrib/dev/acpica/components/disassembler/dmwalk.c optional acpi acpi_debug
> contrib/dev/acpica/components/dispatcher/dsargs.c optional acpi
> contrib/dev/acpica/components/dispatcher/dscontrol.c optional acpi
> contrib/dev/acpica/components/dispatcher/dsfield.c optional acpi
> contrib/dev/acpica/components/dispatcher/dsinit.c optional acpi
> contrib/dev/acpica/components/dispatcher/dsmethod.c optional acpi
> contrib/dev/acpica/components/dispatcher/dsmthdat.c optional acpi
> contrib/dev/acpica/components/dispatcher/dsobject.c optional acpi
> contrib/dev/acpica/components/dispatcher/dsopcode.c optional acpi
> contrib/dev/acpica/components/dispatcher/dsutils.c optional acpi
> contrib/dev/acpica/components/dispatcher/dswexec.c optional acpi
> contrib/dev/acpica/components/dispatcher/dswload.c optional acpi
> contrib/dev/acpica/components/dispatcher/dswload2.c optional acpi
> contrib/dev/acpica/components/dispatcher/dswscope.c optional acpi
> contrib/dev/acpica/components/dispatcher/dswstate.c optional acpi
> contrib/dev/acpica/components/events/evevent.c optional acpi
> contrib/dev/acpica/components/events/evglock.c optional acpi
> contrib/dev/acpica/components/events/evgpe.c optional acpi
> contrib/dev/acpica/components/events/evgpeblk.c optional acpi
> contrib/dev/acpica/components/events/evgpeinit.c optional acpi
> contrib/dev/acpica/components/events/evgpeutil.c optional acpi
> contrib/dev/acpica/components/events/evhandler.c optional acpi
> contrib/dev/acpica/components/events/evmisc.c optional acpi
> contrib/dev/acpica/components/events/evregion.c optional acpi
> contrib/dev/acpica/components/events/evrgnini.c optional acpi
> contrib/dev/acpica/components/events/evsci.c optional acpi
> contrib/dev/acpica/components/events/evxface.c optional acpi
> contrib/dev/acpica/components/events/evxfevnt.c optional acpi
> contrib/dev/acpica/components/events/evxfgpe.c optional acpi
> contrib/dev/acpica/components/events/evxfregn.c optional acpi
> contrib/dev/acpica/components/executer/exconfig.c optional acpi
> contrib/dev/acpica/components/executer/exconvrt.c optional acpi
> contrib/dev/acpica/components/executer/excreate.c optional acpi
> contrib/dev/acpica/components/executer/exdebug.c optional acpi
> contrib/dev/acpica/components/executer/exdump.c optional acpi
> contrib/dev/acpica/components/executer/exfield.c optional acpi
> contrib/dev/acpica/components/executer/exfldio.c optional acpi
> contrib/dev/acpica/components/executer/exmisc.c optional acpi
> contrib/dev/acpica/components/executer/exmutex.c optional acpi
> contrib/dev/acpica/components/executer/exnames.c optional acpi
> contrib/dev/acpica/components/executer/exoparg1.c optional acpi
> contrib/dev/acpica/components/executer/exoparg2.c optional acpi
> contrib/dev/acpica/components/executer/exoparg3.c optional acpi
> contrib/dev/acpica/components/executer/exoparg6.c optional acpi
> contrib/dev/acpica/components/executer/exprep.c optional acpi
> contrib/dev/acpica/components/executer/exregion.c optional acpi
> contrib/dev/acpica/components/executer/exresnte.c optional acpi
> contrib/dev/acpica/components/executer/exresolv.c optional acpi
> contrib/dev/acpica/components/executer/exresop.c optional acpi
> contrib/dev/acpica/components/executer/exstore.c optional acpi
> contrib/dev/acpica/components/executer/exstoren.c optional acpi
> contrib/dev/acpica/components/executer/exstorob.c optional acpi
> contrib/dev/acpica/components/executer/exsystem.c optional acpi
> contrib/dev/acpica/components/executer/exutils.c optional acpi
> contrib/dev/acpica/components/hardware/hwacpi.c optional acpi
> contrib/dev/acpica/components/hardware/hwesleep.c optional acpi
> contrib/dev/acpica/components/hardware/hwgpe.c optional acpi
> contrib/dev/acpica/components/hardware/hwpci.c optional acpi
> contrib/dev/acpica/components/hardware/hwregs.c optional acpi
> contrib/dev/acpica/components/hardware/hwsleep.c optional acpi
> contrib/dev/acpica/components/hardware/hwtimer.c optional acpi
> contrib/dev/acpica/components/hardware/hwvalid.c optional acpi
> contrib/dev/acpica/components/hardware/hwxface.c optional acpi
> contrib/dev/acpica/components/hardware/hwxfsleep.c optional acpi
> contrib/dev/acpica/components/namespace/nsaccess.c optional acpi
> contrib/dev/acpica/components/namespace/nsalloc.c optional acpi
> contrib/dev/acpica/components/namespace/nsarguments.c optional acpi
> contrib/dev/acpica/components/namespace/nsconvert.c optional acpi
> contrib/dev/acpica/components/namespace/nsdump.c optional acpi
> contrib/dev/acpica/components/namespace/nseval.c optional acpi
> contrib/dev/acpica/components/namespace/nsinit.c optional acpi
> contrib/dev/acpica/components/namespace/nsload.c optional acpi
> contrib/dev/acpica/components/namespace/nsnames.c optional acpi
> contrib/dev/acpica/components/namespace/nsobject.c optional acpi
> contrib/dev/acpica/components/namespace/nsparse.c optional acpi
> contrib/dev/acpica/components/namespace/nspredef.c optional acpi
> contrib/dev/acpica/components/namespace/nsprepkg.c optional acpi
> contrib/dev/acpica/components/namespace/nsrepair.c optional acpi
> contrib/dev/acpica/components/namespace/nsrepair2.c optional acpi
> contrib/dev/acpica/components/namespace/nssearch.c optional acpi
> contrib/dev/acpica/components/namespace/nsutils.c optional acpi
> contrib/dev/acpica/components/namespace/nswalk.c optional acpi
> contrib/dev/acpica/components/namespace/nsxfeval.c optional acpi
> contrib/dev/acpica/components/namespace/nsxfname.c optional acpi
> contrib/dev/acpica/components/namespace/nsxfobj.c optional acpi
> contrib/dev/acpica/components/parser/psargs.c optional acpi
> contrib/dev/acpica/components/parser/psloop.c optional acpi
> contrib/dev/acpica/components/parser/psobject.c optional acpi
> contrib/dev/acpica/components/parser/psopcode.c optional acpi
> contrib/dev/acpica/components/parser/psopinfo.c optional acpi
> contrib/dev/acpica/components/parser/psparse.c optional acpi
> contrib/dev/acpica/components/parser/psscope.c optional acpi
> contrib/dev/acpica/components/parser/pstree.c optional acpi
> contrib/dev/acpica/components/parser/psutils.c optional acpi
> contrib/dev/acpica/components/parser/pswalk.c optional acpi
> contrib/dev/acpica/components/parser/psxface.c optional acpi
> contrib/dev/acpica/components/resources/rsaddr.c optional acpi
> contrib/dev/acpica/components/resources/rscalc.c optional acpi
> contrib/dev/acpica/components/resources/rscreate.c optional acpi
> contrib/dev/acpica/components/resources/rsdump.c optional acpi
> contrib/dev/acpica/components/resources/rsdumpinfo.c optional acpi
> contrib/dev/acpica/components/resources/rsinfo.c optional acpi
> contrib/dev/acpica/components/resources/rsio.c optional acpi
> contrib/dev/acpica/components/resources/rsirq.c optional acpi
> contrib/dev/acpica/components/resources/rslist.c optional acpi
> contrib/dev/acpica/components/resources/rsmemory.c optional acpi
> contrib/dev/acpica/components/resources/rsmisc.c optional acpi
> contrib/dev/acpica/components/resources/rsserial.c optional acpi
> contrib/dev/acpica/components/resources/rsutils.c optional acpi
> contrib/dev/acpica/components/resources/rsxface.c optional acpi
> contrib/dev/acpica/components/tables/tbdata.c optional acpi
> contrib/dev/acpica/components/tables/tbfadt.c optional acpi
> contrib/dev/acpica/components/tables/tbfind.c optional acpi
> contrib/dev/acpica/components/tables/tbinstal.c optional acpi
> contrib/dev/acpica/components/tables/tbprint.c optional acpi
> contrib/dev/acpica/components/tables/tbutils.c optional acpi
> contrib/dev/acpica/components/tables/tbxface.c optional acpi
> contrib/dev/acpica/components/tables/tbxfload.c optional acpi
> contrib/dev/acpica/components/tables/tbxfroot.c optional acpi
> contrib/dev/acpica/components/utilities/utaddress.c optional acpi
> contrib/dev/acpica/components/utilities/utalloc.c optional acpi
> contrib/dev/acpica/components/utilities/utbuffer.c optional acpi
> contrib/dev/acpica/components/utilities/utcache.c optional acpi
> contrib/dev/acpica/components/utilities/utcopy.c optional acpi
> contrib/dev/acpica/components/utilities/utdebug.c optional acpi
> contrib/dev/acpica/components/utilities/utdecode.c optional acpi
> contrib/dev/acpica/components/utilities/utdelete.c optional acpi
> contrib/dev/acpica/components/utilities/uterror.c optional acpi
> contrib/dev/acpica/components/utilities/uteval.c optional acpi
> contrib/dev/acpica/components/utilities/utexcep.c optional acpi
> contrib/dev/acpica/components/utilities/utglobal.c optional acpi
> contrib/dev/acpica/components/utilities/uthex.c optional acpi
> contrib/dev/acpica/components/utilities/utids.c optional acpi
> contrib/dev/acpica/components/utilities/utinit.c optional acpi
> contrib/dev/acpica/components/utilities/utlock.c optional acpi
> contrib/dev/acpica/components/utilities/utmath.c optional acpi
> contrib/dev/acpica/components/utilities/utmisc.c optional acpi
> contrib/dev/acpica/components/utilities/utmutex.c optional acpi
> contrib/dev/acpica/components/utilities/utobject.c optional acpi
> contrib/dev/acpica/components/utilities/utosi.c optional acpi
> contrib/dev/acpica/components/utilities/utownerid.c optional acpi
> contrib/dev/acpica/components/utilities/utpredef.c optional acpi
> contrib/dev/acpica/components/utilities/utresrc.c optional acpi
> contrib/dev/acpica/components/utilities/utstate.c optional acpi
> contrib/dev/acpica/components/utilities/utstring.c optional acpi
> contrib/dev/acpica/components/utilities/utuuid.c optional acpi acpi_debug
> contrib/dev/acpica/components/utilities/utxface.c optional acpi
> contrib/dev/acpica/components/utilities/utxferror.c optional acpi
> contrib/dev/acpica/components/utilities/utxfinit.c optional acpi
> #contrib/dev/acpica/components/utilities/utxfmutex.c optional acpi
> contrib/ipfilter/netinet/fil.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN} -Wno-unused -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_auth.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_fil_freebsd.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_frag.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_log.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_nat.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_proxy.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN} -Wno-unused -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_state.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_lookup.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN} -Wno-unused -Wno-error -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_pool.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_htable.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_sync.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/mlfk_ipl.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_nat6.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_rules.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_scan.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/ip_dstlist.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -Wno-unused -I$S/contrib/ipfilter"
> contrib/ipfilter/netinet/radix_ipf.c optional ipfilter inet \
> 	compile-with "${NORMAL_C} -I$S/contrib/ipfilter"
> contrib/libfdt/fdt.c optional fdt
> contrib/libfdt/fdt_ro.c optional fdt
> contrib/libfdt/fdt_rw.c optional fdt
> contrib/libfdt/fdt_strerror.c optional fdt
> contrib/libfdt/fdt_sw.c optional fdt
> contrib/libfdt/fdt_wip.c optional fdt
> contrib/ngatm/netnatm/api/cc_conn.c optional ngatm_ccatm \
> 	compile-with "${NORMAL_C_NOWERROR} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/api/cc_data.c optional ngatm_ccatm \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/api/cc_dump.c optional ngatm_ccatm \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/api/cc_port.c optional ngatm_ccatm \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/api/cc_sig.c optional ngatm_ccatm \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/api/cc_user.c optional ngatm_ccatm \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/api/unisap.c optional ngatm_ccatm \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/misc/straddr.c optional ngatm_atmbase \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/misc/unimsg_common.c optional ngatm_atmbase \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/msg/traffic.c optional ngatm_atmbase \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/msg/uni_ie.c optional ngatm_atmbase \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/msg/uni_msg.c optional ngatm_atmbase \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/saal/saal_sscfu.c optional ngatm_sscfu \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/saal/saal_sscop.c optional ngatm_sscop \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/sig/sig_call.c optional ngatm_uni \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/sig/sig_coord.c optional ngatm_uni \
> 	compile-with "${NORMAL_C} -I$S/contrib/ngatm"
> contrib/ngatm/netnatm/sig/sig_party.c optional ngatm_uni \
compile-with "${NORMAL_C} -I$S/contrib/ngatm" > contrib/ngatm/netnatm/sig/sig_print.c optional ngatm_uni \ > compile-with "${NORMAL_C} -I$S/contrib/ngatm" > contrib/ngatm/netnatm/sig/sig_reset.c optional ngatm_uni \ > compile-with "${NORMAL_C} -I$S/contrib/ngatm" > contrib/ngatm/netnatm/sig/sig_uni.c optional ngatm_uni \ > compile-with "${NORMAL_C} -I$S/contrib/ngatm" > contrib/ngatm/netnatm/sig/sig_unimsgcpy.c optional ngatm_uni \ > compile-with "${NORMAL_C} -I$S/contrib/ngatm" > contrib/ngatm/netnatm/sig/sig_verify.c optional ngatm_uni \ > compile-with "${NORMAL_C} -I$S/contrib/ngatm" > crypto/blowfish/bf_ecb.c optional ipsec > crypto/blowfish/bf_skey.c optional crypto | ipsec > crypto/camellia/camellia.c optional crypto | ipsec > crypto/camellia/camellia-api.c optional crypto | ipsec > crypto/des/des_ecb.c optional crypto | ipsec | netsmb > crypto/des/des_setkey.c optional crypto | ipsec | netsmb > crypto/rc4/rc4.c optional netgraph_mppc_encryption | kgssapi > crypto/rijndael/rijndael-alg-fst.c optional crypto | geom_bde | \ > ipsec | random | wlan_ccmp > crypto/rijndael/rijndael-api-fst.c optional geom_bde | random > crypto/rijndael/rijndael-api.c optional crypto | ipsec | wlan_ccmp > crypto/sha1.c optional carp | crypto | ipsec | \ > netgraph_mppc_encryption | sctp > crypto/sha2/sha2.c optional crypto | geom_bde | ipsec | random | \ > sctp | zfs > crypto/sha2/sha256c.c optional crypto | geom_bde | ipsec | random | \ > sctp | zfs > crypto/siphash/siphash.c optional inet | inet6 > crypto/siphash/siphash_test.c optional inet | inet6 > ddb/db_access.c optional ddb > ddb/db_break.c optional ddb > ddb/db_capture.c optional ddb > ddb/db_command.c optional ddb > ddb/db_examine.c optional ddb > ddb/db_expr.c optional ddb > ddb/db_input.c optional ddb > ddb/db_lex.c optional ddb > ddb/db_main.c optional ddb > ddb/db_output.c optional ddb > ddb/db_print.c optional ddb > ddb/db_ps.c optional ddb > ddb/db_run.c optional ddb > ddb/db_script.c optional ddb > ddb/db_sym.c 
optional ddb > ddb/db_thread.c optional ddb > ddb/db_textdump.c optional ddb > ddb/db_variables.c optional ddb > ddb/db_watch.c optional ddb > ddb/db_write_cmd.c optional ddb > #dev/dpt/dpt_control.c optional dpt > dev/aac/aac.c optional aac > dev/aac/aac_cam.c optional aacp aac > dev/aac/aac_debug.c optional aac > dev/aac/aac_disk.c optional aac > dev/aac/aac_linux.c optional aac compat_linux > dev/aac/aac_pci.c optional aac pci > dev/aacraid/aacraid.c optional aacraid > dev/aacraid/aacraid_cam.c optional aacraid scbus > dev/aacraid/aacraid_debug.c optional aacraid > dev/aacraid/aacraid_linux.c optional aacraid compat_linux > dev/aacraid/aacraid_pci.c optional aacraid pci > dev/acpi_support/acpi_wmi.c optional acpi_wmi acpi > dev/acpi_support/acpi_asus.c optional acpi_asus acpi > dev/acpi_support/acpi_asus_wmi.c optional acpi_asus_wmi acpi > dev/acpi_support/acpi_fujitsu.c optional acpi_fujitsu acpi > dev/acpi_support/acpi_hp.c optional acpi_hp acpi > dev/acpi_support/acpi_ibm.c optional acpi_ibm acpi > dev/acpi_support/acpi_panasonic.c optional acpi_panasonic acpi > dev/acpi_support/acpi_sony.c optional acpi_sony acpi > dev/acpi_support/acpi_toshiba.c optional acpi_toshiba acpi > dev/acpi_support/atk0110.c optional aibs acpi > dev/acpica/Osd/OsdDebug.c optional acpi > dev/acpica/Osd/OsdHardware.c optional acpi > dev/acpica/Osd/OsdInterrupt.c optional acpi > dev/acpica/Osd/OsdMemory.c optional acpi > dev/acpica/Osd/OsdSchedule.c optional acpi > dev/acpica/Osd/OsdStream.c optional acpi > dev/acpica/Osd/OsdSynch.c optional acpi > dev/acpica/Osd/OsdTable.c optional acpi > dev/acpica/acpi.c optional acpi > dev/acpica/acpi_acad.c optional acpi > dev/acpica/acpi_battery.c optional acpi > dev/acpica/acpi_button.c optional acpi > dev/acpica/acpi_cmbat.c optional acpi > dev/acpica/acpi_cpu.c optional acpi > dev/acpica/acpi_ec.c optional acpi > dev/acpica/acpi_hpet.c optional acpi > dev/acpica/acpi_isab.c optional acpi isa > dev/acpica/acpi_lid.c optional acpi > 
dev/acpica/acpi_package.c optional acpi > dev/acpica/acpi_pci.c optional acpi pci > dev/acpica/acpi_pci_link.c optional acpi pci > dev/acpica/acpi_pcib.c optional acpi pci > dev/acpica/acpi_pcib_acpi.c optional acpi pci > dev/acpica/acpi_pcib_pci.c optional acpi pci > dev/acpica/acpi_perf.c optional acpi > dev/acpica/acpi_powerres.c optional acpi > dev/acpica/acpi_quirk.c optional acpi > dev/acpica/acpi_resource.c optional acpi > dev/acpica/acpi_smbat.c optional acpi > dev/acpica/acpi_thermal.c optional acpi > dev/acpica/acpi_throttle.c optional acpi > dev/acpica/acpi_timer.c optional acpi > dev/acpica/acpi_video.c optional acpi_video acpi > dev/acpica/acpi_dock.c optional acpi_dock acpi > dev/adlink/adlink.c optional adlink > dev/advansys/adv_eisa.c optional adv eisa > dev/advansys/adv_pci.c optional adv pci > dev/advansys/advansys.c optional adv > dev/advansys/advlib.c optional adv > dev/advansys/advmcode.c optional adv > dev/advansys/adw_pci.c optional adw pci > dev/advansys/adwcam.c optional adw > dev/advansys/adwlib.c optional adw > dev/advansys/adwmcode.c optional adw > dev/ae/if_ae.c optional ae pci > dev/age/if_age.c optional age pci > dev/agp/agp.c optional agp pci > dev/agp/agp_if.m optional agp pci > dev/aha/aha.c optional aha > dev/aha/aha_isa.c optional aha isa > dev/aha/aha_mca.c optional aha mca > dev/ahb/ahb.c optional ahb eisa > dev/ahci/ahci.c optional ahci > dev/ahci/ahciem.c optional ahci > dev/ahci/ahci_pci.c optional ahci pci > dev/aic/aic.c optional aic > dev/aic/aic_pccard.c optional aic pccard > dev/aic7xxx/ahc_eisa.c optional ahc eisa > dev/aic7xxx/ahc_isa.c optional ahc isa > dev/aic7xxx/ahc_pci.c optional ahc pci \ > compile-with "${NORMAL_C} ${NO_WCONSTANT_CONVERSION}" > dev/aic7xxx/ahd_pci.c optional ahd pci \ > compile-with "${NORMAL_C} ${NO_WCONSTANT_CONVERSION}" > dev/aic7xxx/aic7770.c optional ahc > dev/aic7xxx/aic79xx.c optional ahd pci > dev/aic7xxx/aic79xx_osm.c optional ahd pci > dev/aic7xxx/aic79xx_pci.c optional ahd pci > 
dev/aic7xxx/aic79xx_reg_print.c optional ahd pci ahd_reg_pretty_print > dev/aic7xxx/aic7xxx.c optional ahc > dev/aic7xxx/aic7xxx_93cx6.c optional ahc > dev/aic7xxx/aic7xxx_osm.c optional ahc > dev/aic7xxx/aic7xxx_pci.c optional ahc pci > dev/aic7xxx/aic7xxx_reg_print.c optional ahc ahc_reg_pretty_print > dev/alc/if_alc.c optional alc pci > dev/ale/if_ale.c optional ale pci > dev/alpm/alpm.c optional alpm pci > dev/altera/avgen/altera_avgen.c optional altera_avgen > dev/altera/avgen/altera_avgen_fdt.c optional altera_avgen fdt > dev/altera/avgen/altera_avgen_nexus.c optional altera_avgen > dev/altera/sdcard/altera_sdcard.c optional altera_sdcard > dev/altera/sdcard/altera_sdcard_disk.c optional altera_sdcard > dev/altera/sdcard/altera_sdcard_io.c optional altera_sdcard > dev/altera/sdcard/altera_sdcard_fdt.c optional altera_sdcard fdt > dev/altera/sdcard/altera_sdcard_nexus.c optional altera_sdcard > dev/amdpm/amdpm.c optional amdpm pci | nfpm pci > dev/amdsmb/amdsmb.c optional amdsmb pci > dev/amr/amr.c optional amr > dev/amr/amr_cam.c optional amrp amr > dev/amr/amr_disk.c optional amr > dev/amr/amr_linux.c optional amr compat_linux > dev/amr/amr_pci.c optional amr pci > dev/an/if_an.c optional an > dev/an/if_an_isa.c optional an isa > dev/an/if_an_pccard.c optional an pccard > dev/an/if_an_pci.c optional an pci > dev/asr/asr.c optional asr pci \ > compile-with "${NORMAL_C} ${NO_WARRAY_BOUNDS}" > # > dev/ata/ata_if.m optional ata | atacore > dev/ata/ata-all.c optional ata | atacore > dev/ata/ata-dma.c optional ata | atacore > dev/ata/ata-lowlevel.c optional ata | atacore > dev/ata/ata-sata.c optional ata | atacore > dev/ata/ata-card.c optional ata pccard | atapccard > dev/ata/ata-cbus.c optional ata pc98 | atapc98 > dev/ata/ata-isa.c optional ata isa | ataisa > dev/ata/ata-pci.c optional ata pci | atapci > dev/ata/chipsets/ata-ahci.c optional ata pci | ataahci | ataacerlabs | \ > ataati | ataintel | atajmicron | \ > atavia | atanvidia > 
> dev/ata/chipsets/ata-acard.c optional ata pci | ataacard
> dev/ata/chipsets/ata-acerlabs.c optional ata pci | ataacerlabs
> dev/ata/chipsets/ata-adaptec.c optional ata pci | ataadaptec
> dev/ata/chipsets/ata-amd.c optional ata pci | ataamd
> dev/ata/chipsets/ata-ati.c optional ata pci | ataati
> dev/ata/chipsets/ata-cenatek.c optional ata pci | atacenatek
> dev/ata/chipsets/ata-cypress.c optional ata pci | atacypress
> dev/ata/chipsets/ata-cyrix.c optional ata pci | atacyrix
> dev/ata/chipsets/ata-highpoint.c optional ata pci | atahighpoint
> dev/ata/chipsets/ata-intel.c optional ata pci | ataintel
> dev/ata/chipsets/ata-ite.c optional ata pci | ataite
> dev/ata/chipsets/ata-jmicron.c optional ata pci | atajmicron
> dev/ata/chipsets/ata-marvell.c optional ata pci | atamarvell | ataadaptec
> dev/ata/chipsets/ata-micron.c optional ata pci | atamicron
> dev/ata/chipsets/ata-national.c optional ata pci | atanational
> dev/ata/chipsets/ata-netcell.c optional ata pci | atanetcell
> dev/ata/chipsets/ata-nvidia.c optional ata pci | atanvidia
> dev/ata/chipsets/ata-promise.c optional ata pci | atapromise
> dev/ata/chipsets/ata-serverworks.c optional ata pci | ataserverworks
> dev/ata/chipsets/ata-siliconimage.c optional ata pci | atasiliconimage | ataati
> dev/ata/chipsets/ata-sis.c optional ata pci | atasis
> dev/ata/chipsets/ata-via.c optional ata pci | atavia
> #
> dev/ath/if_ath_pci.c optional ath_pci pci \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> #
> dev/ath/if_ath_ahb.c optional ath_ahb \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> #
> dev/ath/if_ath.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_alq.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_beacon.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_btcoex.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_debug.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_keycache.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_led.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_lna_div.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_tx.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_tx_edma.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_tx_ht.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_tdma.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_sysctl.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_rx.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_rx_edma.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/if_ath_spectral.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/ah_osdep.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> #
> dev/ath/ath_hal/ah.c optional ath \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/ath_hal/ah_eeprom_v1.c optional ath_hal | ath_ar5210 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/ath_hal/ah_eeprom_v3.c optional ath_hal | ath_ar5211 | ath_ar5212 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/ath_hal/ah_eeprom_v14.c \
> 	optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/ath_hal/ah_eeprom_v4k.c \
> 	optional ath_hal | ath_ar9285 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/ath_hal/ah_eeprom_9287.c \
> 	optional ath_hal | ath_ar9287 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath"
> dev/ath/ath_hal/ah_regdomain.c optional ath \
> 	compile-with "${NORMAL_C} ${NO_WSHIFT_COUNT_NEGATIVE} ${NO_WSHIFT_COUNT_OVERFLOW} -I$S/dev/ath"
> # ar5210
> dev/ath/ath_hal/ar5210/ar5210_attach.c optional ath_hal | ath_ar5210 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5210/ar5210_beacon.c optional ath_hal | ath_ar5210 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5210/ar5210_interrupts.c optional ath_hal | ath_ar5210 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5210/ar5210_keycache.c optional ath_hal | ath_ar5210 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5210/ar5210_misc.c optional ath_hal | ath_ar5210 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5210/ar5210_phy.c optional ath_hal | ath_ar5210 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5210/ar5210_power.c optional ath_hal | ath_ar5210 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5210/ar5210_recv.c optional ath_hal | ath_ar5210 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5210/ar5210_reset.c optional ath_hal | ath_ar5210 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5210/ar5210_xmit.c optional ath_hal | ath_ar5210 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> # ar5211
> dev/ath/ath_hal/ar5211/ar5211_attach.c optional ath_hal | ath_ar5211 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5211/ar5211_beacon.c optional ath_hal | ath_ar5211 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5211/ar5211_interrupts.c optional ath_hal | ath_ar5211 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5211/ar5211_keycache.c optional ath_hal | ath_ar5211 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5211/ar5211_misc.c optional ath_hal | ath_ar5211 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5211/ar5211_phy.c optional ath_hal | ath_ar5211 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5211/ar5211_power.c optional ath_hal | ath_ar5211 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5211/ar5211_recv.c optional ath_hal | ath_ar5211 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5211/ar5211_reset.c optional ath_hal | ath_ar5211 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5211/ar5211_xmit.c optional ath_hal | ath_ar5211 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> # ar5212
> dev/ath/ath_hal/ar5212/ar5212_ani.c \
> 	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
> 	ath_ar9285 ath_ar9287 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5212/ar5212_attach.c \
> 	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
> 	ath_ar9285 ath_ar9287 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5212/ar5212_beacon.c \
> 	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
> 	ath_ar9285 ath_ar9287 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5212/ar5212_eeprom.c \
> 	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
> 	ath_ar9285 ath_ar9287 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5212/ar5212_gpio.c \
> 	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
> 	ath_ar9285 ath_ar9287 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5212/ar5212_interrupts.c \
> 	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
> 	ath_ar9285 ath_ar9287 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
> dev/ath/ath_hal/ar5212/ar5212_keycache.c \
> 	optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \
> 	ath_ar9285 ath_ar9287 \
> 	compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal"
dev/ath/ath_hal/ar5212/ar5212_misc.c \ > optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ > ath_ar9285 ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5212/ar5212_phy.c \ > optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ > ath_ar9285 ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5212/ar5212_power.c \ > optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ > ath_ar9285 ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5212/ar5212_recv.c \ > optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ > ath_ar9285 ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5212/ar5212_reset.c \ > optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ > ath_ar9285 ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5212/ar5212_rfgain.c \ > optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ > ath_ar9285 ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5212/ar5212_xmit.c \ > optional ath_hal | ath_ar5212 | ath_ar5416 | ath_ar9160 | ath_ar9280 | \ > ath_ar9285 ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > # ar5416 (depends on ar5212) > dev/ath/ath_hal/ar5416/ar5416_ani.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_attach.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_beacon.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > 
compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_btcoex.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_cal.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_cal_iq.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_cal_adcgain.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_cal_adcdc.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_eeprom.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_gpio.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_interrupts.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_keycache.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_misc.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} 
-I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_phy.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_power.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_radar.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_recv.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_reset.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_spectral.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar5416_xmit.c \ > optional ath_hal | ath_ar5416 | ath_ar9160 | ath_ar9280 | ath_ar9285 | \ > ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > # ar9130 (depends upon ar5416) - also requires AH_SUPPORT_AR9130 > # > # Since this is an embedded MAC SoC, there's no need to compile it into the > # default HAL. 
> dev/ath/ath_hal/ar9001/ar9130_attach.c optional ath_ar9130 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar9001/ar9130_phy.c optional ath_ar9130 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar9001/ar9130_eeprom.c optional ath_ar9130 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > # ar9160 (depends on ar5416) > dev/ath/ath_hal/ar9001/ar9160_attach.c optional ath_hal | ath_ar9160 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > # ar9280 (depends on ar5416) > dev/ath/ath_hal/ar9002/ar9280_attach.c optional ath_hal | ath_ar9280 | \ > ath_ar9285 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar9002/ar9280_olc.c optional ath_hal | ath_ar9280 | \ > ath_ar9285 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > # ar9285 (depends on ar5416 and ar9280) > dev/ath/ath_hal/ar9002/ar9285_attach.c optional ath_hal | ath_ar9285 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar9002/ar9285_btcoex.c optional ath_hal | ath_ar9285 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar9002/ar9285_reset.c optional ath_hal | ath_ar9285 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar9002/ar9285_cal.c optional ath_hal | ath_ar9285 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar9002/ar9285_phy.c optional ath_hal | ath_ar9285 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar9002/ar9285_diversity.c optional ath_hal | ath_ar9285 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > # ar9287 (depends on ar5416) > dev/ath/ath_hal/ar9002/ar9287_attach.c optional ath_hal | ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar9002/ar9287_reset.c optional ath_hal | ath_ar9287 \ > compile-with "${NORMAL_C} 
-I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar9002/ar9287_cal.c optional ath_hal | ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar9002/ar9287_olc.c optional ath_hal | ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > > # ar9300 > contrib/dev/ath/ath_hal/ar9300/ar9300_ani.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_attach.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_beacon.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_eeprom.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal ${NO_WCONSTANT_CONVERSION}" > contrib/dev/ath/ath_hal/ar9300/ar9300_freebsd.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_gpio.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_interrupts.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_keycache.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_mci.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_misc.c optional ath_hal | ath_ar9300 \ > compile-with 
"${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_paprd.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_phy.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_power.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_radar.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_radio.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_recv.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_recv_ds.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_reset.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal ${NO_WSOMETIMES_UNINITIALIZED} -Wno-unused-function" > contrib/dev/ath/ath_hal/ar9300/ar9300_stub.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_stub_funcs.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_timer.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal 
-I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_xmit.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > contrib/dev/ath/ath_hal/ar9300/ar9300_xmit_ds.c optional ath_hal | ath_ar9300 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal -I$S/contrib/dev/ath/ath_hal" > > # rf backends > dev/ath/ath_hal/ar5212/ar2316.c optional ath_rf2316 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5212/ar2317.c optional ath_rf2317 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5212/ar2413.c optional ath_hal | ath_rf2413 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5212/ar2425.c optional ath_hal | ath_rf2425 | ath_rf2417 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5212/ar5111.c optional ath_hal | ath_rf5111 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5212/ar5112.c optional ath_hal | ath_rf5112 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5212/ar5413.c optional ath_hal | ath_rf5413 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar5416/ar2133.c optional ath_hal | ath_ar5416 | \ > ath_ar9130 | ath_ar9160 | ath_ar9280 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar9002/ar9280.c optional ath_hal | ath_ar9280 | ath_ar9285 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar9002/ar9285.c optional ath_hal | ath_ar9285 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > dev/ath/ath_hal/ar9002/ar9287.c optional ath_hal | ath_ar9287 \ > compile-with "${NORMAL_C} -I$S/dev/ath -I$S/dev/ath/ath_hal" > > # ath rate control algorithms > dev/ath/ath_rate/amrr/amrr.c optional ath_rate_amrr \ > compile-with "${NORMAL_C} -I$S/dev/ath" > 
dev/ath/ath_rate/onoe/onoe.c optional ath_rate_onoe \ > compile-with "${NORMAL_C} -I$S/dev/ath" > dev/ath/ath_rate/sample/sample.c optional ath_rate_sample \ > compile-with "${NORMAL_C} -I$S/dev/ath" > # ath DFS modules > dev/ath/ath_dfs/null/dfs_null.c optional ath \ > compile-with "${NORMAL_C} -I$S/dev/ath" > # > dev/bce/if_bce.c optional bce > dev/bfe/if_bfe.c optional bfe > dev/bge/if_bge.c optional bge > dev/bktr/bktr_audio.c optional bktr pci > dev/bktr/bktr_card.c optional bktr pci > dev/bktr/bktr_core.c optional bktr pci > dev/bktr/bktr_i2c.c optional bktr pci smbus > dev/bktr/bktr_os.c optional bktr pci > dev/bktr/bktr_tuner.c optional bktr pci > dev/bktr/msp34xx.c optional bktr pci > dev/buslogic/bt.c optional bt > dev/buslogic/bt_eisa.c optional bt eisa > dev/buslogic/bt_isa.c optional bt isa > dev/buslogic/bt_mca.c optional bt mca > dev/buslogic/bt_pci.c optional bt pci > dev/bwi/bwimac.c optional bwi > dev/bwi/bwiphy.c optional bwi > dev/bwi/bwirf.c optional bwi > dev/bwi/if_bwi.c optional bwi > dev/bwi/if_bwi_pci.c optional bwi pci > # XXX Work around clang warning, until maintainer approves fix. 
> dev/bwn/if_bwn.c optional bwn siba_bwn \ > compile-with "${NORMAL_C} ${NO_WSOMETIMES_UNINITIALIZED}" > dev/cardbus/cardbus.c optional cardbus > dev/cardbus/cardbus_cis.c optional cardbus > dev/cardbus/cardbus_device.c optional cardbus > dev/cas/if_cas.c optional cas > dev/cfi/cfi_bus_fdt.c optional cfi fdt > dev/cfi/cfi_bus_nexus.c optional cfi > dev/cfi/cfi_core.c optional cfi > dev/cfi/cfi_dev.c optional cfi > dev/cfi/cfi_disk.c optional cfid > dev/ciss/ciss.c optional ciss > dev/cm/smc90cx6.c optional cm > dev/cmx/cmx.c optional cmx > dev/cmx/cmx_pccard.c optional cmx pccard > dev/cpufreq/ichss.c optional cpufreq > dev/cs/if_cs.c optional cs > dev/cs/if_cs_isa.c optional cs isa > dev/cs/if_cs_pccard.c optional cs pccard > dev/cxgb/cxgb_main.c optional cxgb pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgb" > dev/cxgb/cxgb_sge.c optional cxgb pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgb" > dev/cxgb/common/cxgb_mc5.c optional cxgb pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgb" > dev/cxgb/common/cxgb_vsc7323.c optional cxgb pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgb" > dev/cxgb/common/cxgb_vsc8211.c optional cxgb pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgb" > dev/cxgb/common/cxgb_ael1002.c optional cxgb pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgb" > dev/cxgb/common/cxgb_aq100x.c optional cxgb pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgb" > dev/cxgb/common/cxgb_mv88e1xxx.c optional cxgb pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgb" > dev/cxgb/common/cxgb_xgmac.c optional cxgb pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgb" > dev/cxgb/common/cxgb_t3_hw.c optional cxgb pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgb" > dev/cxgb/common/cxgb_tn1010.c optional cxgb pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgb" > dev/cxgb/sys/uipc_mvec.c optional cxgb pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgb" > dev/cxgb/cxgb_t3fw.c optional cxgb cxgb_t3fw \ > compile-with "${NORMAL_C} -I$S/dev/cxgb" > dev/cxgbe/t4_main.c optional cxgbe pci \ > 
compile-with "${NORMAL_C} -I$S/dev/cxgbe" > dev/cxgbe/t4_netmap.c optional cxgbe pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgbe" > dev/cxgbe/t4_sge.c optional cxgbe pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgbe" > dev/cxgbe/t4_l2t.c optional cxgbe pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgbe" > dev/cxgbe/t4_tracer.c optional cxgbe pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgbe" > dev/cxgbe/common/t4_hw.c optional cxgbe pci \ > compile-with "${NORMAL_C} -I$S/dev/cxgbe" > t4fw_cfg.c optional cxgbe \ > compile-with "${AWK} -f $S/tools/fw_stub.awk t4fw_cfg.fw:t4fw_cfg t4fw_cfg_uwire.fw:t4fw_cfg_uwire t4fw.fw:t4fw -mt4fw_cfg -c${.TARGET}" \ > no-implicit-rule before-depend local \ > clean "t4fw_cfg.c" > t4fw_cfg.fwo optional cxgbe \ > dependency "t4fw_cfg.fw" \ > compile-with "${NORMAL_FWO}" \ > no-implicit-rule \ > clean "t4fw_cfg.fwo" > t4fw_cfg.fw optional cxgbe \ > dependency "$S/dev/cxgbe/firmware/t4fw_cfg.txt" \ > compile-with "${CP} ${.ALLSRC} ${.TARGET}" \ > no-obj no-implicit-rule \ > clean "t4fw_cfg.fw" > t4fw_cfg_uwire.fwo optional cxgbe \ > dependency "t4fw_cfg_uwire.fw" \ > compile-with "${NORMAL_FWO}" \ > no-implicit-rule \ > clean "t4fw_cfg_uwire.fwo" > t4fw_cfg_uwire.fw optional cxgbe \ > dependency "$S/dev/cxgbe/firmware/t4fw_cfg_uwire.txt" \ > compile-with "${CP} ${.ALLSRC} ${.TARGET}" \ > no-obj no-implicit-rule \ > clean "t4fw_cfg_uwire.fw" > t4fw.fwo optional cxgbe \ > dependency "t4fw.fw" \ > compile-with "${NORMAL_FWO}" \ > no-implicit-rule \ > clean "t4fw.fwo" > t4fw.fw optional cxgbe \ > dependency "$S/dev/cxgbe/firmware/t4fw-1.11.27.0.bin.uu" \ > compile-with "${NORMAL_FW}" \ > no-obj no-implicit-rule \ > clean "t4fw.fw" > t5fw_cfg.c optional cxgbe \ > compile-with "${AWK} -f $S/tools/fw_stub.awk t5fw_cfg.fw:t5fw_cfg t5fw.fw:t5fw -mt5fw_cfg -c${.TARGET}" \ > no-implicit-rule before-depend local \ > clean "t5fw_cfg.c" > t5fw_cfg.fwo optional cxgbe \ > dependency "t5fw_cfg.fw" \ > compile-with "${NORMAL_FWO}" \ > no-implicit-rule \ > 
clean "t5fw_cfg.fwo" > t5fw_cfg.fw optional cxgbe \ > dependency "$S/dev/cxgbe/firmware/t5fw_cfg.txt" \ > compile-with "${CP} ${.ALLSRC} ${.TARGET}" \ > no-obj no-implicit-rule \ > clean "t5fw_cfg.fw" > t5fw.fwo optional cxgbe \ > dependency "t5fw.fw" \ > compile-with "${NORMAL_FWO}" \ > no-implicit-rule \ > clean "t5fw.fwo" > t5fw.fw optional cxgbe \ > dependency "$S/dev/cxgbe/firmware/t5fw-1.11.27.0.bin.uu" \ > compile-with "${NORMAL_FW}" \ > no-obj no-implicit-rule \ > clean "t5fw.fw" > dev/cy/cy.c optional cy > dev/cy/cy_isa.c optional cy isa > dev/cy/cy_pci.c optional cy pci > dev/dc/if_dc.c optional dc pci > dev/dc/dcphy.c optional dc pci > dev/dc/pnphy.c optional dc pci > dev/dcons/dcons.c optional dcons > dev/dcons/dcons_crom.c optional dcons_crom > dev/dcons/dcons_os.c optional dcons > dev/de/if_de.c optional de pci > dev/digi/CX.c optional digi_CX > dev/digi/CX_PCI.c optional digi_CX_PCI > dev/digi/EPCX.c optional digi_EPCX > dev/digi/EPCX_PCI.c optional digi_EPCX_PCI > dev/digi/Xe.c optional digi_Xe > dev/digi/Xem.c optional digi_Xem > dev/digi/Xr.c optional digi_Xr > dev/digi/digi.c optional digi > dev/digi/digi_isa.c optional digi isa > dev/digi/digi_pci.c optional digi pci > dev/dpt/dpt_eisa.c optional dpt eisa > dev/dpt/dpt_pci.c optional dpt pci > dev/dpt/dpt_scsi.c optional dpt > dev/drm/ati_pcigart.c optional drm > dev/drm/drm_agpsupport.c optional drm > dev/drm/drm_auth.c optional drm > dev/drm/drm_bufs.c optional drm > dev/drm/drm_context.c optional drm > dev/drm/drm_dma.c optional drm > dev/drm/drm_drawable.c optional drm > dev/drm/drm_drv.c optional drm > dev/drm/drm_fops.c optional drm > dev/drm/drm_hashtab.c optional drm > dev/drm/drm_ioctl.c optional drm > dev/drm/drm_irq.c optional drm > dev/drm/drm_lock.c optional drm > dev/drm/drm_memory.c optional drm > dev/drm/drm_mm.c optional drm > dev/drm/drm_pci.c optional drm > dev/drm/drm_scatter.c optional drm > dev/drm/drm_sman.c optional drm > dev/drm/drm_sysctl.c optional drm > 
dev/drm/drm_vm.c optional drm > dev/drm/i915_dma.c optional i915drm > dev/drm/i915_drv.c optional i915drm > dev/drm/i915_irq.c optional i915drm > dev/drm/i915_mem.c optional i915drm > dev/drm/i915_suspend.c optional i915drm > dev/drm/mach64_dma.c optional mach64drm > dev/drm/mach64_drv.c optional mach64drm > dev/drm/mach64_irq.c optional mach64drm > dev/drm/mach64_state.c optional mach64drm > dev/drm/mga_dma.c optional mgadrm > dev/drm/mga_drv.c optional mgadrm > dev/drm/mga_irq.c optional mgadrm > dev/drm/mga_state.c optional mgadrm > dev/drm/mga_warp.c optional mgadrm > dev/drm/r128_cce.c optional r128drm \ > compile-with "${NORMAL_C} ${NO_WUNUSED_VALUE} ${NO_WCONSTANT_CONVERSION}" > dev/drm/r128_drv.c optional r128drm > dev/drm/r128_irq.c optional r128drm > dev/drm/r128_state.c optional r128drm \ > compile-with "${NORMAL_C} ${NO_WUNUSED_VALUE}" > dev/drm/r300_cmdbuf.c optional radeondrm > dev/drm/r600_blit.c optional radeondrm > dev/drm/r600_cp.c optional radeondrm \ > compile-with "${NORMAL_C} ${NO_WUNUSED_VALUE} ${NO_WCONSTANT_CONVERSION}" > dev/drm/radeon_cp.c optional radeondrm \ > compile-with "${NORMAL_C} ${NO_WUNUSED_VALUE} ${NO_WCONSTANT_CONVERSION}" > dev/drm/radeon_cs.c optional radeondrm > dev/drm/radeon_drv.c optional radeondrm > dev/drm/radeon_irq.c optional radeondrm > dev/drm/radeon_mem.c optional radeondrm > dev/drm/radeon_state.c optional radeondrm > dev/drm/savage_bci.c optional savagedrm > dev/drm/savage_drv.c optional savagedrm > dev/drm/savage_state.c optional savagedrm > dev/drm/sis_drv.c optional sisdrm > dev/drm/sis_ds.c optional sisdrm > dev/drm/sis_mm.c optional sisdrm > dev/drm/tdfx_drv.c optional tdfxdrm > dev/drm/via_dma.c optional viadrm > dev/drm/via_dmablit.c optional viadrm > dev/drm/via_drv.c optional viadrm > dev/drm/via_irq.c optional viadrm > dev/drm/via_map.c optional viadrm > dev/drm/via_mm.c optional viadrm > dev/drm/via_verifier.c optional viadrm > dev/drm/via_video.c optional viadrm > dev/ed/if_ed.c optional ed > 
dev/ed/if_ed_novell.c optional ed > dev/ed/if_ed_rtl80x9.c optional ed > dev/ed/if_ed_pccard.c optional ed pccard > dev/ed/if_ed_pci.c optional ed pci > dev/eisa/eisa_if.m standard > dev/eisa/eisaconf.c optional eisa > dev/e1000/if_em.c optional em \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/if_lem.c optional em \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/if_igb.c optional igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_80003es2lan.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_82540.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_82541.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_82542.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_82543.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_82571.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_82575.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_ich8lan.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_i210.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_api.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_mac.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_manage.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_nvm.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_phy.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_vf.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_mbx.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/e1000/e1000_osdep.c optional em | igb \ > compile-with "${NORMAL_C} -I$S/dev/e1000" > dev/et/if_et.c 
optional et > dev/en/if_en_pci.c optional en pci > dev/en/midway.c optional en > dev/ep/if_ep.c optional ep > dev/ep/if_ep_eisa.c optional ep eisa > dev/ep/if_ep_isa.c optional ep isa > dev/ep/if_ep_mca.c optional ep mca > dev/ep/if_ep_pccard.c optional ep pccard > dev/esp/esp_pci.c optional esp pci > dev/esp/ncr53c9x.c optional esp > dev/etherswitch/arswitch/arswitch.c optional arswitch > dev/etherswitch/arswitch/arswitch_reg.c optional arswitch > dev/etherswitch/arswitch/arswitch_phy.c optional arswitch > dev/etherswitch/arswitch/arswitch_8216.c optional arswitch > dev/etherswitch/arswitch/arswitch_8226.c optional arswitch > dev/etherswitch/arswitch/arswitch_8316.c optional arswitch > dev/etherswitch/arswitch/arswitch_8327.c optional arswitch > dev/etherswitch/arswitch/arswitch_7240.c optional arswitch > dev/etherswitch/arswitch/arswitch_9340.c optional arswitch > dev/etherswitch/arswitch/arswitch_vlans.c optional arswitch > dev/etherswitch/etherswitch.c optional etherswitch > dev/etherswitch/etherswitch_if.m optional etherswitch > dev/etherswitch/ip17x/ip17x.c optional ip17x > dev/etherswitch/ip17x/ip175c.c optional ip17x > dev/etherswitch/ip17x/ip175d.c optional ip17x > dev/etherswitch/ip17x/ip17x_phy.c optional ip17x > dev/etherswitch/ip17x/ip17x_vlans.c optional ip17x > dev/etherswitch/mdio_if.m optional miiproxy > dev/etherswitch/mdio.c optional miiproxy > dev/etherswitch/miiproxy.c optional miiproxy > dev/etherswitch/rtl8366/rtl8366rb.c optional rtl8366rb > dev/etherswitch/ukswitch/ukswitch.c optional ukswitch > dev/ex/if_ex.c optional ex > dev/ex/if_ex_isa.c optional ex isa > dev/ex/if_ex_pccard.c optional ex pccard > dev/exca/exca.c optional cbb > dev/fatm/if_fatm.c optional fatm pci > dev/fb/fbd.c optional fbd | vt > dev/fb/fb_if.m standard > dev/fb/splash.c optional sc splash > dev/fdt/fdt_clock.c optional fdt fdt_clock > dev/fdt/fdt_clock_if.m optional fdt fdt_clock > dev/fdt/fdt_common.c optional fdt > dev/fdt/fdt_pinctrl.c optional fdt fdt_pinctrl > 
dev/fdt/fdt_pinctrl_if.m optional fdt fdt_pinctrl > dev/fdt/fdt_slicer.c optional fdt cfi | fdt nand > dev/fdt/fdt_static_dtb.S optional fdt fdt_dtb_static \ > dependency "$S/boot/fdt/dts/${MACHINE}/${FDT_DTS_FILE}" > dev/fdt/simplebus.c optional fdt > dev/fe/if_fe.c optional fe > dev/fe/if_fe_pccard.c optional fe pccard > dev/filemon/filemon.c optional filemon > dev/firewire/firewire.c optional firewire > dev/firewire/fwcrom.c optional firewire > dev/firewire/fwdev.c optional firewire > dev/firewire/fwdma.c optional firewire > dev/firewire/fwmem.c optional firewire > dev/firewire/fwohci.c optional firewire > dev/firewire/fwohci_pci.c optional firewire pci > dev/firewire/if_fwe.c optional fwe > dev/firewire/if_fwip.c optional fwip > dev/firewire/sbp.c optional sbp > dev/firewire/sbp_targ.c optional sbp_targ > dev/flash/at45d.c optional at45d > dev/flash/mx25l.c optional mx25l > dev/fxp/if_fxp.c optional fxp > dev/fxp/inphy.c optional fxp > dev/gem/if_gem.c optional gem > dev/gem/if_gem_pci.c optional gem pci > dev/gem/if_gem_sbus.c optional gem sbus > dev/gpio/gpiobus.c optional gpio \ > dependency "gpiobus_if.h" > dev/gpio/gpioc.c optional gpio \ > dependency "gpio_if.h" > dev/gpio/gpioiic.c optional gpioiic > dev/gpio/gpioled.c optional gpioled > dev/gpio/gpio_if.m optional gpio > dev/gpio/gpiobus_if.m optional gpio > dev/gpio/ofw_gpiobus.c optional fdt gpio > dev/hatm/if_hatm.c optional hatm pci > dev/hatm/if_hatm_intr.c optional hatm pci > dev/hatm/if_hatm_ioctl.c optional hatm pci > dev/hatm/if_hatm_rx.c optional hatm pci > dev/hatm/if_hatm_tx.c optional hatm pci > dev/hifn/hifn7751.c optional hifn > dev/hme/if_hme.c optional hme > dev/hme/if_hme_pci.c optional hme pci > dev/hme/if_hme_sbus.c optional hme sbus > dev/hptiop/hptiop.c optional hptiop scbus > dev/hwpmc/hwpmc_logging.c optional hwpmc > dev/hwpmc/hwpmc_mod.c optional hwpmc > dev/hwpmc/hwpmc_soft.c optional hwpmc > dev/ichsmb/ichsmb.c optional ichsmb > dev/ichsmb/ichsmb_pci.c optional ichsmb pci > 
> dev/ida/ida.c optional ida
> dev/ida/ida_disk.c optional ida
> dev/ida/ida_eisa.c optional ida eisa
> dev/ida/ida_pci.c optional ida pci
> dev/ie/if_ie.c optional ie isa nowerror
> dev/ie/if_ie_isa.c optional ie isa
> dev/ieee488/ibfoo.c optional pcii | tnt4882
> dev/ieee488/pcii.c optional pcii
> dev/ieee488/tnt4882.c optional tnt4882
> dev/ieee488/upd7210.c optional pcii | tnt4882
> dev/iicbus/ad7418.c optional ad7418
> dev/iicbus/ds133x.c optional ds133x
> dev/iicbus/ds1374.c optional ds1374
> dev/iicbus/ds1672.c optional ds1672
> dev/iicbus/icee.c optional icee
> dev/iicbus/if_ic.c optional ic
> dev/iicbus/iic.c optional iic
> dev/iicbus/iicbb.c optional iicbb
> dev/iicbus/iicbb_if.m optional iicbb
> dev/iicbus/iicbus.c optional iicbus
> dev/iicbus/iicbus_if.m optional iicbus
> dev/iicbus/iiconf.c optional iicbus
> dev/iicbus/iicsmb.c optional iicsmb \
> 	dependency "iicbus_if.h"
> dev/iicbus/iicoc.c optional iicoc
> dev/iicbus/lm75.c optional lm75
> dev/iicbus/pcf8563.c optional pcf8563
> dev/iicbus/s35390a.c optional s35390a
> dev/iir/iir.c optional iir
> dev/iir/iir_ctrl.c optional iir
> dev/iir/iir_pci.c optional iir pci
> dev/intpm/intpm.c optional intpm pci
> # XXX Work around clang warning, until maintainer approves fix.
> dev/ips/ips.c optional ips \
> 	compile-with "${NORMAL_C} ${NO_WSOMETIMES_UNINITIALIZED}"
> dev/ips/ips_commands.c optional ips
> dev/ips/ips_disk.c optional ips
> dev/ips/ips_ioctl.c optional ips
> dev/ips/ips_pci.c optional ips pci
> dev/ipw/if_ipw.c optional ipw
> ipwbssfw.c optional ipwbssfw | ipwfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk ipw_bss.fw:ipw_bss:130 -lintel_ipw -mipw_bss -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "ipwbssfw.c"
> ipw_bss.fwo optional ipwbssfw | ipwfw \
> 	dependency "ipw_bss.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "ipw_bss.fwo"
> ipw_bss.fw optional ipwbssfw | ipwfw \
> 	dependency "$S/contrib/dev/ipw/ipw2100-1.3.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "ipw_bss.fw"
> ipwibssfw.c optional ipwibssfw | ipwfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk ipw_ibss.fw:ipw_ibss:130 -lintel_ipw -mipw_ibss -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "ipwibssfw.c"
> ipw_ibss.fwo optional ipwibssfw | ipwfw \
> 	dependency "ipw_ibss.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "ipw_ibss.fwo"
> ipw_ibss.fw optional ipwibssfw | ipwfw \
> 	dependency "$S/contrib/dev/ipw/ipw2100-1.3-i.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "ipw_ibss.fw"
> ipwmonitorfw.c optional ipwmonitorfw | ipwfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk ipw_monitor.fw:ipw_monitor:130 -lintel_ipw -mipw_monitor -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "ipwmonitorfw.c"
> ipw_monitor.fwo optional ipwmonitorfw | ipwfw \
> 	dependency "ipw_monitor.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "ipw_monitor.fwo"
> ipw_monitor.fw optional ipwmonitorfw | ipwfw \
> 	dependency "$S/contrib/dev/ipw/ipw2100-1.3-p.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "ipw_monitor.fw"
> dev/iscsi/icl.c optional iscsi | ctl
> dev/iscsi/icl_proxy.c optional iscsi | ctl
> dev/iscsi/iscsi.c optional iscsi scbus
> dev/iscsi_initiator/iscsi.c optional iscsi_initiator scbus
> dev/iscsi_initiator/iscsi_subr.c optional iscsi_initiator scbus
> dev/iscsi_initiator/isc_cam.c optional iscsi_initiator scbus
> dev/iscsi_initiator/isc_soc.c optional iscsi_initiator scbus
> dev/iscsi_initiator/isc_sm.c optional iscsi_initiator scbus
> dev/iscsi_initiator/isc_subr.c optional iscsi_initiator scbus
> dev/ismt/ismt.c optional ismt
> dev/isp/isp.c optional isp
> dev/isp/isp_freebsd.c optional isp
> dev/isp/isp_library.c optional isp
> dev/isp/isp_pci.c optional isp pci
> dev/isp/isp_sbus.c optional isp sbus
> dev/isp/isp_target.c optional isp
> dev/ispfw/ispfw.c optional ispfw
> dev/iwi/if_iwi.c optional iwi
> iwibssfw.c optional iwibssfw | iwifw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwi_bss.fw:iwi_bss:300 -lintel_iwi -miwi_bss -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwibssfw.c"
> iwi_bss.fwo optional iwibssfw | iwifw \
> 	dependency "iwi_bss.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwi_bss.fwo"
> iwi_bss.fw optional iwibssfw | iwifw \
> 	dependency "$S/contrib/dev/iwi/ipw2200-bss.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwi_bss.fw"
> iwiibssfw.c optional iwiibssfw | iwifw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwi_ibss.fw:iwi_ibss:300 -lintel_iwi -miwi_ibss -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwiibssfw.c"
> iwi_ibss.fwo optional iwiibssfw | iwifw \
> 	dependency "iwi_ibss.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwi_ibss.fwo"
> iwi_ibss.fw optional iwiibssfw | iwifw \
> 	dependency "$S/contrib/dev/iwi/ipw2200-ibss.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwi_ibss.fw"
> iwimonitorfw.c optional iwimonitorfw | iwifw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwi_monitor.fw:iwi_monitor:300 -lintel_iwi -miwi_monitor -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwimonitorfw.c"
> iwi_monitor.fwo optional iwimonitorfw | iwifw \
> 	dependency "iwi_monitor.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwi_monitor.fwo"
> iwi_monitor.fw optional iwimonitorfw | iwifw \
> 	dependency "$S/contrib/dev/iwi/ipw2200-sniffer.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwi_monitor.fw"
> dev/iwn/if_iwn.c optional iwn
> iwn1000fw.c optional iwn1000fw | iwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn1000.fw:iwn1000fw -miwn1000fw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwn1000fw.c"
> iwn1000fw.fwo optional iwn1000fw | iwnfw \
> 	dependency "iwn1000.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwn1000fw.fwo"
> iwn1000.fw optional iwn1000fw | iwnfw \
> 	dependency "$S/contrib/dev/iwn/iwlwifi-1000-39.31.5.1.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwn1000.fw"
> iwn100fw.c optional iwn100fw | iwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn100.fw:iwn100fw -miwn100fw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwn100fw.c"
> iwn100fw.fwo optional iwn100fw | iwnfw \
> 	dependency "iwn100.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwn100fw.fwo"
> iwn100.fw optional iwn100fw | iwnfw \
> 	dependency "$S/contrib/dev/iwn/iwlwifi-100-39.31.5.1.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwn100.fw"
> iwn105fw.c optional iwn105fw | iwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn105.fw:iwn105fw -miwn105fw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwn105fw.c"
> iwn105fw.fwo optional iwn105fw | iwnfw \
> 	dependency "iwn105.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwn105fw.fwo"
> iwn105.fw optional iwn105fw | iwnfw \
> 	dependency "$S/contrib/dev/iwn/iwlwifi-105-6-18.168.6.1.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwn105.fw"
> iwn135fw.c optional iwn135fw | iwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn135.fw:iwn135fw -miwn135fw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwn135fw.c"
> iwn135fw.fwo optional iwn135fw | iwnfw \
> 	dependency "iwn135.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwn135fw.fwo"
> iwn135.fw optional iwn135fw | iwnfw \
> 	dependency "$S/contrib/dev/iwn/iwlwifi-135-6-18.168.6.1.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwn135.fw"
> iwn2000fw.c optional iwn2000fw | iwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn2000.fw:iwn2000fw -miwn2000fw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwn2000fw.c"
> iwn2000fw.fwo optional iwn2000fw | iwnfw \
> 	dependency "iwn2000.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwn2000fw.fwo"
> iwn2000.fw optional iwn2000fw | iwnfw \
> 	dependency "$S/contrib/dev/iwn/iwlwifi-2000-18.168.6.1.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwn2000.fw"
> iwn2030fw.c optional iwn2030fw | iwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn2030.fw:iwn2030fw -miwn2030fw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwn2030fw.c"
> iwn2030fw.fwo optional iwn2030fw | iwnfw \
> 	dependency "iwn2030.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwn2030fw.fwo"
> iwn2030.fw optional iwn2030fw | iwnfw \
> 	dependency "$S/contrib/dev/iwn/iwnwifi-2030-18.168.6.1.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwn2030.fw"
> iwn4965fw.c optional iwn4965fw | iwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn4965.fw:iwn4965fw -miwn4965fw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwn4965fw.c"
> iwn4965fw.fwo optional iwn4965fw | iwnfw \
> 	dependency "iwn4965.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwn4965fw.fwo"
> iwn4965.fw optional iwn4965fw | iwnfw \
> 	dependency "$S/contrib/dev/iwn/iwlwifi-4965-228.61.2.24.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwn4965.fw"
> iwn5000fw.c optional iwn5000fw | iwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn5000.fw:iwn5000fw -miwn5000fw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwn5000fw.c"
> iwn5000fw.fwo optional iwn5000fw | iwnfw \
> 	dependency "iwn5000.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwn5000fw.fwo"
> iwn5000.fw optional iwn5000fw | iwnfw \
> 	dependency "$S/contrib/dev/iwn/iwlwifi-5000-8.83.5.1.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwn5000.fw"
> iwn5150fw.c optional iwn5150fw | iwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn5150.fw:iwn5150fw -miwn5150fw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwn5150fw.c"
> iwn5150fw.fwo optional iwn5150fw | iwnfw \
> 	dependency "iwn5150.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwn5150fw.fwo"
> iwn5150.fw optional iwn5150fw | iwnfw \
> 	dependency "$S/contrib/dev/iwn/iwlwifi-5150-8.24.2.2.fw.uu"\
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwn5150.fw"
> iwn6000fw.c optional iwn6000fw | iwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn6000.fw:iwn6000fw -miwn6000fw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwn6000fw.c"
> iwn6000fw.fwo optional iwn6000fw | iwnfw \
> 	dependency "iwn6000.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwn6000fw.fwo"
> iwn6000.fw optional iwn6000fw | iwnfw \
> 	dependency "$S/contrib/dev/iwn/iwlwifi-6000-9.221.4.1.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwn6000.fw"
> iwn6000g2afw.c optional iwn6000g2afw | iwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn6000g2a.fw:iwn6000g2afw -miwn6000g2afw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwn6000g2afw.c"
> iwn6000g2afw.fwo optional iwn6000g2afw | iwnfw \
> 	dependency "iwn6000g2a.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwn6000g2afw.fwo"
> iwn6000g2a.fw optional iwn6000g2afw | iwnfw \
> 	dependency "$S/contrib/dev/iwn/iwlwifi-6000g2a-17.168.5.2.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwn6000g2a.fw"
> iwn6000g2bfw.c optional iwn6000g2bfw | iwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn6000g2b.fw:iwn6000g2bfw -miwn6000g2bfw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwn6000g2bfw.c"
> iwn6000g2bfw.fwo optional iwn6000g2bfw | iwnfw \
> 	dependency "iwn6000g2b.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwn6000g2bfw.fwo"
> iwn6000g2b.fw optional iwn6000g2bfw | iwnfw \
> 	dependency "$S/contrib/dev/iwn/iwlwifi-6000g2b-17.168.5.2.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwn6000g2b.fw"
> iwn6050fw.c optional iwn6050fw | iwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk iwn6050.fw:iwn6050fw -miwn6050fw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "iwn6050fw.c"
> iwn6050fw.fwo optional iwn6050fw | iwnfw \
> 	dependency "iwn6050.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "iwn6050fw.fwo"
> iwn6050.fw optional iwn6050fw | iwnfw \
> 	dependency "$S/contrib/dev/iwn/iwlwifi-6050-41.28.5.1.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "iwn6050.fw"
> dev/ixgb/if_ixgb.c optional ixgb
> dev/ixgb/ixgb_ee.c optional ixgb
> dev/ixgb/ixgb_hw.c optional ixgb
> dev/ixgbe/ixgbe.c optional ixgbe inet \
> 	compile-with "${NORMAL_C} -I$S/dev/ixgbe -DSMP"
> dev/ixgbe/ixv.c optional ixgbe inet \
> 	compile-with "${NORMAL_C} -I$S/dev/ixgbe"
> dev/ixgbe/ixgbe_phy.c optional ixgbe inet \
> 	compile-with "${NORMAL_C} -I$S/dev/ixgbe"
> dev/ixgbe/ixgbe_api.c optional ixgbe inet \
> 	compile-with "${NORMAL_C} -I$S/dev/ixgbe"
> dev/ixgbe/ixgbe_common.c optional ixgbe inet \
> 	compile-with "${NORMAL_C} -I$S/dev/ixgbe"
> dev/ixgbe/ixgbe_mbx.c optional ixgbe inet \
> 	compile-with "${NORMAL_C} -I$S/dev/ixgbe"
> dev/ixgbe/ixgbe_vf.c optional ixgbe inet \
> 	compile-with "${NORMAL_C} -I$S/dev/ixgbe"
> dev/ixgbe/ixgbe_82598.c optional ixgbe inet \
> 	compile-with "${NORMAL_C} -I$S/dev/ixgbe"
> dev/ixgbe/ixgbe_82599.c optional ixgbe inet \
> 	compile-with "${NORMAL_C} -I$S/dev/ixgbe"
> dev/ixgbe/ixgbe_x540.c optional ixgbe inet \
> 	compile-with "${NORMAL_C} -I$S/dev/ixgbe"
> dev/ixgbe/ixgbe_dcb.c optional ixgbe inet \
> 	compile-with "${NORMAL_C} -I$S/dev/ixgbe"
> dev/ixgbe/ixgbe_dcb_82598.c optional ixgbe inet \
> 	compile-with "${NORMAL_C} -I$S/dev/ixgbe"
> dev/ixgbe/ixgbe_dcb_82599.c optional ixgbe inet \
> 	compile-with "${NORMAL_C} -I$S/dev/ixgbe"
> dev/jme/if_jme.c optional jme pci
> dev/joy/joy.c optional joy
> dev/joy/joy_isa.c optional joy isa
> dev/joy/joy_pccard.c optional joy pccard
> dev/kbdmux/kbdmux.c optional kbdmux
> dev/ksyms/ksyms.c optional ksyms
> dev/le/am7990.c optional le
> dev/le/am79900.c optional le
> dev/le/if_le_pci.c optional le pci
> dev/le/lance.c optional le
> dev/led/led.c standard
> dev/lge/if_lge.c optional lge
> dev/lmc/if_lmc.c optional lmc
> dev/malo/if_malo.c optional malo
> dev/malo/if_malohal.c optional malo
> dev/malo/if_malo_pci.c optional malo pci
> dev/mc146818/mc146818.c optional mc146818
> dev/mca/mca_bus.c optional mca
> dev/mcd/mcd.c optional mcd isa nowerror
> dev/mcd/mcd_isa.c optional mcd isa nowerror
> dev/md/md.c optional md
> dev/mem/memdev.c optional mem
> dev/mem/memutil.c optional mem
> dev/mfi/mfi.c optional mfi
> dev/mfi/mfi_debug.c optional mfi
> dev/mfi/mfi_pci.c optional mfi pci
> dev/mfi/mfi_disk.c optional mfi
> dev/mfi/mfi_syspd.c optional mfi
> dev/mfi/mfi_tbolt.c optional mfi
> dev/mfi/mfi_linux.c optional mfi compat_linux
> dev/mfi/mfi_cam.c optional mfip scbus
> dev/mii/acphy.c optional miibus | acphy
> dev/mii/amphy.c optional miibus | amphy
> dev/mii/atphy.c optional miibus | atphy
> dev/mii/axphy.c optional miibus | axphy
> dev/mii/bmtphy.c optional miibus | bmtphy
> dev/mii/brgphy.c optional miibus | brgphy
> dev/mii/ciphy.c optional miibus | ciphy
> dev/mii/e1000phy.c optional miibus | e1000phy
> dev/mii/gentbi.c optional miibus | gentbi
> dev/mii/icsphy.c optional miibus | icsphy
> dev/mii/ip1000phy.c optional miibus | ip1000phy
> dev/mii/jmphy.c optional miibus | jmphy
> dev/mii/lxtphy.c optional miibus | lxtphy
> dev/mii/mii.c optional miibus | mii
> dev/mii/mii_bitbang.c optional miibus | mii_bitbang
> dev/mii/mii_physubr.c optional miibus | mii
> dev/mii/miibus_if.m optional miibus | mii
> dev/mii/mlphy.c optional miibus | mlphy
> dev/mii/nsgphy.c optional miibus | nsgphy
> dev/mii/nsphy.c optional miibus | nsphy
> dev/mii/nsphyter.c optional miibus | nsphyter
> dev/mii/pnaphy.c optional miibus | pnaphy
> dev/mii/qsphy.c optional miibus | qsphy
> dev/mii/rdcphy.c optional miibus | rdcphy
> dev/mii/rgephy.c optional miibus | rgephy
> dev/mii/rlphy.c optional miibus | rlphy
> dev/mii/rlswitch.c optional rlswitch
> dev/mii/smcphy.c optional miibus | smcphy
> dev/mii/smscphy.c optional miibus | smscphy
> dev/mii/tdkphy.c optional miibus | tdkphy
> dev/mii/tlphy.c optional miibus | tlphy
> dev/mii/truephy.c optional miibus | truephy
> dev/mii/ukphy.c optional miibus | mii
> dev/mii/ukphy_subr.c optional miibus | mii
> dev/mii/xmphy.c optional miibus | xmphy
> dev/mk48txx/mk48txx.c optional mk48txx
> dev/mlx/mlx.c optional mlx
> dev/mlx/mlx_disk.c optional mlx
> dev/mlx/mlx_pci.c optional mlx pci
> dev/mly/mly.c optional mly
> dev/mmc/mmc.c optional mmc
> dev/mmc/mmcbr_if.m standard
> dev/mmc/mmcbus_if.m standard
> dev/mmc/mmcsd.c optional mmcsd
> dev/mn/if_mn.c optional mn pci
> dev/mpr/mpr.c optional mpr
> dev/mpr/mpr_config.c optional mpr
> # XXX Work around clang warning, until maintainer approves fix.
> dev/mpr/mpr_mapping.c optional mpr \
> 	compile-with "${NORMAL_C} ${NO_WSOMETIMES_UNINITIALIZED}"
> dev/mpr/mpr_pci.c optional mpr pci
> dev/mpr/mpr_sas.c optional mpr \
> 	compile-with "${NORMAL_C} ${NO_WUNNEEDED_INTERNAL_DECL}"
> dev/mpr/mpr_sas_lsi.c optional mpr
> dev/mpr/mpr_table.c optional mpr
> dev/mpr/mpr_user.c optional mpr
> dev/mps/mps.c optional mps
> dev/mps/mps_config.c optional mps
> # XXX Work around clang warning, until maintainer approves fix.
> dev/mps/mps_mapping.c optional mps \
> 	compile-with "${NORMAL_C} ${NO_WSOMETIMES_UNINITIALIZED}"
> dev/mps/mps_pci.c optional mps pci
> dev/mps/mps_sas.c optional mps \
> 	compile-with "${NORMAL_C} ${NO_WUNNEEDED_INTERNAL_DECL}"
> dev/mps/mps_sas_lsi.c optional mps
> dev/mps/mps_table.c optional mps
> dev/mps/mps_user.c optional mps
> dev/mpt/mpt.c optional mpt
> dev/mpt/mpt_cam.c optional mpt
> dev/mpt/mpt_debug.c optional mpt
> dev/mpt/mpt_pci.c optional mpt pci
> dev/mpt/mpt_raid.c optional mpt
> dev/mpt/mpt_user.c optional mpt
> dev/mrsas/mrsas.c optional mrsas
> dev/mrsas/mrsas_cam.c optional mrsas
> dev/mrsas/mrsas_ioctl.c optional mrsas
> dev/mrsas/mrsas_fp.c optional mrsas
> dev/msk/if_msk.c optional msk
> dev/mvs/mvs.c optional mvs
> dev/mvs/mvs_if.m optional mvs
> dev/mvs/mvs_pci.c optional mvs pci
> dev/mwl/if_mwl.c optional mwl
> dev/mwl/if_mwl_pci.c optional mwl pci
> dev/mwl/mwlhal.c optional mwl
> mwlfw.c optional mwlfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk mw88W8363.fw:mw88W8363fw mwlboot.fw:mwlboot -mmwl -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "mwlfw.c"
> mw88W8363.fwo optional mwlfw \
> 	dependency "mw88W8363.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "mw88W8363.fwo"
> mw88W8363.fw optional mwlfw \
> 	dependency "$S/contrib/dev/mwl/mw88W8363.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "mw88W8363.fw"
> mwlboot.fwo optional mwlfw \
> 	dependency "mwlboot.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "mwlboot.fwo"
> mwlboot.fw optional mwlfw \
> 	dependency "$S/contrib/dev/mwl/mwlboot.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "mwlboot.fw"
> dev/mxge/if_mxge.c optional mxge pci
> dev/mxge/mxge_eth_z8e.c optional mxge pci
> dev/mxge/mxge_ethp_z8e.c optional mxge pci
> dev/mxge/mxge_rss_eth_z8e.c optional mxge pci
> dev/mxge/mxge_rss_ethp_z8e.c optional mxge pci
> dev/my/if_my.c optional my
> dev/nand/nand.c optional nand
> dev/nand/nand_bbt.c optional nand
> dev/nand/nand_cdev.c optional nand
> dev/nand/nand_generic.c optional nand
> dev/nand/nand_geom.c optional nand
> dev/nand/nand_id.c optional nand
> dev/nand/nandbus.c optional nand
> dev/nand/nandbus_if.m optional nand
> dev/nand/nand_if.m optional nand
> dev/nand/nandsim.c optional nandsim nand
> dev/nand/nandsim_chip.c optional nandsim nand
> dev/nand/nandsim_ctrl.c optional nandsim nand
> dev/nand/nandsim_log.c optional nandsim nand
> dev/nand/nandsim_swap.c optional nandsim nand
> dev/nand/nfc_if.m optional nand
> dev/ncr/ncr.c optional ncr pci
> dev/ncv/ncr53c500.c optional ncv
> dev/ncv/ncr53c500_pccard.c optional ncv pccard
> dev/netmap/netmap.c optional netmap
> dev/netmap/netmap_freebsd.c optional netmap
> dev/netmap/netmap_generic.c optional netmap
> dev/netmap/netmap_mbq.c optional netmap
> dev/netmap/netmap_mem2.c optional netmap
> dev/netmap/netmap_monitor.c optional netmap
> dev/netmap/netmap_offloadings.c optional netmap
> dev/netmap/netmap_pipe.c optional netmap
> dev/netmap/netmap_vale.c optional netmap
> #	compile-with "${NORMAL_C} -Wconversion -Wextra"
> dev/nfsmb/nfsmb.c optional nfsmb pci
> dev/nge/if_nge.c optional nge
> dev/nxge/if_nxge.c optional nxge \
> 	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
> dev/nxge/xgehal/xgehal-device.c optional nxge \
> 	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
> dev/nxge/xgehal/xgehal-mm.c optional nxge
> dev/nxge/xgehal/xge-queue.c optional nxge
> dev/nxge/xgehal/xgehal-driver.c optional nxge \
> 	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
> dev/nxge/xgehal/xgehal-ring.c optional nxge \
> 	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
> dev/nxge/xgehal/xgehal-channel.c optional nxge \
> 	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
> dev/nxge/xgehal/xgehal-fifo.c optional nxge \
> 	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
> dev/nxge/xgehal/xgehal-stats.c optional nxge \
> 	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
> dev/nxge/xgehal/xgehal-config.c optional nxge
> dev/nxge/xgehal/xgehal-mgmt.c optional nxge \
> 	compile-with "${NORMAL_C} ${NO_WSELF_ASSIGN}"
> dev/nmdm/nmdm.c optional nmdm
> dev/nsp/nsp.c optional nsp
> dev/nsp/nsp_pccard.c optional nsp pccard
> dev/null/null.c standard
> dev/oce/oce_hw.c optional oce pci
> dev/oce/oce_if.c optional oce pci
> dev/oce/oce_mbox.c optional oce pci
> dev/oce/oce_queue.c optional oce pci
> dev/oce/oce_sysctl.c optional oce pci
> dev/oce/oce_util.c optional oce pci
> dev/ofw/ofw_bus_if.m optional fdt
> dev/ofw/ofw_bus_subr.c optional fdt
> dev/ofw/ofw_fdt.c optional fdt
> dev/ofw/ofw_if.m optional fdt
> dev/ofw/ofw_iicbus.c optional fdt iicbus
> dev/ofw/ofwbus.c optional fdt
> dev/ofw/openfirm.c optional fdt
> dev/ofw/openfirmio.c optional fdt
> dev/patm/if_patm.c optional patm pci
> dev/patm/if_patm_attach.c optional patm pci
> dev/patm/if_patm_intr.c optional patm pci
> dev/patm/if_patm_ioctl.c optional patm pci
> dev/patm/if_patm_rtables.c optional patm pci
> dev/patm/if_patm_rx.c optional patm pci
> dev/patm/if_patm_tx.c optional patm pci
> dev/pbio/pbio.c optional pbio isa
> dev/pccard/card_if.m standard
> dev/pccard/pccard.c optional pccard
> dev/pccard/pccard_cis.c optional pccard
> dev/pccard/pccard_cis_quirks.c optional pccard
> dev/pccard/pccard_device.c optional pccard
> dev/pccard/power_if.m standard
> dev/pccbb/pccbb.c optional cbb
> dev/pccbb/pccbb_isa.c optional cbb isa
> dev/pccbb/pccbb_pci.c optional cbb pci
> dev/pcf/pcf.c optional pcf
> dev/pci/eisa_pci.c optional pci eisa
> dev/pci/fixup_pci.c optional pci
> dev/pci/hostb_pci.c optional pci
> dev/pci/ignore_pci.c optional pci
> dev/pci/isa_pci.c optional pci isa
> dev/pci/pci.c optional pci
> dev/pci/pci_if.m standard
> dev/pci/pci_pci.c optional pci
> dev/pci/pci_subr.c optional pci
> dev/pci/pci_user.c optional pci
> dev/pci/pcib_if.m standard
> dev/pci/pcib_support.c standard
> dev/pci/vga_pci.c optional pci
> dev/pcn/if_pcn.c optional pcn pci
> dev/pdq/if_fea.c optional fea eisa
> dev/pdq/if_fpa.c optional fpa pci
> dev/pdq/pdq.c optional nowerror fea eisa | fpa pci
> dev/pdq/pdq_ifsubr.c optional nowerror fea eisa | fpa pci
> dev/ppbus/if_plip.c optional plip
> dev/ppbus/immio.c optional vpo
> dev/ppbus/lpbb.c optional lpbb
> dev/ppbus/lpt.c optional lpt
> dev/ppbus/pcfclock.c optional pcfclock
> dev/ppbus/ppb_1284.c optional ppbus
> dev/ppbus/ppb_base.c optional ppbus
> dev/ppbus/ppb_msq.c optional ppbus
> dev/ppbus/ppbconf.c optional ppbus
> dev/ppbus/ppbus_if.m optional ppbus
> dev/ppbus/ppi.c optional ppi
> dev/ppbus/pps.c optional pps
> dev/ppbus/vpo.c optional vpo
> dev/ppbus/vpoio.c optional vpo
> dev/ppc/ppc.c optional ppc
> dev/ppc/ppc_acpi.c optional ppc acpi
> dev/ppc/ppc_isa.c optional ppc isa
> dev/ppc/ppc_pci.c optional ppc pci
> dev/ppc/ppc_puc.c optional ppc puc
> dev/pst/pst-iop.c optional pst
> dev/pst/pst-pci.c optional pst pci
> dev/pst/pst-raid.c optional pst
> dev/pty/pty.c optional pty
> dev/puc/puc.c optional puc
> dev/puc/puc_cfg.c optional puc
> dev/puc/puc_pccard.c optional puc pccard
> dev/puc/puc_pci.c optional puc pci
> dev/puc/pucdata.c optional puc pci
> dev/quicc/quicc_core.c optional quicc
> dev/ral/rt2560.c optional ral
> dev/ral/rt2661.c optional ral
> dev/ral/rt2860.c optional ral
> dev/ral/if_ral_pci.c optional ral pci
> rt2561fw.c optional rt2561fw | ralfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk rt2561.fw:rt2561fw -mrt2561 -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "rt2561fw.c"
> rt2561fw.fwo optional rt2561fw | ralfw \
> 	dependency "rt2561.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "rt2561fw.fwo"
> rt2561.fw optional rt2561fw | ralfw \
> 	dependency "$S/contrib/dev/ral/rt2561.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "rt2561.fw"
> rt2561sfw.c optional rt2561sfw | ralfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk rt2561s.fw:rt2561sfw -mrt2561s -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "rt2561sfw.c"
> rt2561sfw.fwo optional rt2561sfw | ralfw \
> 	dependency "rt2561s.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "rt2561sfw.fwo"
> rt2561s.fw optional rt2561sfw | ralfw \
> 	dependency "$S/contrib/dev/ral/rt2561s.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "rt2561s.fw"
> rt2661fw.c optional rt2661fw | ralfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk rt2661.fw:rt2661fw -mrt2661 -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "rt2661fw.c"
> rt2661fw.fwo optional rt2661fw | ralfw \
> 	dependency "rt2661.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "rt2661fw.fwo"
> rt2661.fw optional rt2661fw | ralfw \
> 	dependency "$S/contrib/dev/ral/rt2661.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "rt2661.fw"
> rt2860fw.c optional rt2860fw | ralfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk rt2860.fw:rt2860fw -mrt2860 -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "rt2860fw.c"
> rt2860fw.fwo optional rt2860fw | ralfw \
> 	dependency "rt2860.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "rt2860fw.fwo"
> rt2860.fw optional rt2860fw | ralfw \
> 	dependency "$S/contrib/dev/ral/rt2860.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "rt2860.fw"
> dev/random/randomdev.c standard
> dev/random/random_adaptors.c standard
> dev/random/dummy_rng.c standard
> dev/random/live_entropy_sources.c standard
> dev/random/random_harvestq.c standard
> dev/random/randomdev_soft.c optional random
> dev/random/yarrow.c optional random
> dev/random/fortuna.c optional random
> dev/random/hash.c optional random
> dev/rc/rc.c optional rc
> dev/re/if_re.c optional re
> dev/rl/if_rl.c optional rl pci
> dev/rndtest/rndtest.c optional rndtest
> dev/rp/rp.c optional rp
> dev/rp/rp_isa.c optional rp isa
> dev/rp/rp_pci.c optional rp pci
> dev/safe/safe.c optional safe
> dev/scc/scc_if.m optional scc
> dev/scc/scc_bfe_ebus.c optional scc ebus
> dev/scc/scc_bfe_quicc.c optional scc quicc
> dev/scc/scc_bfe_sbus.c optional scc fhc | scc sbus
> dev/scc/scc_core.c optional scc
> dev/scc/scc_dev_quicc.c optional scc quicc
> dev/scc/scc_dev_sab82532.c optional scc
> dev/scc/scc_dev_z8530.c optional scc
> dev/scd/scd.c optional scd isa
> dev/scd/scd_isa.c optional scd isa
> dev/sdhci/sdhci.c optional sdhci
> dev/sdhci/sdhci_if.m optional sdhci
> dev/sdhci/sdhci_pci.c optional sdhci pci
> dev/sf/if_sf.c optional sf pci
> dev/sge/if_sge.c optional sge pci
> dev/si/si.c optional si
> dev/si/si2_z280.c optional si
> dev/si/si3_t225.c optional si
> dev/si/si_eisa.c optional si eisa
> dev/si/si_isa.c optional si isa
> dev/si/si_pci.c optional si pci
> dev/siba/siba.c optional siba
> dev/siba/siba_bwn.c optional siba_bwn pci
> dev/siba/siba_cc.c optional siba
> dev/siba/siba_core.c optional siba | siba_bwn pci
> dev/siba/siba_pcib.c optional siba pci
> dev/siis/siis.c optional siis pci
> dev/sis/if_sis.c optional sis pci
> dev/sk/if_sk.c optional sk pci
> dev/smbus/smb.c optional smb
> dev/smbus/smbconf.c optional smbus
> dev/smbus/smbus.c optional smbus
> dev/smbus/smbus_if.m optional smbus
> dev/smc/if_smc.c optional smc
> dev/smc/if_smc_fdt.c optional smc fdt
> dev/sn/if_sn.c optional sn
> dev/sn/if_sn_isa.c optional sn isa
> dev/sn/if_sn_pccard.c optional sn pccard
> dev/snp/snp.c optional snp
> dev/sound/clone.c optional sound
> dev/sound/unit.c optional sound
> dev/sound/isa/ad1816.c optional snd_ad1816 isa
> dev/sound/isa/ess.c optional snd_ess isa
> dev/sound/isa/gusc.c optional snd_gusc isa
> dev/sound/isa/mss.c optional snd_mss isa
> dev/sound/isa/sb16.c optional snd_sb16 isa
> dev/sound/isa/sb8.c optional snd_sb8 isa
> dev/sound/isa/sbc.c optional snd_sbc isa
> dev/sound/isa/sndbuf_dma.c optional sound isa
> dev/sound/pci/als4000.c optional snd_als4000 pci
> dev/sound/pci/atiixp.c optional snd_atiixp pci
> dev/sound/pci/cmi.c optional snd_cmi pci
> dev/sound/pci/cs4281.c optional snd_cs4281 pci
> dev/sound/pci/csa.c optional snd_csa pci
> dev/sound/pci/csapcm.c optional snd_csa pci
> dev/sound/pci/ds1.c optional snd_ds1 pci
> dev/sound/pci/emu10k1.c optional snd_emu10k1 pci
> dev/sound/pci/emu10kx.c optional snd_emu10kx pci
> dev/sound/pci/emu10kx-pcm.c optional snd_emu10kx pci
> dev/sound/pci/emu10kx-midi.c optional snd_emu10kx pci
> dev/sound/pci/envy24.c optional snd_envy24 pci
> dev/sound/pci/envy24ht.c optional snd_envy24ht pci
> dev/sound/pci/es137x.c optional snd_es137x pci
> dev/sound/pci/fm801.c optional snd_fm801 pci
> dev/sound/pci/ich.c optional snd_ich pci
> dev/sound/pci/maestro.c optional snd_maestro pci
> dev/sound/pci/maestro3.c optional snd_maestro3 pci
> dev/sound/pci/neomagic.c optional snd_neomagic pci
> dev/sound/pci/solo.c optional snd_solo pci
> dev/sound/pci/spicds.c optional snd_spicds pci
> dev/sound/pci/t4dwave.c optional snd_t4dwave pci
> dev/sound/pci/via8233.c optional snd_via8233 pci
> dev/sound/pci/via82c686.c optional snd_via82c686 pci
> dev/sound/pci/vibes.c optional snd_vibes pci
> dev/sound/pci/hda/hdaa.c optional snd_hda pci
> dev/sound/pci/hda/hdaa_patches.c optional snd_hda pci
> dev/sound/pci/hda/hdac.c optional snd_hda pci
> dev/sound/pci/hda/hdac_if.m optional snd_hda pci
> dev/sound/pci/hda/hdacc.c optional snd_hda pci
> dev/sound/pci/hdspe.c optional snd_hdspe pci
> dev/sound/pci/hdspe-pcm.c optional snd_hdspe pci
> dev/sound/pcm/ac97.c optional sound
> dev/sound/pcm/ac97_if.m optional sound
> dev/sound/pcm/ac97_patch.c optional sound
> dev/sound/pcm/buffer.c optional sound \
> 	dependency "snd_fxdiv_gen.h"
> dev/sound/pcm/channel.c optional sound
> dev/sound/pcm/channel_if.m optional sound
> dev/sound/pcm/dsp.c optional sound
> dev/sound/pcm/feeder.c optional sound
> dev/sound/pcm/feeder_chain.c optional sound
> dev/sound/pcm/feeder_eq.c optional sound \
> 	dependency "feeder_eq_gen.h" \
> 	dependency "snd_fxdiv_gen.h"
> dev/sound/pcm/feeder_if.m optional sound
> dev/sound/pcm/feeder_format.c optional sound \
> 	dependency "snd_fxdiv_gen.h"
> dev/sound/pcm/feeder_matrix.c optional sound \
> 	dependency "snd_fxdiv_gen.h"
> dev/sound/pcm/feeder_mixer.c optional sound \
> 	dependency "snd_fxdiv_gen.h"
> dev/sound/pcm/feeder_rate.c optional sound \
> 	dependency "feeder_rate_gen.h" \
> 	dependency "snd_fxdiv_gen.h"
> dev/sound/pcm/feeder_volume.c optional sound \
> 	dependency "snd_fxdiv_gen.h"
> dev/sound/pcm/mixer.c optional sound
> dev/sound/pcm/mixer_if.m optional sound
> dev/sound/pcm/sndstat.c optional sound
> dev/sound/pcm/sound.c optional sound
> dev/sound/pcm/vchan.c optional sound
> dev/sound/usb/uaudio.c optional snd_uaudio usb
> dev/sound/usb/uaudio_pcm.c optional snd_uaudio usb
> dev/sound/midi/midi.c optional sound
> dev/sound/midi/mpu401.c optional sound
> dev/sound/midi/mpu_if.m optional sound
> dev/sound/midi/mpufoi_if.m optional sound
> dev/sound/midi/sequencer.c optional sound
> dev/sound/midi/synth_if.m optional sound
> dev/spibus/ofw_spibus.c optional fdt spibus
> dev/spibus/spibus.c optional spibus \
> 	dependency "spibus_if.h"
> dev/spibus/spibus_if.m optional spibus
> dev/ste/if_ste.c optional ste pci
> dev/stg/tmc18c30.c optional stg
> dev/stg/tmc18c30_isa.c optional stg isa
> dev/stg/tmc18c30_pccard.c optional stg pccard
> dev/stg/tmc18c30_pci.c optional stg pci
> dev/stg/tmc18c30_subr.c optional stg
> dev/stge/if_stge.c optional stge
> dev/streams/streams.c optional streams
> dev/sym/sym_hipd.c optional sym \
> 	dependency "$S/dev/sym/sym_{conf,defs}.h"
> dev/syscons/blank/blank_saver.c optional blank_saver
> dev/syscons/daemon/daemon_saver.c optional daemon_saver
> dev/syscons/dragon/dragon_saver.c optional dragon_saver
> dev/syscons/fade/fade_saver.c optional fade_saver
> dev/syscons/fire/fire_saver.c optional fire_saver
> dev/syscons/green/green_saver.c optional green_saver
> dev/syscons/logo/logo.c optional logo_saver
> dev/syscons/logo/logo_saver.c optional logo_saver
> dev/syscons/rain/rain_saver.c optional rain_saver
> dev/syscons/schistory.c optional sc
> dev/syscons/scmouse.c optional sc
> dev/syscons/scterm.c optional sc
> dev/syscons/scvidctl.c optional sc
> dev/syscons/snake/snake_saver.c optional snake_saver
> dev/syscons/star/star_saver.c optional star_saver
> dev/syscons/syscons.c optional sc
> dev/syscons/sysmouse.c optional sc
> dev/syscons/warp/warp_saver.c optional warp_saver
> dev/tdfx/tdfx_linux.c optional tdfx_linux tdfx compat_linux
> dev/tdfx/tdfx_pci.c optional tdfx pci
> dev/ti/if_ti.c optional ti pci
> dev/tl/if_tl.c optional tl pci
> dev/trm/trm.c optional trm
> dev/twa/tw_cl_init.c optional twa \
> 	compile-with "${NORMAL_C} -I$S/dev/twa"
> dev/twa/tw_cl_intr.c optional twa \
> 	compile-with "${NORMAL_C} -I$S/dev/twa"
> dev/twa/tw_cl_io.c optional twa \
> 	compile-with "${NORMAL_C} -I$S/dev/twa"
> dev/twa/tw_cl_misc.c optional twa \
> 	compile-with "${NORMAL_C} -I$S/dev/twa"
> dev/twa/tw_osl_cam.c optional twa \
> 	compile-with "${NORMAL_C} -I$S/dev/twa"
> dev/twa/tw_osl_freebsd.c optional twa \
> 	compile-with "${NORMAL_C} -I$S/dev/twa"
> dev/twe/twe.c optional twe
> dev/twe/twe_freebsd.c optional twe
> dev/tws/tws.c optional tws
> dev/tws/tws_cam.c optional tws
> dev/tws/tws_hdm.c optional tws
> dev/tws/tws_services.c optional tws
> dev/tws/tws_user.c optional tws
> dev/tx/if_tx.c optional tx
> dev/txp/if_txp.c optional txp
> dev/uart/uart_bus_acpi.c optional uart acpi
> #dev/uart/uart_bus_cbus.c optional uart cbus
> dev/uart/uart_bus_ebus.c optional uart ebus
> dev/uart/uart_bus_fdt.c optional uart fdt
> dev/uart/uart_bus_isa.c optional uart isa
> dev/uart/uart_bus_pccard.c optional uart pccard
> dev/uart/uart_bus_pci.c optional uart pci
> dev/uart/uart_bus_puc.c optional uart puc
> dev/uart/uart_bus_scc.c optional uart scc
> dev/uart/uart_core.c optional uart
> dev/uart/uart_dbg.c optional uart gdb
> dev/uart/uart_dev_ns8250.c optional uart uart_ns8250
> dev/uart/uart_dev_pl011.c optional uart pl011
> dev/uart/uart_dev_quicc.c optional uart quicc
> dev/uart/uart_dev_sab82532.c optional uart uart_sab82532
> dev/uart/uart_dev_sab82532.c optional uart scc
> dev/uart/uart_dev_z8530.c optional uart uart_z8530
> dev/uart/uart_dev_z8530.c optional uart scc
> dev/uart/uart_if.m optional uart
> dev/uart/uart_subr.c optional uart
> dev/uart/uart_tty.c optional uart
> dev/ubsec/ubsec.c optional ubsec
> #
> # USB controller drivers
> #
> dev/usb/controller/at91dci.c optional at91dci
> dev/usb/controller/at91dci_atmelarm.c optional at91dci at91rm9200
> dev/usb/controller/musb_otg.c optional musb
> dev/usb/controller/musb_otg_atmelarm.c optional musb at91rm9200
> dev/usb/controller/dwc_otg.c optional dwcotg
> dev/usb/controller/dwc_otg_fdt.c optional dwcotg fdt
> dev/usb/controller/ehci.c optional ehci
> dev/usb/controller/ehci_pci.c optional ehci pci
> dev/usb/controller/ohci.c optional ohci
> dev/usb/controller/ohci_atmelarm.c optional ohci at91rm9200
> dev/usb/controller/ohci_pci.c optional ohci pci
> dev/usb/controller/uhci.c optional uhci
> dev/usb/controller/uhci_pci.c optional uhci pci
> dev/usb/controller/xhci.c optional xhci
> dev/usb/controller/xhci_pci.c optional xhci pci
> dev/usb/controller/saf1761_otg.c optional saf1761otg
> dev/usb/controller/saf1761_otg_fdt.c optional saf1761otg fdt
> dev/usb/controller/uss820dci.c optional uss820dci
> dev/usb/controller/uss820dci_atmelarm.c optional uss820dci at91rm9200
> dev/usb/controller/usb_controller.c optional usb
> #
> # USB storage drivers
> #
> dev/usb/storage/umass.c optional umass
> dev/usb/storage/urio.c optional urio
> dev/usb/storage/ustorage_fs.c optional usfs
> #
> # USB core
> #
> dev/usb/usb_busdma.c optional usb
> dev/usb/usb_compat_linux.c optional usb
> dev/usb/usb_core.c optional usb
> dev/usb/usb_debug.c optional usb
> dev/usb/usb_dev.c optional usb
> dev/usb/usb_device.c optional usb
> dev/usb/usb_dynamic.c optional usb
> dev/usb/usb_error.c optional usb
> dev/usb/usb_generic.c optional usb
> dev/usb/usb_handle_request.c optional usb
> dev/usb/usb_hid.c optional usb
> dev/usb/usb_hub.c optional usb
> dev/usb/usb_if.m optional usb
> dev/usb/usb_lookup.c optional usb
> dev/usb/usb_mbuf.c optional usb
> dev/usb/usb_msctest.c optional usb
> dev/usb/usb_parse.c optional usb
> dev/usb/usb_pf.c optional usb
> dev/usb/usb_process.c optional usb
> dev/usb/usb_request.c optional usb
> dev/usb/usb_transfer.c optional usb
> dev/usb/usb_util.c optional usb
> #
> # USB network drivers
> #
> dev/usb/net/if_aue.c optional aue
> dev/usb/net/if_axe.c optional axe
> dev/usb/net/if_axge.c optional axge
> dev/usb/net/if_cdce.c optional cdce
> dev/usb/net/if_cue.c optional cue
> dev/usb/net/if_ipheth.c optional ipheth
> dev/usb/net/if_kue.c optional kue
> dev/usb/net/if_mos.c optional mos
> dev/usb/net/if_rue.c optional rue
> dev/usb/net/if_smsc.c optional smsc
> dev/usb/net/if_udav.c optional udav
> dev/usb/net/if_usie.c optional usie
> dev/usb/net/if_urndis.c optional urndis
> dev/usb/net/ruephy.c optional rue
> dev/usb/net/usb_ethernet.c optional aue | axe | axge | cdce | cue | kue | \
> 	mos | rue | smsc | udav | ipheth | \
> 	urndis
> dev/usb/net/uhso.c optional uhso
> #
> # USB WLAN drivers
> #
> dev/usb/wlan/if_rsu.c optional rsu
> rsu-rtl8712fw.c optional rsu-rtl8712fw | rsufw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk rsu-rtl8712fw.fw:rsu-rtl8712fw:120 -mrsu-rtl8712fw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "rsu-rtl8712fw.c"
> rsu-rtl8712fw.fwo optional rsu-rtl8712fw | rsufw \
> 	dependency "rsu-rtl8712fw.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "rsu-rtl8712fw.fwo"
> rsu-rtl8712fw.fw optional rsu-rtl8712.fw | rsufw \
> 	dependency "$S/contrib/dev/rsu/rsu-rtl8712fw.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "rsu-rtl8712fw.fw"
> dev/usb/wlan/if_rum.c optional rum
> dev/usb/wlan/if_run.c optional run
> runfw.c optional runfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk run.fw:runfw -mrunfw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "runfw.c"
> runfw.fwo optional runfw \
> 	dependency "run.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "runfw.fwo"
> run.fw optional runfw \
> 	dependency "$S/contrib/dev/run/rt2870.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "run.fw"
> dev/usb/wlan/if_uath.c optional uath
> dev/usb/wlan/if_upgt.c optional upgt
> dev/usb/wlan/if_ural.c optional ural
> dev/usb/wlan/if_urtw.c optional urtw
> dev/usb/wlan/if_urtwn.c optional urtwn
> urtwn-rtl8188eufw.c optional urtwn-rtl8188eufw | urtwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk urtwn-rtl8188eufw.fw:urtwn-rtl8188eufw:111 -murtwn-rtl8188eufw -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "urtwn-rtl8188eufw.c"
> urtwn-rtl8188eufw.fwo optional urtwn-rtl8188eufw | urtwnfw \
> 	dependency "urtwn-rtl8188eufw.fw" \
> 	compile-with "${NORMAL_FWO}" \
> 	no-implicit-rule \
> 	clean "urtwn-rtl8188eufw.fwo"
> urtwn-rtl8188eufw.fw optional urtwn-rtl8188eufw | urtwnfw \
> 	dependency "$S/contrib/dev/urtwn/urtwn-rtl8188eufw.fw.uu" \
> 	compile-with "${NORMAL_FW}" \
> 	no-obj no-implicit-rule \
> 	clean "urtwn-rtl8188eufw.fw"
> urtwn-rtl8192cfwT.c optional urtwn-rtl8192cfwT | urtwnfw \
> 	compile-with "${AWK} -f $S/tools/fw_stub.awk urtwn-rtl8192cfwT.fw:urtwn-rtl8192cfwT:111 -murtwn-rtl8192cfwT -c${.TARGET}" \
> 	no-implicit-rule before-depend local \
> 	clean "urtwn-rtl8192cfwT.c"
> urtwn-rtl8192cfwT.fwo optional urtwn-rtl8192cfwT | urtwnfw \
> dependency "urtwn-rtl8192cfwT.fw" \ > compile-with "${NORMAL_FWO}" \ > no-implicit-rule \ > clean "urtwn-rtl8192cfwT.fwo" > urtwn-rtl8192cfwT.fw optional urtwn-rtl8192cfwT | urtwnfw \ > dependency "$S/contrib/dev/urtwn/urtwn-rtl8192cfwT.fw.uu" \ > compile-with "${NORMAL_FW}" \ > no-obj no-implicit-rule \ > clean "urtwn-rtl8192cfwT.fw" > urtwn-rtl8192cfwU.c optional urtwn-rtl8192cfwU | urtwnfw \ > compile-with "${AWK} -f $S/tools/fw_stub.awk urtwn-rtl8192cfwU.fw:urtwn-rtl8192cfwU:111 -murtwn-rtl8192cfwU -c${.TARGET}" \ > no-implicit-rule before-depend local \ > clean "urtwn-rtl8192cfwU.c" > urtwn-rtl8192cfwU.fwo optional urtwn-rtl8192cfwU | urtwnfw \ > dependency "urtwn-rtl8192cfwU.fw" \ > compile-with "${NORMAL_FWO}" \ > no-implicit-rule \ > clean "urtwn-rtl8192cfwU.fwo" > urtwn-rtl8192cfwU.fw optional urtwn-rtl8192cfwU | urtwnfw \ > dependency "$S/contrib/dev/urtwn/urtwn-rtl8192cfwU.fw.uu" \ > compile-with "${NORMAL_FW}" \ > no-obj no-implicit-rule \ > clean "urtwn-rtl8192cfwU.fw" > > dev/usb/wlan/if_zyd.c optional zyd > # > # USB serial and parallel port drivers > # > dev/usb/serial/u3g.c optional u3g > dev/usb/serial/uark.c optional uark > dev/usb/serial/ubsa.c optional ubsa > dev/usb/serial/ubser.c optional ubser > dev/usb/serial/uchcom.c optional uchcom > dev/usb/serial/ucycom.c optional ucycom > dev/usb/serial/ufoma.c optional ufoma > dev/usb/serial/uftdi.c optional uftdi > dev/usb/serial/ugensa.c optional ugensa > dev/usb/serial/uipaq.c optional uipaq > dev/usb/serial/ulpt.c optional ulpt > dev/usb/serial/umcs.c optional umcs > dev/usb/serial/umct.c optional umct > dev/usb/serial/umodem.c optional umodem > dev/usb/serial/umoscom.c optional umoscom > dev/usb/serial/uplcom.c optional uplcom > dev/usb/serial/uslcom.c optional uslcom > dev/usb/serial/uvisor.c optional uvisor > dev/usb/serial/uvscom.c optional uvscom > dev/usb/serial/usb_serial.c optional ucom | u3g | uark | ubsa | ubser | \ > uchcom | ucycom | ufoma | uftdi | \ > ugensa | uipaq | umcs | umct | 
\ > umodem | umoscom | uplcom | usie | \ > uslcom | uvisor | uvscom > # > # USB misc drivers > # > dev/usb/misc/ufm.c optional ufm > dev/usb/misc/udbp.c optional udbp > dev/usb/misc/uled.c optional uled > # > # USB input drivers > # > dev/usb/input/atp.c optional atp > dev/usb/input/uep.c optional uep > dev/usb/input/uhid.c optional uhid > dev/usb/input/ukbd.c optional ukbd > dev/usb/input/ums.c optional ums > dev/usb/input/wsp.c optional wsp > # > # USB quirks > # > dev/usb/quirk/usb_quirk.c optional usb > # > # USB templates > # > dev/usb/template/usb_template.c optional usb_template > dev/usb/template/usb_template_audio.c optional usb_template > dev/usb/template/usb_template_cdce.c optional usb_template > dev/usb/template/usb_template_kbd.c optional usb_template > dev/usb/template/usb_template_modem.c optional usb_template > dev/usb/template/usb_template_mouse.c optional usb_template > dev/usb/template/usb_template_msc.c optional usb_template > dev/usb/template/usb_template_mtp.c optional usb_template > dev/usb/template/usb_template_phone.c optional usb_template > # > # USB END > # > dev/utopia/idtphy.c optional utopia > dev/utopia/suni.c optional utopia > dev/utopia/utopia.c optional utopia > dev/vge/if_vge.c optional vge > dev/viapm/viapm.c optional viapm pci > dev/vkbd/vkbd.c optional vkbd > dev/vr/if_vr.c optional vr pci > dev/vt/colors/vt_termcolors.c optional vt > dev/vt/font/vt_font_default.c optional vt > dev/vt/font/vt_mouse_cursor.c optional vt > dev/vt/hw/efifb/efifb.c optional vt_efifb > dev/vt/hw/fb/vt_fb.c optional vt > dev/vt/hw/vga/vt_vga.c optional vt vt_vga > dev/vt/logo/logo_freebsd.c optional vt splash > dev/vt/vt_buf.c optional vt > dev/vt/vt_consolectl.c optional vt > dev/vt/vt_core.c optional vt > dev/vt/vt_font.c optional vt > dev/vt/vt_sysmouse.c optional vt > dev/vte/if_vte.c optional vte pci > dev/vx/if_vx.c optional vx > dev/vx/if_vx_eisa.c optional vx eisa > dev/vx/if_vx_pci.c optional vx pci > dev/vxge/vxge.c optional vxge > 
dev/vxge/vxgehal/vxgehal-ifmsg.c optional vxge > dev/vxge/vxgehal/vxgehal-mrpcim.c optional vxge > dev/vxge/vxgehal/vxge-queue.c optional vxge > dev/vxge/vxgehal/vxgehal-ring.c optional vxge > dev/vxge/vxgehal/vxgehal-swapper.c optional vxge > dev/vxge/vxgehal/vxgehal-mgmt.c optional vxge > dev/vxge/vxgehal/vxgehal-srpcim.c optional vxge > dev/vxge/vxgehal/vxgehal-config.c optional vxge > dev/vxge/vxgehal/vxgehal-blockpool.c optional vxge > dev/vxge/vxgehal/vxgehal-doorbells.c optional vxge > dev/vxge/vxgehal/vxgehal-mgmtaux.c optional vxge > dev/vxge/vxgehal/vxgehal-device.c optional vxge > dev/vxge/vxgehal/vxgehal-mm.c optional vxge > dev/vxge/vxgehal/vxgehal-driver.c optional vxge > dev/vxge/vxgehal/vxgehal-virtualpath.c optional vxge > dev/vxge/vxgehal/vxgehal-channel.c optional vxge > dev/vxge/vxgehal/vxgehal-fifo.c optional vxge > dev/watchdog/watchdog.c standard > dev/wb/if_wb.c optional wb pci > dev/wds/wd7000.c optional wds isa > dev/wi/if_wi.c optional wi > dev/wi/if_wi_pccard.c optional wi pccard > dev/wi/if_wi_pci.c optional wi pci > dev/wl/if_wl.c optional wl isa > dev/wpi/if_wpi.c optional wpi pci > wpifw.c optional wpifw \ > compile-with "${AWK} -f $S/tools/fw_stub.awk wpi.fw:wpifw:153229 -mwpi -c${.TARGET}" \ > no-implicit-rule before-depend local \ > clean "wpifw.c" > wpifw.fwo optional wpifw \ > dependency "wpi.fw" \ > compile-with "${NORMAL_FWO}" \ > no-implicit-rule \ > clean "wpifw.fwo" > wpi.fw optional wpifw \ > dependency "$S/contrib/dev/wpi/iwlwifi-3945-15.32.2.9.fw.uu" \ > compile-with "${NORMAL_FW}" \ > no-obj no-implicit-rule \ > clean "wpi.fw" > dev/xe/if_xe.c optional xe > dev/xe/if_xe_pccard.c optional xe pccard > dev/xen/balloon/balloon.c optional xen | xenhvm > dev/xen/blkfront/blkfront.c optional xen | xenhvm > dev/xen/blkback/blkback.c optional xen | xenhvm > dev/xen/console/console.c optional xen | xenhvm > dev/xen/console/xencons_ring.c optional xen | xenhvm > dev/xen/control/control.c optional xen | xenhvm > 
dev/xen/netback/netback.c optional xen | xenhvm > dev/xen/netfront/netfront.c optional xen | xenhvm > dev/xen/xenpci/xenpci.c optional xenpci > dev/xen/timer/timer.c optional xen | xenhvm > dev/xen/pvcpu/pvcpu.c optional xen | xenhvm > dev/xen/xenstore/xenstore.c optional xen | xenhvm > dev/xen/xenstore/xenstore_dev.c optional xen | xenhvm > dev/xen/xenstore/xenstored_dev.c optional xen | xenhvm > dev/xen/evtchn/evtchn_dev.c optional xen | xenhvm > dev/xen/privcmd/privcmd.c optional xen | xenhvm > dev/xl/if_xl.c optional xl pci > dev/xl/xlphy.c optional xl pci > fs/autofs/autofs.c optional autofs > fs/autofs/autofs_vfsops.c optional autofs > fs/autofs/autofs_vnops.c optional autofs > fs/deadfs/dead_vnops.c standard > fs/devfs/devfs_devs.c standard > fs/devfs/devfs_dir.c standard > fs/devfs/devfs_rule.c standard > fs/devfs/devfs_vfsops.c standard > fs/devfs/devfs_vnops.c standard > fs/fdescfs/fdesc_vfsops.c optional fdescfs > fs/fdescfs/fdesc_vnops.c optional fdescfs > fs/fifofs/fifo_vnops.c standard > fs/cuse/cuse.c optional cuse > fs/fuse/fuse_device.c optional fuse > fs/fuse/fuse_file.c optional fuse > fs/fuse/fuse_internal.c optional fuse > fs/fuse/fuse_io.c optional fuse > fs/fuse/fuse_ipc.c optional fuse > fs/fuse/fuse_main.c optional fuse > fs/fuse/fuse_node.c optional fuse > fs/fuse/fuse_vfsops.c optional fuse > fs/fuse/fuse_vnops.c optional fuse > fs/msdosfs/msdosfs_conv.c optional msdosfs > fs/msdosfs/msdosfs_denode.c optional msdosfs > fs/msdosfs/msdosfs_fat.c optional msdosfs > fs/msdosfs/msdosfs_fileno.c optional msdosfs > fs/msdosfs/msdosfs_iconv.c optional msdosfs_iconv > fs/msdosfs/msdosfs_lookup.c optional msdosfs > fs/msdosfs/msdosfs_vfsops.c optional msdosfs > fs/msdosfs/msdosfs_vnops.c optional msdosfs > fs/nandfs/bmap.c optional nandfs > fs/nandfs/nandfs_alloc.c optional nandfs > fs/nandfs/nandfs_bmap.c optional nandfs > fs/nandfs/nandfs_buffer.c optional nandfs > fs/nandfs/nandfs_cleaner.c optional nandfs > fs/nandfs/nandfs_cpfile.c optional 
nandfs > fs/nandfs/nandfs_dat.c optional nandfs > fs/nandfs/nandfs_dir.c optional nandfs > fs/nandfs/nandfs_ifile.c optional nandfs > fs/nandfs/nandfs_segment.c optional nandfs > fs/nandfs/nandfs_subr.c optional nandfs > fs/nandfs/nandfs_sufile.c optional nandfs > fs/nandfs/nandfs_vfsops.c optional nandfs > fs/nandfs/nandfs_vnops.c optional nandfs > fs/nfs/nfs_commonkrpc.c optional nfscl | nfsd > fs/nfs/nfs_commonsubs.c optional nfscl | nfsd > fs/nfs/nfs_commonport.c optional nfscl | nfsd > fs/nfs/nfs_commonacl.c optional nfscl | nfsd > fs/nfsclient/nfs_clcomsubs.c optional nfscl > fs/nfsclient/nfs_clsubs.c optional nfscl > fs/nfsclient/nfs_clstate.c optional nfscl > fs/nfsclient/nfs_clkrpc.c optional nfscl > fs/nfsclient/nfs_clrpcops.c optional nfscl > fs/nfsclient/nfs_clvnops.c optional nfscl > fs/nfsclient/nfs_clnode.c optional nfscl > fs/nfsclient/nfs_clvfsops.c optional nfscl > fs/nfsclient/nfs_clport.c optional nfscl > fs/nfsclient/nfs_clbio.c optional nfscl > fs/nfsclient/nfs_clnfsiod.c optional nfscl > fs/nfsserver/nfs_fha_new.c optional nfsd inet > fs/nfsserver/nfs_nfsdsocket.c optional nfsd inet > fs/nfsserver/nfs_nfsdsubs.c optional nfsd inet > fs/nfsserver/nfs_nfsdstate.c optional nfsd inet > fs/nfsserver/nfs_nfsdkrpc.c optional nfsd inet > fs/nfsserver/nfs_nfsdserv.c optional nfsd inet > fs/nfsserver/nfs_nfsdport.c optional nfsd inet > fs/nfsserver/nfs_nfsdcache.c optional nfsd inet > fs/nullfs/null_subr.c optional nullfs > fs/nullfs/null_vfsops.c optional nullfs > fs/nullfs/null_vnops.c optional nullfs > fs/procfs/procfs.c optional procfs > fs/procfs/procfs_ctl.c optional procfs > fs/procfs/procfs_dbregs.c optional procfs > fs/procfs/procfs_fpregs.c optional procfs > fs/procfs/procfs_ioctl.c optional procfs > fs/procfs/procfs_map.c optional procfs > fs/procfs/procfs_mem.c optional procfs > fs/procfs/procfs_note.c optional procfs > fs/procfs/procfs_osrel.c optional procfs > fs/procfs/procfs_regs.c optional procfs > fs/procfs/procfs_rlimit.c optional 
procfs > fs/procfs/procfs_status.c optional procfs > fs/procfs/procfs_type.c optional procfs > fs/pseudofs/pseudofs.c optional pseudofs > fs/pseudofs/pseudofs_fileno.c optional pseudofs > fs/pseudofs/pseudofs_vncache.c optional pseudofs > fs/pseudofs/pseudofs_vnops.c optional pseudofs > fs/smbfs/smbfs_io.c optional smbfs > fs/smbfs/smbfs_node.c optional smbfs > fs/smbfs/smbfs_smb.c optional smbfs > fs/smbfs/smbfs_subr.c optional smbfs > fs/smbfs/smbfs_vfsops.c optional smbfs > fs/smbfs/smbfs_vnops.c optional smbfs > fs/udf/osta.c optional udf > fs/udf/udf_iconv.c optional udf_iconv > fs/udf/udf_vfsops.c optional udf > fs/udf/udf_vnops.c optional udf > fs/unionfs/union_subr.c optional unionfs > fs/unionfs/union_vfsops.c optional unionfs > fs/unionfs/union_vnops.c optional unionfs > fs/tmpfs/tmpfs_vnops.c optional tmpfs > fs/tmpfs/tmpfs_fifoops.c optional tmpfs > fs/tmpfs/tmpfs_vfsops.c optional tmpfs > fs/tmpfs/tmpfs_subr.c optional tmpfs > gdb/gdb_cons.c optional gdb > gdb/gdb_main.c optional gdb > gdb/gdb_packet.c optional gdb > geom/bde/g_bde.c optional geom_bde > geom/bde/g_bde_crypt.c optional geom_bde > geom/bde/g_bde_lock.c optional geom_bde > geom/bde/g_bde_work.c optional geom_bde > geom/cache/g_cache.c optional geom_cache > geom/concat/g_concat.c optional geom_concat > geom/eli/g_eli.c optional geom_eli > geom/eli/g_eli_crypto.c optional geom_eli > geom/eli/g_eli_ctl.c optional geom_eli > geom/eli/g_eli_integrity.c optional geom_eli > geom/eli/g_eli_key.c optional geom_eli > geom/eli/g_eli_key_cache.c optional geom_eli > geom/eli/g_eli_privacy.c optional geom_eli > geom/eli/pkcs5v2.c optional geom_eli > geom/gate/g_gate.c optional geom_gate > geom/geom_aes.c optional geom_aes > geom/geom_bsd.c optional geom_bsd > geom/geom_bsd_enc.c optional geom_bsd > geom/geom_ccd.c optional ccd | geom_ccd > geom/geom_ctl.c standard > geom/geom_dev.c standard > geom/geom_disk.c standard > geom/geom_dump.c standard > geom/geom_event.c standard > geom/geom_fox.c optional 
geom_fox > geom/geom_flashmap.c optional fdt cfi | fdt nand > geom/geom_io.c standard > geom/geom_kern.c standard > geom/geom_map.c optional geom_map > geom/geom_mbr.c optional geom_mbr > geom/geom_mbr_enc.c optional geom_mbr > geom/geom_pc98.c optional geom_pc98 > geom/geom_pc98_enc.c optional geom_pc98 > geom/geom_redboot.c optional geom_redboot > geom/geom_slice.c standard > geom/geom_subr.c standard > geom/geom_sunlabel.c optional geom_sunlabel > geom/geom_sunlabel_enc.c optional geom_sunlabel > geom/geom_vfs.c standard > geom/geom_vol_ffs.c optional geom_vol > geom/journal/g_journal.c optional geom_journal > geom/journal/g_journal_ufs.c optional geom_journal > geom/label/g_label.c optional geom_label | geom_label_gpt > geom/label/g_label_ext2fs.c optional geom_label > geom/label/g_label_iso9660.c optional geom_label > geom/label/g_label_msdosfs.c optional geom_label > geom/label/g_label_ntfs.c optional geom_label > geom/label/g_label_reiserfs.c optional geom_label > geom/label/g_label_ufs.c optional geom_label > geom/label/g_label_gpt.c optional geom_label | geom_label_gpt > geom/label/g_label_disk_ident.c optional geom_label > geom/linux_lvm/g_linux_lvm.c optional geom_linux_lvm > geom/mirror/g_mirror.c optional geom_mirror > geom/mirror/g_mirror_ctl.c optional geom_mirror > geom/mountver/g_mountver.c optional geom_mountver > geom/multipath/g_multipath.c optional geom_multipath > geom/nop/g_nop.c optional geom_nop > geom/part/g_part.c standard > geom/part/g_part_if.m standard > geom/part/g_part_apm.c optional geom_part_apm > geom/part/g_part_bsd.c optional geom_part_bsd > geom/part/g_part_bsd64.c optional geom_part_bsd64 > geom/part/g_part_ebr.c optional geom_part_ebr > geom/part/g_part_gpt.c optional geom_part_gpt > geom/part/g_part_ldm.c optional geom_part_ldm > geom/part/g_part_mbr.c optional geom_part_mbr > geom/part/g_part_pc98.c optional geom_part_pc98 > geom/part/g_part_vtoc8.c optional geom_part_vtoc8 > geom/raid/g_raid.c optional geom_raid > 
geom/raid/g_raid_ctl.c optional geom_raid > geom/raid/g_raid_md_if.m optional geom_raid > geom/raid/g_raid_tr_if.m optional geom_raid > geom/raid/md_ddf.c optional geom_raid > geom/raid/md_intel.c optional geom_raid > geom/raid/md_jmicron.c optional geom_raid > geom/raid/md_nvidia.c optional geom_raid > geom/raid/md_promise.c optional geom_raid > geom/raid/md_sii.c optional geom_raid > geom/raid/tr_concat.c optional geom_raid > geom/raid/tr_raid0.c optional geom_raid > geom/raid/tr_raid1.c optional geom_raid > geom/raid/tr_raid1e.c optional geom_raid > geom/raid/tr_raid5.c optional geom_raid > geom/raid3/g_raid3.c optional geom_raid3 > geom/raid3/g_raid3_ctl.c optional geom_raid3 > geom/shsec/g_shsec.c optional geom_shsec > geom/stripe/g_stripe.c optional geom_stripe > geom/uncompress/g_uncompress.c optional geom_uncompress > contrib/xz-embedded/freebsd/xz_malloc.c \ > optional xz_embedded | geom_uncompress \ > compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ -I$S/contrib/xz-embedded/linux/include/linux/" > contrib/xz-embedded/linux/lib/xz/xz_crc32.c \ > optional xz_embedded | geom_uncompress \ > compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ -I$S/contrib/xz-embedded/linux/include/linux/" > contrib/xz-embedded/linux/lib/xz/xz_dec_bcj.c \ > optional xz_embedded | geom_uncompress \ > compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ -I$S/contrib/xz-embedded/linux/include/linux/" > contrib/xz-embedded/linux/lib/xz/xz_dec_lzma2.c \ > optional xz_embedded | geom_uncompress \ > compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ -I$S/contrib/xz-embedded/linux/include/linux/" > contrib/xz-embedded/linux/lib/xz/xz_dec_stream.c \ > optional xz_embedded | geom_uncompress \ > compile-with "${NORMAL_C} -I$S/contrib/xz-embedded/freebsd/ -I$S/contrib/xz-embedded/linux/lib/xz/ 
-I$S/contrib/xz-embedded/linux/include/linux/" > geom/uzip/g_uzip.c optional geom_uzip > geom/vinum/geom_vinum.c optional geom_vinum > geom/vinum/geom_vinum_create.c optional geom_vinum > geom/vinum/geom_vinum_drive.c optional geom_vinum > geom/vinum/geom_vinum_plex.c optional geom_vinum > geom/vinum/geom_vinum_volume.c optional geom_vinum > geom/vinum/geom_vinum_subr.c optional geom_vinum > geom/vinum/geom_vinum_raid5.c optional geom_vinum > geom/vinum/geom_vinum_share.c optional geom_vinum > geom/vinum/geom_vinum_list.c optional geom_vinum > geom/vinum/geom_vinum_rm.c optional geom_vinum > geom/vinum/geom_vinum_init.c optional geom_vinum > geom/vinum/geom_vinum_state.c optional geom_vinum > geom/vinum/geom_vinum_rename.c optional geom_vinum > geom/vinum/geom_vinum_move.c optional geom_vinum > geom/vinum/geom_vinum_events.c optional geom_vinum > geom/virstor/binstream.c optional geom_virstor > geom/virstor/g_virstor.c optional geom_virstor > geom/virstor/g_virstor_md.c optional geom_virstor > geom/zero/g_zero.c optional geom_zero > fs/ext2fs/ext2_alloc.c optional ext2fs > fs/ext2fs/ext2_balloc.c optional ext2fs > fs/ext2fs/ext2_bmap.c optional ext2fs > fs/ext2fs/ext2_extents.c optional ext2fs > fs/ext2fs/ext2_inode.c optional ext2fs > fs/ext2fs/ext2_inode_cnv.c optional ext2fs > fs/ext2fs/ext2_hash.c optional ext2fs > fs/ext2fs/ext2_htree.c optional ext2fs > fs/ext2fs/ext2_lookup.c optional ext2fs > fs/ext2fs/ext2_subr.c optional ext2fs > fs/ext2fs/ext2_vfsops.c optional ext2fs > fs/ext2fs/ext2_vnops.c optional ext2fs > gnu/fs/reiserfs/reiserfs_hashes.c optional reiserfs \ > warning "kernel contains GPL contaminated ReiserFS filesystem" > gnu/fs/reiserfs/reiserfs_inode.c optional reiserfs > gnu/fs/reiserfs/reiserfs_item_ops.c optional reiserfs > gnu/fs/reiserfs/reiserfs_namei.c optional reiserfs > gnu/fs/reiserfs/reiserfs_prints.c optional reiserfs > gnu/fs/reiserfs/reiserfs_stree.c optional reiserfs > gnu/fs/reiserfs/reiserfs_vfsops.c optional reiserfs > 
gnu/fs/reiserfs/reiserfs_vnops.c optional reiserfs > # > isa/isa_if.m standard > isa/isa_common.c optional isa > isa/isahint.c optional isa > isa/pnp.c optional isa isapnp > isa/pnpparse.c optional isa isapnp > fs/cd9660/cd9660_bmap.c optional cd9660 > fs/cd9660/cd9660_lookup.c optional cd9660 > fs/cd9660/cd9660_node.c optional cd9660 > fs/cd9660/cd9660_rrip.c optional cd9660 > fs/cd9660/cd9660_util.c optional cd9660 > fs/cd9660/cd9660_vfsops.c optional cd9660 > fs/cd9660/cd9660_vnops.c optional cd9660 > fs/cd9660/cd9660_iconv.c optional cd9660_iconv > kern/bus_if.m standard > kern/clock_if.m standard > kern/cpufreq_if.m standard > kern/device_if.m standard > kern/imgact_binmisc.c optional imagact_binmisc > kern/imgact_elf.c standard > kern/imgact_elf32.c optional compat_freebsd32 > kern/imgact_shell.c standard > kern/inflate.c optional gzip > kern/init_main.c standard > kern/init_sysent.c standard > kern/ksched.c optional _kposix_priority_scheduling > kern/kern_acct.c standard > kern/kern_alq.c optional alq > kern/kern_clock.c standard > kern/kern_condvar.c standard > kern/kern_conf.c standard > kern/kern_cons.c standard > kern/kern_cpu.c standard > kern/kern_cpuset.c standard > kern/kern_context.c standard > kern/kern_descrip.c standard > kern/kern_dtrace.c optional kdtrace_hooks >+kern/kern_dump.c standard > kern/kern_environment.c standard > kern/kern_et.c standard > kern/kern_event.c standard > kern/kern_exec.c standard > kern/kern_exit.c standard > kern/kern_fail.c standard > kern/kern_ffclock.c standard > kern/kern_fork.c standard > kern/kern_gzio.c optional gzio > kern/kern_hhook.c standard > kern/kern_idle.c standard > kern/kern_intr.c standard > kern/kern_jail.c standard > kern/kern_khelp.c standard > kern/kern_kthread.c standard > kern/kern_ktr.c optional ktr > kern/kern_ktrace.c standard > kern/kern_linker.c standard > kern/kern_lock.c standard > kern/kern_lockf.c standard > kern/kern_lockstat.c optional kdtrace_hooks > kern/kern_loginclass.c standard > 
kern/kern_malloc.c standard > kern/kern_mbuf.c standard > kern/kern_mib.c standard > kern/kern_module.c standard > kern/kern_mtxpool.c standard > kern/kern_mutex.c standard > kern/kern_ntptime.c standard > kern/kern_osd.c standard > kern/kern_physio.c standard > kern/kern_pmc.c standard > kern/kern_poll.c optional device_polling > kern/kern_priv.c standard > kern/kern_proc.c standard > kern/kern_prot.c standard > kern/kern_racct.c standard > kern/kern_rangelock.c standard > kern/kern_rctl.c standard > kern/kern_resource.c standard > kern/kern_rmlock.c standard > kern/kern_rwlock.c standard > kern/kern_sdt.c optional kdtrace_hooks > kern/kern_sema.c standard > kern/kern_sharedpage.c standard > kern/kern_shutdown.c standard > kern/kern_sig.c standard > kern/kern_switch.c standard > kern/kern_sx.c standard > kern/kern_synch.c standard > kern/kern_syscalls.c standard > kern/kern_sysctl.c standard > kern/kern_tc.c standard > kern/kern_thr.c standard > kern/kern_thread.c standard > kern/kern_time.c standard > kern/kern_timeout.c standard > kern/kern_umtx.c standard > kern/kern_uuid.c standard > kern/kern_xxx.c standard > kern/link_elf.c standard > kern/linker_if.m standard > kern/md4c.c optional netsmb > kern/md5c.c standard > kern/p1003_1b.c standard > kern/posix4_mib.c standard > kern/sched_4bsd.c optional sched_4bsd > kern/sched_ule.c optional sched_ule > kern/serdev_if.m standard > kern/stack_protector.c standard \ > compile-with "${NORMAL_C:N-fstack-protector*}" > kern/subr_acl_nfs4.c optional ufs_acl | zfs > kern/subr_acl_posix1e.c optional ufs_acl > kern/subr_autoconf.c standard > kern/subr_blist.c standard > kern/subr_bus.c standard > kern/subr_bus_dma.c standard > kern/subr_bufring.c standard > kern/subr_capability.c standard > kern/subr_clock.c standard > kern/subr_counter.c standard > kern/subr_devstat.c standard > kern/subr_disk.c standard > kern/subr_eventhandler.c standard > kern/subr_fattime.c standard > kern/subr_firmware.c optional firmware > 
kern/subr_hash.c standard > kern/subr_hints.c standard > kern/subr_kdb.c standard > kern/subr_kobj.c standard > kern/subr_lock.c standard > kern/subr_log.c standard > kern/subr_mbpool.c optional libmbpool > kern/subr_mchain.c optional libmchain > kern/subr_module.c standard > kern/subr_msgbuf.c standard > kern/subr_param.c standard > kern/subr_pcpu.c standard > kern/subr_pctrie.c standard > kern/subr_power.c standard > kern/subr_prf.c standard > kern/subr_prof.c standard > kern/subr_rman.c standard > kern/subr_rtc.c standard > kern/subr_sbuf.c standard > kern/subr_scanf.c standard > kern/subr_sglist.c standard > kern/subr_sleepqueue.c standard > kern/subr_smp.c standard > kern/subr_stack.c optional ddb | stack | ktr > kern/subr_taskqueue.c standard > kern/subr_terminal.c optional vt > kern/subr_trap.c standard > kern/subr_turnstile.c standard > kern/subr_uio.c standard > kern/subr_unit.c standard > kern/subr_vmem.c standard > kern/subr_witness.c optional witness > kern/sys_capability.c standard > kern/sys_generic.c standard > kern/sys_pipe.c standard > kern/sys_procdesc.c standard > kern/sys_process.c standard > kern/sys_socket.c standard > kern/syscalls.c standard > kern/sysv_ipc.c standard > kern/sysv_msg.c optional sysvmsg > kern/sysv_sem.c optional sysvsem > kern/sysv_shm.c optional sysvshm > kern/tty.c standard > kern/tty_compat.c optional compat_43tty > kern/tty_info.c standard > kern/tty_inq.c standard > kern/tty_outq.c standard > kern/tty_pts.c standard > kern/tty_tty.c standard > kern/tty_ttydisc.c standard > kern/uipc_accf.c standard > kern/uipc_debug.c optional ddb > kern/uipc_domain.c standard > kern/uipc_mbuf.c standard > kern/uipc_mbuf2.c standard > kern/uipc_mqueue.c optional p1003_1b_mqueue > kern/uipc_sem.c optional p1003_1b_semaphores > kern/uipc_shm.c standard > kern/uipc_sockbuf.c standard > kern/uipc_socket.c standard > kern/uipc_syscalls.c standard > kern/uipc_usrreq.c standard > kern/vfs_acl.c standard > kern/vfs_aio.c optional vfs_aio > 
kern/vfs_bio.c standard > kern/vfs_cache.c standard > kern/vfs_cluster.c standard > kern/vfs_default.c standard > kern/vfs_export.c standard > kern/vfs_extattr.c standard > kern/vfs_hash.c standard > kern/vfs_init.c standard > kern/vfs_lookup.c standard > kern/vfs_mount.c standard > kern/vfs_mountroot.c standard > kern/vfs_subr.c standard > kern/vfs_syscalls.c standard > kern/vfs_vnops.c standard > # > # Kernel GSS-API > # > gssd.h optional kgssapi \ > dependency "$S/kgssapi/gssd.x" \ > compile-with "RPCGEN_CPP='${CPP}' rpcgen -hM $S/kgssapi/gssd.x | grep -v pthread.h > gssd.h" \ > no-obj no-implicit-rule before-depend local \ > clean "gssd.h" > gssd_xdr.c optional kgssapi \ > dependency "$S/kgssapi/gssd.x gssd.h" \ > compile-with "RPCGEN_CPP='${CPP}' rpcgen -c $S/kgssapi/gssd.x -o gssd_xdr.c" \ > no-implicit-rule before-depend local \ > clean "gssd_xdr.c" > gssd_clnt.c optional kgssapi \ > dependency "$S/kgssapi/gssd.x gssd.h" \ > compile-with "RPCGEN_CPP='${CPP}' rpcgen -lM $S/kgssapi/gssd.x | grep -v string.h > gssd_clnt.c" \ > no-implicit-rule before-depend local \ > clean "gssd_clnt.c" > kgssapi/gss_accept_sec_context.c optional kgssapi > kgssapi/gss_add_oid_set_member.c optional kgssapi > kgssapi/gss_acquire_cred.c optional kgssapi > kgssapi/gss_canonicalize_name.c optional kgssapi > kgssapi/gss_create_empty_oid_set.c optional kgssapi > kgssapi/gss_delete_sec_context.c optional kgssapi > kgssapi/gss_display_status.c optional kgssapi > kgssapi/gss_export_name.c optional kgssapi > kgssapi/gss_get_mic.c optional kgssapi > kgssapi/gss_init_sec_context.c optional kgssapi > kgssapi/gss_impl.c optional kgssapi > kgssapi/gss_import_name.c optional kgssapi > kgssapi/gss_names.c optional kgssapi > kgssapi/gss_pname_to_uid.c optional kgssapi > kgssapi/gss_release_buffer.c optional kgssapi > kgssapi/gss_release_cred.c optional kgssapi > kgssapi/gss_release_name.c optional kgssapi > kgssapi/gss_release_oid_set.c optional kgssapi > kgssapi/gss_set_cred_option.c optional 
kgssapi > kgssapi/gss_test_oid_set_member.c optional kgssapi > kgssapi/gss_unwrap.c optional kgssapi > kgssapi/gss_verify_mic.c optional kgssapi > kgssapi/gss_wrap.c optional kgssapi > kgssapi/gss_wrap_size_limit.c optional kgssapi > kgssapi/gssd_prot.c optional kgssapi > kgssapi/krb5/krb5_mech.c optional kgssapi > kgssapi/krb5/kcrypto.c optional kgssapi > kgssapi/krb5/kcrypto_aes.c optional kgssapi > kgssapi/krb5/kcrypto_arcfour.c optional kgssapi > kgssapi/krb5/kcrypto_des.c optional kgssapi > kgssapi/krb5/kcrypto_des3.c optional kgssapi > kgssapi/kgss_if.m optional kgssapi > kgssapi/gsstest.c optional kgssapi_debug > # These files in libkern/ are those needed by all architectures. Some > # of the files in libkern/ are only needed on some architectures, e.g., > # libkern/divdi3.c is needed by i386 but not alpha. Also, some of these > # routines may be optimized for a particular platform. In either case, > # the file should be moved to conf/files.<arch> from here. > # > libkern/arc4random.c standard > libkern/bcd.c standard > libkern/bsearch.c standard > libkern/crc32.c standard > libkern/explicit_bzero.c standard > libkern/fnmatch.c standard > libkern/iconv.c optional libiconv > libkern/iconv_converter_if.m optional libiconv > libkern/iconv_ucs.c optional libiconv > libkern/iconv_xlat.c optional libiconv > libkern/iconv_xlat16.c optional libiconv > libkern/inet_aton.c standard > libkern/inet_ntoa.c standard > libkern/inet_ntop.c standard > libkern/inet_pton.c standard > libkern/jenkins_hash.c standard > libkern/murmur3_32.c standard > libkern/mcount.c optional profiling-routine > libkern/memcchr.c standard > libkern/memchr.c optional fdt | gdb > libkern/memcmp.c standard > libkern/memmem.c optional gdb > libkern/qsort.c standard > libkern/qsort_r.c standard > libkern/random.c standard > libkern/scanc.c standard > libkern/strcasecmp.c standard > libkern/strcat.c standard > libkern/strchr.c standard > libkern/strcmp.c standard > libkern/strcpy.c standard > 
libkern/strcspn.c standard > libkern/strdup.c standard > libkern/strndup.c standard > libkern/strlcat.c standard > libkern/strlcpy.c standard > libkern/strlen.c standard > libkern/strncmp.c standard > libkern/strncpy.c standard > libkern/strnlen.c standard > libkern/strrchr.c standard > libkern/strsep.c standard > libkern/strspn.c standard > libkern/strstr.c standard > libkern/strtol.c standard > libkern/strtoq.c standard > libkern/strtoul.c standard > libkern/strtouq.c standard > libkern/strvalid.c standard > net/bpf.c standard > net/bpf_buffer.c optional bpf > net/bpf_jitter.c optional bpf_jitter > net/bpf_filter.c optional bpf | netgraph_bpf > net/bpf_zerocopy.c optional bpf > net/bridgestp.c optional bridge | if_bridge > net/flowtable.c optional flowtable inet | flowtable inet6 > net/ieee8023ad_lacp.c optional lagg > net/if.c standard > net/if_arcsubr.c optional arcnet > net/if_atmsubr.c optional atm > net/if_bridge.c optional bridge inet | if_bridge inet > net/if_clone.c standard > net/if_dead.c standard > net/if_debug.c optional ddb > net/if_disc.c optional disc > net/if_edsc.c optional edsc > net/if_enc.c optional enc ipsec inet | enc ipsec inet6 > net/if_epair.c optional epair > net/if_ethersubr.c optional ether > net/if_fddisubr.c optional fddi > net/if_fwsubr.c optional fwip > net/if_gif.c optional gif inet | gif inet6 | \ > netgraph_gif inet | netgraph_gif inet6 > net/if_gre.c optional gre inet | gre inet6 > net/if_iso88025subr.c optional token > net/if_lagg.c optional lagg > net/if_loop.c optional loop > net/if_llatbl.c standard > net/if_me.c optional me inet > net/if_media.c standard > net/if_mib.c standard > net/if_spppfr.c optional sppp | netgraph_sppp > net/if_spppsubr.c optional sppp | netgraph_sppp > net/if_stf.c optional stf inet inet6 > net/if_tun.c optional tun > net/if_tap.c optional tap > net/if_vlan.c optional vlan > net/if_vxlan.c optional vxlan inet | vxlan inet6 > net/mppcc.c optional netgraph_mppc_compression > net/mppcd.c optional 
netgraph_mppc_compression > net/netisr.c standard > net/pfil.c optional ether | inet > net/radix.c standard > net/radix_mpath.c standard > net/raw_cb.c standard > net/raw_usrreq.c standard > net/route.c standard > net/rtsock.c standard > net/slcompress.c optional netgraph_vjc | sppp | \ > netgraph_sppp > net/vnet.c optional vimage > net/zlib.c optional crypto | geom_uzip | ipsec | \ > mxge | netgraph_deflate | \ > ddb_ctf | gzio | geom_uncompress > net80211/ieee80211.c optional wlan > net80211/ieee80211_acl.c optional wlan wlan_acl > net80211/ieee80211_action.c optional wlan > net80211/ieee80211_ageq.c optional wlan > net80211/ieee80211_adhoc.c optional wlan \ > compile-with "${NORMAL_C} -Wno-unused-function" > net80211/ieee80211_ageq.c optional wlan > net80211/ieee80211_amrr.c optional wlan | wlan_amrr > net80211/ieee80211_crypto.c optional wlan \ > compile-with "${NORMAL_C} -Wno-unused-function" > net80211/ieee80211_crypto_ccmp.c optional wlan wlan_ccmp > net80211/ieee80211_crypto_none.c optional wlan > net80211/ieee80211_crypto_tkip.c optional wlan wlan_tkip > net80211/ieee80211_crypto_wep.c optional wlan wlan_wep > net80211/ieee80211_ddb.c optional wlan ddb > net80211/ieee80211_dfs.c optional wlan > net80211/ieee80211_freebsd.c optional wlan > net80211/ieee80211_hostap.c optional wlan \ > compile-with "${NORMAL_C} -Wno-unused-function" > net80211/ieee80211_ht.c optional wlan > net80211/ieee80211_hwmp.c optional wlan ieee80211_support_mesh > net80211/ieee80211_input.c optional wlan > net80211/ieee80211_ioctl.c optional wlan > net80211/ieee80211_mesh.c optional wlan ieee80211_support_mesh \ > compile-with "${NORMAL_C} -Wno-unused-function" > net80211/ieee80211_monitor.c optional wlan > net80211/ieee80211_node.c optional wlan > net80211/ieee80211_output.c optional wlan > net80211/ieee80211_phy.c optional wlan > net80211/ieee80211_power.c optional wlan > net80211/ieee80211_proto.c optional wlan > net80211/ieee80211_radiotap.c optional wlan > 
net80211/ieee80211_ratectl.c optional wlan > net80211/ieee80211_ratectl_none.c optional wlan > net80211/ieee80211_regdomain.c optional wlan > net80211/ieee80211_rssadapt.c optional wlan wlan_rssadapt > net80211/ieee80211_scan.c optional wlan > net80211/ieee80211_scan_sta.c optional wlan > net80211/ieee80211_sta.c optional wlan \ > compile-with "${NORMAL_C} -Wno-unused-function" > net80211/ieee80211_superg.c optional wlan ieee80211_support_superg > net80211/ieee80211_tdma.c optional wlan ieee80211_support_tdma > net80211/ieee80211_wds.c optional wlan > net80211/ieee80211_xauth.c optional wlan wlan_xauth > net80211/ieee80211_alq.c optional wlan ieee80211_alq > netgraph/atm/ccatm/ng_ccatm.c optional ngatm_ccatm \ > compile-with "${NORMAL_C} -I$S/contrib/ngatm" > netgraph/atm/ng_atm.c optional ngatm_atm > netgraph/atm/ngatmbase.c optional ngatm_atmbase \ > compile-with "${NORMAL_C} -I$S/contrib/ngatm" > netgraph/atm/sscfu/ng_sscfu.c optional ngatm_sscfu \ > compile-with "${NORMAL_C} -I$S/contrib/ngatm" > netgraph/atm/sscop/ng_sscop.c optional ngatm_sscop \ > compile-with "${NORMAL_C} -I$S/contrib/ngatm" > netgraph/atm/uni/ng_uni.c optional ngatm_uni \ > compile-with "${NORMAL_C} -I$S/contrib/ngatm" > netgraph/bluetooth/common/ng_bluetooth.c optional netgraph_bluetooth > netgraph/bluetooth/drivers/bt3c/ng_bt3c_pccard.c optional netgraph_bluetooth_bt3c > netgraph/bluetooth/drivers/h4/ng_h4.c optional netgraph_bluetooth_h4 > netgraph/bluetooth/drivers/ubt/ng_ubt.c optional netgraph_bluetooth_ubt usb > netgraph/bluetooth/drivers/ubtbcmfw/ubtbcmfw.c optional netgraph_bluetooth_ubtbcmfw usb > netgraph/bluetooth/hci/ng_hci_cmds.c optional netgraph_bluetooth_hci > netgraph/bluetooth/hci/ng_hci_evnt.c optional netgraph_bluetooth_hci > netgraph/bluetooth/hci/ng_hci_main.c optional netgraph_bluetooth_hci > netgraph/bluetooth/hci/ng_hci_misc.c optional netgraph_bluetooth_hci > netgraph/bluetooth/hci/ng_hci_ulpi.c optional netgraph_bluetooth_hci > 
netgraph/bluetooth/l2cap/ng_l2cap_cmds.c optional netgraph_bluetooth_l2cap > netgraph/bluetooth/l2cap/ng_l2cap_evnt.c optional netgraph_bluetooth_l2cap > netgraph/bluetooth/l2cap/ng_l2cap_llpi.c optional netgraph_bluetooth_l2cap > netgraph/bluetooth/l2cap/ng_l2cap_main.c optional netgraph_bluetooth_l2cap > netgraph/bluetooth/l2cap/ng_l2cap_misc.c optional netgraph_bluetooth_l2cap > netgraph/bluetooth/l2cap/ng_l2cap_ulpi.c optional netgraph_bluetooth_l2cap > netgraph/bluetooth/socket/ng_btsocket.c optional netgraph_bluetooth_socket > netgraph/bluetooth/socket/ng_btsocket_hci_raw.c optional netgraph_bluetooth_socket > netgraph/bluetooth/socket/ng_btsocket_l2cap.c optional netgraph_bluetooth_socket > netgraph/bluetooth/socket/ng_btsocket_l2cap_raw.c optional netgraph_bluetooth_socket > netgraph/bluetooth/socket/ng_btsocket_rfcomm.c optional netgraph_bluetooth_socket > netgraph/bluetooth/socket/ng_btsocket_sco.c optional netgraph_bluetooth_socket > netgraph/netflow/netflow.c optional netgraph_netflow > netgraph/netflow/netflow_v9.c optional netgraph_netflow > netgraph/netflow/ng_netflow.c optional netgraph_netflow > netgraph/ng_UI.c optional netgraph_UI > netgraph/ng_async.c optional netgraph_async > netgraph/ng_atmllc.c optional netgraph_atmllc > netgraph/ng_base.c optional netgraph > netgraph/ng_bpf.c optional netgraph_bpf > netgraph/ng_bridge.c optional netgraph_bridge > netgraph/ng_car.c optional netgraph_car > netgraph/ng_cisco.c optional netgraph_cisco > netgraph/ng_deflate.c optional netgraph_deflate > netgraph/ng_device.c optional netgraph_device > netgraph/ng_echo.c optional netgraph_echo > netgraph/ng_eiface.c optional netgraph_eiface > netgraph/ng_ether.c optional netgraph_ether > netgraph/ng_ether_echo.c optional netgraph_ether_echo > netgraph/ng_frame_relay.c optional netgraph_frame_relay > netgraph/ng_gif.c optional netgraph_gif inet6 | netgraph_gif inet > netgraph/ng_gif_demux.c optional netgraph_gif_demux > netgraph/ng_hole.c optional netgraph_hole > 
netgraph/ng_iface.c optional netgraph_iface > netgraph/ng_ip_input.c optional netgraph_ip_input > netgraph/ng_ipfw.c optional netgraph_ipfw inet ipfirewall > netgraph/ng_ksocket.c optional netgraph_ksocket > netgraph/ng_l2tp.c optional netgraph_l2tp > netgraph/ng_lmi.c optional netgraph_lmi > netgraph/ng_mppc.c optional netgraph_mppc_compression | \ > netgraph_mppc_encryption > netgraph/ng_nat.c optional netgraph_nat inet libalias > netgraph/ng_one2many.c optional netgraph_one2many > netgraph/ng_parse.c optional netgraph > netgraph/ng_patch.c optional netgraph_patch > netgraph/ng_pipe.c optional netgraph_pipe > netgraph/ng_ppp.c optional netgraph_ppp > netgraph/ng_pppoe.c optional netgraph_pppoe > netgraph/ng_pptpgre.c optional netgraph_pptpgre > netgraph/ng_pred1.c optional netgraph_pred1 > netgraph/ng_rfc1490.c optional netgraph_rfc1490 > netgraph/ng_socket.c optional netgraph_socket > netgraph/ng_split.c optional netgraph_split > netgraph/ng_sppp.c optional netgraph_sppp > netgraph/ng_tag.c optional netgraph_tag > netgraph/ng_tcpmss.c optional netgraph_tcpmss > netgraph/ng_tee.c optional netgraph_tee > netgraph/ng_tty.c optional netgraph_tty > netgraph/ng_vjc.c optional netgraph_vjc > netgraph/ng_vlan.c optional netgraph_vlan > netinet/accf_data.c optional accept_filter_data inet > netinet/accf_dns.c optional accept_filter_dns inet > netinet/accf_http.c optional accept_filter_http inet > netinet/if_atm.c optional atm > netinet/if_ether.c optional inet ether > netinet/igmp.c optional inet > netinet/in.c optional inet > netinet/in_debug.c optional inet ddb > netinet/in_kdtrace.c optional inet | inet6 > netinet/ip_carp.c optional inet carp | inet6 carp > netinet/in_gif.c optional gif inet | netgraph_gif inet > netinet/ip_gre.c optional gre inet > netinet/ip_id.c optional inet > netinet/in_mcast.c optional inet > netinet/in_pcb.c optional inet | inet6 > netinet/in_pcbgroup.c optional inet pcbgroup | inet6 pcbgroup > netinet/in_proto.c optional inet | inet6 > 
netinet/in_rmx.c optional inet > netinet/in_rss.c optional inet rss | inet6 rss > netinet/ip_divert.c optional inet ipdivert ipfirewall > netinet/ip_ecn.c optional inet | inet6 > netinet/ip_encap.c optional inet | inet6 > netinet/ip_fastfwd.c optional inet > netinet/ip_icmp.c optional inet | inet6 > netinet/ip_input.c optional inet > netinet/ip_ipsec.c optional inet ipsec > netinet/ip_mroute.c optional mrouting inet > netinet/ip_options.c optional inet > netinet/ip_output.c optional inet > netinet/raw_ip.c optional inet | inet6 > netinet/cc/cc.c optional inet | inet6 > netinet/cc/cc_newreno.c optional inet | inet6 > netinet/sctp_asconf.c optional inet sctp | inet6 sctp > netinet/sctp_auth.c optional inet sctp | inet6 sctp > netinet/sctp_bsd_addr.c optional inet sctp | inet6 sctp > netinet/sctp_cc_functions.c optional inet sctp | inet6 sctp > netinet/sctp_crc32.c optional inet sctp | inet6 sctp > netinet/sctp_indata.c optional inet sctp | inet6 sctp > netinet/sctp_input.c optional inet sctp | inet6 sctp > netinet/sctp_output.c optional inet sctp | inet6 sctp > netinet/sctp_pcb.c optional inet sctp | inet6 sctp > netinet/sctp_peeloff.c optional inet sctp | inet6 sctp > netinet/sctp_ss_functions.c optional inet sctp | inet6 sctp > netinet/sctp_syscalls.c optional inet sctp | inet6 sctp > netinet/sctp_sysctl.c optional inet sctp | inet6 sctp > netinet/sctp_timer.c optional inet sctp | inet6 sctp > netinet/sctp_usrreq.c optional inet sctp | inet6 sctp > netinet/sctputil.c optional inet sctp | inet6 sctp > netinet/tcp_debug.c optional tcpdebug > netinet/tcp_hostcache.c optional inet | inet6 > netinet/tcp_input.c optional inet | inet6 > netinet/tcp_lro.c optional inet | inet6 > netinet/tcp_output.c optional inet | inet6 > netinet/tcp_offload.c optional tcp_offload inet | tcp_offload inet6 > netinet/tcp_reass.c optional inet | inet6 > netinet/tcp_sack.c optional inet | inet6 > netinet/tcp_subr.c optional inet | inet6 > netinet/tcp_syncache.c optional inet | inet6 > 
netinet/tcp_timer.c optional inet | inet6 > netinet/tcp_timewait.c optional inet | inet6 > netinet/tcp_usrreq.c optional inet | inet6 > netinet/toeplitz.c optional inet rss | inet6 rss > netinet/udp_usrreq.c optional inet | inet6 > netinet/libalias/alias.c optional libalias inet | netgraph_nat inet > netinet/libalias/alias_db.c optional libalias inet | netgraph_nat inet > netinet/libalias/alias_mod.c optional libalias | netgraph_nat > netinet/libalias/alias_proxy.c optional libalias inet | netgraph_nat inet > netinet/libalias/alias_util.c optional libalias inet | netgraph_nat inet > netinet/libalias/alias_sctp.c optional libalias inet | netgraph_nat inet > netinet6/dest6.c optional inet6 > netinet6/frag6.c optional inet6 > netinet6/icmp6.c optional inet6 > netinet6/in6.c optional inet6 > netinet6/in6_cksum.c optional inet6 > netinet6/in6_gif.c optional gif inet6 | netgraph_gif inet6 > netinet6/in6_ifattach.c optional inet6 > netinet6/in6_mcast.c optional inet6 > netinet6/in6_pcb.c optional inet6 > netinet6/in6_pcbgroup.c optional inet6 pcbgroup > netinet6/in6_proto.c optional inet6 > netinet6/in6_rmx.c optional inet6 > netinet6/in6_src.c optional inet6 > netinet6/ip6_forward.c optional inet6 > netinet6/ip6_gre.c optional gre inet6 > netinet6/ip6_id.c optional inet6 > netinet6/ip6_input.c optional inet6 > netinet6/ip6_mroute.c optional mrouting inet6 > netinet6/ip6_output.c optional inet6 > netinet6/ip6_ipsec.c optional inet6 ipsec > netinet6/mld6.c optional inet6 > netinet6/nd6.c optional inet6 > netinet6/nd6_nbr.c optional inet6 > netinet6/nd6_rtr.c optional inet6 > netinet6/raw_ip6.c optional inet6 > netinet6/route6.c optional inet6 > netinet6/scope6.c optional inet6 > netinet6/sctp6_usrreq.c optional inet6 sctp > netinet6/udp6_usrreq.c optional inet6 > netipsec/ipsec.c optional ipsec inet | ipsec inet6 > netipsec/ipsec_input.c optional ipsec inet | ipsec inet6 > netipsec/ipsec_mbuf.c optional ipsec inet | ipsec inet6 > netipsec/ipsec_output.c optional ipsec inet 
| ipsec inet6 > netipsec/key.c optional ipsec inet | ipsec inet6 > netipsec/key_debug.c optional ipsec inet | ipsec inet6 > netipsec/keysock.c optional ipsec inet | ipsec inet6 > netipsec/xform_ah.c optional ipsec inet | ipsec inet6 > netipsec/xform_esp.c optional ipsec inet | ipsec inet6 > netipsec/xform_ipcomp.c optional ipsec inet | ipsec inet6 > netipsec/xform_ipip.c optional ipsec inet | ipsec inet6 > netipsec/xform_tcp.c optional ipsec inet tcp_signature | \ > ipsec inet6 tcp_signature > netnatm/natm.c optional natm > netnatm/natm_pcb.c optional natm > netnatm/natm_proto.c optional natm > netpfil/ipfw/dn_heap.c optional inet dummynet > netpfil/ipfw/dn_sched_fifo.c optional inet dummynet > netpfil/ipfw/dn_sched_prio.c optional inet dummynet > netpfil/ipfw/dn_sched_qfq.c optional inet dummynet > netpfil/ipfw/dn_sched_rr.c optional inet dummynet > netpfil/ipfw/dn_sched_wf2q.c optional inet dummynet > netpfil/ipfw/ip_dummynet.c optional inet dummynet > netpfil/ipfw/ip_dn_io.c optional inet dummynet > netpfil/ipfw/ip_dn_glue.c optional inet dummynet > netpfil/ipfw/ip_fw2.c optional inet ipfirewall > netpfil/ipfw/ip_fw_dynamic.c optional inet ipfirewall > netpfil/ipfw/ip_fw_log.c optional inet ipfirewall > netpfil/ipfw/ip_fw_pfil.c optional inet ipfirewall > netpfil/ipfw/ip_fw_sockopt.c optional inet ipfirewall > netpfil/ipfw/ip_fw_table.c optional inet ipfirewall > netpfil/ipfw/ip_fw_table_algo.c optional inet ipfirewall > netpfil/ipfw/ip_fw_table_value.c optional inet ipfirewall > netpfil/ipfw/ip_fw_iface.c optional inet ipfirewall > netpfil/ipfw/ip_fw_nat.c optional inet ipfirewall_nat > netpfil/pf/if_pflog.c optional pflog pf inet > netpfil/pf/if_pfsync.c optional pfsync pf inet > netpfil/pf/pf.c optional pf inet > netpfil/pf/pf_if.c optional pf inet > netpfil/pf/pf_ioctl.c optional pf inet > netpfil/pf/pf_lb.c optional pf inet > netpfil/pf/pf_norm.c optional pf inet > netpfil/pf/pf_osfp.c optional pf inet > netpfil/pf/pf_ruleset.c optional pf inet > 
netpfil/pf/pf_table.c optional pf inet > netpfil/pf/in4_cksum.c optional pf inet > netsmb/smb_conn.c optional netsmb > netsmb/smb_crypt.c optional netsmb > netsmb/smb_dev.c optional netsmb > netsmb/smb_iod.c optional netsmb > netsmb/smb_rq.c optional netsmb > netsmb/smb_smb.c optional netsmb > netsmb/smb_subr.c optional netsmb > netsmb/smb_trantcp.c optional netsmb > netsmb/smb_usr.c optional netsmb > nfs/bootp_subr.c optional bootp nfsclient | bootp nfscl > nfs/krpc_subr.c optional bootp nfsclient | bootp nfscl > nfs/nfs_common.c optional nfsclient | nfsserver > nfs/nfs_diskless.c optional nfsclient nfs_root | nfscl nfs_root > nfs/nfs_fha.c optional nfsserver | nfsd > nfs/nfs_lock.c optional nfsclient | nfscl | nfslockd | nfsd > nfsclient/nfs_bio.c optional nfsclient > nfsclient/nfs_node.c optional nfsclient > nfsclient/nfs_krpc.c optional nfsclient > nfsclient/nfs_subs.c optional nfsclient > nfsclient/nfs_nfsiod.c optional nfsclient > nfsclient/nfs_vfsops.c optional nfsclient > nfsclient/nfs_vnops.c optional nfsclient > nfsserver/nfs_fha_old.c optional nfsserver > nfsserver/nfs_serv.c optional nfsserver > nfsserver/nfs_srvkrpc.c optional nfsserver > nfsserver/nfs_srvsubs.c optional nfsserver > nfs/nfs_nfssvc.c optional nfsserver | nfscl | nfsd > nlm/nlm_advlock.c optional nfslockd | nfsd > nlm/nlm_prot_clnt.c optional nfslockd | nfsd > nlm/nlm_prot_impl.c optional nfslockd | nfsd > nlm/nlm_prot_server.c optional nfslockd | nfsd > nlm/nlm_prot_svc.c optional nfslockd | nfsd > nlm/nlm_prot_xdr.c optional nfslockd | nfsd > nlm/sm_inter_xdr.c optional nfslockd | nfsd > > # OpenFabrics Enterprise Distribution (Infiniband) > ofed/include/linux/linux_compat.c optional ofed \ > no-depend compile-with "${OFED_C}" > ofed/include/linux/linux_idr.c optional ofed \ > no-depend compile-with "${OFED_C}" > ofed/include/linux/linux_radix.c optional ofed \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/core/addr.c optional ofed \ > no-depend \ > compile-with 
"${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/agent.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/cache.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > # XXX Mad.c must be ordered before cm.c for sysinit sets to occur in > # the correct order. > ofed/drivers/infiniband/core/mad.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/cm.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/ -Wno-unused-function" > ofed/drivers/infiniband/core/cma.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/device.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/fmr_pool.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/iwcm.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/local_sa.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/mad_rmpp.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/multicast.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/notice.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/packer.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/sa_query.c optional ofed \ > no-depend \ > compile-with "${OFED_C} 
-I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/smi.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/sysfs.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/ucm.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/ucma.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/ud_header.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/umem.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/user_mad.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/uverbs_cmd.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/uverbs_main.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/uverbs_marshall.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > ofed/drivers/infiniband/core/verbs.c optional ofed \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/core/" > > ofed/drivers/infiniband/ulp/ipoib/ipoib_cm.c optional ipoib \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" > #ofed/drivers/infiniband/ulp/ipoib/ipoib_fs.c optional ipoib \ > # no-depend \ > # compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" > ofed/drivers/infiniband/ulp/ipoib/ipoib_ib.c optional ipoib \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" > ofed/drivers/infiniband/ulp/ipoib/ipoib_main.c optional 
ipoib \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" > ofed/drivers/infiniband/ulp/ipoib/ipoib_multicast.c optional ipoib \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" > ofed/drivers/infiniband/ulp/ipoib/ipoib_verbs.c optional ipoib \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" > #ofed/drivers/infiniband/ulp/ipoib/ipoib_vlan.c optional ipoib \ > # no-depend \ > # compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/ipoib/" > > ofed/drivers/infiniband/ulp/sdp/sdp_bcopy.c optional sdp inet \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/sdp/" > ofed/drivers/infiniband/ulp/sdp/sdp_main.c optional sdp inet \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/sdp/" > ofed/drivers/infiniband/ulp/sdp/sdp_rx.c optional sdp inet \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/sdp/" > ofed/drivers/infiniband/ulp/sdp/sdp_cma.c optional sdp inet \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/sdp/" > ofed/drivers/infiniband/ulp/sdp/sdp_tx.c optional sdp inet \ > no-depend \ > compile-with "${OFED_C} -I$S/ofed/drivers/infiniband/ulp/sdp/" > > ofed/drivers/infiniband/hw/mlx4/alias_GUID.c optional mlx4ib \ > no-depend obj-prefix "mlx4ib_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/infiniband/hw/mlx4/" > ofed/drivers/infiniband/hw/mlx4/mcg.c optional mlx4ib \ > no-depend obj-prefix "mlx4ib_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/infiniband/hw/mlx4/" > ofed/drivers/infiniband/hw/mlx4/sysfs.c optional mlx4ib \ > no-depend obj-prefix "mlx4ib_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/infiniband/hw/mlx4/" > ofed/drivers/infiniband/hw/mlx4/cm.c optional mlx4ib \ > no-depend obj-prefix "mlx4ib_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/infiniband/hw/mlx4/" > ofed/drivers/infiniband/hw/mlx4/ah.c optional mlx4ib \ > no-depend 
obj-prefix "mlx4ib_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/infiniband/hw/mlx4/" > ofed/drivers/infiniband/hw/mlx4/cq.c optional mlx4ib \ > no-depend obj-prefix "mlx4ib_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/infiniband/hw/mlx4/" > ofed/drivers/infiniband/hw/mlx4/doorbell.c optional mlx4ib \ > no-depend obj-prefix "mlx4ib_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/infiniband/hw/mlx4/" > ofed/drivers/infiniband/hw/mlx4/mad.c optional mlx4ib \ > no-depend obj-prefix "mlx4ib_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/infiniband/hw/mlx4/" > ofed/drivers/infiniband/hw/mlx4/main.c optional mlx4ib \ > no-depend obj-prefix "mlx4ib_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/infiniband/hw/mlx4/" > ofed/drivers/infiniband/hw/mlx4/mr.c optional mlx4ib \ > no-depend obj-prefix "mlx4ib_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/infiniband/hw/mlx4/" > ofed/drivers/infiniband/hw/mlx4/qp.c optional mlx4ib \ > no-depend obj-prefix "mlx4ib_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/infiniband/hw/mlx4/" > ofed/drivers/infiniband/hw/mlx4/srq.c optional mlx4ib \ > no-depend obj-prefix "mlx4ib_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/infiniband/hw/mlx4/" > ofed/drivers/infiniband/hw/mlx4/wc.c optional mlx4ib \ > no-depend obj-prefix "mlx4ib_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/infiniband/hw/mlx4/" > > ofed/drivers/net/mlx4/alloc.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/catas.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/cmd.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/cq.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} 
-I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/eq.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/fw.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/icm.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/intf.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/main.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/mcg.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/ -Wno-unused" > ofed/drivers/net/mlx4/mr.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/pd.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/port.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/profile.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/qp.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/reset.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/sense.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > 
ofed/drivers/net/mlx4/srq.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/resource_tracker.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/sys_tune.c optional mlx4ib | mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > > ofed/drivers/net/mlx4/en_cq.c optional mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/utils.c optional mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/en_main.c optional mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/en_netdev.c optional mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/en_port.c optional mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/en_resources.c optional mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/en_rx.c optional mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > ofed/drivers/net/mlx4/en_tx.c optional mlxen \ > no-depend obj-prefix "mlx4_" \ > compile-with "${OFED_C_NOIMP} -I$S/ofed/drivers/net/mlx4/" > > ofed/drivers/infiniband/hw/mthca/mthca_allocator.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_av.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_catas.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_cmd.c optional 
mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_cq.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_eq.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_mad.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_main.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_mcg.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_memfree.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_mr.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_pd.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_profile.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_provider.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_qp.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_reset.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_srq.c optional mthca \ > no-depend compile-with "${OFED_C}" > ofed/drivers/infiniband/hw/mthca/mthca_uar.c optional mthca \ > no-depend compile-with "${OFED_C}" > > # crypto support > opencrypto/cast.c optional crypto | ipsec > opencrypto/criov.c optional crypto > opencrypto/crypto.c optional crypto > opencrypto/cryptodev.c optional cryptodev > opencrypto/cryptodev_if.m optional crypto > opencrypto/cryptosoft.c optional crypto > opencrypto/cryptodeflate.c optional crypto > opencrypto/rmd160.c optional crypto | ipsec > opencrypto/skipjack.c optional crypto > opencrypto/xform.c optional crypto > rpc/auth_none.c optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd > 
> rpc/auth_unix.c			optional krpc | nfslockd | nfsclient | nfscl | nfsd
> rpc/authunix_prot.c		optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> rpc/clnt_bck.c			optional krpc | nfslockd | nfsserver | nfscl | nfsd
> rpc/clnt_dg.c			optional krpc | nfslockd | nfsclient | nfscl | nfsd
> rpc/clnt_rc.c			optional krpc | nfslockd | nfsclient | nfscl | nfsd
> rpc/clnt_vc.c			optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> rpc/getnetconfig.c		optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> rpc/replay.c			optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> rpc/rpc_callmsg.c		optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> rpc/rpc_generic.c		optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> rpc/rpc_prot.c			optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> rpc/rpcb_clnt.c			optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> rpc/rpcb_prot.c			optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> rpc/svc.c			optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> rpc/svc_auth.c			optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> rpc/svc_auth_unix.c		optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> rpc/svc_dg.c			optional krpc | nfslockd | nfsserver | nfscl | nfsd
> rpc/svc_generic.c		optional krpc | nfslockd | nfsserver | nfscl | nfsd
> rpc/svc_vc.c			optional krpc | nfslockd | nfsserver | nfscl | nfsd
> rpc/rpcsec_gss/rpcsec_gss.c	optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi
> rpc/rpcsec_gss/rpcsec_gss_conf.c	optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi
> rpc/rpcsec_gss/rpcsec_gss_misc.c	optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi
> rpc/rpcsec_gss/rpcsec_gss_prot.c	optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi
> rpc/rpcsec_gss/svc_rpcsec_gss.c	optional krpc kgssapi | nfslockd kgssapi | nfscl kgssapi | nfsd kgssapi
> security/audit/audit.c		optional audit
> security/audit/audit_arg.c	optional audit
> security/audit/audit_bsm.c	optional audit
> security/audit/audit_bsm_klib.c	optional audit
> security/audit/audit_pipe.c	optional audit
> security/audit/audit_syscalls.c	standard
> security/audit/audit_trigger.c	optional audit
> security/audit/audit_worker.c	optional audit
> security/audit/bsm_domain.c	optional audit
> security/audit/bsm_errno.c	optional audit
> security/audit/bsm_fcntl.c	optional audit
> security/audit/bsm_socket_type.c	optional audit
> security/audit/bsm_token.c	optional audit
> security/mac/mac_audit.c	optional mac audit
> security/mac/mac_cred.c		optional mac
> security/mac/mac_framework.c	optional mac
> security/mac/mac_inet.c		optional mac inet | mac inet6
> security/mac/mac_inet6.c	optional mac inet6
> security/mac/mac_label.c	optional mac
> security/mac/mac_net.c		optional mac
> security/mac/mac_pipe.c		optional mac
> security/mac/mac_posix_sem.c	optional mac
> security/mac/mac_posix_shm.c	optional mac
> security/mac/mac_priv.c		optional mac
> security/mac/mac_process.c	optional mac
> security/mac/mac_socket.c	optional mac
> security/mac/mac_syscalls.c	standard
> security/mac/mac_system.c	optional mac
> security/mac/mac_sysv_msg.c	optional mac
> security/mac/mac_sysv_sem.c	optional mac
> security/mac/mac_sysv_shm.c	optional mac
> security/mac/mac_vfs.c		optional mac
> security/mac_biba/mac_biba.c	optional mac_biba
> security/mac_bsdextended/mac_bsdextended.c	optional mac_bsdextended
> security/mac_bsdextended/ugidfw_system.c	optional mac_bsdextended
> security/mac_bsdextended/ugidfw_vnode.c	optional mac_bsdextended
> security/mac_ifoff/mac_ifoff.c	optional mac_ifoff
> security/mac_lomac/mac_lomac.c	optional mac_lomac
> security/mac_mls/mac_mls.c	optional mac_mls
> security/mac_none/mac_none.c	optional mac_none
> security/mac_partition/mac_partition.c	optional mac_partition
> security/mac_portacl/mac_portacl.c	optional mac_portacl
> security/mac_seeotheruids/mac_seeotheruids.c	optional mac_seeotheruids
> security/mac_stub/mac_stub.c	optional mac_stub
> security/mac_test/mac_test.c	optional mac_test
> teken/teken.c			optional sc | vt
> ufs/ffs/ffs_alloc.c		optional ffs
> ufs/ffs/ffs_balloc.c		optional ffs
> ufs/ffs/ffs_inode.c		optional ffs
> ufs/ffs/ffs_snapshot.c		optional ffs
> ufs/ffs/ffs_softdep.c		optional ffs
> ufs/ffs/ffs_subr.c		optional ffs
> ufs/ffs/ffs_tables.c		optional ffs
> ufs/ffs/ffs_vfsops.c		optional ffs
> ufs/ffs/ffs_vnops.c		optional ffs
> ufs/ffs/ffs_rawread.c		optional ffs directio
> ufs/ffs/ffs_suspend.c		optional ffs
> ufs/ufs/ufs_acl.c		optional ffs
> ufs/ufs/ufs_bmap.c		optional ffs
> ufs/ufs/ufs_dirhash.c		optional ffs
> ufs/ufs/ufs_extattr.c		optional ffs
> ufs/ufs/ufs_gjournal.c		optional ffs UFS_GJOURNAL
> ufs/ufs/ufs_inode.c		optional ffs
> ufs/ufs/ufs_lookup.c		optional ffs
> ufs/ufs/ufs_quota.c		optional ffs
> ufs/ufs/ufs_vfsops.c		optional ffs
> ufs/ufs/ufs_vnops.c		optional ffs
> vm/default_pager.c		standard
> vm/device_pager.c		standard
> vm/phys_pager.c			standard
> vm/redzone.c			optional DEBUG_REDZONE
> vm/sg_pager.c			standard
> vm/swap_pager.c			standard
> vm/uma_core.c			standard
> vm/uma_dbg.c			standard
> vm/memguard.c			optional DEBUG_MEMGUARD
> vm/vm_fault.c			standard
> vm/vm_glue.c			standard
> vm/vm_init.c			standard
> vm/vm_kern.c			standard
> vm/vm_map.c			standard
> vm/vm_meter.c			standard
> vm/vm_mmap.c			standard
> vm/vm_object.c			standard
> vm/vm_page.c			standard
> vm/vm_pageout.c			standard
> vm/vm_pager.c			standard
> vm/vm_phys.c			standard
> vm/vm_radix.c			standard
> vm/vm_reserv.c			standard
> vm/vm_unix.c			standard
> vm/vm_zeroidle.c		standard
> vm/vnode_pager.c		standard
> xen/gnttab.c			optional xen | xenhvm
> xen/features.c			optional xen | xenhvm
> xen/xenbus/xenbus_if.m		optional xen | xenhvm
> xen/xenbus/xenbus.c		optional xen | xenhvm
> xen/xenbus/xenbusb_if.m		optional xen | xenhvm
> xen/xenbus/xenbusb.c		optional xen | xenhvm
> xen/xenbus/xenbusb_front.c	optional xen | xenhvm
> xen/xenbus/xenbusb_back.c	optional xen | xenhvm
> xdr/xdr.c			optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> xdr/xdr_array.c			optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> xdr/xdr_mbuf.c			optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> xdr/xdr_mem.c			optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> xdr/xdr_reference.c		optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
> xdr/xdr_sizeof.c		optional krpc | nfslockd | nfsclient | nfsserver | nfscl | nfsd
>diff --git a/sys/i386/include/dump.h b/sys/i386/include/dump.h
>new file mode 100644
>index 0000000..2da561e
>--- /dev/null
>+++ b/sys/i386/include/dump.h
>@@ -0,0 +1,81 @@
>+/*-
>+ * Copyright (c) 2014 EMC Corp.
>+ * Copyright (c) 2014 Conrad Meyer <conrad.meyer@isilon.com>
>+ * All rights reserved.
>+ *
>+ * Redistribution and use in source and binary forms, with or without
>+ * modification, are permitted provided that the following conditions
>+ * are met:
>+ * 1. Redistributions of source code must retain the above copyright
>+ *    notice, this list of conditions and the following disclaimer.
>+ * 2. Redistributions in binary form must reproduce the above copyright
>+ *    notice, this list of conditions and the following disclaimer in the
>+ *    documentation and/or other materials provided with the distribution.
>+ *
>+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
>+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
>+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
>+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
>+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
>+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
>+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
>+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
>+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
>+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
>+ * SUCH DAMAGE.
>+ *
>+ * $FreeBSD$
>+ */
>+
>+#ifndef _MACHINE_DUMP_H_
>+#define _MACHINE_DUMP_H_
>+
>+#define KERNELDUMP_VERSION	KERNELDUMP_I386_VERSION
>+#define EM_VALUE		EM_386
>+/* 20 phys_avail entry pairs correspond to 10 md_pa's */
>+#define DUMPSYS_MD_PA_NPAIRS	10
>+#define DUMPSYS_NUM_AUX_HDRS	0
>+
>+static inline void
>+dumpsys_md_pa_init(void)
>+{
>+
>+	dumpsys_gen_md_pa_init();
>+}
>+
>+static inline struct dump_pa *
>+dumpsys_md_pa_next(struct dump_pa *p)
>+{
>+
>+	return (dumpsys_gen_md_pa_next(p));
>+}
>+
>+static inline void
>+dumpsys_wbinv_all(void)
>+{
>+
>+	dumpsys_gen_wbinv_all();
>+}
>+
>+static inline void
>+dumpsys_unmap_chunk(vm_paddr_t pa, size_t s, void *va)
>+{
>+
>+	dumpsys_gen_unmap_chunk(pa, s, va);
>+}
>+
>+static inline int
>+dumpsys_write_aux_headers(struct dumperinfo *di)
>+{
>+
>+	return (dumpsys_gen_write_aux_headers(di));
>+}
>+
>+static inline int
>+dumpsys(struct dumperinfo *di)
>+{
>+
>+	return (dumpsys_generic(di));
>+}
>+
>+#endif /* !_MACHINE_DUMP_H_ */
>diff --git a/sys/kern/kern_dump.c b/sys/kern/kern_dump.c
>new file mode 100644
>index 0000000..f86223e
>--- /dev/null
>+++ b/sys/kern/kern_dump.c
>@@ -0,0 +1,398 @@
>+/*-
>+ * Copyright (c) 2002 Marcel Moolenaar
>+ * All rights reserved.
>+ *
>+ * Redistribution and use in source and binary forms, with or without
>+ * modification, are permitted provided that the following conditions
>+ * are met:
>+ *
>+ * 1. Redistributions of source code must retain the above copyright
>+ *    notice, this list of conditions and the following disclaimer.
>+ * 2. Redistributions in binary form must reproduce the above copyright
>+ *    notice, this list of conditions and the following disclaimer in the
>+ *    documentation and/or other materials provided with the distribution.
>+ *
>+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
>+ * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
>+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
>+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
>+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
>+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
>+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
>+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
>+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
>+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
>+ */
>+
>+#include <sys/cdefs.h>
>+__FBSDID("$FreeBSD$");
>+
>+#include "opt_watchdog.h"
>+
>+#include <sys/param.h>
>+#include <sys/systm.h>
>+#include <sys/conf.h>
>+#include <sys/cons.h>
>+#include <sys/kernel.h>
>+#include <sys/proc.h>
>+#include <sys/kerneldump.h>
>+#ifdef SW_WATCHDOG
>+#include <sys/watchdog.h>
>+#endif
>+#include <vm/vm.h>
>+#include <vm/vm_param.h>
>+#include <vm/pmap.h>
>+#include <machine/dump.h>
>+#include <machine/elf.h>
>+#include <machine/md_var.h>
>+#include <machine/pcb.h>
>+
>+CTASSERT(sizeof(struct kerneldumpheader) == 512);
>+
>+/*
>+ * Don't touch the first SIZEOF_METADATA bytes on the dump device.  This
>+ * is to protect us from metadata and to protect metadata from us.
>+ */
>+#define SIZEOF_METADATA		(64*1024)
>+
>+#define MD_ALIGN(x)	(((off_t)(x) + PAGE_MASK) & ~PAGE_MASK)
>+#define DEV_ALIGN(x)	(((off_t)(x) + (DEV_BSIZE-1)) & ~(DEV_BSIZE-1))
>+
>+off_t dumplo;
>+
>+/* Handle buffered writes. */
>+static char buffer[DEV_BSIZE];
>+static size_t fragsz;
>+
>+struct dump_pa dump_map[DUMPSYS_MD_PA_NPAIRS];
>+
>+void
>+dumpsys_gen_md_pa_init(void)
>+{
>+#if !defined(__sparc__) && !defined(__powerpc__)
>+	int n, idx;
>+
>+	bzero(dump_map, sizeof(dump_map));
>+	for (n = 0; n < sizeof(dump_map) / sizeof(dump_map[0]); n++) {
>+		idx = n * 2;
>+		if (dump_avail[idx] == 0 && dump_avail[idx + 1] == 0)
>+			break;
>+		dump_map[n].md_start = dump_avail[idx];
>+		dump_map[n].md_size = dump_avail[idx + 1] - dump_avail[idx];
>+	}
>+#endif
>+}
>+
>+struct dump_pa *
>+dumpsys_gen_md_pa_next(struct dump_pa *mdp)
>+{
>+
>+	if (mdp == NULL)
>+		return (&dump_map[0]);
>+
>+	mdp++;
>+	if (mdp->md_size == 0)
>+		mdp = NULL;
>+	return (mdp);
>+}
>+
>+void
>+dumpsys_gen_wbinv_all(void)
>+{
>+
>+	/* nop */;
>+}
>+
>+void
>+dumpsys_gen_unmap_chunk(vm_paddr_t pa __unused, size_t chunk __unused,
>+    void *va __unused)
>+{
>+
>+	/* nop */;
>+}
>+
>+int
>+dumpsys_gen_write_aux_headers(struct dumperinfo *di)
>+{
>+
>+	/* nop */
>+	return (0);
>+}
>+
>+int
>+dumpsys_buf_write(struct dumperinfo *di, char *ptr, size_t sz)
>+{
>+	size_t len;
>+	int error;
>+
>+	while (sz) {
>+		len = DEV_BSIZE - fragsz;
>+		if (len > sz)
>+			len = sz;
>+		bcopy(ptr, buffer + fragsz, len);
>+		fragsz += len;
>+		ptr += len;
>+		sz -= len;
>+		if (fragsz == DEV_BSIZE) {
>+			error = dump_write(di, buffer, 0, dumplo,
>+			    DEV_BSIZE);
>+			if (error)
>+				return error;
>+			dumplo += DEV_BSIZE;
>+			fragsz = 0;
>+		}
>+	}
>+
>+	return (0);
>+}
>+
>+int
>+dumpsys_buf_flush(struct dumperinfo *di)
>+{
>+	int error;
>+
>+	if (fragsz == 0)
>+		return (0);
>+
>+	error = dump_write(di, buffer, 0, dumplo, DEV_BSIZE);
>+	dumplo += DEV_BSIZE;
>+	fragsz = 0;
>+	return (error);
>+}
>+
>+CTASSERT(PAGE_SHIFT < 20);
>+#define PG2MB(pgs) ((pgs + (1 << (20 - PAGE_SHIFT)) - 1) >> (20 - PAGE_SHIFT))
>+
>+int
>+dumpsys_cb_dumpdata(struct dump_pa *mdp, int seqnr, void *arg)
>+{
>+	struct dumperinfo *di = (struct dumperinfo*)arg;
>+	vm_paddr_t pa;
>+	void *va;
>+	uint64_t pgs;
>+	size_t counter, sz, chunk;
>+	int c, error, twiddle;
>+	u_int maxdumppgs;
>+
>+	error = 0;	/* catch case in which chunk size is 0 */
>+	counter = 0;	/* Update twiddle every 16MB */
>+	twiddle = 0;
>+	va = 0;
>+	pgs = mdp->md_size / PAGE_SIZE;
>+	pa = mdp->md_start;
>+	maxdumppgs = min(di->maxiosize / PAGE_SIZE, MAXDUMPPGS);
>+	if (maxdumppgs == 0)	/* seatbelt */
>+		maxdumppgs = 1;
>+
>+	printf(" chunk %d: %juMB (%ju pages)", seqnr, (uintmax_t)PG2MB(pgs),
>+	    (uintmax_t)pgs);
>+
>+	dumpsys_wbinv_all();
>+	while (pgs) {
>+		chunk = pgs;
>+		if (chunk > maxdumppgs)
>+			chunk = maxdumppgs;
>+		sz = chunk << PAGE_SHIFT;
>+		counter += sz;
>+		if (counter >> 24) {
>+			printf(" %ju", (uintmax_t)PG2MB(pgs));
>+			counter &= (1<<24) - 1;
>+		}
>+
>+		dumpsys_map_chunk(pa, chunk, &va);
>+#ifdef SW_WATCHDOG
>+		wdog_kern_pat(WD_LASTVAL);
>+#endif
>+
>+		error = dump_write(di, va, 0, dumplo, sz);
>+		dumpsys_unmap_chunk(pa, chunk, va);
>+		if (error)
>+			break;
>+		dumplo += sz;
>+		pgs -= chunk;
>+		pa += sz;
>+
>+		/* Check for user abort. */
>+		c = cncheckc();
>+		if (c == 0x03)
>+			return (ECANCELED);
>+		if (c != -1)
>+			printf(" (CTRL-C to abort) ");
>+	}
>+	printf(" ... %s\n", (error) ? "fail" : "ok");
>+	return (error);
>+}
>+
>+int
>+dumpsys_foreach_chunk(dumpsys_callback_t cb, void *arg)
>+{
>+	struct dump_pa *mdp;
>+	int error, seqnr;
>+
>+	seqnr = 0;
>+	mdp = dumpsys_md_pa_next(NULL);
>+	while (mdp != NULL) {
>+		error = (*cb)(mdp, seqnr++, arg);
>+		if (error)
>+			return (-error);
>+		mdp = dumpsys_md_pa_next(mdp);
>+	}
>+	return (seqnr);
>+}
>+
>+static off_t fileofs;
>+
>+static int
>+cb_dumphdr(struct dump_pa *mdp, int seqnr, void *arg)
>+{
>+	struct dumperinfo *di = (struct dumperinfo*)arg;
>+	Elf_Phdr phdr;
>+	uint64_t size;
>+	int error;
>+
>+	size = mdp->md_size;
>+	bzero(&phdr, sizeof(phdr));
>+	phdr.p_type = PT_LOAD;
>+	phdr.p_flags = PF_R;			/* XXX */
>+	phdr.p_offset = fileofs;
>+#ifdef __powerpc__
>+	phdr.p_vaddr = (do_minidump? mdp->md_start : ~0L);
>+	phdr.p_paddr = (do_minidump? ~0L : mdp->md_start);
>+#else
>+	phdr.p_vaddr = mdp->md_start;
>+	phdr.p_paddr = mdp->md_start;
>+#endif
>+	phdr.p_filesz = size;
>+	phdr.p_memsz = size;
>+	phdr.p_align = PAGE_SIZE;
>+
>+	error = dumpsys_buf_write(di, (char*)&phdr, sizeof(phdr));
>+	fileofs += phdr.p_filesz;
>+	return (error);
>+}
>+
>+static int
>+cb_size(struct dump_pa *mdp, int seqnr, void *arg)
>+{
>+	uint64_t *sz = (uint64_t*)arg;
>+
>+	*sz += (uint64_t)mdp->md_size;
>+	return (0);
>+}
>+
>+int
>+dumpsys_generic(struct dumperinfo *di)
>+{
>+	struct kerneldumpheader kdh;
>+	Elf_Ehdr ehdr;
>+	uint64_t dumpsize;
>+	off_t hdrgap;
>+	size_t hdrsz;
>+	int error;
>+
>+#ifndef __powerpc__
>+	if (do_minidump)
>+		return (minidumpsys(di));
>+#endif
>+	bzero(&ehdr, sizeof(ehdr));
>+	ehdr.e_ident[EI_MAG0] = ELFMAG0;
>+	ehdr.e_ident[EI_MAG1] = ELFMAG1;
>+	ehdr.e_ident[EI_MAG2] = ELFMAG2;
>+	ehdr.e_ident[EI_MAG3] = ELFMAG3;
>+	ehdr.e_ident[EI_CLASS] = ELF_CLASS;
>+#if BYTE_ORDER == LITTLE_ENDIAN
>+	ehdr.e_ident[EI_DATA] = ELFDATA2LSB;
>+#else
>+	ehdr.e_ident[EI_DATA] = ELFDATA2MSB;
>+#endif
>+	ehdr.e_ident[EI_VERSION] = EV_CURRENT;
>+	ehdr.e_ident[EI_OSABI] = ELFOSABI_STANDALONE;	/* XXX big picture? */
>+	ehdr.e_type = ET_CORE;
>+	ehdr.e_machine = EM_VALUE;
>+	ehdr.e_phoff = sizeof(ehdr);
>+	ehdr.e_flags = 0;
>+	ehdr.e_ehsize = sizeof(ehdr);
>+	ehdr.e_phentsize = sizeof(Elf_Phdr);
>+	ehdr.e_shentsize = sizeof(Elf_Shdr);
>+
>+	dumpsys_md_pa_init();
>+
>+	/* Calculate dump size. */
>+	dumpsize = 0L;
>+	ehdr.e_phnum = dumpsys_foreach_chunk(cb_size, &dumpsize) +
>+	    DUMPSYS_NUM_AUX_HDRS;
>+	hdrsz = ehdr.e_phoff + ehdr.e_phnum * ehdr.e_phentsize;
>+	fileofs = MD_ALIGN(hdrsz);
>+	dumpsize += fileofs;
>+	hdrgap = fileofs - DEV_ALIGN(hdrsz);
>+
>+	/* Determine dump offset on device. */
>+	if (di->mediasize < SIZEOF_METADATA + dumpsize + sizeof(kdh) * 2) {
>+		error = ENOSPC;
>+		goto fail;
>+	}
>+	dumplo = di->mediaoffset + di->mediasize - dumpsize;
>+	dumplo -= sizeof(kdh) * 2;
>+
>+	mkdumpheader(&kdh, KERNELDUMPMAGIC, KERNELDUMP_VERSION, dumpsize,
>+	    di->blocksize);
>+
>+	printf("Dumping %ju MB (%d chunks)\n", (uintmax_t)dumpsize >> 20,
>+	    ehdr.e_phnum - DUMPSYS_NUM_AUX_HDRS);
>+
>+	/* Dump leader */
>+	error = dump_write(di, &kdh, 0, dumplo, sizeof(kdh));
>+	if (error)
>+		goto fail;
>+	dumplo += sizeof(kdh);
>+
>+	/* Dump ELF header */
>+	error = dumpsys_buf_write(di, (char*)&ehdr, sizeof(ehdr));
>+	if (error)
>+		goto fail;
>+
>+	/* Dump program headers */
>+	error = dumpsys_foreach_chunk(cb_dumphdr, di);
>+	if (error < 0)
>+		goto fail;
>+	error = dumpsys_write_aux_headers(di);
>+	if (error < 0)
>+		goto fail;
>+	dumpsys_buf_flush(di);
>+
>+	/*
>+	 * All headers are written using blocked I/O, so we know the
>+	 * current offset is (still) block aligned.  Skip the alignment
>+	 * in the file to have the segment contents aligned at page
>+	 * boundary.  We cannot use MD_ALIGN on dumplo, because we don't
>+	 * care and may very well be unaligned within the dump device.
>+	 */
>+	dumplo += hdrgap;
>+
>+	/* Dump memory chunks (updates dumplo) */
>+	error = dumpsys_foreach_chunk(dumpsys_cb_dumpdata, di);
>+	if (error < 0)
>+		goto fail;
>+
>+	/* Dump trailer */
>+	error = dump_write(di, &kdh, 0, dumplo, sizeof(kdh));
>+	if (error)
>+		goto fail;
>+
>+	/* Signal completion, signoff and exit stage left. */
>+	dump_write(di, NULL, 0, 0, 0);
>+	printf("\nDump complete\n");
>+	return (0);
>+
>+ fail:
>+	if (error < 0)
>+		error = -error;
>+
>+	if (error == ECANCELED)
>+		printf("\nDump aborted\n");
>+	else if (error == ENOSPC)
>+		printf("\nDump failed. Partition too small.\n");
>+	else
>+		printf("\n** DUMP FAILED (ERROR %d) **\n", error);
>+	return (error);
>+}
>diff --git a/sys/kern/kern_shutdown.c b/sys/kern/kern_shutdown.c
>index 357099b..e547c5f 100644
>--- a/sys/kern/kern_shutdown.c
>+++ b/sys/kern/kern_shutdown.c
>@@ -1,890 +1,891 @@
> /*-
>  * Copyright (c) 1986, 1988, 1991, 1993
>  *	The Regents of the University of California.  All rights reserved.
>  * (c) UNIX System Laboratories, Inc.
>  * All or some portions of this file are derived from material licensed
>  * to the University of California by American Telephone and Telegraph
>  * Co. or Unix System Laboratories, Inc. and are reproduced herein with
>  * the permission of UNIX System Laboratories, Inc.
>  *
>  * Redistribution and use in source and binary forms, with or without
>  * modification, are permitted provided that the following conditions
>  * are met:
>  * 1. Redistributions of source code must retain the above copyright
>  *    notice, this list of conditions and the following disclaimer.
>  * 2. Redistributions in binary form must reproduce the above copyright
>  *    notice, this list of conditions and the following disclaimer in the
>  *    documentation and/or other materials provided with the distribution.
>  * 4. Neither the name of the University nor the names of its contributors
>  *    may be used to endorse or promote products derived from this software
>  *    without specific prior written permission.
>  *
>  * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
>  * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
>  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
>  * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
>  * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
>  * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
>  * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
>  * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
>  * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
>  * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
>  * SUCH DAMAGE.
>  *
>  *	@(#)kern_shutdown.c	8.3 (Berkeley) 1/21/94
>  */
> 
> #include <sys/cdefs.h>
> __FBSDID("$FreeBSD$");
> 
> #include "opt_ddb.h"
> #include "opt_kdb.h"
> #include "opt_panic.h"
> #include "opt_sched.h"
> #include "opt_watchdog.h"
> 
> #include <sys/param.h>
> #include <sys/systm.h>
> #include <sys/bio.h>
> #include <sys/buf.h>
> #include <sys/conf.h>
> #include <sys/cons.h>
> #include <sys/eventhandler.h>
> #include <sys/jail.h>
> #include <sys/kdb.h>
> #include <sys/kernel.h>
> #include <sys/kerneldump.h>
> #include <sys/kthread.h>
> #include <sys/ktr.h>
> #include <sys/malloc.h>
> #include <sys/mount.h>
> #include <sys/priv.h>
> #include <sys/proc.h>
> #include <sys/reboot.h>
> #include <sys/resourcevar.h>
> #include <sys/rwlock.h>
> #include <sys/sched.h>
> #include <sys/smp.h>
> #include <sys/sysctl.h>
> #include <sys/sysproto.h>
> #include <sys/vnode.h>
> #include <sys/watchdog.h>
> 
> #include <ddb/ddb.h>
> 
> #include <machine/cpu.h>
>+#include <machine/dump.h>
> #include <machine/pcb.h>
> #include <machine/smp.h>
> 
> #include <security/mac/mac_framework.h>
> 
> #include <vm/vm.h>
> #include <vm/vm_object.h>
> #include <vm/vm_page.h>
> #include <vm/vm_pager.h>
> #include <vm/swap_pager.h>
> 
> #include <sys/signalvar.h>
> 
> #ifndef PANIC_REBOOT_WAIT_TIME
> #define PANIC_REBOOT_WAIT_TIME 15 /* default to 15 seconds */
> #endif
> static int panic_reboot_wait_time = PANIC_REBOOT_WAIT_TIME;
> SYSCTL_INT(_kern, OID_AUTO, panic_reboot_wait_time, CTLFLAG_RWTUN,
>     &panic_reboot_wait_time, 0,
>     "Seconds to wait before rebooting after a panic");
> 
> /*
>  * Note that stdarg.h and the ANSI style va_start macro is used for both
>  * ANSI and traditional C compilers.
>  */
> #include <machine/stdarg.h>
> 
> #ifdef KDB
> #ifdef KDB_UNATTENDED
> int debugger_on_panic = 0;
> #else
> int debugger_on_panic = 1;
> #endif
> SYSCTL_INT(_debug, OID_AUTO, debugger_on_panic,
>     CTLFLAG_RWTUN | CTLFLAG_SECURE,
>     &debugger_on_panic, 0, "Run debugger on kernel panic");
> 
> #ifdef KDB_TRACE
> static int trace_on_panic = 1;
> #else
> static int trace_on_panic = 0;
> #endif
> SYSCTL_INT(_debug, OID_AUTO, trace_on_panic,
>     CTLFLAG_RWTUN | CTLFLAG_SECURE,
>     &trace_on_panic, 0, "Print stack trace on kernel panic");
> #endif /* KDB */
> 
> static int sync_on_panic = 0;
> SYSCTL_INT(_kern, OID_AUTO, sync_on_panic, CTLFLAG_RWTUN,
>     &sync_on_panic, 0, "Do a sync before rebooting from a panic");
> 
> static SYSCTL_NODE(_kern, OID_AUTO, shutdown, CTLFLAG_RW, 0,
>     "Shutdown environment");
> 
> #ifndef DIAGNOSTIC
> static int show_busybufs;
> #else
> static int show_busybufs = 1;
> #endif
> SYSCTL_INT(_kern_shutdown, OID_AUTO, show_busybufs, CTLFLAG_RW,
>     &show_busybufs, 0, "");
> 
> /*
>  * Variable panicstr contains argument to first call to panic; used as flag
>  * to indicate that the kernel has already called panic.
> */ > const char *panicstr; > > int dumping; /* system is dumping */ > int rebooting; /* system is rebooting */ > static struct dumperinfo dumper; /* our selected dumper */ > > /* Context information for dump-debuggers. */ > static struct pcb dumppcb; /* Registers. */ > lwpid_t dumptid; /* Thread ID. */ > > static void poweroff_wait(void *, int); > static void shutdown_halt(void *junk, int howto); > static void shutdown_panic(void *junk, int howto); > static void shutdown_reset(void *junk, int howto); > static void vpanic(const char *fmt, va_list ap) __dead2; > > /* register various local shutdown events */ > static void > shutdown_conf(void *unused) > { > > EVENTHANDLER_REGISTER(shutdown_final, poweroff_wait, NULL, > SHUTDOWN_PRI_FIRST); > EVENTHANDLER_REGISTER(shutdown_final, shutdown_halt, NULL, > SHUTDOWN_PRI_LAST + 100); > EVENTHANDLER_REGISTER(shutdown_final, shutdown_panic, NULL, > SHUTDOWN_PRI_LAST + 100); > EVENTHANDLER_REGISTER(shutdown_final, shutdown_reset, NULL, > SHUTDOWN_PRI_LAST + 200); > } > > SYSINIT(shutdown_conf, SI_SUB_INTRINSIC, SI_ORDER_ANY, shutdown_conf, NULL); > > /* > * The system call that results in a reboot. > */ > /* ARGSUSED */ > int > sys_reboot(struct thread *td, struct reboot_args *uap) > { > int error; > > error = 0; > #ifdef MAC > error = mac_system_check_reboot(td->td_ucred, uap->opt); > #endif > if (error == 0) > error = priv_check(td, PRIV_REBOOT); > if (error == 0) { > mtx_lock(&Giant); > kern_reboot(uap->opt); > mtx_unlock(&Giant); > } > return (error); > } > > /* > * Called by events that want to shut down.. e.g <CTL><ALT><DEL> on a PC > */ > void > shutdown_nice(int howto) > { > > if (initproc != NULL) { > /* Send a signal to init(8) and have it shutdown the world. 
*/ > PROC_LOCK(initproc); > if (howto & RB_POWEROFF) > kern_psignal(initproc, SIGUSR2); > else if (howto & RB_HALT) > kern_psignal(initproc, SIGUSR1); > else > kern_psignal(initproc, SIGINT); > PROC_UNLOCK(initproc); > } else { > /* No init(8) running, so simply reboot. */ > kern_reboot(howto | RB_NOSYNC); > } > } > > static void > print_uptime(void) > { > int f; > struct timespec ts; > > getnanouptime(&ts); > printf("Uptime: "); > f = 0; > if (ts.tv_sec >= 86400) { > printf("%ldd", (long)ts.tv_sec / 86400); > ts.tv_sec %= 86400; > f = 1; > } > if (f || ts.tv_sec >= 3600) { > printf("%ldh", (long)ts.tv_sec / 3600); > ts.tv_sec %= 3600; > f = 1; > } > if (f || ts.tv_sec >= 60) { > printf("%ldm", (long)ts.tv_sec / 60); > ts.tv_sec %= 60; > f = 1; > } > printf("%lds\n", (long)ts.tv_sec); > } > > int > doadump(boolean_t textdump) > { > boolean_t coredump; > int error; > > error = 0; > if (dumping) > return (EBUSY); > if (dumper.dumper == NULL) > return (ENXIO); > > savectx(&dumppcb); > dumptid = curthread->td_tid; > dumping++; > > coredump = TRUE; > #ifdef DDB > if (textdump && textdump_pending) { > coredump = FALSE; > textdump_dumpsys(&dumper); > } > #endif > if (coredump) > error = dumpsys(&dumper); > > dumping--; > return (error); > } > > static int > isbufbusy(struct buf *bp) > { > if (((bp->b_flags & (B_INVAL | B_PERSISTENT)) == 0 && > BUF_ISLOCKED(bp)) || > ((bp->b_flags & (B_DELWRI | B_INVAL)) == B_DELWRI)) > return (1); > return (0); > } > > /* > * Shutdown the system cleanly to prepare for reboot, halt, or power off. > */ > void > kern_reboot(int howto) > { > static int first_buf_printf = 1; > static int waittime = -1; > > #if defined(SMP) > /* > * Bind us to CPU 0 so that all shutdown code runs there. Some > * systems don't shutdown properly (i.e., ACPI power off) if we > * run on another processor. 
> */ > if (!SCHEDULER_STOPPED()) { > thread_lock(curthread); > sched_bind(curthread, 0); > thread_unlock(curthread); > KASSERT(PCPU_GET(cpuid) == 0, ("boot: not running on cpu 0")); > } > #endif > /* We're in the process of rebooting. */ > rebooting = 1; > > /* We are out of the debugger now. */ > kdb_active = 0; > > /* > * Do any callouts that should be done BEFORE syncing the filesystems. > */ > EVENTHANDLER_INVOKE(shutdown_pre_sync, howto); > > /* > * Now sync filesystems > */ > if (!cold && (howto & RB_NOSYNC) == 0 && waittime < 0) { > register struct buf *bp; > int iter, nbusy, pbusy; > #ifndef PREEMPTION > int subiter; > #endif > > waittime = 0; > > wdog_kern_pat(WD_LASTVAL); > sys_sync(curthread, NULL); > > /* > * With soft updates, some buffers that are > * written will be remarked as dirty until other > * buffers are written. > */ > for (iter = pbusy = 0; iter < 20; iter++) { > nbusy = 0; > for (bp = &buf[nbuf]; --bp >= buf; ) > if (isbufbusy(bp)) > nbusy++; > if (nbusy == 0) { > if (first_buf_printf) > printf("All buffers synced."); > break; > } > if (first_buf_printf) { > printf("Syncing disks, buffers remaining... "); > first_buf_printf = 0; > } > printf("%d ", nbusy); > if (nbusy < pbusy) > iter = 0; > pbusy = nbusy; > > wdog_kern_pat(WD_LASTVAL); > sys_sync(curthread, NULL); > > #ifdef PREEMPTION > /* > * Drop Giant and spin for a while to allow > * interrupt threads to run. > */ > DROP_GIANT(); > DELAY(50000 * iter); > PICKUP_GIANT(); > #else > /* > * Drop Giant and context switch several times to > * allow interrupt threads to run. 
> */ > DROP_GIANT(); > for (subiter = 0; subiter < 50 * iter; subiter++) { > thread_lock(curthread); > mi_switch(SW_VOL, NULL); > thread_unlock(curthread); > DELAY(1000); > } > PICKUP_GIANT(); > #endif > } > printf("\n"); > /* > * Count only busy local buffers to prevent forcing > * a fsck if we're just a client of a wedged NFS server > */ > nbusy = 0; > for (bp = &buf[nbuf]; --bp >= buf; ) { > if (isbufbusy(bp)) { > #if 0 > /* XXX: This is bogus. We should probably have a BO_REMOTE flag instead */ > if (bp->b_dev == NULL) { > TAILQ_REMOVE(&mountlist, > bp->b_vp->v_mount, mnt_list); > continue; > } > #endif > nbusy++; > if (show_busybufs > 0) { > printf( > "%d: buf:%p, vnode:%p, flags:%0x, blkno:%jd, lblkno:%jd, buflock:", > nbusy, bp, bp->b_vp, bp->b_flags, > (intmax_t)bp->b_blkno, > (intmax_t)bp->b_lblkno); > BUF_LOCKPRINTINFO(bp); > if (show_busybufs > 1) > vn_printf(bp->b_vp, > "vnode content: "); > } > } > } > if (nbusy) { > /* > * Failed to sync all blocks. Indicate this and don't > * unmount filesystems (thus forcing an fsck on reboot). > */ > printf("Giving up on %d buffers\n", nbusy); > DELAY(5000000); /* 5 seconds */ > } else { > if (!first_buf_printf) > printf("Final sync complete\n"); > /* > * Unmount filesystems > */ > if (panicstr == 0) > vfs_unmountall(); > } > swapoff_all(); > DELAY(100000); /* wait for console output to finish */ > } > > print_uptime(); > > cngrab(); > > /* > * Ok, now do things that assume all filesystem activity has > * been completed. > */ > EVENTHANDLER_INVOKE(shutdown_post_sync, howto); > > if ((howto & (RB_HALT|RB_DUMP)) == RB_DUMP && !cold && !dumping) > doadump(TRUE); > > /* Now that we're going to really halt the system... */ > EVENTHANDLER_INVOKE(shutdown_final, howto); > > for(;;) ; /* safety against shutdown_reset not working */ > /* NOTREACHED */ > } > > /* > * If the shutdown was a clean halt, behave accordingly. 
> */ > static void > shutdown_halt(void *junk, int howto) > { > > if (howto & RB_HALT) { > printf("\n"); > printf("The operating system has halted.\n"); > printf("Please press any key to reboot.\n\n"); > switch (cngetc()) { > case -1: /* No console, just die */ > cpu_halt(); > /* NOTREACHED */ > default: > howto &= ~RB_HALT; > break; > } > } > } > > /* > * Check to see if the system paniced, pause and then reboot > * according to the specified delay. > */ > static void > shutdown_panic(void *junk, int howto) > { > int loop; > > if (howto & RB_DUMP) { > if (panic_reboot_wait_time != 0) { > if (panic_reboot_wait_time != -1) { > printf("Automatic reboot in %d seconds - " > "press a key on the console to abort\n", > panic_reboot_wait_time); > for (loop = panic_reboot_wait_time * 10; > loop > 0; --loop) { > DELAY(1000 * 100); /* 1/10th second */ > /* Did user type a key? */ > if (cncheckc() != -1) > break; > } > if (!loop) > return; > } > } else { /* zero time specified - reboot NOW */ > return; > } > printf("--> Press a key on the console to reboot,\n"); > printf("--> or switch off the system now.\n"); > cngetc(); > } > } > > /* > * Everything done, now reset > */ > static void > shutdown_reset(void *junk, int howto) > { > > printf("Rebooting...\n"); > DELAY(1000000); /* wait 1 sec for printf's to complete and be read */ > > /* > * Acquiring smp_ipi_mtx here has a double effect: > * - it disables interrupts avoiding CPU0 preemption > * by fast handlers (thus deadlocking against other CPUs) > * - it avoids deadlocks against smp_rendezvous() or, more > * generally, threads busy-waiting, with this spinlock held, > * and waiting for responses by threads on other CPUs > * (ie. smp_tlb_shootdown()). > * > * For the !SMP case it just needs to handle the former problem. 
> */ > #ifdef SMP > mtx_lock_spin(&smp_ipi_mtx); > #else > spinlock_enter(); > #endif > > /* cpu_boot(howto); */ /* doesn't do anything at the moment */ > cpu_reset(); > /* NOTREACHED */ /* assuming reset worked */ > } > > #if defined(WITNESS) || defined(INVARIANTS) > static int kassert_warn_only = 0; > #ifdef KDB > static int kassert_do_kdb = 0; > #endif > #ifdef KTR > static int kassert_do_ktr = 0; > #endif > static int kassert_do_log = 1; > static int kassert_log_pps_limit = 4; > static int kassert_log_mute_at = 0; > static int kassert_log_panic_at = 0; > static int kassert_warnings = 0; > > SYSCTL_NODE(_debug, OID_AUTO, kassert, CTLFLAG_RW, NULL, "kassert options"); > > SYSCTL_INT(_debug_kassert, OID_AUTO, warn_only, CTLFLAG_RWTUN, > &kassert_warn_only, 0, > "KASSERT triggers a panic (1) or just a warning (0)"); > > #ifdef KDB > SYSCTL_INT(_debug_kassert, OID_AUTO, do_kdb, CTLFLAG_RWTUN, > &kassert_do_kdb, 0, "KASSERT will enter the debugger"); > #endif > > #ifdef KTR > SYSCTL_UINT(_debug_kassert, OID_AUTO, do_ktr, CTLFLAG_RWTUN, > &kassert_do_ktr, 0, > "KASSERT does a KTR, set this to the KTRMASK you want"); > #endif > > SYSCTL_INT(_debug_kassert, OID_AUTO, do_log, CTLFLAG_RWTUN, > &kassert_do_log, 0, "KASSERT triggers a panic (1) or just a warning (0)"); > > SYSCTL_INT(_debug_kassert, OID_AUTO, warnings, CTLFLAG_RWTUN, > &kassert_warnings, 0, "number of KASSERTs that have been triggered"); > > SYSCTL_INT(_debug_kassert, OID_AUTO, log_panic_at, CTLFLAG_RWTUN, > &kassert_log_panic_at, 0, "max number of KASSERTS before we will panic"); > > SYSCTL_INT(_debug_kassert, OID_AUTO, log_pps_limit, CTLFLAG_RWTUN, > &kassert_log_pps_limit, 0, "limit number of log messages per second"); > > SYSCTL_INT(_debug_kassert, OID_AUTO, log_mute_at, CTLFLAG_RWTUN, > &kassert_log_mute_at, 0, "max number of KASSERTS to log"); > > static int kassert_sysctl_kassert(SYSCTL_HANDLER_ARGS); > > SYSCTL_PROC(_debug_kassert, OID_AUTO, kassert, > CTLTYPE_INT | CTLFLAG_RW | CTLFLAG_SECURE, 
NULL, 0, > kassert_sysctl_kassert, "I", "set to trigger a test kassert"); > > static int > kassert_sysctl_kassert(SYSCTL_HANDLER_ARGS) > { > int error, i; > > error = sysctl_wire_old_buffer(req, sizeof(int)); > if (error == 0) { > i = 0; > error = sysctl_handle_int(oidp, &i, 0, req); > } > if (error != 0 || req->newptr == NULL) > return (error); > KASSERT(0, ("kassert_sysctl_kassert triggered kassert %d", i)); > return (0); > } > > /* > * Called by KASSERT, this decides if we will panic > * or if we will log via printf and/or ktr. > */ > void > kassert_panic(const char *fmt, ...) > { > static char buf[256]; > va_list ap; > > va_start(ap, fmt); > (void)vsnprintf(buf, sizeof(buf), fmt, ap); > va_end(ap); > > /* > * panic if we're not just warning, or if we've exceeded > * kassert_log_panic_at warnings. > */ > if (!kassert_warn_only || > (kassert_log_panic_at > 0 && > kassert_warnings >= kassert_log_panic_at)) { > va_start(ap, fmt); > vpanic(fmt, ap); > /* NORETURN */ > } > #ifdef KTR > if (kassert_do_ktr) > CTR0(ktr_mask, buf); > #endif /* KTR */ > /* > * log if we've not yet met the mute limit. > */ > if (kassert_do_log && > (kassert_log_mute_at == 0 || > kassert_warnings < kassert_log_mute_at)) { > static struct timeval lasterr; > static int curerr; > > if (ppsratecheck(&lasterr, &curerr, kassert_log_pps_limit)) { > printf("KASSERT failed: %s\n", buf); > kdb_backtrace(); > } > } > #ifdef KDB > if (kassert_do_kdb) { > kdb_enter(KDB_WHY_KASSERT, buf); > } > #endif > atomic_add_int(&kassert_warnings, 1); > } > #endif > > /* > * Panic is called on unresolvable fatal errors. It prints "panic: mesg", > * and then reboots. If we are called twice, then we avoid trying to sync > * the disks as this often leads to recursive panics. > */ > void > panic(const char *fmt, ...) 
> { > va_list ap; > > va_start(ap, fmt); > vpanic(fmt, ap); > } > > static void > vpanic(const char *fmt, va_list ap) > { > #ifdef SMP > cpuset_t other_cpus; > #endif > struct thread *td = curthread; > int bootopt, newpanic; > static char buf[256]; > > spinlock_enter(); > > #ifdef SMP > /* > * stop_cpus_hard(other_cpus) should prevent multiple CPUs from > * concurrently entering panic. Only the winner will proceed > * further. > */ > if (panicstr == NULL && !kdb_active) { > other_cpus = all_cpus; > CPU_CLR(PCPU_GET(cpuid), &other_cpus); > stop_cpus_hard(other_cpus); > } > > /* > * We set stop_scheduler here and not in the block above, > * because we want to ensure that if panic has been called and > * stop_scheduler_on_panic is true, then stop_scheduler will > * always be set. Even if panic has been entered from kdb. > */ > td->td_stopsched = 1; > #endif > > bootopt = RB_AUTOBOOT; > newpanic = 0; > if (panicstr) > bootopt |= RB_NOSYNC; > else { > bootopt |= RB_DUMP; > panicstr = fmt; > newpanic = 1; > } > > if (newpanic) { > (void)vsnprintf(buf, sizeof(buf), fmt, ap); > panicstr = buf; > cngrab(); > printf("panic: %s\n", buf); > } else { > printf("panic: "); > vprintf(fmt, ap); > printf("\n"); > } > #ifdef SMP > printf("cpuid = %d\n", PCPU_GET(cpuid)); > #endif > > #ifdef KDB > if (newpanic && trace_on_panic) > kdb_backtrace(); > if (debugger_on_panic) > kdb_enter(KDB_WHY_PANIC, "panic"); > #endif > /*thread_lock(td); */ > td->td_flags |= TDF_INPANIC; > /* thread_unlock(td); */ > if (!sync_on_panic) > bootopt |= RB_NOSYNC; > kern_reboot(bootopt); > } > > /* > * Support for poweroff delay. > * > * Please note that setting this delay too short might power off your machine > * before the write cache on your hard disk has been flushed, leading to > * soft-updates inconsistencies. 
> */ > #ifndef POWEROFF_DELAY > # define POWEROFF_DELAY 5000 > #endif > static int poweroff_delay = POWEROFF_DELAY; > > SYSCTL_INT(_kern_shutdown, OID_AUTO, poweroff_delay, CTLFLAG_RW, > &poweroff_delay, 0, "Delay before poweroff to write disk caches (msec)"); > > static void > poweroff_wait(void *junk, int howto) > { > > if (!(howto & RB_POWEROFF) || poweroff_delay <= 0) > return; > DELAY(poweroff_delay * 1000); > } > > /* > * Some system processes (e.g. syncer) need to be stopped at appropriate > * points in their main loops prior to a system shutdown, so that they > * won't interfere with the shutdown process (e.g. by holding a disk buf > * to cause sync to fail). For each of these system processes, register > * shutdown_kproc() as a handler for one of shutdown events. > */ > static int kproc_shutdown_wait = 60; > SYSCTL_INT(_kern_shutdown, OID_AUTO, kproc_shutdown_wait, CTLFLAG_RW, > &kproc_shutdown_wait, 0, "Max wait time (sec) to stop for each process"); > > void > kproc_shutdown(void *arg, int howto) > { > struct proc *p; > int error; > > if (panicstr) > return; > > p = (struct proc *)arg; > printf("Waiting (max %d seconds) for system process `%s' to stop...", > kproc_shutdown_wait, p->p_comm); > error = kproc_suspend(p, kproc_shutdown_wait * hz); > > if (error == EWOULDBLOCK) > printf("timed out\n"); > else > printf("done\n"); > } > > void > kthread_shutdown(void *arg, int howto) > { > struct thread *td; > int error; > > if (panicstr) > return; > > td = (struct thread *)arg; > printf("Waiting (max %d seconds) for system thread `%s' to stop...", > kproc_shutdown_wait, td->td_name); > error = kthread_suspend(td, kproc_shutdown_wait * hz); > > if (error == EWOULDBLOCK) > printf("timed out\n"); > else > printf("done\n"); > } > > static char dumpdevname[sizeof(((struct cdev*)NULL)->si_name)]; > SYSCTL_STRING(_kern_shutdown, OID_AUTO, dumpdevname, CTLFLAG_RD, > dumpdevname, 0, "Device for kernel dumps"); > > /* Registration of dumpers */ > int > set_dumper(struct 
dumperinfo *di, const char *devname, struct thread *td) > { > size_t wantcopy; > int error; > > error = priv_check(td, PRIV_SETDUMPER); > if (error != 0) > return (error); > > if (di == NULL) { > bzero(&dumper, sizeof dumper); > dumpdevname[0] = '\0'; > return (0); > } > if (dumper.dumper != NULL) > return (EBUSY); > dumper = *di; > wantcopy = strlcpy(dumpdevname, devname, sizeof(dumpdevname)); > if (wantcopy >= sizeof(dumpdevname)) { > printf("set_dumper: device name truncated from '%s' -> '%s'\n", > devname, dumpdevname); > } > return (0); > } > > /* Call dumper with bounds checking. */ > int > dump_write(struct dumperinfo *di, void *virtual, vm_offset_t physical, > off_t offset, size_t length) > { > > if (length != 0 && (offset < di->mediaoffset || > offset - di->mediaoffset + length > di->mediasize)) { > printf("Attempt to write outside dump device boundaries.\n" > "offset(%jd), mediaoffset(%jd), length(%ju), mediasize(%jd).\n", > (intmax_t)offset, (intmax_t)di->mediaoffset, > (uintmax_t)length, (intmax_t)di->mediasize); > return (ENOSPC); > } > return (di->dumper(di->priv, virtual, physical, offset, length)); > } > > void > mkdumpheader(struct kerneldumpheader *kdh, char *magic, uint32_t archver, > uint64_t dumplen, uint32_t blksz) > { > > bzero(kdh, sizeof(*kdh)); > strncpy(kdh->magic, magic, sizeof(kdh->magic)); > strncpy(kdh->architecture, MACHINE_ARCH, sizeof(kdh->architecture)); > kdh->version = htod32(KERNELDUMPVERSION); > kdh->architectureversion = htod32(archver); > kdh->dumplength = htod64(dumplen); > kdh->dumptime = htod64(time_second); > kdh->blocksize = htod32(blksz); > strncpy(kdh->hostname, prison0.pr_hostname, sizeof(kdh->hostname)); > strncpy(kdh->versionstring, version, sizeof(kdh->versionstring)); > if (panicstr != NULL) > strncpy(kdh->panicstring, panicstr, sizeof(kdh->panicstring)); > kdh->parity = kerneldump_parity(kdh); > } >diff --git a/sys/mips/include/dump.h b/sys/mips/include/dump.h >new file mode 100644 >index 0000000..b3fc907 >--- 
/dev/null >+++ b/sys/mips/include/dump.h >@@ -0,0 +1,76 @@ >+/*- >+ * Copyright (c) 2014 EMC Corp. >+ * Copyright (c) 2014 Conrad Meyer <conrad.meyer@isilon.com> >+ * All rights reserved. >+ * >+ * Redistribution and use in source and binary forms, with or without >+ * modification, are permitted provided that the following conditions >+ * are met: >+ * 1. Redistributions of source code must retain the above copyright >+ * notice, this list of conditions and the following disclaimer. >+ * 2. Redistributions in binary form must reproduce the above copyright >+ * notice, this list of conditions and the following disclaimer in the >+ * documentation and/or other materials provided with the distribution. >+ * >+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND >+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE >+ * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE >+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL >+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS >+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) >+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT >+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY >+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF >+ * SUCH DAMAGE. >+ * >+ * $FreeBSD$ >+ */ >+ >+#ifndef _MACHINE_DUMP_H_ >+#define _MACHINE_DUMP_H_ >+ >+#define KERNELDUMP_VERSION KERNELDUMP_MIPS_VERSION >+#define EM_VALUE EM_MIPS >+/* XXX: I suppose 20 should be enough. 
*/ >+#define DUMPSYS_MD_PA_NPAIRS 20 >+#define DUMPSYS_NUM_AUX_HDRS 0 >+ >+void dumpsys_wbinv_all(void); >+ >+static inline void >+dumpsys_md_pa_init(void) >+{ >+ >+ dumpsys_gen_md_pa_init(); >+} >+ >+static inline struct dump_pa * >+dumpsys_md_pa_next(struct dump_pa *p) >+{ >+ >+ return (dumpsys_gen_md_pa_next(p)); >+} >+ >+static inline void >+dumpsys_unmap_chunk(vm_paddr_t pa, size_t s, void *va) >+{ >+ >+ dumpsys_gen_unmap_chunk(pa, s, va); >+} >+ >+static inline int >+dumpsys_write_aux_headers(struct dumperinfo *di) >+{ >+ >+ return (dumpsys_gen_write_aux_headers(di)); >+} >+ >+static inline int >+dumpsys(struct dumperinfo *di) >+{ >+ >+ return (dumpsys_generic(di)); >+} >+ >+#endif /* !_MACHINE_DUMP_H_ */ >diff --git a/sys/mips/include/md_var.h b/sys/mips/include/md_var.h >index f3778a8..622781d 100644 >--- a/sys/mips/include/md_var.h >+++ b/sys/mips/include/md_var.h >@@ -1,83 +1,84 @@ > /*- > * Copyright (c) 1995 Bruce D. Evans. > * All rights reserved. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * 3. Neither the name of the author nor the names of contributors > * may be used to endorse or promote products derived from this software > * without specific prior written permission. > * > * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND > * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE > * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE > * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE > * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL > * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS > * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) > * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT > * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY > * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF > * SUCH DAMAGE. > * > * from: src/sys/i386/include/md_var.h,v 1.35 2000/02/20 20:51:23 bsd > * JNPR: md_var.h,v 1.4 2006/10/16 12:30:34 katta > * $FreeBSD$ > */ > > #ifndef _MACHINE_MD_VAR_H_ > #define _MACHINE_MD_VAR_H_ > > #include <machine/reg.h> > > /* > * Miscellaneous machine-dependent declarations. > */ > extern long Maxmem; > extern char sigcode[]; > extern int szsigcode; > #if defined(__mips_n32) || defined(__mips_n64) > extern char sigcode32[]; > extern int szsigcode32; > #endif > extern uint32_t *vm_page_dump; > extern int vm_page_dump_size; > > extern vm_offset_t kstack0; > extern vm_offset_t kernel_kseg0_end; > > void MipsSaveCurFPState(struct thread *); > void fork_trampoline(void); > uintptr_t MipsEmulateBranch(struct trapframe *, uintptr_t, int, uintptr_t); > void MipsSwitchFPState(struct thread *, struct trapframe *); > int is_cacheable_mem(vm_paddr_t addr); > void mips_wait(void); > > #define MIPS_DEBUG 0 > > #if MIPS_DEBUG > #define MIPS_DEBUG_PRINT(fmt, args...) printf("%s: " fmt "\n" , __FUNCTION__ , ## args) > #else > #define MIPS_DEBUG_PRINT(fmt, args...) 
> #endif > > void mips_vector_init(void); > void mips_cpu_init(void); > void mips_pcpu0_init(void); > void mips_proc0_init(void); > void mips_postboot_fixup(void); > > extern int busdma_swi_pending; > void busdma_swi(void); > > struct dumperinfo; > void dump_add_page(vm_paddr_t); > void dump_drop_page(vm_paddr_t); > int minidumpsys(struct dumperinfo *); >+ > #endif /* !_MACHINE_MD_VAR_H_ */ >diff --git a/sys/mips/mips/dump_machdep.c b/sys/mips/mips/dump_machdep.c >index fa96e79..584b85e 100644 >--- a/sys/mips/mips/dump_machdep.c >+++ b/sys/mips/mips/dump_machdep.c >@@ -1,368 +1,57 @@ > /*- > * Copyright (c) 2002 Marcel Moolenaar > * All rights reserved. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * > * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR > * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES > * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. > * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, > * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT > * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF > * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
> */ > > #include <sys/cdefs.h> > __FBSDID("$FreeBSD$"); > >-#include "opt_watchdog.h" >- > #include <sys/param.h> > #include <sys/systm.h> > #include <sys/conf.h> >-#include <sys/cons.h> >-#include <sys/sysctl.h> >-#include <sys/kernel.h> >-#include <sys/proc.h> > #include <sys/kerneldump.h> >-#ifdef SW_WATCHDOG >-#include <sys/watchdog.h> >-#endif >-#include <vm/vm.h> >-#include <vm/pmap.h> >-#include <machine/elf.h> >-#include <machine/md_var.h> >-#include <machine/pcb.h> >-#include <machine/cache.h> >+#include <sys/sysctl.h> > >-CTASSERT(sizeof(struct kerneldumpheader) == 512); >+#include <machine/cache.h> >+#include <machine/dump.h> > > int do_minidump = 1; > SYSCTL_INT(_debug, OID_AUTO, minidump, CTLFLAG_RWTUN, &do_minidump, 0, > "Enable mini crash dumps"); > >-/* >- * Don't touch the first SIZEOF_METADATA bytes on the dump device. This >- * is to protect us from metadata and to protect metadata from us. >- */ >-#define SIZEOF_METADATA (64*1024) >- >-#define MD_ALIGN(x) (((off_t)(x) + PAGE_MASK) & ~PAGE_MASK) >-#define DEV_ALIGN(x) (((off_t)(x) + (DEV_BSIZE-1)) & ~(DEV_BSIZE-1)) >-extern struct pcb dumppcb; >- >-struct md_pa { >- vm_paddr_t md_start; >- vm_paddr_t md_size; >-}; >- >-typedef int callback_t(struct md_pa *, int, void *); >- >-static struct kerneldumpheader kdh; >-static off_t dumplo, fileofs; >- >-/* Handle buffered writes. */ >-static char buffer[DEV_BSIZE]; >-static size_t fragsz; >- >-/* XXX: I suppose 20 should be enough. 
*/ >-static struct md_pa dump_map[20]; >- >-static void >-md_pa_init(void) >-{ >- int n, idx; >- >- bzero(dump_map, sizeof(dump_map)); >- for (n = 0; n < sizeof(dump_map) / sizeof(dump_map[0]); n++) { >- idx = n * 2; >- if (dump_avail[idx] == 0 && dump_avail[idx + 1] == 0) >- break; >- dump_map[n].md_start = dump_avail[idx]; >- dump_map[n].md_size = dump_avail[idx + 1] - dump_avail[idx]; >- } >-} >- >-static struct md_pa * >-md_pa_first(void) >-{ >- >- return (&dump_map[0]); >-} >- >-static struct md_pa * >-md_pa_next(struct md_pa *mdp) >-{ >- >- mdp++; >- if (mdp->md_size == 0) >- mdp = NULL; >- return (mdp); >-} >- >-static int >-buf_write(struct dumperinfo *di, char *ptr, size_t sz) >-{ >- size_t len; >- int error; >- >- while (sz) { >- len = DEV_BSIZE - fragsz; >- if (len > sz) >- len = sz; >- bcopy(ptr, buffer + fragsz, len); >- fragsz += len; >- ptr += len; >- sz -= len; >- if (fragsz == DEV_BSIZE) { >- error = dump_write(di, buffer, 0, dumplo, >- DEV_BSIZE); >- if (error) >- return error; >- dumplo += DEV_BSIZE; >- fragsz = 0; >- } >- } >- >- return (0); >-} >- >-static int >-buf_flush(struct dumperinfo *di) >-{ >- int error; >- >- if (fragsz == 0) >- return (0); >- >- error = dump_write(di, buffer, 0, dumplo, DEV_BSIZE); >- dumplo += DEV_BSIZE; >- fragsz = 0; >- return (error); >-} >- >-extern vm_offset_t kernel_l1kva; >-extern char *pouet2; >- >-static int >-cb_dumpdata(struct md_pa *mdp, int seqnr, void *arg) >+void >+dumpsys_wbinv_all(void) > { >- struct dumperinfo *di = (struct dumperinfo*)arg; >- vm_paddr_t pa; >- uint32_t pgs; >- size_t counter, sz, chunk; >- int c, error; >- >- error = 0; /* catch case in which chunk size is 0 */ >- counter = 0; >- pgs = mdp->md_size / PAGE_SIZE; >- pa = mdp->md_start; >- >- printf(" chunk %d: %dMB (%d pages)", seqnr, pgs * PAGE_SIZE / ( >- 1024*1024), pgs); > > /* Make sure we write coherent datas. 
*/ > mips_dcache_wbinv_all(); >- while (pgs) { >- chunk = pgs; >- if (chunk > MAXDUMPPGS) >- chunk = MAXDUMPPGS; >- sz = chunk << PAGE_SHIFT; >- counter += sz; >- if (counter >> 24) { >- printf(" %d", pgs * PAGE_SIZE); >- counter &= (1<<24) - 1; >- } >- >-#ifdef SW_WATCHDOG >- wdog_kern_pat(WD_LASTVAL); >-#endif >- error = dump_write(di, (void *)(intptr_t)(pa),0, dumplo, sz); /* XXX fix PA */ >- if (error) >- break; >- dumplo += sz; >- pgs -= chunk; >- pa += sz; >- >- /* Check for user abort. */ >- c = cncheckc(); >- if (c == 0x03) >- return (ECANCELED); >- if (c != -1) >- printf(" (CTRL-C to abort) "); >- } >- printf(" ... %s\n", (error) ? "fail" : "ok"); >- return (error); >-} >- >-static int >-cb_dumphdr(struct md_pa *mdp, int seqnr, void *arg) >-{ >- struct dumperinfo *di = (struct dumperinfo*)arg; >- Elf_Phdr phdr; >- uint64_t size; >- int error; >- >- size = mdp->md_size; >- bzero(&phdr, sizeof(phdr)); >- phdr.p_type = PT_LOAD; >- phdr.p_flags = PF_R; /* XXX */ >- phdr.p_offset = fileofs; >- phdr.p_vaddr = mdp->md_start; >- phdr.p_paddr = mdp->md_start; >- phdr.p_filesz = size; >- phdr.p_memsz = size; >- phdr.p_align = PAGE_SIZE; >- >- error = buf_write(di, (char*)&phdr, sizeof(phdr)); >- fileofs += phdr.p_filesz; >- return (error); >-} >- >-static int >-cb_size(struct md_pa *mdp, int seqnr, void *arg) >-{ >- uint32_t *sz = (uint32_t*)arg; >- >- *sz += (uint32_t)mdp->md_size; >- return (0); >-} >- >-static int >-foreach_chunk(callback_t cb, void *arg) >-{ >- struct md_pa *mdp; >- int error, seqnr; >- >- seqnr = 0; >- mdp = md_pa_first(); >- while (mdp != NULL) { >- error = (*cb)(mdp, seqnr++, arg); >- if (error) >- return (-error); >- mdp = md_pa_next(mdp); >- } >- return (seqnr); > } > >-int >-dumpsys(struct dumperinfo *di) >+void >+dumpsys_map_chunk(vm_paddr_t pa, size_t chunk __unused, void **va) > { >- Elf_Ehdr ehdr; >- uint32_t dumpsize; >- off_t hdrgap; >- size_t hdrsz; >- int error; >- >- if (do_minidump) >- return (minidumpsys(di)); >- >- bzero(&ehdr, 
sizeof(ehdr)); >- ehdr.e_ident[EI_MAG0] = ELFMAG0; >- ehdr.e_ident[EI_MAG1] = ELFMAG1; >- ehdr.e_ident[EI_MAG2] = ELFMAG2; >- ehdr.e_ident[EI_MAG3] = ELFMAG3; >- ehdr.e_ident[EI_CLASS] = ELF_CLASS; >-#if BYTE_ORDER == LITTLE_ENDIAN >- ehdr.e_ident[EI_DATA] = ELFDATA2LSB; >-#else >- ehdr.e_ident[EI_DATA] = ELFDATA2MSB; >-#endif >- ehdr.e_ident[EI_VERSION] = EV_CURRENT; >- ehdr.e_ident[EI_OSABI] = ELFOSABI_STANDALONE; /* XXX big picture? */ >- ehdr.e_type = ET_CORE; >- ehdr.e_machine = EM_MIPS; >- ehdr.e_phoff = sizeof(ehdr); >- ehdr.e_flags = 0; >- ehdr.e_ehsize = sizeof(ehdr); >- ehdr.e_phentsize = sizeof(Elf_Phdr); >- ehdr.e_shentsize = sizeof(Elf_Shdr); >- >- md_pa_init(); >- >- /* Calculate dump size. */ >- dumpsize = 0L; >- ehdr.e_phnum = foreach_chunk(cb_size, &dumpsize); >- hdrsz = ehdr.e_phoff + ehdr.e_phnum * ehdr.e_phentsize; >- fileofs = MD_ALIGN(hdrsz); >- dumpsize += fileofs; >- hdrgap = fileofs - DEV_ALIGN(hdrsz); >- >- /* Determine dump offset on device. */ >- if (di->mediasize < SIZEOF_METADATA + dumpsize + sizeof(kdh) * 2) { >- error = ENOSPC; >- goto fail; >- } >- dumplo = di->mediaoffset + di->mediasize - dumpsize; >- dumplo -= sizeof(kdh) * 2; >- >- mkdumpheader(&kdh, KERNELDUMPMAGIC, KERNELDUMP_MIPS_VERSION, dumpsize, di->blocksize); >- >- printf("Dumping %llu MB (%d chunks)\n", (long long)dumpsize >> 20, >- ehdr.e_phnum); >- >- /* Dump leader */ >- error = dump_write(di, &kdh, 0, dumplo, sizeof(kdh)); >- if (error) >- goto fail; >- dumplo += sizeof(kdh); >- >- /* Dump ELF header */ >- error = buf_write(di, (char*)&ehdr, sizeof(ehdr)); >- if (error) >- goto fail; >- >- /* Dump program headers */ >- error = foreach_chunk(cb_dumphdr, di); >- if (error < 0) >- goto fail; >- buf_flush(di); >- >- /* >- * All headers are written using blocked I/O, so we know the >- * current offset is (still) block aligned. Skip the alignement >- * in the file to have the segment contents aligned at page >- * boundary. 
We cannot use MD_ALIGN on dumplo, because we don't >- * care and may very well be unaligned within the dump device. >- */ >- dumplo += hdrgap; >- >- /* Dump memory chunks (updates dumplo) */ >- error = foreach_chunk(cb_dumpdata, di); >- if (error < 0) >- goto fail; >- >- /* Dump trailer */ >- error = dump_write(di, &kdh, 0, dumplo, sizeof(kdh)); >- if (error) >- goto fail; >- >- /* Signal completion, signoff and exit stage left. */ >- dump_write(di, NULL, 0, 0, 0); >- printf("\nDump complete\n"); >- return (0); >- >- fail: >- if (error < 0) >- error = -error; > >- if (error == ECANCELED) >- printf("\nDump aborted\n"); >- else if (error == ENOSPC) >- printf("\nDump failed. Partition too small.\n"); >- else >- printf("\n** DUMP FAILED (ERROR %d) **\n", error); >- return (error); >+ /* XXX fix PA */ >+ *va = (void*)(intptr_t)pa; > } >diff --git a/sys/powerpc/aim/mmu_oea.c b/sys/powerpc/aim/mmu_oea.c >index 742dd70..c49404e 100644 >--- a/sys/powerpc/aim/mmu_oea.c >+++ b/sys/powerpc/aim/mmu_oea.c >@@ -1,2727 +1,2699 @@ > /*- > * Copyright (c) 2001 The NetBSD Foundation, Inc. > * All rights reserved. > * > * This code is derived from software contributed to The NetBSD Foundation > * by Matt Thomas <matt@3am-software.com> of Allegro Networks, Inc. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * > * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. 
AND CONTRIBUTORS > * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED > * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR > * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS > * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR > * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF > * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS > * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN > * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) > * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE > * POSSIBILITY OF SUCH DAMAGE. > */ > /*- > * Copyright (C) 1995, 1996 Wolfgang Solfrank. > * Copyright (C) 1995, 1996 TooLs GmbH. > * All rights reserved. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * 3. All advertising materials mentioning features or use of this software > * must display the following acknowledgement: > * This product includes software developed by TooLs GmbH. > * 4. The name of TooLs GmbH may not be used to endorse or promote products > * derived from this software without specific prior written permission. > * > * THIS SOFTWARE IS PROVIDED BY TOOLS GMBH ``AS IS'' AND ANY EXPRESS OR > * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES > * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
> * IN NO EVENT SHALL TOOLS GMBH BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, > * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; > * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, > * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR > * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF > * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > * > * $NetBSD: pmap.c,v 1.28 2000/03/26 20:42:36 kleink Exp $ > */ > /*- > * Copyright (C) 2001 Benno Rice. > * All rights reserved. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * > * THIS SOFTWARE IS PROVIDED BY Benno Rice ``AS IS'' AND ANY EXPRESS OR > * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES > * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. > * IN NO EVENT SHALL TOOLS GMBH BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, > * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; > * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, > * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR > * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF > * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
> */ > > #include <sys/cdefs.h> > __FBSDID("$FreeBSD$"); > > /* > * Manages physical address maps. > * > * Since the information managed by this module is also stored by the > * logical address mapping module, this module may throw away valid virtual > * to physical mappings at almost any time. However, invalidations of > * mappings must be done as requested. > * > * In order to cope with hardware architectures which make virtual to > * physical map invalidates expensive, this module may delay invalidate > * reduced protection operations until such time as they are actually > * necessary. This module is given full information as to which processors > * are currently using which maps, and to when physical maps must be made > * correct. > */ > > #include "opt_kstack_pages.h" > > #include <sys/param.h> > #include <sys/kernel.h> >+#include <sys/conf.h> > #include <sys/queue.h> > #include <sys/cpuset.h> >+#include <sys/kerneldump.h> > #include <sys/ktr.h> > #include <sys/lock.h> > #include <sys/msgbuf.h> > #include <sys/mutex.h> > #include <sys/proc.h> > #include <sys/rwlock.h> > #include <sys/sched.h> > #include <sys/sysctl.h> > #include <sys/systm.h> > #include <sys/vmmeter.h> > > #include <dev/ofw/openfirm.h> > > #include <vm/vm.h> > #include <vm/vm_param.h> > #include <vm/vm_kern.h> > #include <vm/vm_page.h> > #include <vm/vm_map.h> > #include <vm/vm_object.h> > #include <vm/vm_extern.h> > #include <vm/vm_pageout.h> > #include <vm/uma.h> > > #include <machine/cpu.h> > #include <machine/platform.h> > #include <machine/bat.h> > #include <machine/frame.h> > #include <machine/md_var.h> > #include <machine/psl.h> > #include <machine/pte.h> > #include <machine/smp.h> > #include <machine/sr.h> > #include <machine/mmuvar.h> > #include <machine/trap.h> > > #include "mmu_if.h" > > #define MOEA_DEBUG > > #define TODO panic("%s: not implemented", __func__); > > #define VSID_MAKE(sr, hash) ((sr) | (((hash) & 0xfffff) << 4)) > #define VSID_TO_SR(vsid) ((vsid) & 0xf) > #define 
VSID_TO_HASH(vsid) (((vsid) >> 4) & 0xfffff) > > struct ofw_map { > vm_offset_t om_va; > vm_size_t om_len; > vm_offset_t om_pa; > u_int om_mode; > }; > > extern unsigned char _etext[]; > extern unsigned char _end[]; > >-extern int dumpsys_minidump; >- > /* > * Map of physical memory regions. > */ > static struct mem_region *regions; > static struct mem_region *pregions; > static u_int phys_avail_count; > static int regions_sz, pregions_sz; > static struct ofw_map *translations; > > /* > * Lock for the pteg and pvo tables. > */ > struct mtx moea_table_mutex; > struct mtx moea_vsid_mutex; > > /* tlbie instruction synchronization */ > static struct mtx tlbie_mtx; > > /* > * PTEG data. > */ > static struct pteg *moea_pteg_table; > u_int moea_pteg_count; > u_int moea_pteg_mask; > > /* > * PVO data. > */ > struct pvo_head *moea_pvo_table; /* pvo entries by pteg index */ > struct pvo_head moea_pvo_kunmanaged = > LIST_HEAD_INITIALIZER(moea_pvo_kunmanaged); /* list of unmanaged pages */ > > static struct rwlock_padalign pvh_global_lock; > > uma_zone_t moea_upvo_zone; /* zone for pvo entries for unmanaged pages */ > uma_zone_t moea_mpvo_zone; /* zone for pvo entries for managed pages */ > > #define BPVO_POOL_SIZE 32768 > static struct pvo_entry *moea_bpvo_pool; > static int moea_bpvo_pool_index = 0; > > #define VSID_NBPW (sizeof(u_int32_t) * 8) > static u_int moea_vsid_bitmap[NPMAPS / VSID_NBPW]; > > static boolean_t moea_initialized = FALSE; > > /* > * Statistics. 
> */ > u_int moea_pte_valid = 0; > u_int moea_pte_overflow = 0; > u_int moea_pte_replacements = 0; > u_int moea_pvo_entries = 0; > u_int moea_pvo_enter_calls = 0; > u_int moea_pvo_remove_calls = 0; > u_int moea_pte_spills = 0; > SYSCTL_INT(_machdep, OID_AUTO, moea_pte_valid, CTLFLAG_RD, &moea_pte_valid, > 0, ""); > SYSCTL_INT(_machdep, OID_AUTO, moea_pte_overflow, CTLFLAG_RD, > &moea_pte_overflow, 0, ""); > SYSCTL_INT(_machdep, OID_AUTO, moea_pte_replacements, CTLFLAG_RD, > &moea_pte_replacements, 0, ""); > SYSCTL_INT(_machdep, OID_AUTO, moea_pvo_entries, CTLFLAG_RD, &moea_pvo_entries, > 0, ""); > SYSCTL_INT(_machdep, OID_AUTO, moea_pvo_enter_calls, CTLFLAG_RD, > &moea_pvo_enter_calls, 0, ""); > SYSCTL_INT(_machdep, OID_AUTO, moea_pvo_remove_calls, CTLFLAG_RD, > &moea_pvo_remove_calls, 0, ""); > SYSCTL_INT(_machdep, OID_AUTO, moea_pte_spills, CTLFLAG_RD, > &moea_pte_spills, 0, ""); > > /* > * Allocate physical memory for use in moea_bootstrap. > */ > static vm_offset_t moea_bootstrap_alloc(vm_size_t, u_int); > > /* > * PTE calls. > */ > static int moea_pte_insert(u_int, struct pte *); > > /* > * PVO calls. > */ > static int moea_pvo_enter(pmap_t, uma_zone_t, struct pvo_head *, > vm_offset_t, vm_offset_t, u_int, int); > static void moea_pvo_remove(struct pvo_entry *, int); > static struct pvo_entry *moea_pvo_find_va(pmap_t, vm_offset_t, int *); > static struct pte *moea_pvo_to_pte(const struct pvo_entry *, int); > > /* > * Utility routines. 
> */ > static int moea_enter_locked(pmap_t, vm_offset_t, vm_page_t, > vm_prot_t, u_int, int8_t); > static void moea_syncicache(vm_offset_t, vm_size_t); > static boolean_t moea_query_bit(vm_page_t, int); > static u_int moea_clear_bit(vm_page_t, int); > static void moea_kremove(mmu_t, vm_offset_t); > int moea_pte_spill(vm_offset_t); > > /* > * Kernel MMU interface > */ > void moea_clear_modify(mmu_t, vm_page_t); > void moea_copy_page(mmu_t, vm_page_t, vm_page_t); > void moea_copy_pages(mmu_t mmu, vm_page_t *ma, vm_offset_t a_offset, > vm_page_t *mb, vm_offset_t b_offset, int xfersize); > int moea_enter(mmu_t, pmap_t, vm_offset_t, vm_page_t, vm_prot_t, u_int, > int8_t); > void moea_enter_object(mmu_t, pmap_t, vm_offset_t, vm_offset_t, vm_page_t, > vm_prot_t); > void moea_enter_quick(mmu_t, pmap_t, vm_offset_t, vm_page_t, vm_prot_t); > vm_paddr_t moea_extract(mmu_t, pmap_t, vm_offset_t); > vm_page_t moea_extract_and_hold(mmu_t, pmap_t, vm_offset_t, vm_prot_t); > void moea_init(mmu_t); > boolean_t moea_is_modified(mmu_t, vm_page_t); > boolean_t moea_is_prefaultable(mmu_t, pmap_t, vm_offset_t); > boolean_t moea_is_referenced(mmu_t, vm_page_t); > int moea_ts_referenced(mmu_t, vm_page_t); > vm_offset_t moea_map(mmu_t, vm_offset_t *, vm_paddr_t, vm_paddr_t, int); > boolean_t moea_page_exists_quick(mmu_t, pmap_t, vm_page_t); > int moea_page_wired_mappings(mmu_t, vm_page_t); > void moea_pinit(mmu_t, pmap_t); > void moea_pinit0(mmu_t, pmap_t); > void moea_protect(mmu_t, pmap_t, vm_offset_t, vm_offset_t, vm_prot_t); > void moea_qenter(mmu_t, vm_offset_t, vm_page_t *, int); > void moea_qremove(mmu_t, vm_offset_t, int); > void moea_release(mmu_t, pmap_t); > void moea_remove(mmu_t, pmap_t, vm_offset_t, vm_offset_t); > void moea_remove_all(mmu_t, vm_page_t); > void moea_remove_write(mmu_t, vm_page_t); > void moea_unwire(mmu_t, pmap_t, vm_offset_t, vm_offset_t); > void moea_zero_page(mmu_t, vm_page_t); > void moea_zero_page_area(mmu_t, vm_page_t, int, int); > void 
moea_zero_page_idle(mmu_t, vm_page_t); > void moea_activate(mmu_t, struct thread *); > void moea_deactivate(mmu_t, struct thread *); > void moea_cpu_bootstrap(mmu_t, int); > void moea_bootstrap(mmu_t, vm_offset_t, vm_offset_t); > void *moea_mapdev(mmu_t, vm_paddr_t, vm_size_t); > void *moea_mapdev_attr(mmu_t, vm_offset_t, vm_size_t, vm_memattr_t); > void moea_unmapdev(mmu_t, vm_offset_t, vm_size_t); > vm_paddr_t moea_kextract(mmu_t, vm_offset_t); > void moea_kenter_attr(mmu_t, vm_offset_t, vm_offset_t, vm_memattr_t); > void moea_kenter(mmu_t, vm_offset_t, vm_paddr_t); > void moea_page_set_memattr(mmu_t mmu, vm_page_t m, vm_memattr_t ma); > boolean_t moea_dev_direct_mapped(mmu_t, vm_paddr_t, vm_size_t); > static void moea_sync_icache(mmu_t, pmap_t, vm_offset_t, vm_size_t); >-vm_offset_t moea_dumpsys_map(mmu_t mmu, struct pmap_md *md, vm_size_t ofs, >- vm_size_t *sz); >-struct pmap_md * moea_scan_md(mmu_t mmu, struct pmap_md *prev); >+void moea_dumpsys_map(mmu_t mmu, vm_paddr_t pa, size_t sz, void **va); >+void moea_scan_init(mmu_t mmu); > > static mmu_method_t moea_methods[] = { > MMUMETHOD(mmu_clear_modify, moea_clear_modify), > MMUMETHOD(mmu_copy_page, moea_copy_page), > MMUMETHOD(mmu_copy_pages, moea_copy_pages), > MMUMETHOD(mmu_enter, moea_enter), > MMUMETHOD(mmu_enter_object, moea_enter_object), > MMUMETHOD(mmu_enter_quick, moea_enter_quick), > MMUMETHOD(mmu_extract, moea_extract), > MMUMETHOD(mmu_extract_and_hold, moea_extract_and_hold), > MMUMETHOD(mmu_init, moea_init), > MMUMETHOD(mmu_is_modified, moea_is_modified), > MMUMETHOD(mmu_is_prefaultable, moea_is_prefaultable), > MMUMETHOD(mmu_is_referenced, moea_is_referenced), > MMUMETHOD(mmu_ts_referenced, moea_ts_referenced), > MMUMETHOD(mmu_map, moea_map), > MMUMETHOD(mmu_page_exists_quick,moea_page_exists_quick), > MMUMETHOD(mmu_page_wired_mappings,moea_page_wired_mappings), > MMUMETHOD(mmu_pinit, moea_pinit), > MMUMETHOD(mmu_pinit0, moea_pinit0), > MMUMETHOD(mmu_protect, moea_protect), > 
MMUMETHOD(mmu_qenter, moea_qenter), > MMUMETHOD(mmu_qremove, moea_qremove), > MMUMETHOD(mmu_release, moea_release), > MMUMETHOD(mmu_remove, moea_remove), > MMUMETHOD(mmu_remove_all, moea_remove_all), > MMUMETHOD(mmu_remove_write, moea_remove_write), > MMUMETHOD(mmu_sync_icache, moea_sync_icache), > MMUMETHOD(mmu_unwire, moea_unwire), > MMUMETHOD(mmu_zero_page, moea_zero_page), > MMUMETHOD(mmu_zero_page_area, moea_zero_page_area), > MMUMETHOD(mmu_zero_page_idle, moea_zero_page_idle), > MMUMETHOD(mmu_activate, moea_activate), > MMUMETHOD(mmu_deactivate, moea_deactivate), > MMUMETHOD(mmu_page_set_memattr, moea_page_set_memattr), > > /* Internal interfaces */ > MMUMETHOD(mmu_bootstrap, moea_bootstrap), > MMUMETHOD(mmu_cpu_bootstrap, moea_cpu_bootstrap), > MMUMETHOD(mmu_mapdev_attr, moea_mapdev_attr), > MMUMETHOD(mmu_mapdev, moea_mapdev), > MMUMETHOD(mmu_unmapdev, moea_unmapdev), > MMUMETHOD(mmu_kextract, moea_kextract), > MMUMETHOD(mmu_kenter, moea_kenter), > MMUMETHOD(mmu_kenter_attr, moea_kenter_attr), > MMUMETHOD(mmu_dev_direct_mapped,moea_dev_direct_mapped), >- MMUMETHOD(mmu_scan_md, moea_scan_md), >+ MMUMETHOD(mmu_scan_init, moea_scan_init), > MMUMETHOD(mmu_dumpsys_map, moea_dumpsys_map), > > { 0, 0 } > }; > > MMU_DEF(oea_mmu, MMU_TYPE_OEA, moea_methods, 0); > > static __inline uint32_t > moea_calc_wimg(vm_offset_t pa, vm_memattr_t ma) > { > uint32_t pte_lo; > int i; > > if (ma != VM_MEMATTR_DEFAULT) { > switch (ma) { > case VM_MEMATTR_UNCACHEABLE: > return (PTE_I | PTE_G); > case VM_MEMATTR_WRITE_COMBINING: > case VM_MEMATTR_WRITE_BACK: > case VM_MEMATTR_PREFETCHABLE: > return (PTE_I); > case VM_MEMATTR_WRITE_THROUGH: > return (PTE_W | PTE_M); > } > } > > /* > * Assume the page is cache inhibited and access is guarded unless > * it's in our available memory array. 
> */ > pte_lo = PTE_I | PTE_G; > for (i = 0; i < pregions_sz; i++) { > if ((pa >= pregions[i].mr_start) && > (pa < (pregions[i].mr_start + pregions[i].mr_size))) { > pte_lo = PTE_M; > break; > } > } > > return pte_lo; > } > > static void > tlbie(vm_offset_t va) > { > > mtx_lock_spin(&tlbie_mtx); > __asm __volatile("ptesync"); > __asm __volatile("tlbie %0" :: "r"(va)); > __asm __volatile("eieio; tlbsync; ptesync"); > mtx_unlock_spin(&tlbie_mtx); > } > > static void > tlbia(void) > { > vm_offset_t va; > > for (va = 0; va < 0x00040000; va += 0x00001000) { > __asm __volatile("tlbie %0" :: "r"(va)); > powerpc_sync(); > } > __asm __volatile("tlbsync"); > powerpc_sync(); > } > > static __inline int > va_to_sr(u_int *sr, vm_offset_t va) > { > return (sr[(uintptr_t)va >> ADDR_SR_SHFT]); > } > > static __inline u_int > va_to_pteg(u_int sr, vm_offset_t addr) > { > u_int hash; > > hash = (sr & SR_VSID_MASK) ^ (((u_int)addr & ADDR_PIDX) >> > ADDR_PIDX_SHFT); > return (hash & moea_pteg_mask); > } > > static __inline struct pvo_head * > vm_page_to_pvoh(vm_page_t m) > { > > return (&m->md.mdpg_pvoh); > } > > static __inline void > moea_attr_clear(vm_page_t m, int ptebit) > { > > rw_assert(&pvh_global_lock, RA_WLOCKED); > m->md.mdpg_attrs &= ~ptebit; > } > > static __inline int > moea_attr_fetch(vm_page_t m) > { > > return (m->md.mdpg_attrs); > } > > static __inline void > moea_attr_save(vm_page_t m, int ptebit) > { > > rw_assert(&pvh_global_lock, RA_WLOCKED); > m->md.mdpg_attrs |= ptebit; > } > > static __inline int > moea_pte_compare(const struct pte *pt, const struct pte *pvo_pt) > { > if (pt->pte_hi == pvo_pt->pte_hi) > return (1); > > return (0); > } > > static __inline int > moea_pte_match(struct pte *pt, u_int sr, vm_offset_t va, int which) > { > return (pt->pte_hi & ~PTE_VALID) == > (((sr & SR_VSID_MASK) << PTE_VSID_SHFT) | > ((va >> ADDR_API_SHFT) & PTE_API) | which); > } > > static __inline void > moea_pte_create(struct pte *pt, u_int sr, vm_offset_t va, u_int pte_lo) > { 
> > mtx_assert(&moea_table_mutex, MA_OWNED); > > /* > * Construct a PTE. Default to IMB initially. Valid bit only gets > * set when the real pte is set in memory. > * > * Note: Don't set the valid bit for correct operation of tlb update. > */ > pt->pte_hi = ((sr & SR_VSID_MASK) << PTE_VSID_SHFT) | > (((va & ADDR_PIDX) >> ADDR_API_SHFT) & PTE_API); > pt->pte_lo = pte_lo; > } > > static __inline void > moea_pte_synch(struct pte *pt, struct pte *pvo_pt) > { > > mtx_assert(&moea_table_mutex, MA_OWNED); > pvo_pt->pte_lo |= pt->pte_lo & (PTE_REF | PTE_CHG); > } > > static __inline void > moea_pte_clear(struct pte *pt, vm_offset_t va, int ptebit) > { > > mtx_assert(&moea_table_mutex, MA_OWNED); > > /* > * As shown in Section 7.6.3.2.3 > */ > pt->pte_lo &= ~ptebit; > tlbie(va); > } > > static __inline void > moea_pte_set(struct pte *pt, struct pte *pvo_pt) > { > > mtx_assert(&moea_table_mutex, MA_OWNED); > pvo_pt->pte_hi |= PTE_VALID; > > /* > * Update the PTE as defined in section 7.6.3.1. > * Note that the REF/CHG bits are from pvo_pt and thus should have > * been saved so this routine can restore them (if desired). > */ > pt->pte_lo = pvo_pt->pte_lo; > powerpc_sync(); > pt->pte_hi = pvo_pt->pte_hi; > powerpc_sync(); > moea_pte_valid++; > } > > static __inline void > moea_pte_unset(struct pte *pt, struct pte *pvo_pt, vm_offset_t va) > { > > mtx_assert(&moea_table_mutex, MA_OWNED); > pvo_pt->pte_hi &= ~PTE_VALID; > > /* > * Force the reg & chg bits back into the PTEs. > */ > powerpc_sync(); > > /* > * Invalidate the pte. > */ > pt->pte_hi &= ~PTE_VALID; > > tlbie(va); > > /* > * Save the reg & chg bits. > */ > moea_pte_synch(pt, pvo_pt); > moea_pte_valid--; > } > > static __inline void > moea_pte_change(struct pte *pt, struct pte *pvo_pt, vm_offset_t va) > { > > /* > * Invalidate the PTE > */ > moea_pte_unset(pt, pvo_pt, va); > moea_pte_set(pt, pvo_pt); > } > > /* > * Quick sort callout for comparing memory regions. 
> */ > static int om_cmp(const void *a, const void *b); > > static int > om_cmp(const void *a, const void *b) > { > const struct ofw_map *mapa; > const struct ofw_map *mapb; > > mapa = a; > mapb = b; > if (mapa->om_pa < mapb->om_pa) > return (-1); > else if (mapa->om_pa > mapb->om_pa) > return (1); > else > return (0); > } > > void > moea_cpu_bootstrap(mmu_t mmup, int ap) > { > u_int sdr; > int i; > > if (ap) { > powerpc_sync(); > __asm __volatile("mtdbatu 0,%0" :: "r"(battable[0].batu)); > __asm __volatile("mtdbatl 0,%0" :: "r"(battable[0].batl)); > isync(); > __asm __volatile("mtibatu 0,%0" :: "r"(battable[0].batu)); > __asm __volatile("mtibatl 0,%0" :: "r"(battable[0].batl)); > isync(); > } > > #ifdef WII > /* > * Special case for the Wii: don't install the PCI BAT. > */ > if (strcmp(installed_platform(), "wii") != 0) { > #endif > __asm __volatile("mtdbatu 1,%0" :: "r"(battable[8].batu)); > __asm __volatile("mtdbatl 1,%0" :: "r"(battable[8].batl)); > #ifdef WII > } > #endif > isync(); > > __asm __volatile("mtibatu 1,%0" :: "r"(0)); > __asm __volatile("mtdbatu 2,%0" :: "r"(0)); > __asm __volatile("mtibatu 2,%0" :: "r"(0)); > __asm __volatile("mtdbatu 3,%0" :: "r"(0)); > __asm __volatile("mtibatu 3,%0" :: "r"(0)); > isync(); > > for (i = 0; i < 16; i++) > mtsrin(i << ADDR_SR_SHFT, kernel_pmap->pm_sr[i]); > powerpc_sync(); > > sdr = (u_int)moea_pteg_table | (moea_pteg_mask >> 10); > __asm __volatile("mtsdr1 %0" :: "r"(sdr)); > isync(); > > tlbia(); > } > > void > moea_bootstrap(mmu_t mmup, vm_offset_t kernelstart, vm_offset_t kernelend) > { > ihandle_t mmui; > phandle_t chosen, mmu; > int sz; > int i, j; > vm_size_t size, physsz, hwphyssz; > vm_offset_t pa, va, off; > void *dpcpu; > register_t msr; > > /* > * Set up BAT0 to map the lowest 256 MB area > */ > battable[0x0].batl = BATL(0x00000000, BAT_M, BAT_PP_RW); > battable[0x0].batu = BATU(0x00000000, BAT_BL_256M, BAT_Vs); > > /* > * Map PCI memory space. 
> */ > battable[0x8].batl = BATL(0x80000000, BAT_I|BAT_G, BAT_PP_RW); > battable[0x8].batu = BATU(0x80000000, BAT_BL_256M, BAT_Vs); > > battable[0x9].batl = BATL(0x90000000, BAT_I|BAT_G, BAT_PP_RW); > battable[0x9].batu = BATU(0x90000000, BAT_BL_256M, BAT_Vs); > > battable[0xa].batl = BATL(0xa0000000, BAT_I|BAT_G, BAT_PP_RW); > battable[0xa].batu = BATU(0xa0000000, BAT_BL_256M, BAT_Vs); > > battable[0xb].batl = BATL(0xb0000000, BAT_I|BAT_G, BAT_PP_RW); > battable[0xb].batu = BATU(0xb0000000, BAT_BL_256M, BAT_Vs); > > /* > * Map obio devices. > */ > battable[0xf].batl = BATL(0xf0000000, BAT_I|BAT_G, BAT_PP_RW); > battable[0xf].batu = BATU(0xf0000000, BAT_BL_256M, BAT_Vs); > > /* > * Use an IBAT and a DBAT to map the bottom segment of memory > * where we are. Turn off instruction relocation temporarily > * to prevent faults while reprogramming the IBAT. > */ > msr = mfmsr(); > mtmsr(msr & ~PSL_IR); > __asm (".balign 32; \n" > "mtibatu 0,%0; mtibatl 0,%1; isync; \n" > "mtdbatu 0,%0; mtdbatl 0,%1; isync" > :: "r"(battable[0].batu), "r"(battable[0].batl)); > mtmsr(msr); > > #ifdef WII > if (strcmp(installed_platform(), "wii") != 0) { > #endif > /* map pci space */ > __asm __volatile("mtdbatu 1,%0" :: "r"(battable[8].batu)); > __asm __volatile("mtdbatl 1,%0" :: "r"(battable[8].batl)); > #ifdef WII > } > #endif > isync(); > > /* set global direct map flag */ > hw_direct_map = 1; > > mem_regions(&pregions, &pregions_sz, &regions, &regions_sz); > CTR0(KTR_PMAP, "moea_bootstrap: physical memory"); > > for (i = 0; i < pregions_sz; i++) { > vm_offset_t pa; > vm_offset_t end; > > CTR3(KTR_PMAP, "physregion: %#x - %#x (%#x)", > pregions[i].mr_start, > pregions[i].mr_start + pregions[i].mr_size, > pregions[i].mr_size); > /* > * Install entries into the BAT table to allow all > * of physmem to be covered by on-demand BAT entries. > * The loop will sometimes set the same battable element > * twice, but that's fine since they won't be used for > * a while yet. 
> */ > pa = pregions[i].mr_start & 0xf0000000; > end = pregions[i].mr_start + pregions[i].mr_size; > do { > u_int n = pa >> ADDR_SR_SHFT; > > battable[n].batl = BATL(pa, BAT_M, BAT_PP_RW); > battable[n].batu = BATU(pa, BAT_BL_256M, BAT_Vs); > pa += SEGMENT_LENGTH; > } while (pa < end); > } > > if (sizeof(phys_avail)/sizeof(phys_avail[0]) < regions_sz) > panic("moea_bootstrap: phys_avail too small"); > > phys_avail_count = 0; > physsz = 0; > hwphyssz = 0; > TUNABLE_ULONG_FETCH("hw.physmem", (u_long *) &hwphyssz); > for (i = 0, j = 0; i < regions_sz; i++, j += 2) { > CTR3(KTR_PMAP, "region: %#x - %#x (%#x)", regions[i].mr_start, > regions[i].mr_start + regions[i].mr_size, > regions[i].mr_size); > if (hwphyssz != 0 && > (physsz + regions[i].mr_size) >= hwphyssz) { > if (physsz < hwphyssz) { > phys_avail[j] = regions[i].mr_start; > phys_avail[j + 1] = regions[i].mr_start + > hwphyssz - physsz; > physsz = hwphyssz; > phys_avail_count++; > } > break; > } > phys_avail[j] = regions[i].mr_start; > phys_avail[j + 1] = regions[i].mr_start + regions[i].mr_size; > phys_avail_count++; > physsz += regions[i].mr_size; > } > > /* Check for overlap with the kernel and exception vectors */ > for (j = 0; j < 2*phys_avail_count; j+=2) { > if (phys_avail[j] < EXC_LAST) > phys_avail[j] += EXC_LAST; > > if (kernelstart >= phys_avail[j] && > kernelstart < phys_avail[j+1]) { > if (kernelend < phys_avail[j+1]) { > phys_avail[2*phys_avail_count] = > (kernelend & ~PAGE_MASK) + PAGE_SIZE; > phys_avail[2*phys_avail_count + 1] = > phys_avail[j+1]; > phys_avail_count++; > } > > phys_avail[j+1] = kernelstart & ~PAGE_MASK; > } > > if (kernelend >= phys_avail[j] && > kernelend < phys_avail[j+1]) { > if (kernelstart > phys_avail[j]) { > phys_avail[2*phys_avail_count] = phys_avail[j]; > phys_avail[2*phys_avail_count + 1] = > kernelstart & ~PAGE_MASK; > phys_avail_count++; > } > > phys_avail[j] = (kernelend & ~PAGE_MASK) + PAGE_SIZE; > } > } > > physmem = btoc(physsz); > > /* > * Allocate PTEG table. 
> */ > #ifdef PTEGCOUNT > moea_pteg_count = PTEGCOUNT; > #else > moea_pteg_count = 0x1000; > > while (moea_pteg_count < physmem) > moea_pteg_count <<= 1; > > moea_pteg_count >>= 1; > #endif /* PTEGCOUNT */ > > size = moea_pteg_count * sizeof(struct pteg); > CTR2(KTR_PMAP, "moea_bootstrap: %d PTEGs, %d bytes", moea_pteg_count, > size); > moea_pteg_table = (struct pteg *)moea_bootstrap_alloc(size, size); > CTR1(KTR_PMAP, "moea_bootstrap: PTEG table at %p", moea_pteg_table); > bzero((void *)moea_pteg_table, moea_pteg_count * sizeof(struct pteg)); > moea_pteg_mask = moea_pteg_count - 1; > > /* > * Allocate pv/overflow lists. > */ > size = sizeof(struct pvo_head) * moea_pteg_count; > moea_pvo_table = (struct pvo_head *)moea_bootstrap_alloc(size, > PAGE_SIZE); > CTR1(KTR_PMAP, "moea_bootstrap: PVO table at %p", moea_pvo_table); > for (i = 0; i < moea_pteg_count; i++) > LIST_INIT(&moea_pvo_table[i]); > > /* > * Initialize the lock that synchronizes access to the pteg and pvo > * tables. > */ > mtx_init(&moea_table_mutex, "pmap table", NULL, MTX_DEF | > MTX_RECURSE); > mtx_init(&moea_vsid_mutex, "VSID table", NULL, MTX_DEF); > > mtx_init(&tlbie_mtx, "tlbie", NULL, MTX_SPIN); > > /* > * Initialise the unmanaged pvo pool. > */ > moea_bpvo_pool = (struct pvo_entry *)moea_bootstrap_alloc( > BPVO_POOL_SIZE*sizeof(struct pvo_entry), 0); > moea_bpvo_pool_index = 0; > > /* > * Make sure kernel vsid is allocated as well as VSID 0. > */ > moea_vsid_bitmap[(KERNEL_VSIDBITS & (NPMAPS - 1)) / VSID_NBPW] > |= 1 << (KERNEL_VSIDBITS % VSID_NBPW); > moea_vsid_bitmap[0] |= 1; > > /* > * Initialize the kernel pmap (which is statically allocated). > */ > PMAP_LOCK_INIT(kernel_pmap); > for (i = 0; i < 16; i++) > kernel_pmap->pm_sr[i] = EMPTY_SEGMENT + i; > CPU_FILL(&kernel_pmap->pm_active); > RB_INIT(&kernel_pmap->pmap_pvo); > > /* > * Initialize the global pv list lock. 
> */ > rw_init(&pvh_global_lock, "pmap pv global"); > > /* > * Set up the Open Firmware mappings > */ > chosen = OF_finddevice("/chosen"); > if (chosen != -1 && OF_getprop(chosen, "mmu", &mmui, 4) != -1 && > (mmu = OF_instance_to_package(mmui)) != -1 && > (sz = OF_getproplen(mmu, "translations")) != -1) { > translations = NULL; > for (i = 0; phys_avail[i] != 0; i += 2) { > if (phys_avail[i + 1] >= sz) { > translations = (struct ofw_map *)phys_avail[i]; > break; > } > } > if (translations == NULL) > panic("moea_bootstrap: no space to copy translations"); > bzero(translations, sz); > if (OF_getprop(mmu, "translations", translations, sz) == -1) > panic("moea_bootstrap: can't get ofw translations"); > CTR0(KTR_PMAP, "moea_bootstrap: translations"); > sz /= sizeof(*translations); > qsort(translations, sz, sizeof (*translations), om_cmp); > for (i = 0; i < sz; i++) { > CTR3(KTR_PMAP, "translation: pa=%#x va=%#x len=%#x", > translations[i].om_pa, translations[i].om_va, > translations[i].om_len); > > /* > * If the mapping is 1:1, let the RAM and device > * on-demand BAT tables take care of the translation. > */ > if (translations[i].om_va == translations[i].om_pa) > continue; > > /* Enter the pages */ > for (off = 0; off < translations[i].om_len; > off += PAGE_SIZE) > moea_kenter(mmup, translations[i].om_va + off, > translations[i].om_pa + off); > } > } > > /* > * Calculate the last available physical address. > */ > for (i = 0; phys_avail[i + 2] != 0; i += 2) > ; > Maxmem = powerpc_btop(phys_avail[i + 1]); > > moea_cpu_bootstrap(mmup,0); > > pmap_bootstrapped++; > > /* > * Set the start and end of kva. > */ > virtual_avail = VM_MIN_KERNEL_ADDRESS; > virtual_end = VM_MAX_SAFE_KERNEL_ADDRESS; > > /* > * Allocate a kernel stack with a guard page for thread0 and map it > * into the kernel page map. 
> */ > pa = moea_bootstrap_alloc(KSTACK_PAGES * PAGE_SIZE, PAGE_SIZE); > va = virtual_avail + KSTACK_GUARD_PAGES * PAGE_SIZE; > virtual_avail = va + KSTACK_PAGES * PAGE_SIZE; > CTR2(KTR_PMAP, "moea_bootstrap: kstack0 at %#x (%#x)", pa, va); > thread0.td_kstack = va; > thread0.td_kstack_pages = KSTACK_PAGES; > for (i = 0; i < KSTACK_PAGES; i++) { > moea_kenter(mmup, va, pa); > pa += PAGE_SIZE; > va += PAGE_SIZE; > } > > /* > * Allocate virtual address space for the message buffer. > */ > pa = msgbuf_phys = moea_bootstrap_alloc(msgbufsize, PAGE_SIZE); > msgbufp = (struct msgbuf *)virtual_avail; > va = virtual_avail; > virtual_avail += round_page(msgbufsize); > while (va < virtual_avail) { > moea_kenter(mmup, va, pa); > pa += PAGE_SIZE; > va += PAGE_SIZE; > } > > /* > * Allocate virtual address space for the dynamic percpu area. > */ > pa = moea_bootstrap_alloc(DPCPU_SIZE, PAGE_SIZE); > dpcpu = (void *)virtual_avail; > va = virtual_avail; > virtual_avail += DPCPU_SIZE; > while (va < virtual_avail) { > moea_kenter(mmup, va, pa); > pa += PAGE_SIZE; > va += PAGE_SIZE; > } > dpcpu_init(dpcpu, 0); > } > > /* > * Activate a user pmap. The pmap must be activated before its address > * space can be accessed in any way. > */ > void > moea_activate(mmu_t mmu, struct thread *td) > { > pmap_t pm, pmr; > > /* > * Load all the data we need up front to encourage the compiler to > * not issue any loads while we have interrupts disabled below. 
> */ > pm = &td->td_proc->p_vmspace->vm_pmap; > pmr = pm->pmap_phys; > > CPU_SET(PCPU_GET(cpuid), &pm->pm_active); > PCPU_SET(curpmap, pmr); > } > > void > moea_deactivate(mmu_t mmu, struct thread *td) > { > pmap_t pm; > > pm = &td->td_proc->p_vmspace->vm_pmap; > CPU_CLR(PCPU_GET(cpuid), &pm->pm_active); > PCPU_SET(curpmap, NULL); > } > > void > moea_unwire(mmu_t mmu, pmap_t pm, vm_offset_t sva, vm_offset_t eva) > { > struct pvo_entry key, *pvo; > > PMAP_LOCK(pm); > key.pvo_vaddr = sva; > for (pvo = RB_NFIND(pvo_tree, &pm->pmap_pvo, &key); > pvo != NULL && PVO_VADDR(pvo) < eva; > pvo = RB_NEXT(pvo_tree, &pm->pmap_pvo, pvo)) { > if ((pvo->pvo_vaddr & PVO_WIRED) == 0) > panic("moea_unwire: pvo %p is missing PVO_WIRED", pvo); > pvo->pvo_vaddr &= ~PVO_WIRED; > pm->pm_stats.wired_count--; > } > PMAP_UNLOCK(pm); > } > > void > moea_copy_page(mmu_t mmu, vm_page_t msrc, vm_page_t mdst) > { > vm_offset_t dst; > vm_offset_t src; > > dst = VM_PAGE_TO_PHYS(mdst); > src = VM_PAGE_TO_PHYS(msrc); > > bcopy((void *)src, (void *)dst, PAGE_SIZE); > } > > void > moea_copy_pages(mmu_t mmu, vm_page_t *ma, vm_offset_t a_offset, > vm_page_t *mb, vm_offset_t b_offset, int xfersize) > { > void *a_cp, *b_cp; > vm_offset_t a_pg_offset, b_pg_offset; > int cnt; > > while (xfersize > 0) { > a_pg_offset = a_offset & PAGE_MASK; > cnt = min(xfersize, PAGE_SIZE - a_pg_offset); > a_cp = (char *)VM_PAGE_TO_PHYS(ma[a_offset >> PAGE_SHIFT]) + > a_pg_offset; > b_pg_offset = b_offset & PAGE_MASK; > cnt = min(cnt, PAGE_SIZE - b_pg_offset); > b_cp = (char *)VM_PAGE_TO_PHYS(mb[b_offset >> PAGE_SHIFT]) + > b_pg_offset; > bcopy(a_cp, b_cp, cnt); > a_offset += cnt; > b_offset += cnt; > xfersize -= cnt; > } > } > > /* > * Zero a page of physical memory by temporarily mapping it into the tlb. 
> */ > void > moea_zero_page(mmu_t mmu, vm_page_t m) > { > vm_offset_t off, pa = VM_PAGE_TO_PHYS(m); > > for (off = 0; off < PAGE_SIZE; off += cacheline_size) > __asm __volatile("dcbz 0,%0" :: "r"(pa + off)); > } > > void > moea_zero_page_area(mmu_t mmu, vm_page_t m, int off, int size) > { > vm_offset_t pa = VM_PAGE_TO_PHYS(m); > void *va = (void *)(pa + off); > > bzero(va, size); > } > > void > moea_zero_page_idle(mmu_t mmu, vm_page_t m) > { > > moea_zero_page(mmu, m); > } > > /* > * Map the given physical page at the specified virtual address in the > * target pmap with the protection requested. If specified the page > * will be wired down. > */ > int > moea_enter(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot, > u_int flags, int8_t psind) > { > int error; > > for (;;) { > rw_wlock(&pvh_global_lock); > PMAP_LOCK(pmap); > error = moea_enter_locked(pmap, va, m, prot, flags, psind); > rw_wunlock(&pvh_global_lock); > PMAP_UNLOCK(pmap); > if (error != ENOMEM) > return (KERN_SUCCESS); > if ((flags & PMAP_ENTER_NOSLEEP) != 0) > return (KERN_RESOURCE_SHORTAGE); > VM_OBJECT_ASSERT_UNLOCKED(m->object); > VM_WAIT; > } > } > > /* > * Map the given physical page at the specified virtual address in the > * target pmap with the protection requested. If specified the page > * will be wired down. > * > * The global pvh and pmap must be locked. 
> */ > static int > moea_enter_locked(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot, > u_int flags, int8_t psind __unused) > { > struct pvo_head *pvo_head; > uma_zone_t zone; > u_int pte_lo, pvo_flags; > int error; > > if (pmap_bootstrapped) > rw_assert(&pvh_global_lock, RA_WLOCKED); > PMAP_LOCK_ASSERT(pmap, MA_OWNED); > if ((m->oflags & VPO_UNMANAGED) == 0 && !vm_page_xbusied(m)) > VM_OBJECT_ASSERT_LOCKED(m->object); > > if ((m->oflags & VPO_UNMANAGED) != 0 || !moea_initialized) { > pvo_head = &moea_pvo_kunmanaged; > zone = moea_upvo_zone; > pvo_flags = 0; > } else { > pvo_head = vm_page_to_pvoh(m); > zone = moea_mpvo_zone; > pvo_flags = PVO_MANAGED; > } > > pte_lo = moea_calc_wimg(VM_PAGE_TO_PHYS(m), pmap_page_get_memattr(m)); > > if (prot & VM_PROT_WRITE) { > pte_lo |= PTE_BW; > if (pmap_bootstrapped && > (m->oflags & VPO_UNMANAGED) == 0) > vm_page_aflag_set(m, PGA_WRITEABLE); > } else > pte_lo |= PTE_BR; > > if ((flags & PMAP_ENTER_WIRED) != 0) > pvo_flags |= PVO_WIRED; > > error = moea_pvo_enter(pmap, zone, pvo_head, va, VM_PAGE_TO_PHYS(m), > pte_lo, pvo_flags); > > /* > * Flush the real page from the instruction cache. This has to be done > * for all user mappings to prevent information leakage via the > * instruction cache. moea_pvo_enter() returns ENOENT for the first > * mapping for a page. > */ > if (pmap != kernel_pmap && error == ENOENT && > (pte_lo & (PTE_I | PTE_G)) == 0) > moea_syncicache(VM_PAGE_TO_PHYS(m), PAGE_SIZE); > > return (error); > } > > /* > * Maps a sequence of resident pages belonging to the same object. > * The sequence begins with the given page m_start. This page is > * mapped at the given virtual address start. Each subsequent page is > * mapped at a virtual address that is offset from start by the same > * amount as the page is offset from m_start within the object. 
The > * last page in the sequence is the page with the largest offset from > * m_start that can be mapped at a virtual address less than the given > * virtual address end. Not every virtual page between start and end > * is mapped; only those for which a resident page exists with the > * corresponding offset from m_start are mapped. > */ > void > moea_enter_object(mmu_t mmu, pmap_t pm, vm_offset_t start, vm_offset_t end, > vm_page_t m_start, vm_prot_t prot) > { > vm_page_t m; > vm_pindex_t diff, psize; > > VM_OBJECT_ASSERT_LOCKED(m_start->object); > > psize = atop(end - start); > m = m_start; > rw_wlock(&pvh_global_lock); > PMAP_LOCK(pm); > while (m != NULL && (diff = m->pindex - m_start->pindex) < psize) { > moea_enter_locked(pm, start + ptoa(diff), m, prot & > (VM_PROT_READ | VM_PROT_EXECUTE), 0, 0); > m = TAILQ_NEXT(m, listq); > } > rw_wunlock(&pvh_global_lock); > PMAP_UNLOCK(pm); > } > > void > moea_enter_quick(mmu_t mmu, pmap_t pm, vm_offset_t va, vm_page_t m, > vm_prot_t prot) > { > > rw_wlock(&pvh_global_lock); > PMAP_LOCK(pm); > moea_enter_locked(pm, va, m, prot & (VM_PROT_READ | VM_PROT_EXECUTE), > 0, 0); > rw_wunlock(&pvh_global_lock); > PMAP_UNLOCK(pm); > } > > vm_paddr_t > moea_extract(mmu_t mmu, pmap_t pm, vm_offset_t va) > { > struct pvo_entry *pvo; > vm_paddr_t pa; > > PMAP_LOCK(pm); > pvo = moea_pvo_find_va(pm, va & ~ADDR_POFF, NULL); > if (pvo == NULL) > pa = 0; > else > pa = (pvo->pvo_pte.pte.pte_lo & PTE_RPGN) | (va & ADDR_POFF); > PMAP_UNLOCK(pm); > return (pa); > } > > /* > * Atomically extract and hold the physical page with the given > * pmap and virtual address pair if that mapping permits the given > * protection. 
> */ > vm_page_t > moea_extract_and_hold(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_prot_t prot) > { > struct pvo_entry *pvo; > vm_page_t m; > vm_paddr_t pa; > > m = NULL; > pa = 0; > PMAP_LOCK(pmap); > retry: > pvo = moea_pvo_find_va(pmap, va & ~ADDR_POFF, NULL); > if (pvo != NULL && (pvo->pvo_pte.pte.pte_hi & PTE_VALID) && > ((pvo->pvo_pte.pte.pte_lo & PTE_PP) == PTE_RW || > (prot & VM_PROT_WRITE) == 0)) { > if (vm_page_pa_tryrelock(pmap, pvo->pvo_pte.pte.pte_lo & PTE_RPGN, &pa)) > goto retry; > m = PHYS_TO_VM_PAGE(pvo->pvo_pte.pte.pte_lo & PTE_RPGN); > vm_page_hold(m); > } > PA_UNLOCK_COND(pa); > PMAP_UNLOCK(pmap); > return (m); > } > > void > moea_init(mmu_t mmu) > { > > moea_upvo_zone = uma_zcreate("UPVO entry", sizeof (struct pvo_entry), > NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, > UMA_ZONE_VM | UMA_ZONE_NOFREE); > moea_mpvo_zone = uma_zcreate("MPVO entry", sizeof(struct pvo_entry), > NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, > UMA_ZONE_VM | UMA_ZONE_NOFREE); > moea_initialized = TRUE; > } > > boolean_t > moea_is_referenced(mmu_t mmu, vm_page_t m) > { > boolean_t rv; > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("moea_is_referenced: page %p is not managed", m)); > rw_wlock(&pvh_global_lock); > rv = moea_query_bit(m, PTE_REF); > rw_wunlock(&pvh_global_lock); > return (rv); > } > > boolean_t > moea_is_modified(mmu_t mmu, vm_page_t m) > { > boolean_t rv; > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("moea_is_modified: page %p is not managed", m)); > > /* > * If the page is not exclusive busied, then PGA_WRITEABLE cannot be > * concurrently set while the object is locked. Thus, if PGA_WRITEABLE > * is clear, no PTEs can have PTE_CHG set. 
> */ > VM_OBJECT_ASSERT_WLOCKED(m->object); > if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0) > return (FALSE); > rw_wlock(&pvh_global_lock); > rv = moea_query_bit(m, PTE_CHG); > rw_wunlock(&pvh_global_lock); > return (rv); > } > > boolean_t > moea_is_prefaultable(mmu_t mmu, pmap_t pmap, vm_offset_t va) > { > struct pvo_entry *pvo; > boolean_t rv; > > PMAP_LOCK(pmap); > pvo = moea_pvo_find_va(pmap, va & ~ADDR_POFF, NULL); > rv = pvo == NULL || (pvo->pvo_pte.pte.pte_hi & PTE_VALID) == 0; > PMAP_UNLOCK(pmap); > return (rv); > } > > void > moea_clear_modify(mmu_t mmu, vm_page_t m) > { > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("moea_clear_modify: page %p is not managed", m)); > VM_OBJECT_ASSERT_WLOCKED(m->object); > KASSERT(!vm_page_xbusied(m), > ("moea_clear_modify: page %p is exclusive busy", m)); > > /* > * If the page is not PGA_WRITEABLE, then no PTEs can have PTE_CHG > * set. If the object containing the page is locked and the page is > * not exclusive busied, then PGA_WRITEABLE cannot be concurrently set. > */ > if ((m->aflags & PGA_WRITEABLE) == 0) > return; > rw_wlock(&pvh_global_lock); > moea_clear_bit(m, PTE_CHG); > rw_wunlock(&pvh_global_lock); > } > > /* > * Clear the write and modified bits in each of the given page's mappings. > */ > void > moea_remove_write(mmu_t mmu, vm_page_t m) > { > struct pvo_entry *pvo; > struct pte *pt; > pmap_t pmap; > u_int lo; > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("moea_remove_write: page %p is not managed", m)); > > /* > * If the page is not exclusive busied, then PGA_WRITEABLE cannot be > * set by another thread while the object is locked. Thus, > * if PGA_WRITEABLE is clear, no page table entries need updating. 
> */ > VM_OBJECT_ASSERT_WLOCKED(m->object); > if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0) > return; > rw_wlock(&pvh_global_lock); > lo = moea_attr_fetch(m); > powerpc_sync(); > LIST_FOREACH(pvo, vm_page_to_pvoh(m), pvo_vlink) { > pmap = pvo->pvo_pmap; > PMAP_LOCK(pmap); > if ((pvo->pvo_pte.pte.pte_lo & PTE_PP) != PTE_BR) { > pt = moea_pvo_to_pte(pvo, -1); > pvo->pvo_pte.pte.pte_lo &= ~PTE_PP; > pvo->pvo_pte.pte.pte_lo |= PTE_BR; > if (pt != NULL) { > moea_pte_synch(pt, &pvo->pvo_pte.pte); > lo |= pvo->pvo_pte.pte.pte_lo; > pvo->pvo_pte.pte.pte_lo &= ~PTE_CHG; > moea_pte_change(pt, &pvo->pvo_pte.pte, > pvo->pvo_vaddr); > mtx_unlock(&moea_table_mutex); > } > } > PMAP_UNLOCK(pmap); > } > if ((lo & PTE_CHG) != 0) { > moea_attr_clear(m, PTE_CHG); > vm_page_dirty(m); > } > vm_page_aflag_clear(m, PGA_WRITEABLE); > rw_wunlock(&pvh_global_lock); > } > > /* > * moea_ts_referenced: > * > * Return a count of reference bits for a page, clearing those bits. > * It is not necessary for every reference bit to be cleared, but it > * is necessary that 0 only be returned when there are truly no > * reference bits set. > * > * XXX: The exact number of bits to check and clear is a matter that > * should be tested and standardized at some point in the future for > * optimal aging of shared pages. > */ > int > moea_ts_referenced(mmu_t mmu, vm_page_t m) > { > int count; > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("moea_ts_referenced: page %p is not managed", m)); > rw_wlock(&pvh_global_lock); > count = moea_clear_bit(m, PTE_REF); > rw_wunlock(&pvh_global_lock); > return (count); > } > > /* > * Modify the WIMG settings of all mappings for a page. 
> */ > void > moea_page_set_memattr(mmu_t mmu, vm_page_t m, vm_memattr_t ma) > { > struct pvo_entry *pvo; > struct pvo_head *pvo_head; > struct pte *pt; > pmap_t pmap; > u_int lo; > > if ((m->oflags & VPO_UNMANAGED) != 0) { > m->md.mdpg_cache_attrs = ma; > return; > } > > rw_wlock(&pvh_global_lock); > pvo_head = vm_page_to_pvoh(m); > lo = moea_calc_wimg(VM_PAGE_TO_PHYS(m), ma); > > LIST_FOREACH(pvo, pvo_head, pvo_vlink) { > pmap = pvo->pvo_pmap; > PMAP_LOCK(pmap); > pt = moea_pvo_to_pte(pvo, -1); > pvo->pvo_pte.pte.pte_lo &= ~PTE_WIMG; > pvo->pvo_pte.pte.pte_lo |= lo; > if (pt != NULL) { > moea_pte_change(pt, &pvo->pvo_pte.pte, > pvo->pvo_vaddr); > if (pvo->pvo_pmap == kernel_pmap) > isync(); > } > mtx_unlock(&moea_table_mutex); > PMAP_UNLOCK(pmap); > } > m->md.mdpg_cache_attrs = ma; > rw_wunlock(&pvh_global_lock); > } > > /* > * Map a wired page into kernel virtual address space. > */ > void > moea_kenter(mmu_t mmu, vm_offset_t va, vm_paddr_t pa) > { > > moea_kenter_attr(mmu, va, pa, VM_MEMATTR_DEFAULT); > } > > void > moea_kenter_attr(mmu_t mmu, vm_offset_t va, vm_offset_t pa, vm_memattr_t ma) > { > u_int pte_lo; > int error; > > #if 0 > if (va < VM_MIN_KERNEL_ADDRESS) > panic("moea_kenter: attempt to enter non-kernel address %#x", > va); > #endif > > pte_lo = moea_calc_wimg(pa, ma); > > PMAP_LOCK(kernel_pmap); > error = moea_pvo_enter(kernel_pmap, moea_upvo_zone, > &moea_pvo_kunmanaged, va, pa, pte_lo, PVO_WIRED); > > if (error != 0 && error != ENOENT) > panic("moea_kenter: failed to enter va %#x pa %#x: %d", va, > pa, error); > > PMAP_UNLOCK(kernel_pmap); > } > > /* > * Extract the physical page address associated with the given kernel virtual > * address. 
> */ > vm_paddr_t > moea_kextract(mmu_t mmu, vm_offset_t va) > { > struct pvo_entry *pvo; > vm_paddr_t pa; > > /* > * Allow direct mappings on 32-bit OEA > */ > if (va < VM_MIN_KERNEL_ADDRESS) { > return (va); > } > > PMAP_LOCK(kernel_pmap); > pvo = moea_pvo_find_va(kernel_pmap, va & ~ADDR_POFF, NULL); > KASSERT(pvo != NULL, ("moea_kextract: no addr found")); > pa = (pvo->pvo_pte.pte.pte_lo & PTE_RPGN) | (va & ADDR_POFF); > PMAP_UNLOCK(kernel_pmap); > return (pa); > } > > /* > * Remove a wired page from kernel virtual address space. > */ > void > moea_kremove(mmu_t mmu, vm_offset_t va) > { > > moea_remove(mmu, kernel_pmap, va, va + PAGE_SIZE); > } > > /* > * Map a range of physical addresses into kernel virtual address space. > * > * The value passed in *virt is a suggested virtual address for the mapping. > * Architectures which can support a direct-mapped physical to virtual region > * can return the appropriate address within that region, leaving '*virt' > * unchanged. We cannot and therefore do not; *virt is updated with the > * first usable address after the mapped region. > */ > vm_offset_t > moea_map(mmu_t mmu, vm_offset_t *virt, vm_paddr_t pa_start, > vm_paddr_t pa_end, int prot) > { > vm_offset_t sva, va; > > sva = *virt; > va = sva; > for (; pa_start < pa_end; pa_start += PAGE_SIZE, va += PAGE_SIZE) > moea_kenter(mmu, va, pa_start); > *virt = va; > return (sva); > } > > /* > * Returns true if the pmap's pv is one of the first > * 16 pvs linked to from this page. This count may > * be changed upwards or downwards in the future; it > * is only necessary that true be returned for a small > * subset of pmaps for proper page aging. 
> */ > boolean_t > moea_page_exists_quick(mmu_t mmu, pmap_t pmap, vm_page_t m) > { > int loops; > struct pvo_entry *pvo; > boolean_t rv; > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("moea_page_exists_quick: page %p is not managed", m)); > loops = 0; > rv = FALSE; > rw_wlock(&pvh_global_lock); > LIST_FOREACH(pvo, vm_page_to_pvoh(m), pvo_vlink) { > if (pvo->pvo_pmap == pmap) { > rv = TRUE; > break; > } > if (++loops >= 16) > break; > } > rw_wunlock(&pvh_global_lock); > return (rv); > } > > /* > * Return the number of managed mappings to the given physical page > * that are wired. > */ > int > moea_page_wired_mappings(mmu_t mmu, vm_page_t m) > { > struct pvo_entry *pvo; > int count; > > count = 0; > if ((m->oflags & VPO_UNMANAGED) != 0) > return (count); > rw_wlock(&pvh_global_lock); > LIST_FOREACH(pvo, vm_page_to_pvoh(m), pvo_vlink) > if ((pvo->pvo_vaddr & PVO_WIRED) != 0) > count++; > rw_wunlock(&pvh_global_lock); > return (count); > } > > static u_int moea_vsidcontext; > > void > moea_pinit(mmu_t mmu, pmap_t pmap) > { > int i, mask; > u_int entropy; > > KASSERT((int)pmap < VM_MIN_KERNEL_ADDRESS, ("moea_pinit: virt pmap")); > RB_INIT(&pmap->pmap_pvo); > > entropy = 0; > __asm __volatile("mftb %0" : "=r"(entropy)); > > if ((pmap->pmap_phys = (pmap_t)moea_kextract(mmu, (vm_offset_t)pmap)) > == NULL) { > pmap->pmap_phys = pmap; > } > > > mtx_lock(&moea_vsid_mutex); > /* > * Allocate some segment registers for this pmap. > */ > for (i = 0; i < NPMAPS; i += VSID_NBPW) { > u_int hash, n; > > /* > * Create a new value by mutiplying by a prime and adding in > * entropy from the timebase register. This is to make the > * VSID more random so that the PT hash function collides > * less often. (Note that the prime casues gcc to do shifts > * instead of a multiply.) 
> */ > moea_vsidcontext = (moea_vsidcontext * 0x1105) + entropy; > hash = moea_vsidcontext & (NPMAPS - 1); > if (hash == 0) /* 0 is special, avoid it */ > continue; > n = hash >> 5; > mask = 1 << (hash & (VSID_NBPW - 1)); > hash = (moea_vsidcontext & 0xfffff); > if (moea_vsid_bitmap[n] & mask) { /* collision? */ > /* anything free in this bucket? */ > if (moea_vsid_bitmap[n] == 0xffffffff) { > entropy = (moea_vsidcontext >> 20); > continue; > } > i = ffs(~moea_vsid_bitmap[n]) - 1; > mask = 1 << i; > hash &= 0xfffff & ~(VSID_NBPW - 1); > hash |= i; > } > KASSERT(!(moea_vsid_bitmap[n] & mask), > ("Allocating in-use VSID group %#x\n", hash)); > moea_vsid_bitmap[n] |= mask; > for (i = 0; i < 16; i++) > pmap->pm_sr[i] = VSID_MAKE(i, hash); > mtx_unlock(&moea_vsid_mutex); > return; > } > > mtx_unlock(&moea_vsid_mutex); > panic("moea_pinit: out of segments"); > } > > /* > * Initialize the pmap associated with process 0. > */ > void > moea_pinit0(mmu_t mmu, pmap_t pm) > { > > PMAP_LOCK_INIT(pm); > moea_pinit(mmu, pm); > bzero(&pm->pm_stats, sizeof(pm->pm_stats)); > } > > /* > * Set the physical protection on the specified range of this map as requested. > */ > void > moea_protect(mmu_t mmu, pmap_t pm, vm_offset_t sva, vm_offset_t eva, > vm_prot_t prot) > { > struct pvo_entry *pvo, *tpvo, key; > struct pte *pt; > > KASSERT(pm == &curproc->p_vmspace->vm_pmap || pm == kernel_pmap, > ("moea_protect: non current pmap")); > > if ((prot & VM_PROT_READ) == VM_PROT_NONE) { > moea_remove(mmu, pm, sva, eva); > return; > } > > rw_wlock(&pvh_global_lock); > PMAP_LOCK(pm); > key.pvo_vaddr = sva; > for (pvo = RB_NFIND(pvo_tree, &pm->pmap_pvo, &key); > pvo != NULL && PVO_VADDR(pvo) < eva; pvo = tpvo) { > tpvo = RB_NEXT(pvo_tree, &pm->pmap_pvo, pvo); > > /* > * Grab the PTE pointer before we diddle with the cached PTE > * copy. > */ > pt = moea_pvo_to_pte(pvo, -1); > /* > * Change the protection of the page. 
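The VSID allocation in moea_pinit() above hashes a mutating per-boot counter into a bitmap of VSID groups; on a collision it falls back to ffs() on the complement to grab any free bit in the same word. A minimal user-space sketch of that bucket search (function name is ours; VSID_NBPW is assumed to be 32 bits per word as in the pmap code):

```c
#include <assert.h>
#include <stdint.h>
#include <strings.h>   /* ffs() */

#define VSID_NBPW 32   /* assumed: bits per bitmap word */

/*
 * Find a free VSID slot in one bitmap word: try the hashed bit first;
 * on a collision, take the first clear bit via ffs() on the complement.
 * Returns the claimed bit index, or -1 if the word is exhausted.
 */
static int
vsid_alloc_bit(uint32_t *bitmap_word, uint32_t hash)
{
	int bit = hash & (VSID_NBPW - 1);
	uint32_t mask = 1u << bit;

	if ((*bitmap_word & mask) != 0) {	/* collision? */
		if (*bitmap_word == 0xffffffffu)
			return (-1);		/* bucket full */
		bit = ffs(~*bitmap_word) - 1;	/* first free bit */
		mask = 1u << bit;
	}
	*bitmap_word |= mask;
	return (bit);
}
```

The kernel version additionally retries other buckets with fresh timebase entropy before panicking with "out of segments".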
> */ > pvo->pvo_pte.pte.pte_lo &= ~PTE_PP; > pvo->pvo_pte.pte.pte_lo |= PTE_BR; > > /* > * If the PVO is in the page table, update that pte as well. > */ > if (pt != NULL) { > moea_pte_change(pt, &pvo->pvo_pte.pte, pvo->pvo_vaddr); > mtx_unlock(&moea_table_mutex); > } > } > rw_wunlock(&pvh_global_lock); > PMAP_UNLOCK(pm); > } > > /* > * Map a list of wired pages into kernel virtual address space. This is > * intended for temporary mappings which do not need page modification or > * references recorded. Existing mappings in the region are overwritten. > */ > void > moea_qenter(mmu_t mmu, vm_offset_t sva, vm_page_t *m, int count) > { > vm_offset_t va; > > va = sva; > while (count-- > 0) { > moea_kenter(mmu, va, VM_PAGE_TO_PHYS(*m)); > va += PAGE_SIZE; > m++; > } > } > > /* > * Remove page mappings from kernel virtual address space. Intended for > * temporary mappings entered by moea_qenter. > */ > void > moea_qremove(mmu_t mmu, vm_offset_t sva, int count) > { > vm_offset_t va; > > va = sva; > while (count-- > 0) { > moea_kremove(mmu, va); > va += PAGE_SIZE; > } > } > > void > moea_release(mmu_t mmu, pmap_t pmap) > { > int idx, mask; > > /* > * Free segment register's VSID > */ > if (pmap->pm_sr[0] == 0) > panic("moea_release"); > > mtx_lock(&moea_vsid_mutex); > idx = VSID_TO_HASH(pmap->pm_sr[0]) & (NPMAPS-1); > mask = 1 << (idx % VSID_NBPW); > idx /= VSID_NBPW; > moea_vsid_bitmap[idx] &= ~mask; > mtx_unlock(&moea_vsid_mutex); > } > > /* > * Remove the given range of addresses from the specified map. 
> */ > void > moea_remove(mmu_t mmu, pmap_t pm, vm_offset_t sva, vm_offset_t eva) > { > struct pvo_entry *pvo, *tpvo, key; > > rw_wlock(&pvh_global_lock); > PMAP_LOCK(pm); > key.pvo_vaddr = sva; > for (pvo = RB_NFIND(pvo_tree, &pm->pmap_pvo, &key); > pvo != NULL && PVO_VADDR(pvo) < eva; pvo = tpvo) { > tpvo = RB_NEXT(pvo_tree, &pm->pmap_pvo, pvo); > moea_pvo_remove(pvo, -1); > } > PMAP_UNLOCK(pm); > rw_wunlock(&pvh_global_lock); > } > > /* > * Remove physical page from all pmaps in which it resides. moea_pvo_remove() > * will reflect changes in pte's back to the vm_page. > */ > void > moea_remove_all(mmu_t mmu, vm_page_t m) > { > struct pvo_head *pvo_head; > struct pvo_entry *pvo, *next_pvo; > pmap_t pmap; > > rw_wlock(&pvh_global_lock); > pvo_head = vm_page_to_pvoh(m); > for (pvo = LIST_FIRST(pvo_head); pvo != NULL; pvo = next_pvo) { > next_pvo = LIST_NEXT(pvo, pvo_vlink); > > pmap = pvo->pvo_pmap; > PMAP_LOCK(pmap); > moea_pvo_remove(pvo, -1); > PMAP_UNLOCK(pmap); > } > if ((m->aflags & PGA_WRITEABLE) && moea_query_bit(m, PTE_CHG)) { > moea_attr_clear(m, PTE_CHG); > vm_page_dirty(m); > } > vm_page_aflag_clear(m, PGA_WRITEABLE); > rw_wunlock(&pvh_global_lock); > } > > /* > * Allocate a physical page of memory directly from the phys_avail map. > * Can only be called from moea_bootstrap before avail start and end are > * calculated. 
> */ > static vm_offset_t > moea_bootstrap_alloc(vm_size_t size, u_int align) > { > vm_offset_t s, e; > int i, j; > > size = round_page(size); > for (i = 0; phys_avail[i + 1] != 0; i += 2) { > if (align != 0) > s = (phys_avail[i] + align - 1) & ~(align - 1); > else > s = phys_avail[i]; > e = s + size; > > if (s < phys_avail[i] || e > phys_avail[i + 1]) > continue; > > if (s == phys_avail[i]) { > phys_avail[i] += size; > } else if (e == phys_avail[i + 1]) { > phys_avail[i + 1] -= size; > } else { > for (j = phys_avail_count * 2; j > i; j -= 2) { > phys_avail[j] = phys_avail[j - 2]; > phys_avail[j + 1] = phys_avail[j - 1]; > } > > phys_avail[i + 3] = phys_avail[i + 1]; > phys_avail[i + 1] = s; > phys_avail[i + 2] = e; > phys_avail_count++; > } > > return (s); > } > panic("moea_bootstrap_alloc: could not allocate memory"); > } > > static void > moea_syncicache(vm_offset_t pa, vm_size_t len) > { > __syncicache((void *)pa, len); > } > > static int > moea_pvo_enter(pmap_t pm, uma_zone_t zone, struct pvo_head *pvo_head, > vm_offset_t va, vm_offset_t pa, u_int pte_lo, int flags) > { > struct pvo_entry *pvo; > u_int sr; > int first; > u_int ptegidx; > int i; > int bootstrap; > > moea_pvo_enter_calls++; > first = 0; > bootstrap = 0; > > /* > * Compute the PTE Group index. > */ > va &= ~ADDR_POFF; > sr = va_to_sr(pm->pm_sr, va); > ptegidx = va_to_pteg(sr, va); > > /* > * Remove any existing mapping for this page. Reuse the pvo entry if > * there is a mapping. > */ > mtx_lock(&moea_table_mutex); > LIST_FOREACH(pvo, &moea_pvo_table[ptegidx], pvo_olink) { > if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) { > if ((pvo->pvo_pte.pte.pte_lo & PTE_RPGN) == pa && > (pvo->pvo_pte.pte.pte_lo & PTE_PP) == > (pte_lo & PTE_PP)) { > /* > * The PTE is not changing. Instead, this may > * be a request to change the mapping's wired > * attribute. 
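moea_bootstrap_alloc() above carves an aligned, page-rounded block out of the zero-terminated phys_avail[] pair array, handling three cases: trim from the front, trim from the back, or split a region in two (shifting later pairs up). A user-space sketch of the same carving logic (function name, PAGE constant, and test values are ours):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE 4096u   /* assumed page size */

/*
 * Carve 'size' bytes (page-rounded, optionally aligned) out of a
 * zero-terminated {start, end} pair array.  Mirrors the three cases in
 * moea_bootstrap_alloc(): trim front, trim back, or split the region.
 * Returns the chosen start address, or 0 on failure.
 */
static uintptr_t
bootstrap_carve(uintptr_t *avail, int *count, uintptr_t size, uintptr_t align)
{
	uintptr_t s, e;
	int i, j;

	size = (size + PAGE - 1) & ~(uintptr_t)(PAGE - 1);
	for (i = 0; avail[i + 1] != 0; i += 2) {
		s = align != 0 ? (avail[i] + align - 1) & ~(align - 1) :
		    avail[i];
		e = s + size;
		if (s < avail[i] || e > avail[i + 1])
			continue;
		if (s == avail[i])
			avail[i] += size;		/* trim front */
		else if (e == avail[i + 1])
			avail[i + 1] -= size;		/* trim back */
		else {					/* split in two */
			for (j = *count * 2; j > i; j -= 2) {
				avail[j] = avail[j - 2];
				avail[j + 1] = avail[j - 1];
			}
			avail[i + 3] = avail[i + 1];
			avail[i + 1] = s;
			avail[i + 2] = e;
			(*count)++;
		}
		return (s);
	}
	return (0);
}
```

The split case is why the routine may only run before avail_start/avail_end are computed: it grows the pair array in place.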
> */ > mtx_unlock(&moea_table_mutex); > if ((flags & PVO_WIRED) != 0 && > (pvo->pvo_vaddr & PVO_WIRED) == 0) { > pvo->pvo_vaddr |= PVO_WIRED; > pm->pm_stats.wired_count++; > } else if ((flags & PVO_WIRED) == 0 && > (pvo->pvo_vaddr & PVO_WIRED) != 0) { > pvo->pvo_vaddr &= ~PVO_WIRED; > pm->pm_stats.wired_count--; > } > return (0); > } > moea_pvo_remove(pvo, -1); > break; > } > } > > /* > * If we aren't overwriting a mapping, try to allocate. > */ > if (moea_initialized) { > pvo = uma_zalloc(zone, M_NOWAIT); > } else { > if (moea_bpvo_pool_index >= BPVO_POOL_SIZE) { > panic("moea_enter: bpvo pool exhausted, %d, %d, %d", > moea_bpvo_pool_index, BPVO_POOL_SIZE, > BPVO_POOL_SIZE * sizeof(struct pvo_entry)); > } > pvo = &moea_bpvo_pool[moea_bpvo_pool_index]; > moea_bpvo_pool_index++; > bootstrap = 1; > } > > if (pvo == NULL) { > mtx_unlock(&moea_table_mutex); > return (ENOMEM); > } > > moea_pvo_entries++; > pvo->pvo_vaddr = va; > pvo->pvo_pmap = pm; > LIST_INSERT_HEAD(&moea_pvo_table[ptegidx], pvo, pvo_olink); > pvo->pvo_vaddr &= ~ADDR_POFF; > if (flags & PVO_WIRED) > pvo->pvo_vaddr |= PVO_WIRED; > if (pvo_head != &moea_pvo_kunmanaged) > pvo->pvo_vaddr |= PVO_MANAGED; > if (bootstrap) > pvo->pvo_vaddr |= PVO_BOOTSTRAP; > > moea_pte_create(&pvo->pvo_pte.pte, sr, va, pa | pte_lo); > > /* > * Add to pmap list > */ > RB_INSERT(pvo_tree, &pm->pmap_pvo, pvo); > > /* > * Remember if the list was empty and therefore will be the first > * item. > */ > if (LIST_FIRST(pvo_head) == NULL) > first = 1; > LIST_INSERT_HEAD(pvo_head, pvo, pvo_vlink); > > if (pvo->pvo_vaddr & PVO_WIRED) > pm->pm_stats.wired_count++; > pm->pm_stats.resident_count++; > > i = moea_pte_insert(ptegidx, &pvo->pvo_pte.pte); > KASSERT(i < 8, ("Invalid PTE index")); > if (i >= 0) { > PVO_PTEGIDX_SET(pvo, i); > } else { > panic("moea_pvo_enter: overflow"); > moea_pte_overflow++; > } > mtx_unlock(&moea_table_mutex); > > return (first ? 
ENOENT : 0); > } > > static void > moea_pvo_remove(struct pvo_entry *pvo, int pteidx) > { > struct pte *pt; > > /* > * If there is an active pte entry, we need to deactivate it (and > * save the ref & cfg bits). > */ > pt = moea_pvo_to_pte(pvo, pteidx); > if (pt != NULL) { > moea_pte_unset(pt, &pvo->pvo_pte.pte, pvo->pvo_vaddr); > mtx_unlock(&moea_table_mutex); > PVO_PTEGIDX_CLR(pvo); > } else { > moea_pte_overflow--; > } > > /* > * Update our statistics. > */ > pvo->pvo_pmap->pm_stats.resident_count--; > if (pvo->pvo_vaddr & PVO_WIRED) > pvo->pvo_pmap->pm_stats.wired_count--; > > /* > * Save the REF/CHG bits into their cache if the page is managed. > */ > if ((pvo->pvo_vaddr & PVO_MANAGED) == PVO_MANAGED) { > struct vm_page *pg; > > pg = PHYS_TO_VM_PAGE(pvo->pvo_pte.pte.pte_lo & PTE_RPGN); > if (pg != NULL) { > moea_attr_save(pg, pvo->pvo_pte.pte.pte_lo & > (PTE_REF | PTE_CHG)); > } > } > > /* > * Remove this PVO from the PV and pmap lists. > */ > LIST_REMOVE(pvo, pvo_vlink); > RB_REMOVE(pvo_tree, &pvo->pvo_pmap->pmap_pvo, pvo); > > /* > * Remove this from the overflow list and return it to the pool > * if we aren't going to reuse it. > */ > LIST_REMOVE(pvo, pvo_olink); > if (!(pvo->pvo_vaddr & PVO_BOOTSTRAP)) > uma_zfree(pvo->pvo_vaddr & PVO_MANAGED ? moea_mpvo_zone : > moea_upvo_zone, pvo); > moea_pvo_entries--; > moea_pvo_remove_calls++; > } > > static __inline int > moea_pvo_pte_index(const struct pvo_entry *pvo, int ptegidx) > { > int pteidx; > > /* > * We can find the actual pte entry without searching by grabbing > * the PTEG index from 3 unused bits in pte_lo[11:9] and by > * noticing the HID bit. 
> */ > pteidx = ptegidx * 8 + PVO_PTEGIDX_GET(pvo); > if (pvo->pvo_pte.pte.pte_hi & PTE_HID) > pteidx ^= moea_pteg_mask * 8; > > return (pteidx); > } > > static struct pvo_entry * > moea_pvo_find_va(pmap_t pm, vm_offset_t va, int *pteidx_p) > { > struct pvo_entry *pvo; > int ptegidx; > u_int sr; > > va &= ~ADDR_POFF; > sr = va_to_sr(pm->pm_sr, va); > ptegidx = va_to_pteg(sr, va); > > mtx_lock(&moea_table_mutex); > LIST_FOREACH(pvo, &moea_pvo_table[ptegidx], pvo_olink) { > if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) { > if (pteidx_p) > *pteidx_p = moea_pvo_pte_index(pvo, ptegidx); > break; > } > } > mtx_unlock(&moea_table_mutex); > > return (pvo); > } > > static struct pte * > moea_pvo_to_pte(const struct pvo_entry *pvo, int pteidx) > { > struct pte *pt; > > /* > * If we haven't been supplied the ptegidx, calculate it. > */ > if (pteidx == -1) { > int ptegidx; > u_int sr; > > sr = va_to_sr(pvo->pvo_pmap->pm_sr, pvo->pvo_vaddr); > ptegidx = va_to_pteg(sr, pvo->pvo_vaddr); > pteidx = moea_pvo_pte_index(pvo, ptegidx); > } > > pt = &moea_pteg_table[pteidx >> 3].pt[pteidx & 7]; > mtx_lock(&moea_table_mutex); > > if ((pvo->pvo_pte.pte.pte_hi & PTE_VALID) && !PVO_PTEGIDX_ISSET(pvo)) { > panic("moea_pvo_to_pte: pvo %p has valid pte in pvo but no " > "valid pte index", pvo); > } > > if ((pvo->pvo_pte.pte.pte_hi & PTE_VALID) == 0 && PVO_PTEGIDX_ISSET(pvo)) { > panic("moea_pvo_to_pte: pvo %p has valid pte index in pvo " > "pvo but no valid pte", pvo); > } > > if ((pt->pte_hi ^ (pvo->pvo_pte.pte.pte_hi & ~PTE_VALID)) == PTE_VALID) { > if ((pvo->pvo_pte.pte.pte_hi & PTE_VALID) == 0) { > panic("moea_pvo_to_pte: pvo %p has valid pte in " > "moea_pteg_table %p but invalid in pvo", pvo, pt); > } > > if (((pt->pte_lo ^ pvo->pvo_pte.pte.pte_lo) & ~(PTE_CHG|PTE_REF)) > != 0) { > panic("moea_pvo_to_pte: pvo %p pte does not match " > "pte %p in moea_pteg_table", pvo, pt); > } > > mtx_assert(&moea_table_mutex, MA_OWNED); > return (pt); > } > > if (pvo->pvo_pte.pte.pte_hi & PTE_VALID) 
{ > panic("moea_pvo_to_pte: pvo %p has invalid pte %p in " > "moea_pteg_table but valid in pvo: %8x, %8x", pvo, pt, pvo->pvo_pte.pte.pte_hi, pt->pte_hi); > } > > mtx_unlock(&moea_table_mutex); > return (NULL); > } > > /* > * XXX: THIS STUFF SHOULD BE IN pte.c? > */ > int > moea_pte_spill(vm_offset_t addr) > { > struct pvo_entry *source_pvo, *victim_pvo; > struct pvo_entry *pvo; > int ptegidx, i, j; > u_int sr; > struct pteg *pteg; > struct pte *pt; > > moea_pte_spills++; > > sr = mfsrin(addr); > ptegidx = va_to_pteg(sr, addr); > > /* > * Have to substitute some entry. Use the primary hash for this. > * Use low bits of timebase as random generator. > */ > pteg = &moea_pteg_table[ptegidx]; > mtx_lock(&moea_table_mutex); > __asm __volatile("mftb %0" : "=r"(i)); > i &= 7; > pt = &pteg->pt[i]; > > source_pvo = NULL; > victim_pvo = NULL; > LIST_FOREACH(pvo, &moea_pvo_table[ptegidx], pvo_olink) { > /* > * We need to find a pvo entry for this address. > */ > if (source_pvo == NULL && > moea_pte_match(&pvo->pvo_pte.pte, sr, addr, > pvo->pvo_pte.pte.pte_hi & PTE_HID)) { > /* > * Now found an entry to be spilled into the pteg. > * The PTE is now valid, so we know it's active. > */ > j = moea_pte_insert(ptegidx, &pvo->pvo_pte.pte); > > if (j >= 0) { > PVO_PTEGIDX_SET(pvo, j); > moea_pte_overflow--; > mtx_unlock(&moea_table_mutex); > return (1); > } > > source_pvo = pvo; > > if (victim_pvo != NULL) > break; > } > > /* > * We also need the pvo entry of the victim we are replacing > * so save the R & C bits of the PTE. 
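Both moea_pvo_pte_index() and the spill path above rely on the OEA hashed page table layout: eight PTEs per group, and a secondary hash that is the one's complement of the primary, expressed as an XOR with the PTEG mask when the HID bit is set. A small sketch of that index computation (names and the 1024-group mask are ours):

```c
#include <assert.h>

/*
 * Compute a flat PTE index from a PTEG index and slot.  With HID set
 * the entry lives in the secondary hash group, which is the primary
 * index XORed with the PTEG mask; the slot bits (low 3) are untouched
 * because the mask is scaled by the 8 PTEs per group.
 */
static unsigned
pvo_pte_index(unsigned ptegidx, unsigned slot, int hid, unsigned pteg_mask)
{
	unsigned pteidx = ptegidx * 8 + slot;

	if (hid)
		pteidx ^= pteg_mask * 8;
	return (pteidx);
}
```

This is why moea_pvo_to_pte() can recover the exact PTE slot from three spare bits in the PVO plus the HID bit, with no search.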
> */ > if ((pt->pte_hi & PTE_HID) == 0 && victim_pvo == NULL && > moea_pte_compare(pt, &pvo->pvo_pte.pte)) { > victim_pvo = pvo; > if (source_pvo != NULL) > break; > } > } > > if (source_pvo == NULL) { > mtx_unlock(&moea_table_mutex); > return (0); > } > > if (victim_pvo == NULL) { > if ((pt->pte_hi & PTE_HID) == 0) > panic("moea_pte_spill: victim p-pte (%p) has no pvo" > "entry", pt); > > /* > * If this is a secondary PTE, we need to search it's primary > * pvo bucket for the matching PVO. > */ > LIST_FOREACH(pvo, &moea_pvo_table[ptegidx ^ moea_pteg_mask], > pvo_olink) { > /* > * We also need the pvo entry of the victim we are > * replacing so save the R & C bits of the PTE. > */ > if (moea_pte_compare(pt, &pvo->pvo_pte.pte)) { > victim_pvo = pvo; > break; > } > } > > if (victim_pvo == NULL) > panic("moea_pte_spill: victim s-pte (%p) has no pvo" > "entry", pt); > } > > /* > * We are invalidating the TLB entry for the EA we are replacing even > * though it's valid. If we don't, we lose any ref/chg bit changes > * contained in the TLB entry. 
> */ > source_pvo->pvo_pte.pte.pte_hi &= ~PTE_HID; > > moea_pte_unset(pt, &victim_pvo->pvo_pte.pte, victim_pvo->pvo_vaddr); > moea_pte_set(pt, &source_pvo->pvo_pte.pte); > > PVO_PTEGIDX_CLR(victim_pvo); > PVO_PTEGIDX_SET(source_pvo, i); > moea_pte_replacements++; > > mtx_unlock(&moea_table_mutex); > return (1); > } > > static __inline struct pvo_entry * > moea_pte_spillable_ident(u_int ptegidx) > { > struct pte *pt; > struct pvo_entry *pvo_walk, *pvo = NULL; > > LIST_FOREACH(pvo_walk, &moea_pvo_table[ptegidx], pvo_olink) { > if (pvo_walk->pvo_vaddr & PVO_WIRED) > continue; > > if (!(pvo_walk->pvo_pte.pte.pte_hi & PTE_VALID)) > continue; > > pt = moea_pvo_to_pte(pvo_walk, -1); > > if (pt == NULL) > continue; > > pvo = pvo_walk; > > mtx_unlock(&moea_table_mutex); > if (!(pt->pte_lo & PTE_REF)) > return (pvo_walk); > } > > return (pvo); > } > > static int > moea_pte_insert(u_int ptegidx, struct pte *pvo_pt) > { > struct pte *pt; > struct pvo_entry *victim_pvo; > int i; > int victim_idx; > u_int pteg_bkpidx = ptegidx; > > mtx_assert(&moea_table_mutex, MA_OWNED); > > /* > * First try primary hash. > */ > for (pt = moea_pteg_table[ptegidx].pt, i = 0; i < 8; i++, pt++) { > if ((pt->pte_hi & PTE_VALID) == 0) { > pvo_pt->pte_hi &= ~PTE_HID; > moea_pte_set(pt, pvo_pt); > return (i); > } > } > > /* > * Now try secondary hash. > */ > ptegidx ^= moea_pteg_mask; > > for (pt = moea_pteg_table[ptegidx].pt, i = 0; i < 8; i++, pt++) { > if ((pt->pte_hi & PTE_VALID) == 0) { > pvo_pt->pte_hi |= PTE_HID; > moea_pte_set(pt, pvo_pt); > return (i); > } > } > > /* Try again, but this time try to force a PTE out. 
*/ > ptegidx = pteg_bkpidx; > > victim_pvo = moea_pte_spillable_ident(ptegidx); > if (victim_pvo == NULL) { > ptegidx ^= moea_pteg_mask; > victim_pvo = moea_pte_spillable_ident(ptegidx); > } > > if (victim_pvo == NULL) { > panic("moea_pte_insert: overflow"); > return (-1); > } > > victim_idx = moea_pvo_pte_index(victim_pvo, ptegidx); > > if (pteg_bkpidx == ptegidx) > pvo_pt->pte_hi &= ~PTE_HID; > else > pvo_pt->pte_hi |= PTE_HID; > > /* > * Synchronize the sacrifice PTE with its PVO, then mark both > * invalid. The PVO will be reused when/if the VM system comes > * here after a fault. > */ > pt = &moea_pteg_table[victim_idx >> 3].pt[victim_idx & 7]; > > if (pt->pte_hi != victim_pvo->pvo_pte.pte.pte_hi) > panic("Victim PVO doesn't match PTE! PVO: %8x, PTE: %8x", victim_pvo->pvo_pte.pte.pte_hi, pt->pte_hi); > > /* > * Set the new PTE. > */ > moea_pte_unset(pt, &victim_pvo->pvo_pte.pte, victim_pvo->pvo_vaddr); > PVO_PTEGIDX_CLR(victim_pvo); > moea_pte_overflow++; > moea_pte_set(pt, pvo_pt); > > return (victim_idx & 7); > } > > static boolean_t > moea_query_bit(vm_page_t m, int ptebit) > { > struct pvo_entry *pvo; > struct pte *pt; > > rw_assert(&pvh_global_lock, RA_WLOCKED); > if (moea_attr_fetch(m) & ptebit) > return (TRUE); > > LIST_FOREACH(pvo, vm_page_to_pvoh(m), pvo_vlink) { > > /* > * See if we saved the bit off. If so, cache it and return > * success. > */ > if (pvo->pvo_pte.pte.pte_lo & ptebit) { > moea_attr_save(m, ptebit); > return (TRUE); > } > } > > /* > * No luck, now go through the hard part of looking at the PTEs > * themselves. Sync so that any pending REF/CHG bits are flushed to > * the PTEs. > */ > powerpc_sync(); > LIST_FOREACH(pvo, vm_page_to_pvoh(m), pvo_vlink) { > > /* > * See if this pvo has a valid PTE. if so, fetch the > * REF/CHG bits from the valid PTE. If the appropriate > * ptebit is set, cache it and return success. 
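moea_query_bit() above checks for a REF/CHG bit in stages: first the per-page cached attributes, then the software PTE copies, and only then the live page table, caching a hit on the way out. A toy model of that staged lookup (struct, names, and the global cache variable are hypothetical, not the kernel's):

```c
#include <assert.h>

struct soft_pte { unsigned pte_lo; };

static unsigned page_attr_cache;   /* stands in for moea_attr_fetch() */

/*
 * Staged bit query in the style of moea_query_bit(): consult the cached
 * page attributes first; fall back to scanning the software PTE copies,
 * caching any hit so later queries short-circuit.  The third stage
 * (syncing from the hardware PTEs) is not modeled here.
 */
static int
query_bit(struct soft_pte *ptes, int n, unsigned bit)
{
	int i;

	if (page_attr_cache & bit)		/* 1st: cached attribute */
		return (1);
	for (i = 0; i < n; i++) {
		if (ptes[i].pte_lo & bit) {	/* 2nd: software copy */
			page_attr_cache |= bit;
			return (1);
		}
	}
	return (0);
}
```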
> */ > pt = moea_pvo_to_pte(pvo, -1); > if (pt != NULL) { > moea_pte_synch(pt, &pvo->pvo_pte.pte); > mtx_unlock(&moea_table_mutex); > if (pvo->pvo_pte.pte.pte_lo & ptebit) { > moea_attr_save(m, ptebit); > return (TRUE); > } > } > } > > return (FALSE); > } > > static u_int > moea_clear_bit(vm_page_t m, int ptebit) > { > u_int count; > struct pvo_entry *pvo; > struct pte *pt; > > rw_assert(&pvh_global_lock, RA_WLOCKED); > > /* > * Clear the cached value. > */ > moea_attr_clear(m, ptebit); > > /* > * Sync so that any pending REF/CHG bits are flushed to the PTEs (so > * we can reset the right ones). note that since the pvo entries and > * list heads are accessed via BAT0 and are never placed in the page > * table, we don't have to worry about further accesses setting the > * REF/CHG bits. > */ > powerpc_sync(); > > /* > * For each pvo entry, clear the pvo's ptebit. If this pvo has a > * valid pte clear the ptebit from the valid pte. > */ > count = 0; > LIST_FOREACH(pvo, vm_page_to_pvoh(m), pvo_vlink) { > pt = moea_pvo_to_pte(pvo, -1); > if (pt != NULL) { > moea_pte_synch(pt, &pvo->pvo_pte.pte); > if (pvo->pvo_pte.pte.pte_lo & ptebit) { > count++; > moea_pte_clear(pt, PVO_VADDR(pvo), ptebit); > } > mtx_unlock(&moea_table_mutex); > } > pvo->pvo_pte.pte.pte_lo &= ~ptebit; > } > > return (count); > } > > /* > * Return true if the physical range is encompassed by the battable[idx] > */ > static int > moea_bat_mapped(int idx, vm_offset_t pa, vm_size_t size) > { > u_int prot; > u_int32_t start; > u_int32_t end; > u_int32_t bat_ble; > > /* > * Return immediately if not a valid mapping > */ > if (!(battable[idx].batu & BAT_Vs)) > return (EINVAL); > > /* > * The BAT entry must be cache-inhibited, guarded, and r/w > * so it can function as an i/o page > */ > prot = battable[idx].batl & (BAT_I|BAT_G|BAT_PP_RW); > if (prot != (BAT_I|BAT_G|BAT_PP_RW)) > return (EPERM); > > /* > * The address should be within the BAT range. 
Assume that the > * start address in the BAT has the correct alignment (thus > * not requiring masking) > */ > start = battable[idx].batl & BAT_PBS; > bat_ble = (battable[idx].batu & ~(BAT_EBS)) | 0x03; > end = start | (bat_ble << 15) | 0x7fff; > > if ((pa < start) || ((pa + size) > end)) > return (ERANGE); > > return (0); > } > > boolean_t > moea_dev_direct_mapped(mmu_t mmu, vm_paddr_t pa, vm_size_t size) > { > int i; > > /* > * This currently does not work for entries that > * overlap 256M BAT segments. > */ > > for(i = 0; i < 16; i++) > if (moea_bat_mapped(i, pa, size) == 0) > return (0); > > return (EFAULT); > } > > /* > * Map a set of physical memory pages into the kernel virtual > * address space. Return a pointer to where it is mapped. This > * routine is intended to be used for mapping device memory, > * NOT real memory. > */ > void * > moea_mapdev(mmu_t mmu, vm_paddr_t pa, vm_size_t size) > { > > return (moea_mapdev_attr(mmu, pa, size, VM_MEMATTR_DEFAULT)); > } > > void * > moea_mapdev_attr(mmu_t mmu, vm_offset_t pa, vm_size_t size, vm_memattr_t ma) > { > vm_offset_t va, tmpva, ppa, offset; > int i; > > ppa = trunc_page(pa); > offset = pa & PAGE_MASK; > size = roundup(offset + size, PAGE_SIZE); > > /* > * If the physical address lies within a valid BAT table entry, > * return the 1:1 mapping. This currently doesn't work > * for regions that overlap 256M BAT segments. 
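The end-address computation in moea_bat_mapped() above decodes the BAT block-length field: the block-length bits scale a 128 KB granule (hence the shift by 15) and the low 0x7fff fill makes 'end' the last byte of the block. A user-space sketch of just that range decode, under the assumption that BAT_PBS/BAT_EBS are the 0xfffe0000 block-start masks from the PowerPC OEA layout:

```c
#include <assert.h>
#include <stdint.h>

#define BAT_PBS 0xfffe0000u   /* assumed: physical block start mask */
#define BAT_EBS 0xfffe0000u   /* assumed: effective block start mask */

/*
 * Compute the [start, end] physical range covered by a 32-bit BAT
 * entry pair, following the arithmetic in moea_bat_mapped().  A zero
 * block-length field yields a 128 KB block; each extra BL bit doubles
 * the coverage.
 */
static void
bat_range(uint32_t batu, uint32_t batl, uint32_t *start, uint32_t *end)
{
	uint32_t bat_ble;

	*start = batl & BAT_PBS;
	bat_ble = (batu & ~BAT_EBS) | 0x03;
	*end = *start | (bat_ble << 15) | 0x7fff;
}
```

As the surrounding comments note, this decode (and the callers) does not handle ranges that straddle 256 MB BAT segments.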
> */ > for (i = 0; i < 16; i++) { > if (moea_bat_mapped(i, pa, size) == 0) > return ((void *) pa); > } > > va = kva_alloc(size); > if (!va) > panic("moea_mapdev: Couldn't alloc kernel virtual memory"); > > for (tmpva = va; size > 0;) { > moea_kenter_attr(mmu, tmpva, ppa, ma); > tlbie(tmpva); > size -= PAGE_SIZE; > tmpva += PAGE_SIZE; > ppa += PAGE_SIZE; > } > > return ((void *)(va + offset)); > } > > void > moea_unmapdev(mmu_t mmu, vm_offset_t va, vm_size_t size) > { > vm_offset_t base, offset; > > /* > * If this is outside kernel virtual space, then it's a > * battable entry and doesn't require unmapping > */ > if ((va >= VM_MIN_KERNEL_ADDRESS) && (va <= virtual_end)) { > base = trunc_page(va); > offset = va & PAGE_MASK; > size = roundup(offset + size, PAGE_SIZE); > kva_free(base, size); > } > } > > static void > moea_sync_icache(mmu_t mmu, pmap_t pm, vm_offset_t va, vm_size_t sz) > { > struct pvo_entry *pvo; > vm_offset_t lim; > vm_paddr_t pa; > vm_size_t len; > > PMAP_LOCK(pm); > while (sz > 0) { > lim = round_page(va); > len = MIN(lim - va, sz); > pvo = moea_pvo_find_va(pm, va & ~ADDR_POFF, NULL); > if (pvo != NULL) { > pa = (pvo->pvo_pte.pte.pte_lo & PTE_RPGN) | > (va & ADDR_POFF); > moea_syncicache(pa, len); > } > va += len; > sz -= len; > } > PMAP_UNLOCK(pm); > } > >-vm_offset_t >-moea_dumpsys_map(mmu_t mmu, struct pmap_md *md, vm_size_t ofs, >- vm_size_t *sz) >+void >+moea_dumpsys_map(mmu_t mmu, vm_paddr_t pa, size_t sz, void **va) > { >- if (md->md_vaddr == ~0UL) >- return (md->md_paddr + ofs); >- else >- return (md->md_vaddr + ofs); >+ >+ *va = (void *)pa; > } > >-struct pmap_md * >-moea_scan_md(mmu_t mmu, struct pmap_md *prev) >+extern struct dump_pa dump_map[PHYS_AVAIL_SZ + 1]; >+ >+void >+moea_scan_init(mmu_t mmu) > { >- static struct pmap_md md; > struct pvo_entry *pvo; > vm_offset_t va; >- >- if (dumpsys_minidump) { >- md.md_paddr = ~0UL; /* Minidumps use virtual addresses. */ >- if (prev == NULL) { >- /* 1st: kernel .data and .bss. 
*/ >- md.md_index = 1; >- md.md_vaddr = trunc_page((uintptr_t)_etext); >- md.md_size = round_page((uintptr_t)_end) - md.md_vaddr; >- return (&md); >+ int i; >+ >+ if (!do_minidump) { >+ /* Initialize phys. segments for dumpsys(). */ >+ memset(&dump_map, 0, sizeof(dump_map)); >+ mem_regions(&pregions, &pregions_sz, ®ions, ®ions_sz); >+ for (i = 0; i < pregions_sz; i++) { >+ dump_map[i].md_start = pregions[i].mr_start; >+ dump_map[i].md_size = pregions[i].mr_size; > } >- switch (prev->md_index) { >- case 1: >- /* 2nd: msgbuf and tables (see pmap_bootstrap()). */ >- md.md_index = 2; >- md.md_vaddr = (vm_offset_t)msgbufp->msg_ptr; >- md.md_size = round_page(msgbufp->msg_size); >+ return; >+ } >+ >+ /* Virtual segments for minidumps: */ >+ memset(&dump_map, 0, sizeof(dump_map)); >+ >+ /* 1st: kernel .data and .bss. */ >+ dump_map[0].md_start = trunc_page((uintptr_t)_etext); >+ dump_map[0].md_size = round_page((uintptr_t)_end) - dump_map[0].md_start; >+ >+ /* 2nd: msgbuf and tables (see pmap_bootstrap()). */ >+ dump_map[1].md_start = (vm_paddr_t)msgbufp->msg_ptr; >+ dump_map[1].md_size = round_page(msgbufp->msg_size); >+ >+ /* 3rd: kernel VM. */ >+ va = dump_map[1].md_start + dump_map[1].md_size; >+ /* Find start of next chunk (from va). */ >+ while (va < virtual_end) { >+ /* Don't dump the buffer cache. */ >+ if (va >= kmi.buffer_sva && va < kmi.buffer_eva) { >+ va = kmi.buffer_eva; >+ continue; >+ } >+ pvo = moea_pvo_find_va(kernel_pmap, va & ~ADDR_POFF, NULL); >+ if (pvo != NULL && (pvo->pvo_pte.pte.pte_hi & PTE_VALID)) > break; >- case 2: >- /* 3rd: kernel VM. */ >- va = prev->md_vaddr + prev->md_size; >- /* Find start of next chunk (from va). */ >- while (va < virtual_end) { >- /* Don't dump the buffer cache. 
*/ >- if (va >= kmi.buffer_sva && >- va < kmi.buffer_eva) { >- va = kmi.buffer_eva; >- continue; >- } >- pvo = moea_pvo_find_va(kernel_pmap, >- va & ~ADDR_POFF, NULL); >- if (pvo != NULL && >- (pvo->pvo_pte.pte.pte_hi & PTE_VALID)) >- break; >- va += PAGE_SIZE; >- } >- if (va < virtual_end) { >- md.md_vaddr = va; >- va += PAGE_SIZE; >- /* Find last page in chunk. */ >- while (va < virtual_end) { >- /* Don't run into the buffer cache. */ >- if (va == kmi.buffer_sva) >- break; >- pvo = moea_pvo_find_va(kernel_pmap, >- va & ~ADDR_POFF, NULL); >- if (pvo == NULL || >- !(pvo->pvo_pte.pte.pte_hi & PTE_VALID)) >- break; >- va += PAGE_SIZE; >- } >- md.md_size = va - md.md_vaddr; >+ va += PAGE_SIZE; >+ } >+ if (va < virtual_end) { >+ dump_map[2].md_start = va; >+ va += PAGE_SIZE; >+ /* Find last page in chunk. */ >+ while (va < virtual_end) { >+ /* Don't run into the buffer cache. */ >+ if (va == kmi.buffer_sva) > break; >- } >- md.md_index = 3; >- /* FALLTHROUGH */ >- default: >- return (NULL); >- } >- } else { /* minidumps */ >- mem_regions(&pregions, &pregions_sz, >- ®ions, ®ions_sz); >- >- if (prev == NULL) { >- /* first physical chunk. */ >- md.md_paddr = pregions[0].mr_start; >- md.md_size = pregions[0].mr_size; >- md.md_vaddr = ~0UL; >- md.md_index = 1; >- } else if (md.md_index < pregions_sz) { >- md.md_paddr = pregions[md.md_index].mr_start; >- md.md_size = pregions[md.md_index].mr_size; >- md.md_vaddr = ~0UL; >- md.md_index++; >- } else { >- /* There's no next physical chunk. 
*/ >- return (NULL); >+ pvo = moea_pvo_find_va(kernel_pmap, va & ~ADDR_POFF, >+ NULL); >+ if (pvo == NULL || >+ !(pvo->pvo_pte.pte.pte_hi & PTE_VALID)) >+ break; >+ va += PAGE_SIZE; > } >+ dump_map[2].md_size = va - dump_map[2].md_start; > } >- >- return (&md); > } >diff --git a/sys/powerpc/aim/mmu_oea64.c b/sys/powerpc/aim/mmu_oea64.c >index 2db7fcb..80bf82d 100644 >--- a/sys/powerpc/aim/mmu_oea64.c >+++ b/sys/powerpc/aim/mmu_oea64.c >@@ -1,2708 +1,2683 @@ > /*- > * Copyright (c) 2001 The NetBSD Foundation, Inc. > * All rights reserved. > * > * This code is derived from software contributed to The NetBSD Foundation > * by Matt Thomas <matt@3am-software.com> of Allegro Networks, Inc. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * > * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS > * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED > * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR > * PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS > * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR > * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF > * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS > * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN > * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) > * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE > * POSSIBILITY OF SUCH DAMAGE. > */ > /*- > * Copyright (C) 1995, 1996 Wolfgang Solfrank. > * Copyright (C) 1995, 1996 TooLs GmbH. > * All rights reserved. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * 3. All advertising materials mentioning features or use of this software > * must display the following acknowledgement: > * This product includes software developed by TooLs GmbH. > * 4. The name of TooLs GmbH may not be used to endorse or promote products > * derived from this software without specific prior written permission. > * > * THIS SOFTWARE IS PROVIDED BY TOOLS GMBH ``AS IS'' AND ANY EXPRESS OR > * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES > * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
> * IN NO EVENT SHALL TOOLS GMBH BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, > * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; > * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, > * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR > * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF > * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > * > * $NetBSD: pmap.c,v 1.28 2000/03/26 20:42:36 kleink Exp $ > */ > /*- > * Copyright (C) 2001 Benno Rice. > * All rights reserved. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * > * THIS SOFTWARE IS PROVIDED BY Benno Rice ``AS IS'' AND ANY EXPRESS OR > * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES > * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. > * IN NO EVENT SHALL TOOLS GMBH BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, > * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; > * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, > * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR > * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF > * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
> */ > > #include <sys/cdefs.h> > __FBSDID("$FreeBSD$"); > > /* > * Manages physical address maps. > * > * Since the information managed by this module is also stored by the > * logical address mapping module, this module may throw away valid virtual > * to physical mappings at almost any time. However, invalidations of > * mappings must be done as requested. > * > * In order to cope with hardware architectures which make virtual to > * physical map invalidates expensive, this module may delay invalidate > * reduced protection operations until such time as they are actually > * necessary. This module is given full information as to which processors > * are currently using which maps, and to when physical maps must be made > * correct. > */ > > #include "opt_compat.h" > #include "opt_kstack_pages.h" > > #include <sys/param.h> > #include <sys/kernel.h> >+#include <sys/conf.h> > #include <sys/queue.h> > #include <sys/cpuset.h> >+#include <sys/kerneldump.h> > #include <sys/ktr.h> > #include <sys/lock.h> > #include <sys/msgbuf.h> > #include <sys/malloc.h> > #include <sys/mutex.h> > #include <sys/proc.h> > #include <sys/rwlock.h> > #include <sys/sched.h> > #include <sys/sysctl.h> > #include <sys/systm.h> > #include <sys/vmmeter.h> > > #include <sys/kdb.h> > > #include <dev/ofw/openfirm.h> > > #include <vm/vm.h> > #include <vm/vm_param.h> > #include <vm/vm_kern.h> > #include <vm/vm_page.h> > #include <vm/vm_map.h> > #include <vm/vm_object.h> > #include <vm/vm_extern.h> > #include <vm/vm_pageout.h> > #include <vm/uma.h> > > #include <machine/_inttypes.h> > #include <machine/cpu.h> > #include <machine/platform.h> > #include <machine/frame.h> > #include <machine/md_var.h> > #include <machine/psl.h> > #include <machine/bat.h> > #include <machine/hid.h> > #include <machine/pte.h> > #include <machine/sr.h> > #include <machine/trap.h> > #include <machine/mmuvar.h> > > #include "mmu_oea64.h" > #include "mmu_if.h" > #include "moea64_if.h" > > void moea64_release_vsid(uint64_t 
vsid); > uintptr_t moea64_get_unique_vsid(void); > > #define DISABLE_TRANS(msr) msr = mfmsr(); mtmsr(msr & ~PSL_DR) > #define ENABLE_TRANS(msr) mtmsr(msr) > > #define VSID_MAKE(sr, hash) ((sr) | (((hash) & 0xfffff) << 4)) > #define VSID_TO_HASH(vsid) (((vsid) >> 4) & 0xfffff) > #define VSID_HASH_MASK 0x0000007fffffffffULL > > /* > * Locking semantics: > * -- Read lock: if no modifications are being made to either the PVO lists > * or page table or if any modifications being made result in internal > * changes (e.g. wiring, protection) such that the existence of the PVOs > * is unchanged and they remain associated with the same pmap (in which > * case the changes should be protected by the pmap lock) > * -- Write lock: required if PTEs/PVOs are being inserted or removed. > */ > > #define LOCK_TABLE_RD() rw_rlock(&moea64_table_lock) > #define UNLOCK_TABLE_RD() rw_runlock(&moea64_table_lock) > #define LOCK_TABLE_WR() rw_wlock(&moea64_table_lock) > #define UNLOCK_TABLE_WR() rw_wunlock(&moea64_table_lock) > > struct ofw_map { > cell_t om_va; > cell_t om_len; > uint64_t om_pa; > cell_t om_mode; > }; > > extern unsigned char _etext[]; > extern unsigned char _end[]; > >-extern int dumpsys_minidump; >- > /* > * Map of physical memory regions. > */ > static struct mem_region *regions; > static struct mem_region *pregions; > static u_int phys_avail_count; > static int regions_sz, pregions_sz; > > extern void bs_remap_earlyboot(void); > > /* > * Lock for the pteg and pvo tables. > */ > struct rwlock moea64_table_lock; > struct mtx moea64_slb_mutex; > > /* > * PTEG data. > */ > u_int moea64_pteg_count; > u_int moea64_pteg_mask; > > /* > * PVO data. 
> */ > struct pvo_head *moea64_pvo_table; /* pvo entries by pteg index */ > > uma_zone_t moea64_upvo_zone; /* zone for pvo entries for unmanaged pages */ > uma_zone_t moea64_mpvo_zone; /* zone for pvo entries for managed pages */ > > #define BPVO_POOL_SIZE 327680 > static struct pvo_entry *moea64_bpvo_pool; > static int moea64_bpvo_pool_index = 0; > > #define VSID_NBPW (sizeof(u_int32_t) * 8) > #ifdef __powerpc64__ > #define NVSIDS (NPMAPS * 16) > #define VSID_HASHMASK 0xffffffffUL > #else > #define NVSIDS NPMAPS > #define VSID_HASHMASK 0xfffffUL > #endif > static u_int moea64_vsid_bitmap[NVSIDS / VSID_NBPW]; > > static boolean_t moea64_initialized = FALSE; > > /* > * Statistics. > */ > u_int moea64_pte_valid = 0; > u_int moea64_pte_overflow = 0; > u_int moea64_pvo_entries = 0; > u_int moea64_pvo_enter_calls = 0; > u_int moea64_pvo_remove_calls = 0; > SYSCTL_INT(_machdep, OID_AUTO, moea64_pte_valid, CTLFLAG_RD, > &moea64_pte_valid, 0, ""); > SYSCTL_INT(_machdep, OID_AUTO, moea64_pte_overflow, CTLFLAG_RD, > &moea64_pte_overflow, 0, ""); > SYSCTL_INT(_machdep, OID_AUTO, moea64_pvo_entries, CTLFLAG_RD, > &moea64_pvo_entries, 0, ""); > SYSCTL_INT(_machdep, OID_AUTO, moea64_pvo_enter_calls, CTLFLAG_RD, > &moea64_pvo_enter_calls, 0, ""); > SYSCTL_INT(_machdep, OID_AUTO, moea64_pvo_remove_calls, CTLFLAG_RD, > &moea64_pvo_remove_calls, 0, ""); > > vm_offset_t moea64_scratchpage_va[2]; > struct pvo_entry *moea64_scratchpage_pvo[2]; > uintptr_t moea64_scratchpage_pte[2]; > struct mtx moea64_scratchpage_mtx; > > uint64_t moea64_large_page_mask = 0; > uint64_t moea64_large_page_size = 0; > int moea64_large_page_shift = 0; > > /* > * PVO calls. > */ > static int moea64_pvo_enter(mmu_t, pmap_t, uma_zone_t, struct pvo_head *, > vm_offset_t, vm_offset_t, uint64_t, int, int8_t); > static void moea64_pvo_remove(mmu_t, struct pvo_entry *); > static struct pvo_entry *moea64_pvo_find_va(pmap_t, vm_offset_t); > > /* > * Utility routines. 
> */ > static boolean_t moea64_query_bit(mmu_t, vm_page_t, u_int64_t); > static u_int moea64_clear_bit(mmu_t, vm_page_t, u_int64_t); > static void moea64_kremove(mmu_t, vm_offset_t); > static void moea64_syncicache(mmu_t, pmap_t pmap, vm_offset_t va, > vm_offset_t pa, vm_size_t sz); > > /* > * Kernel MMU interface > */ > void moea64_clear_modify(mmu_t, vm_page_t); > void moea64_copy_page(mmu_t, vm_page_t, vm_page_t); > void moea64_copy_pages(mmu_t mmu, vm_page_t *ma, vm_offset_t a_offset, > vm_page_t *mb, vm_offset_t b_offset, int xfersize); > int moea64_enter(mmu_t, pmap_t, vm_offset_t, vm_page_t, vm_prot_t, > u_int flags, int8_t psind); > void moea64_enter_object(mmu_t, pmap_t, vm_offset_t, vm_offset_t, vm_page_t, > vm_prot_t); > void moea64_enter_quick(mmu_t, pmap_t, vm_offset_t, vm_page_t, vm_prot_t); > vm_paddr_t moea64_extract(mmu_t, pmap_t, vm_offset_t); > vm_page_t moea64_extract_and_hold(mmu_t, pmap_t, vm_offset_t, vm_prot_t); > void moea64_init(mmu_t); > boolean_t moea64_is_modified(mmu_t, vm_page_t); > boolean_t moea64_is_prefaultable(mmu_t, pmap_t, vm_offset_t); > boolean_t moea64_is_referenced(mmu_t, vm_page_t); > int moea64_ts_referenced(mmu_t, vm_page_t); > vm_offset_t moea64_map(mmu_t, vm_offset_t *, vm_paddr_t, vm_paddr_t, int); > boolean_t moea64_page_exists_quick(mmu_t, pmap_t, vm_page_t); > int moea64_page_wired_mappings(mmu_t, vm_page_t); > void moea64_pinit(mmu_t, pmap_t); > void moea64_pinit0(mmu_t, pmap_t); > void moea64_protect(mmu_t, pmap_t, vm_offset_t, vm_offset_t, vm_prot_t); > void moea64_qenter(mmu_t, vm_offset_t, vm_page_t *, int); > void moea64_qremove(mmu_t, vm_offset_t, int); > void moea64_release(mmu_t, pmap_t); > void moea64_remove(mmu_t, pmap_t, vm_offset_t, vm_offset_t); > void moea64_remove_pages(mmu_t, pmap_t); > void moea64_remove_all(mmu_t, vm_page_t); > void moea64_remove_write(mmu_t, vm_page_t); > void moea64_unwire(mmu_t, pmap_t, vm_offset_t, vm_offset_t); > void moea64_zero_page(mmu_t, vm_page_t); > void 
moea64_zero_page_area(mmu_t, vm_page_t, int, int); > void moea64_zero_page_idle(mmu_t, vm_page_t); > void moea64_activate(mmu_t, struct thread *); > void moea64_deactivate(mmu_t, struct thread *); > void *moea64_mapdev(mmu_t, vm_paddr_t, vm_size_t); > void *moea64_mapdev_attr(mmu_t, vm_offset_t, vm_size_t, vm_memattr_t); > void moea64_unmapdev(mmu_t, vm_offset_t, vm_size_t); > vm_paddr_t moea64_kextract(mmu_t, vm_offset_t); > void moea64_page_set_memattr(mmu_t, vm_page_t m, vm_memattr_t ma); > void moea64_kenter_attr(mmu_t, vm_offset_t, vm_offset_t, vm_memattr_t ma); > void moea64_kenter(mmu_t, vm_offset_t, vm_paddr_t); > boolean_t moea64_dev_direct_mapped(mmu_t, vm_paddr_t, vm_size_t); > static void moea64_sync_icache(mmu_t, pmap_t, vm_offset_t, vm_size_t); >-vm_offset_t moea64_dumpsys_map(mmu_t mmu, struct pmap_md *md, vm_size_t ofs, >- vm_size_t *sz); >-struct pmap_md * moea64_scan_md(mmu_t mmu, struct pmap_md *prev); >+void moea64_dumpsys_map(mmu_t mmu, vm_paddr_t pa, size_t sz, >+ void **va); >+void moea64_scan_init(mmu_t mmu); > > static mmu_method_t moea64_methods[] = { > MMUMETHOD(mmu_clear_modify, moea64_clear_modify), > MMUMETHOD(mmu_copy_page, moea64_copy_page), > MMUMETHOD(mmu_copy_pages, moea64_copy_pages), > MMUMETHOD(mmu_enter, moea64_enter), > MMUMETHOD(mmu_enter_object, moea64_enter_object), > MMUMETHOD(mmu_enter_quick, moea64_enter_quick), > MMUMETHOD(mmu_extract, moea64_extract), > MMUMETHOD(mmu_extract_and_hold, moea64_extract_and_hold), > MMUMETHOD(mmu_init, moea64_init), > MMUMETHOD(mmu_is_modified, moea64_is_modified), > MMUMETHOD(mmu_is_prefaultable, moea64_is_prefaultable), > MMUMETHOD(mmu_is_referenced, moea64_is_referenced), > MMUMETHOD(mmu_ts_referenced, moea64_ts_referenced), > MMUMETHOD(mmu_map, moea64_map), > MMUMETHOD(mmu_page_exists_quick,moea64_page_exists_quick), > MMUMETHOD(mmu_page_wired_mappings,moea64_page_wired_mappings), > MMUMETHOD(mmu_pinit, moea64_pinit), > MMUMETHOD(mmu_pinit0, moea64_pinit0), > MMUMETHOD(mmu_protect, 
moea64_protect), > MMUMETHOD(mmu_qenter, moea64_qenter), > MMUMETHOD(mmu_qremove, moea64_qremove), > MMUMETHOD(mmu_release, moea64_release), > MMUMETHOD(mmu_remove, moea64_remove), > MMUMETHOD(mmu_remove_pages, moea64_remove_pages), > MMUMETHOD(mmu_remove_all, moea64_remove_all), > MMUMETHOD(mmu_remove_write, moea64_remove_write), > MMUMETHOD(mmu_sync_icache, moea64_sync_icache), > MMUMETHOD(mmu_unwire, moea64_unwire), > MMUMETHOD(mmu_zero_page, moea64_zero_page), > MMUMETHOD(mmu_zero_page_area, moea64_zero_page_area), > MMUMETHOD(mmu_zero_page_idle, moea64_zero_page_idle), > MMUMETHOD(mmu_activate, moea64_activate), > MMUMETHOD(mmu_deactivate, moea64_deactivate), > MMUMETHOD(mmu_page_set_memattr, moea64_page_set_memattr), > > /* Internal interfaces */ > MMUMETHOD(mmu_mapdev, moea64_mapdev), > MMUMETHOD(mmu_mapdev_attr, moea64_mapdev_attr), > MMUMETHOD(mmu_unmapdev, moea64_unmapdev), > MMUMETHOD(mmu_kextract, moea64_kextract), > MMUMETHOD(mmu_kenter, moea64_kenter), > MMUMETHOD(mmu_kenter_attr, moea64_kenter_attr), > MMUMETHOD(mmu_dev_direct_mapped,moea64_dev_direct_mapped), >- MMUMETHOD(mmu_scan_md, moea64_scan_md), >+ MMUMETHOD(mmu_scan_init, moea64_scan_init), > MMUMETHOD(mmu_dumpsys_map, moea64_dumpsys_map), > > { 0, 0 } > }; > > MMU_DEF(oea64_mmu, "mmu_oea64_base", moea64_methods, 0); > > static __inline u_int > va_to_pteg(uint64_t vsid, vm_offset_t addr, int large) > { > uint64_t hash; > int shift; > > shift = large ? moea64_large_page_shift : ADDR_PIDX_SHFT; > hash = (vsid & VSID_HASH_MASK) ^ (((uint64_t)addr & ADDR_PIDX) >> > shift); > return (hash & moea64_pteg_mask); > } > > static __inline struct pvo_head * > vm_page_to_pvoh(vm_page_t m) > { > > return (&m->md.mdpg_pvoh); > } > > static __inline void > moea64_pte_create(struct lpte *pt, uint64_t vsid, vm_offset_t va, > uint64_t pte_lo, int flags) > { > > /* > * Construct a PTE. Default to IMB initially. Valid bit only gets > * set when the real pte is set in memory. 
> * > * Note: Don't set the valid bit for correct operation of tlb update. > */ > pt->pte_hi = (vsid << LPTE_VSID_SHIFT) | > (((uint64_t)(va & ADDR_PIDX) >> ADDR_API_SHFT64) & LPTE_API); > > if (flags & PVO_LARGE) > pt->pte_hi |= LPTE_BIG; > > pt->pte_lo = pte_lo; > } > > static __inline uint64_t > moea64_calc_wimg(vm_offset_t pa, vm_memattr_t ma) > { > uint64_t pte_lo; > int i; > > if (ma != VM_MEMATTR_DEFAULT) { > switch (ma) { > case VM_MEMATTR_UNCACHEABLE: > return (LPTE_I | LPTE_G); > case VM_MEMATTR_WRITE_COMBINING: > case VM_MEMATTR_WRITE_BACK: > case VM_MEMATTR_PREFETCHABLE: > return (LPTE_I); > case VM_MEMATTR_WRITE_THROUGH: > return (LPTE_W | LPTE_M); > } > } > > /* > * Assume the page is cache inhibited and access is guarded unless > * it's in our available memory array. > */ > pte_lo = LPTE_I | LPTE_G; > for (i = 0; i < pregions_sz; i++) { > if ((pa >= pregions[i].mr_start) && > (pa < (pregions[i].mr_start + pregions[i].mr_size))) { > pte_lo &= ~(LPTE_I | LPTE_G); > pte_lo |= LPTE_M; > break; > } > } > > return pte_lo; > } > > /* > * Quick sort callout for comparing memory regions. 
> */ > static int om_cmp(const void *a, const void *b); > > static int > om_cmp(const void *a, const void *b) > { > const struct ofw_map *mapa; > const struct ofw_map *mapb; > > mapa = a; > mapb = b; > if (mapa->om_pa < mapb->om_pa) > return (-1); > else if (mapa->om_pa > mapb->om_pa) > return (1); > else > return (0); > } > > static void > moea64_add_ofw_mappings(mmu_t mmup, phandle_t mmu, size_t sz) > { > struct ofw_map translations[sz/(4*sizeof(cell_t))]; /*>= 4 cells per */ > pcell_t acells, trans_cells[sz/sizeof(cell_t)]; > register_t msr; > vm_offset_t off; > vm_paddr_t pa_base; > int i, j; > > bzero(translations, sz); > OF_getprop(OF_finddevice("/"), "#address-cells", &acells, > sizeof(acells)); > if (OF_getprop(mmu, "translations", trans_cells, sz) == -1) > panic("moea64_bootstrap: can't get ofw translations"); > > CTR0(KTR_PMAP, "moea64_add_ofw_mappings: translations"); > sz /= sizeof(cell_t); > for (i = 0, j = 0; i < sz; j++) { > translations[j].om_va = trans_cells[i++]; > translations[j].om_len = trans_cells[i++]; > translations[j].om_pa = trans_cells[i++]; > if (acells == 2) { > translations[j].om_pa <<= 32; > translations[j].om_pa |= trans_cells[i++]; > } > translations[j].om_mode = trans_cells[i++]; > } > KASSERT(i == sz, ("Translations map has incorrect cell count (%d/%zd)", > i, sz)); > > sz = j; > qsort(translations, sz, sizeof (*translations), om_cmp); > > for (i = 0; i < sz; i++) { > pa_base = translations[i].om_pa; > #ifndef __powerpc64__ > if ((translations[i].om_pa >> 32) != 0) > panic("OFW translations above 32-bit boundary!"); > #endif > > if (pa_base % PAGE_SIZE) > panic("OFW translation not page-aligned (phys)!"); > if (translations[i].om_va % PAGE_SIZE) > panic("OFW translation not page-aligned (virt)!"); > > CTR3(KTR_PMAP, "translation: pa=%#zx va=%#x len=%#x", > pa_base, translations[i].om_va, translations[i].om_len); > > /* Now enter the pages for this mapping */ > > DISABLE_TRANS(msr); > for (off = 0; off < translations[i].om_len; off 
+= PAGE_SIZE) { > if (moea64_pvo_find_va(kernel_pmap, > translations[i].om_va + off) != NULL) > continue; > > moea64_kenter(mmup, translations[i].om_va + off, > pa_base + off); > } > ENABLE_TRANS(msr); > } > } > > #ifdef __powerpc64__ > static void > moea64_probe_large_page(void) > { > uint16_t pvr = mfpvr() >> 16; > > switch (pvr) { > case IBM970: > case IBM970FX: > case IBM970MP: > powerpc_sync(); isync(); > mtspr(SPR_HID4, mfspr(SPR_HID4) & ~HID4_970_DISABLE_LG_PG); > powerpc_sync(); isync(); > > /* FALLTHROUGH */ > default: > moea64_large_page_size = 0x1000000; /* 16 MB */ > moea64_large_page_shift = 24; > } > > moea64_large_page_mask = moea64_large_page_size - 1; > } > > static void > moea64_bootstrap_slb_prefault(vm_offset_t va, int large) > { > struct slb *cache; > struct slb entry; > uint64_t esid, slbe; > uint64_t i; > > cache = PCPU_GET(slb); > esid = va >> ADDR_SR_SHFT; > slbe = (esid << SLBE_ESID_SHIFT) | SLBE_VALID; > > for (i = 0; i < 64; i++) { > if (cache[i].slbe == (slbe | i)) > return; > } > > entry.slbe = slbe; > entry.slbv = KERNEL_VSID(esid) << SLBV_VSID_SHIFT; > if (large) > entry.slbv |= SLBV_L; > > slb_insert_kernel(entry.slbe, entry.slbv); > } > #endif > > static void > moea64_setup_direct_map(mmu_t mmup, vm_offset_t kernelstart, > vm_offset_t kernelend) > { > register_t msr; > vm_paddr_t pa; > vm_offset_t size, off; > uint64_t pte_lo; > int i; > > if (moea64_large_page_size == 0) > hw_direct_map = 0; > > DISABLE_TRANS(msr); > if (hw_direct_map) { > LOCK_TABLE_WR(); > PMAP_LOCK(kernel_pmap); > for (i = 0; i < pregions_sz; i++) { > for (pa = pregions[i].mr_start; pa < pregions[i].mr_start + > pregions[i].mr_size; pa += moea64_large_page_size) { > pte_lo = LPTE_M; > > /* > * Set memory access as guarded if prefetch within > * the page could exit the available physmem area. 
> */ > if (pa & moea64_large_page_mask) { > pa &= moea64_large_page_mask; > pte_lo |= LPTE_G; > } > if (pa + moea64_large_page_size > > pregions[i].mr_start + pregions[i].mr_size) > pte_lo |= LPTE_G; > > moea64_pvo_enter(mmup, kernel_pmap, moea64_upvo_zone, > NULL, pa, pa, pte_lo, > PVO_WIRED | PVO_LARGE, 0); > } > } > PMAP_UNLOCK(kernel_pmap); > UNLOCK_TABLE_WR(); > } else { > size = sizeof(struct pvo_head) * moea64_pteg_count; > off = (vm_offset_t)(moea64_pvo_table); > for (pa = off; pa < off + size; pa += PAGE_SIZE) > moea64_kenter(mmup, pa, pa); > size = BPVO_POOL_SIZE*sizeof(struct pvo_entry); > off = (vm_offset_t)(moea64_bpvo_pool); > for (pa = off; pa < off + size; pa += PAGE_SIZE) > moea64_kenter(mmup, pa, pa); > > /* > * Map certain important things, like ourselves. > * > * NOTE: We do not map the exception vector space. That code is > * used only in real mode, and leaving it unmapped allows us to > * catch NULL pointer dereferences, instead of making NULL a valid > * address. > */ > > for (pa = kernelstart & ~PAGE_MASK; pa < kernelend; > pa += PAGE_SIZE) > moea64_kenter(mmup, pa, pa); > } > ENABLE_TRANS(msr); > > /* > * Allow user to override unmapped_buf_allowed for testing. > * XXXKIB Only direct map implementation was tested.
> */ > if (!TUNABLE_INT_FETCH("vfs.unmapped_buf_allowed", > &unmapped_buf_allowed)) > unmapped_buf_allowed = hw_direct_map; > } > > void > moea64_early_bootstrap(mmu_t mmup, vm_offset_t kernelstart, vm_offset_t kernelend) > { > int i, j; > vm_size_t physsz, hwphyssz; > > #ifndef __powerpc64__ > /* We don't have a direct map since there is no BAT */ > hw_direct_map = 0; > > /* Make sure battable is zero, since we have no BAT */ > for (i = 0; i < 16; i++) { > battable[i].batu = 0; > battable[i].batl = 0; > } > #else > moea64_probe_large_page(); > > /* Use a direct map if we have large page support */ > if (moea64_large_page_size > 0) > hw_direct_map = 1; > else > hw_direct_map = 0; > #endif > > /* Get physical memory regions from firmware */ > mem_regions(&pregions, &pregions_sz, ®ions, ®ions_sz); > CTR0(KTR_PMAP, "moea64_bootstrap: physical memory"); > > if (sizeof(phys_avail)/sizeof(phys_avail[0]) < regions_sz) > panic("moea64_bootstrap: phys_avail too small"); > > phys_avail_count = 0; > physsz = 0; > hwphyssz = 0; > TUNABLE_ULONG_FETCH("hw.physmem", (u_long *) &hwphyssz); > for (i = 0, j = 0; i < regions_sz; i++, j += 2) { > CTR3(KTR_PMAP, "region: %#zx - %#zx (%#zx)", > regions[i].mr_start, regions[i].mr_start + > regions[i].mr_size, regions[i].mr_size); > if (hwphyssz != 0 && > (physsz + regions[i].mr_size) >= hwphyssz) { > if (physsz < hwphyssz) { > phys_avail[j] = regions[i].mr_start; > phys_avail[j + 1] = regions[i].mr_start + > hwphyssz - physsz; > physsz = hwphyssz; > phys_avail_count++; > } > break; > } > phys_avail[j] = regions[i].mr_start; > phys_avail[j + 1] = regions[i].mr_start + regions[i].mr_size; > phys_avail_count++; > physsz += regions[i].mr_size; > } > > /* Check for overlap with the kernel and exception vectors */ > for (j = 0; j < 2*phys_avail_count; j+=2) { > if (phys_avail[j] < EXC_LAST) > phys_avail[j] += EXC_LAST; > > if (kernelstart >= phys_avail[j] && > kernelstart < phys_avail[j+1]) { > if (kernelend < phys_avail[j+1]) { > 
phys_avail[2*phys_avail_count] = > (kernelend & ~PAGE_MASK) + PAGE_SIZE; > phys_avail[2*phys_avail_count + 1] = > phys_avail[j+1]; > phys_avail_count++; > } > > phys_avail[j+1] = kernelstart & ~PAGE_MASK; > } > > if (kernelend >= phys_avail[j] && > kernelend < phys_avail[j+1]) { > if (kernelstart > phys_avail[j]) { > phys_avail[2*phys_avail_count] = phys_avail[j]; > phys_avail[2*phys_avail_count + 1] = > kernelstart & ~PAGE_MASK; > phys_avail_count++; > } > > phys_avail[j] = (kernelend & ~PAGE_MASK) + PAGE_SIZE; > } > } > > physmem = btoc(physsz); > > #ifdef PTEGCOUNT > moea64_pteg_count = PTEGCOUNT; > #else > moea64_pteg_count = 0x1000; > > while (moea64_pteg_count < physmem) > moea64_pteg_count <<= 1; > > moea64_pteg_count >>= 1; > #endif /* PTEGCOUNT */ > } > > void > moea64_mid_bootstrap(mmu_t mmup, vm_offset_t kernelstart, vm_offset_t kernelend) > { > vm_size_t size; > register_t msr; > int i; > > /* > * Set PTEG mask > */ > moea64_pteg_mask = moea64_pteg_count - 1; > > /* > * Allocate pv/overflow lists. > */ > size = sizeof(struct pvo_head) * moea64_pteg_count; > > moea64_pvo_table = (struct pvo_head *)moea64_bootstrap_alloc(size, > PAGE_SIZE); > CTR1(KTR_PMAP, "moea64_bootstrap: PVO table at %p", moea64_pvo_table); > > DISABLE_TRANS(msr); > for (i = 0; i < moea64_pteg_count; i++) > LIST_INIT(&moea64_pvo_table[i]); > ENABLE_TRANS(msr); > > /* > * Initialize the lock that synchronizes access to the pteg and pvo > * tables. > */ > rw_init_flags(&moea64_table_lock, "pmap tables", RW_RECURSE); > mtx_init(&moea64_slb_mutex, "SLB table", NULL, MTX_DEF); > > /* > * Initialise the unmanaged pvo pool. > */ > moea64_bpvo_pool = (struct pvo_entry *)moea64_bootstrap_alloc( > BPVO_POOL_SIZE*sizeof(struct pvo_entry), 0); > moea64_bpvo_pool_index = 0; > > /* > * Make sure kernel vsid is allocated as well as VSID 0. 
> */ > #ifndef __powerpc64__ > moea64_vsid_bitmap[(KERNEL_VSIDBITS & (NVSIDS - 1)) / VSID_NBPW] > |= 1 << (KERNEL_VSIDBITS % VSID_NBPW); > moea64_vsid_bitmap[0] |= 1; > #endif > > /* > * Initialize the kernel pmap (which is statically allocated). > */ > #ifdef __powerpc64__ > for (i = 0; i < 64; i++) { > pcpup->pc_slb[i].slbv = 0; > pcpup->pc_slb[i].slbe = 0; > } > #else > for (i = 0; i < 16; i++) > kernel_pmap->pm_sr[i] = EMPTY_SEGMENT + i; > #endif > > kernel_pmap->pmap_phys = kernel_pmap; > CPU_FILL(&kernel_pmap->pm_active); > RB_INIT(&kernel_pmap->pmap_pvo); > > PMAP_LOCK_INIT(kernel_pmap); > > /* > * Now map in all the other buffers we allocated earlier > */ > > moea64_setup_direct_map(mmup, kernelstart, kernelend); > } > > void > moea64_late_bootstrap(mmu_t mmup, vm_offset_t kernelstart, vm_offset_t kernelend) > { > ihandle_t mmui; > phandle_t chosen; > phandle_t mmu; > size_t sz; > int i; > vm_offset_t pa, va; > void *dpcpu; > > /* > * Set up the Open Firmware pmap and add its mappings if not in real > * mode. > */ > > chosen = OF_finddevice("/chosen"); > if (chosen != -1 && OF_getprop(chosen, "mmu", &mmui, 4) != -1) { > mmu = OF_instance_to_package(mmui); > if (mmu == -1 || (sz = OF_getproplen(mmu, "translations")) == -1) > sz = 0; > if (sz > 6144 /* tmpstksz - 2 KB headroom */) > panic("moea64_bootstrap: too many ofw translations"); > > if (sz > 0) > moea64_add_ofw_mappings(mmup, mmu, sz); > } > > /* > * Calculate the last available physical address. > */ > for (i = 0; phys_avail[i + 2] != 0; i += 2) > ; > Maxmem = powerpc_btop(phys_avail[i + 1]); > > /* > * Initialize MMU and remap early physical mappings > */ > MMU_CPU_BOOTSTRAP(mmup,0); > mtmsr(mfmsr() | PSL_DR | PSL_IR); > pmap_bootstrapped++; > bs_remap_earlyboot(); > > /* > * Set the start and end of kva. > */ > virtual_avail = VM_MIN_KERNEL_ADDRESS; > virtual_end = VM_MAX_SAFE_KERNEL_ADDRESS; > > /* > * Map the entire KVA range into the SLB. We must not fault there. 
> */ > #ifdef __powerpc64__ > for (va = virtual_avail; va < virtual_end; va += SEGMENT_LENGTH) > moea64_bootstrap_slb_prefault(va, 0); > #endif > > /* > * Figure out how far we can extend virtual_end into segment 16 > * without running into existing mappings. Segment 16 is guaranteed > * to contain neither RAM nor devices (at least on Apple hardware), > * but will generally contain some OFW mappings we should not > * step on. > */ > > #ifndef __powerpc64__ /* KVA is in high memory on PPC64 */ > PMAP_LOCK(kernel_pmap); > while (virtual_end < VM_MAX_KERNEL_ADDRESS && > moea64_pvo_find_va(kernel_pmap, virtual_end+1) == NULL) > virtual_end += PAGE_SIZE; > PMAP_UNLOCK(kernel_pmap); > #endif > > /* > * Allocate a kernel stack with a guard page for thread0 and map it > * into the kernel page map. > */ > pa = moea64_bootstrap_alloc(KSTACK_PAGES * PAGE_SIZE, PAGE_SIZE); > va = virtual_avail + KSTACK_GUARD_PAGES * PAGE_SIZE; > virtual_avail = va + KSTACK_PAGES * PAGE_SIZE; > CTR2(KTR_PMAP, "moea64_bootstrap: kstack0 at %#x (%#x)", pa, va); > thread0.td_kstack = va; > thread0.td_kstack_pages = KSTACK_PAGES; > for (i = 0; i < KSTACK_PAGES; i++) { > moea64_kenter(mmup, va, pa); > pa += PAGE_SIZE; > va += PAGE_SIZE; > } > > /* > * Allocate virtual address space for the message buffer. > */ > pa = msgbuf_phys = moea64_bootstrap_alloc(msgbufsize, PAGE_SIZE); > msgbufp = (struct msgbuf *)virtual_avail; > va = virtual_avail; > virtual_avail += round_page(msgbufsize); > while (va < virtual_avail) { > moea64_kenter(mmup, va, pa); > pa += PAGE_SIZE; > va += PAGE_SIZE; > } > > /* > * Allocate virtual address space for the dynamic percpu area. > */ > pa = moea64_bootstrap_alloc(DPCPU_SIZE, PAGE_SIZE); > dpcpu = (void *)virtual_avail; > va = virtual_avail; > virtual_avail += DPCPU_SIZE; > while (va < virtual_avail) { > moea64_kenter(mmup, va, pa); > pa += PAGE_SIZE; > va += PAGE_SIZE; > } > dpcpu_init(dpcpu, 0); > > /* > * Allocate some things for page zeroing. 
We put this directly > * in the page table, marked with LPTE_LOCKED, to avoid any > * of the PVO book-keeping or other parts of the VM system > * from even knowing that this hack exists. > */ > > if (!hw_direct_map) { > mtx_init(&moea64_scratchpage_mtx, "pvo zero page", NULL, > MTX_DEF); > for (i = 0; i < 2; i++) { > moea64_scratchpage_va[i] = (virtual_end+1) - PAGE_SIZE; > virtual_end -= PAGE_SIZE; > > moea64_kenter(mmup, moea64_scratchpage_va[i], 0); > > moea64_scratchpage_pvo[i] = moea64_pvo_find_va( > kernel_pmap, (vm_offset_t)moea64_scratchpage_va[i]); > LOCK_TABLE_RD(); > moea64_scratchpage_pte[i] = MOEA64_PVO_TO_PTE( > mmup, moea64_scratchpage_pvo[i]); > moea64_scratchpage_pvo[i]->pvo_pte.lpte.pte_hi > |= LPTE_LOCKED; > MOEA64_PTE_CHANGE(mmup, moea64_scratchpage_pte[i], > &moea64_scratchpage_pvo[i]->pvo_pte.lpte, > moea64_scratchpage_pvo[i]->pvo_vpn); > UNLOCK_TABLE_RD(); > } > } > } > > /* > * Activate a user pmap. The pmap must be activated before its address > * space can be accessed in any way. 
> */ > void > moea64_activate(mmu_t mmu, struct thread *td) > { > pmap_t pm; > > pm = &td->td_proc->p_vmspace->vm_pmap; > CPU_SET(PCPU_GET(cpuid), &pm->pm_active); > > #ifdef __powerpc64__ > PCPU_SET(userslb, pm->pm_slb); > #else > PCPU_SET(curpmap, pm->pmap_phys); > #endif > } > > void > moea64_deactivate(mmu_t mmu, struct thread *td) > { > pmap_t pm; > > pm = &td->td_proc->p_vmspace->vm_pmap; > CPU_CLR(PCPU_GET(cpuid), &pm->pm_active); > #ifdef __powerpc64__ > PCPU_SET(userslb, NULL); > #else > PCPU_SET(curpmap, NULL); > #endif > } > > void > moea64_unwire(mmu_t mmu, pmap_t pm, vm_offset_t sva, vm_offset_t eva) > { > struct pvo_entry key, *pvo; > uintptr_t pt; > > LOCK_TABLE_RD(); > PMAP_LOCK(pm); > key.pvo_vaddr = sva; > for (pvo = RB_NFIND(pvo_tree, &pm->pmap_pvo, &key); > pvo != NULL && PVO_VADDR(pvo) < eva; > pvo = RB_NEXT(pvo_tree, &pm->pmap_pvo, pvo)) { > pt = MOEA64_PVO_TO_PTE(mmu, pvo); > if ((pvo->pvo_vaddr & PVO_WIRED) == 0) > panic("moea64_unwire: pvo %p is missing PVO_WIRED", > pvo); > pvo->pvo_vaddr &= ~PVO_WIRED; > if ((pvo->pvo_pte.lpte.pte_hi & LPTE_WIRED) == 0) > panic("moea64_unwire: pte %p is missing LPTE_WIRED", > &pvo->pvo_pte.lpte); > pvo->pvo_pte.lpte.pte_hi &= ~LPTE_WIRED; > if (pt != -1) { > /* > * The PTE's wired attribute is not a hardware > * feature, so there is no need to invalidate any TLB > * entries. > */ > MOEA64_PTE_CHANGE(mmu, pt, &pvo->pvo_pte.lpte, > pvo->pvo_vpn); > } > pm->pm_stats.wired_count--; > } > UNLOCK_TABLE_RD(); > PMAP_UNLOCK(pm); > } > > /* > * This goes through and sets the physical address of our > * special scratch PTE to the PA we want to zero or copy. 
Because > * of locking issues (this can get called in pvo_enter() by > * the UMA allocator), we can't use most other utility functions here > */ > > static __inline > void moea64_set_scratchpage_pa(mmu_t mmup, int which, vm_offset_t pa) { > > KASSERT(!hw_direct_map, ("Using OEA64 scratchpage with a direct map!")); > mtx_assert(&moea64_scratchpage_mtx, MA_OWNED); > > moea64_scratchpage_pvo[which]->pvo_pte.lpte.pte_lo &= > ~(LPTE_WIMG | LPTE_RPGN); > moea64_scratchpage_pvo[which]->pvo_pte.lpte.pte_lo |= > moea64_calc_wimg(pa, VM_MEMATTR_DEFAULT) | (uint64_t)pa; > MOEA64_PTE_CHANGE(mmup, moea64_scratchpage_pte[which], > &moea64_scratchpage_pvo[which]->pvo_pte.lpte, > moea64_scratchpage_pvo[which]->pvo_vpn); > isync(); > } > > void > moea64_copy_page(mmu_t mmu, vm_page_t msrc, vm_page_t mdst) > { > vm_offset_t dst; > vm_offset_t src; > > dst = VM_PAGE_TO_PHYS(mdst); > src = VM_PAGE_TO_PHYS(msrc); > > if (hw_direct_map) { > bcopy((void *)src, (void *)dst, PAGE_SIZE); > } else { > mtx_lock(&moea64_scratchpage_mtx); > > moea64_set_scratchpage_pa(mmu, 0, src); > moea64_set_scratchpage_pa(mmu, 1, dst); > > bcopy((void *)moea64_scratchpage_va[0], > (void *)moea64_scratchpage_va[1], PAGE_SIZE); > > mtx_unlock(&moea64_scratchpage_mtx); > } > } > > static inline void > moea64_copy_pages_dmap(mmu_t mmu, vm_page_t *ma, vm_offset_t a_offset, > vm_page_t *mb, vm_offset_t b_offset, int xfersize) > { > void *a_cp, *b_cp; > vm_offset_t a_pg_offset, b_pg_offset; > int cnt; > > while (xfersize > 0) { > a_pg_offset = a_offset & PAGE_MASK; > cnt = min(xfersize, PAGE_SIZE - a_pg_offset); > a_cp = (char *)VM_PAGE_TO_PHYS(ma[a_offset >> PAGE_SHIFT]) + > a_pg_offset; > b_pg_offset = b_offset & PAGE_MASK; > cnt = min(cnt, PAGE_SIZE - b_pg_offset); > b_cp = (char *)VM_PAGE_TO_PHYS(mb[b_offset >> PAGE_SHIFT]) + > b_pg_offset; > bcopy(a_cp, b_cp, cnt); > a_offset += cnt; > b_offset += cnt; > xfersize -= cnt; > } > } > > static inline void > moea64_copy_pages_nodmap(mmu_t mmu, vm_page_t *ma, 
vm_offset_t a_offset, > vm_page_t *mb, vm_offset_t b_offset, int xfersize) > { > void *a_cp, *b_cp; > vm_offset_t a_pg_offset, b_pg_offset; > int cnt; > > mtx_lock(&moea64_scratchpage_mtx); > while (xfersize > 0) { > a_pg_offset = a_offset & PAGE_MASK; > cnt = min(xfersize, PAGE_SIZE - a_pg_offset); > moea64_set_scratchpage_pa(mmu, 0, > VM_PAGE_TO_PHYS(ma[a_offset >> PAGE_SHIFT])); > a_cp = (char *)moea64_scratchpage_va[0] + a_pg_offset; > b_pg_offset = b_offset & PAGE_MASK; > cnt = min(cnt, PAGE_SIZE - b_pg_offset); > moea64_set_scratchpage_pa(mmu, 1, > VM_PAGE_TO_PHYS(mb[b_offset >> PAGE_SHIFT])); > b_cp = (char *)moea64_scratchpage_va[1] + b_pg_offset; > bcopy(a_cp, b_cp, cnt); > a_offset += cnt; > b_offset += cnt; > xfersize -= cnt; > } > mtx_unlock(&moea64_scratchpage_mtx); > } > > void > moea64_copy_pages(mmu_t mmu, vm_page_t *ma, vm_offset_t a_offset, > vm_page_t *mb, vm_offset_t b_offset, int xfersize) > { > > if (hw_direct_map) { > moea64_copy_pages_dmap(mmu, ma, a_offset, mb, b_offset, > xfersize); > } else { > moea64_copy_pages_nodmap(mmu, ma, a_offset, mb, b_offset, > xfersize); > } > } > > void > moea64_zero_page_area(mmu_t mmu, vm_page_t m, int off, int size) > { > vm_offset_t pa = VM_PAGE_TO_PHYS(m); > > if (size + off > PAGE_SIZE) > panic("moea64_zero_page: size + off > PAGE_SIZE"); > > if (hw_direct_map) { > bzero((caddr_t)pa + off, size); > } else { > mtx_lock(&moea64_scratchpage_mtx); > moea64_set_scratchpage_pa(mmu, 0, pa); > bzero((caddr_t)moea64_scratchpage_va[0] + off, size); > mtx_unlock(&moea64_scratchpage_mtx); > } > } > > /* > * Zero a page of physical memory by temporarily mapping it > */ > void > moea64_zero_page(mmu_t mmu, vm_page_t m) > { > vm_offset_t pa = VM_PAGE_TO_PHYS(m); > vm_offset_t va, off; > > if (!hw_direct_map) { > mtx_lock(&moea64_scratchpage_mtx); > > moea64_set_scratchpage_pa(mmu, 0, pa); > va = moea64_scratchpage_va[0]; > } else { > va = pa; > } > > for (off = 0; off < PAGE_SIZE; off += cacheline_size) > __asm 
__volatile("dcbz 0,%0" :: "r"(va + off)); > > if (!hw_direct_map) > mtx_unlock(&moea64_scratchpage_mtx); > } > > void > moea64_zero_page_idle(mmu_t mmu, vm_page_t m) > { > > moea64_zero_page(mmu, m); > } > > /* > * Map the given physical page at the specified virtual address in the > * target pmap with the protection requested. If specified the page > * will be wired down. > */ > > int > moea64_enter(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_page_t m, > vm_prot_t prot, u_int flags, int8_t psind) > { > struct pvo_head *pvo_head; > uma_zone_t zone; > uint64_t pte_lo; > u_int pvo_flags; > int error; > > if ((m->oflags & VPO_UNMANAGED) == 0 && !vm_page_xbusied(m)) > VM_OBJECT_ASSERT_LOCKED(m->object); > > if ((m->oflags & VPO_UNMANAGED) != 0 || !moea64_initialized) { > pvo_head = NULL; > zone = moea64_upvo_zone; > pvo_flags = 0; > } else { > pvo_head = vm_page_to_pvoh(m); > zone = moea64_mpvo_zone; > pvo_flags = PVO_MANAGED; > } > > pte_lo = moea64_calc_wimg(VM_PAGE_TO_PHYS(m), pmap_page_get_memattr(m)); > > if (prot & VM_PROT_WRITE) { > pte_lo |= LPTE_BW; > if (pmap_bootstrapped && > (m->oflags & VPO_UNMANAGED) == 0) > vm_page_aflag_set(m, PGA_WRITEABLE); > } else > pte_lo |= LPTE_BR; > > if ((prot & VM_PROT_EXECUTE) == 0) > pte_lo |= LPTE_NOEXEC; > > if ((flags & PMAP_ENTER_WIRED) != 0) > pvo_flags |= PVO_WIRED; > > for (;;) { > LOCK_TABLE_WR(); > PMAP_LOCK(pmap); > error = moea64_pvo_enter(mmu, pmap, zone, pvo_head, va, > VM_PAGE_TO_PHYS(m), pte_lo, pvo_flags, psind); > PMAP_UNLOCK(pmap); > UNLOCK_TABLE_WR(); > if (error != ENOMEM) > break; > if ((flags & PMAP_ENTER_NOSLEEP) != 0) > return (KERN_RESOURCE_SHORTAGE); > VM_OBJECT_ASSERT_UNLOCKED(m->object); > VM_WAIT; > } > > /* > * Flush the page from the instruction cache if this page is > * mapped executable and cacheable. 
> */ > if (pmap != kernel_pmap && !(m->aflags & PGA_EXECUTABLE) && > (pte_lo & (LPTE_I | LPTE_G | LPTE_NOEXEC)) == 0) { > vm_page_aflag_set(m, PGA_EXECUTABLE); > moea64_syncicache(mmu, pmap, va, VM_PAGE_TO_PHYS(m), PAGE_SIZE); > } > return (KERN_SUCCESS); > } > > static void > moea64_syncicache(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_offset_t pa, > vm_size_t sz) > { > > /* > * This is much trickier than on older systems because > * we can't sync the icache on physical addresses directly > * without a direct map. Instead we check a couple of cases > * where the memory is already mapped in and, failing that, > * use the same trick we use for page zeroing to create > * a temporary mapping for this physical address. > */ > > if (!pmap_bootstrapped) { > /* > * If PMAP is not bootstrapped, we are likely to be > * in real mode. > */ > __syncicache((void *)pa, sz); > } else if (pmap == kernel_pmap) { > __syncicache((void *)va, sz); > } else if (hw_direct_map) { > __syncicache((void *)pa, sz); > } else { > /* Use the scratch page to set up a temp mapping */ > > mtx_lock(&moea64_scratchpage_mtx); > > moea64_set_scratchpage_pa(mmu, 1, pa & ~ADDR_POFF); > __syncicache((void *)(moea64_scratchpage_va[1] + > (va & ADDR_POFF)), sz); > > mtx_unlock(&moea64_scratchpage_mtx); > } > } > > /* > * Maps a sequence of resident pages belonging to the same object. > * The sequence begins with the given page m_start. This page is > * mapped at the given virtual address start. Each subsequent page is > * mapped at a virtual address that is offset from start by the same > * amount as the page is offset from m_start within the object. The > * last page in the sequence is the page with the largest offset from > * m_start that can be mapped at a virtual address less than the given > * virtual address end. Not every virtual page between start and end > * is mapped; only those for which a resident page exists with the > * corresponding offset from m_start are mapped. 
> */ > void > moea64_enter_object(mmu_t mmu, pmap_t pm, vm_offset_t start, vm_offset_t end, > vm_page_t m_start, vm_prot_t prot) > { > vm_page_t m; > vm_pindex_t diff, psize; > > VM_OBJECT_ASSERT_LOCKED(m_start->object); > > psize = atop(end - start); > m = m_start; > while (m != NULL && (diff = m->pindex - m_start->pindex) < psize) { > moea64_enter(mmu, pm, start + ptoa(diff), m, prot & > (VM_PROT_READ | VM_PROT_EXECUTE), PMAP_ENTER_NOSLEEP, 0); > m = TAILQ_NEXT(m, listq); > } > } > > void > moea64_enter_quick(mmu_t mmu, pmap_t pm, vm_offset_t va, vm_page_t m, > vm_prot_t prot) > { > > moea64_enter(mmu, pm, va, m, prot & (VM_PROT_READ | VM_PROT_EXECUTE), > PMAP_ENTER_NOSLEEP, 0); > } > > vm_paddr_t > moea64_extract(mmu_t mmu, pmap_t pm, vm_offset_t va) > { > struct pvo_entry *pvo; > vm_paddr_t pa; > > PMAP_LOCK(pm); > pvo = moea64_pvo_find_va(pm, va); > if (pvo == NULL) > pa = 0; > else > pa = (pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) | > (va - PVO_VADDR(pvo)); > PMAP_UNLOCK(pm); > return (pa); > } > > /* > * Atomically extract and hold the physical page with the given > * pmap and virtual address pair if that mapping permits the given > * protection. 
> */ > vm_page_t > moea64_extract_and_hold(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_prot_t prot) > { > struct pvo_entry *pvo; > vm_page_t m; > vm_paddr_t pa; > > m = NULL; > pa = 0; > PMAP_LOCK(pmap); > retry: > pvo = moea64_pvo_find_va(pmap, va & ~ADDR_POFF); > if (pvo != NULL && (pvo->pvo_pte.lpte.pte_hi & LPTE_VALID) && > ((pvo->pvo_pte.lpte.pte_lo & LPTE_PP) == LPTE_RW || > (prot & VM_PROT_WRITE) == 0)) { > if (vm_page_pa_tryrelock(pmap, > pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN, &pa)) > goto retry; > m = PHYS_TO_VM_PAGE(pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN); > vm_page_hold(m); > } > PA_UNLOCK_COND(pa); > PMAP_UNLOCK(pmap); > return (m); > } > > static mmu_t installed_mmu; > > static void * > moea64_uma_page_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait) > { > /* > * This entire routine is a horrible hack to avoid bothering kmem > * for new KVA addresses. Because this can get called from inside > * kmem allocation routines, calling kmem for a new address here > * can lead to multiply locking non-recursive mutexes. 
> */ > vm_offset_t va; > > vm_page_t m; > int pflags, needed_lock; > > *flags = UMA_SLAB_PRIV; > needed_lock = !PMAP_LOCKED(kernel_pmap); > pflags = malloc2vm_flags(wait) | VM_ALLOC_WIRED; > > for (;;) { > m = vm_page_alloc(NULL, 0, pflags | VM_ALLOC_NOOBJ); > if (m == NULL) { > if (wait & M_NOWAIT) > return (NULL); > VM_WAIT; > } else > break; > } > > va = VM_PAGE_TO_PHYS(m); > > LOCK_TABLE_WR(); > if (needed_lock) > PMAP_LOCK(kernel_pmap); > > moea64_pvo_enter(installed_mmu, kernel_pmap, moea64_upvo_zone, > NULL, va, VM_PAGE_TO_PHYS(m), LPTE_M, PVO_WIRED | PVO_BOOTSTRAP, > 0); > > if (needed_lock) > PMAP_UNLOCK(kernel_pmap); > UNLOCK_TABLE_WR(); > > if ((wait & M_ZERO) && (m->flags & PG_ZERO) == 0) > bzero((void *)va, PAGE_SIZE); > > return (void *)va; > } > > extern int elf32_nxstack; > > void > moea64_init(mmu_t mmu) > { > > CTR0(KTR_PMAP, "moea64_init"); > > moea64_upvo_zone = uma_zcreate("UPVO entry", sizeof (struct pvo_entry), > NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, > UMA_ZONE_VM | UMA_ZONE_NOFREE); > moea64_mpvo_zone = uma_zcreate("MPVO entry", sizeof(struct pvo_entry), > NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, > UMA_ZONE_VM | UMA_ZONE_NOFREE); > > if (!hw_direct_map) { > installed_mmu = mmu; > uma_zone_set_allocf(moea64_upvo_zone,moea64_uma_page_alloc); > uma_zone_set_allocf(moea64_mpvo_zone,moea64_uma_page_alloc); > } > > #ifdef COMPAT_FREEBSD32 > elf32_nxstack = 1; > #endif > > moea64_initialized = TRUE; > } > > boolean_t > moea64_is_referenced(mmu_t mmu, vm_page_t m) > { > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("moea64_is_referenced: page %p is not managed", m)); > return (moea64_query_bit(mmu, m, PTE_REF)); > } > > boolean_t > moea64_is_modified(mmu_t mmu, vm_page_t m) > { > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("moea64_is_modified: page %p is not managed", m)); > > /* > * If the page is not exclusive busied, then PGA_WRITEABLE cannot be > * concurrently set while the object is locked. 
Thus, if PGA_WRITEABLE > * is clear, no PTEs can have LPTE_CHG set. > */ > VM_OBJECT_ASSERT_LOCKED(m->object); > if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0) > return (FALSE); > return (moea64_query_bit(mmu, m, LPTE_CHG)); > } > > boolean_t > moea64_is_prefaultable(mmu_t mmu, pmap_t pmap, vm_offset_t va) > { > struct pvo_entry *pvo; > boolean_t rv; > > PMAP_LOCK(pmap); > pvo = moea64_pvo_find_va(pmap, va & ~ADDR_POFF); > rv = pvo == NULL || (pvo->pvo_pte.lpte.pte_hi & LPTE_VALID) == 0; > PMAP_UNLOCK(pmap); > return (rv); > } > > void > moea64_clear_modify(mmu_t mmu, vm_page_t m) > { > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("moea64_clear_modify: page %p is not managed", m)); > VM_OBJECT_ASSERT_WLOCKED(m->object); > KASSERT(!vm_page_xbusied(m), > ("moea64_clear_modify: page %p is exclusive busied", m)); > > /* > * If the page is not PGA_WRITEABLE, then no PTEs can have LPTE_CHG > * set. If the object containing the page is locked and the page is > * not exclusive busied, then PGA_WRITEABLE cannot be concurrently set. > */ > if ((m->aflags & PGA_WRITEABLE) == 0) > return; > moea64_clear_bit(mmu, m, LPTE_CHG); > } > > /* > * Clear the write and modified bits in each of the given page's mappings. > */ > void > moea64_remove_write(mmu_t mmu, vm_page_t m) > { > struct pvo_entry *pvo; > uintptr_t pt; > pmap_t pmap; > uint64_t lo = 0; > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("moea64_remove_write: page %p is not managed", m)); > > /* > * If the page is not exclusive busied, then PGA_WRITEABLE cannot be > * set by another thread while the object is locked. Thus, > * if PGA_WRITEABLE is clear, no page table entries need updating. 
> */ > VM_OBJECT_ASSERT_WLOCKED(m->object); > if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0) > return; > powerpc_sync(); > LOCK_TABLE_RD(); > LIST_FOREACH(pvo, vm_page_to_pvoh(m), pvo_vlink) { > pmap = pvo->pvo_pmap; > PMAP_LOCK(pmap); > if ((pvo->pvo_pte.lpte.pte_lo & LPTE_PP) != LPTE_BR) { > pt = MOEA64_PVO_TO_PTE(mmu, pvo); > pvo->pvo_pte.lpte.pte_lo &= ~LPTE_PP; > pvo->pvo_pte.lpte.pte_lo |= LPTE_BR; > if (pt != -1) { > MOEA64_PTE_SYNCH(mmu, pt, &pvo->pvo_pte.lpte); > lo |= pvo->pvo_pte.lpte.pte_lo; > pvo->pvo_pte.lpte.pte_lo &= ~LPTE_CHG; > MOEA64_PTE_CHANGE(mmu, pt, > &pvo->pvo_pte.lpte, pvo->pvo_vpn); > if (pvo->pvo_pmap == kernel_pmap) > isync(); > } > } > if ((lo & LPTE_CHG) != 0) > vm_page_dirty(m); > PMAP_UNLOCK(pmap); > } > UNLOCK_TABLE_RD(); > vm_page_aflag_clear(m, PGA_WRITEABLE); > } > > /* > * moea64_ts_referenced: > * > * Return a count of reference bits for a page, clearing those bits. > * It is not necessary for every reference bit to be cleared, but it > * is necessary that 0 only be returned when there are truly no > * reference bits set. > * > * XXX: The exact number of bits to check and clear is a matter that > * should be tested and standardized at some point in the future for > * optimal aging of shared pages. > */ > int > moea64_ts_referenced(mmu_t mmu, vm_page_t m) > { > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("moea64_ts_referenced: page %p is not managed", m)); > return (moea64_clear_bit(mmu, m, LPTE_REF)); > } > > /* > * Modify the WIMG settings of all mappings for a page. 
> */ > void > moea64_page_set_memattr(mmu_t mmu, vm_page_t m, vm_memattr_t ma) > { > struct pvo_entry *pvo; > struct pvo_head *pvo_head; > uintptr_t pt; > pmap_t pmap; > uint64_t lo; > > if ((m->oflags & VPO_UNMANAGED) != 0) { > m->md.mdpg_cache_attrs = ma; > return; > } > > pvo_head = vm_page_to_pvoh(m); > lo = moea64_calc_wimg(VM_PAGE_TO_PHYS(m), ma); > LOCK_TABLE_RD(); > LIST_FOREACH(pvo, pvo_head, pvo_vlink) { > pmap = pvo->pvo_pmap; > PMAP_LOCK(pmap); > pt = MOEA64_PVO_TO_PTE(mmu, pvo); > pvo->pvo_pte.lpte.pte_lo &= ~LPTE_WIMG; > pvo->pvo_pte.lpte.pte_lo |= lo; > if (pt != -1) { > MOEA64_PTE_CHANGE(mmu, pt, &pvo->pvo_pte.lpte, > pvo->pvo_vpn); > if (pvo->pvo_pmap == kernel_pmap) > isync(); > } > PMAP_UNLOCK(pmap); > } > UNLOCK_TABLE_RD(); > m->md.mdpg_cache_attrs = ma; > } > > /* > * Map a wired page into kernel virtual address space. > */ > void > moea64_kenter_attr(mmu_t mmu, vm_offset_t va, vm_offset_t pa, vm_memattr_t ma) > { > uint64_t pte_lo; > int error; > > pte_lo = moea64_calc_wimg(pa, ma); > > LOCK_TABLE_WR(); > PMAP_LOCK(kernel_pmap); > error = moea64_pvo_enter(mmu, kernel_pmap, moea64_upvo_zone, > NULL, va, pa, pte_lo, PVO_WIRED, 0); > PMAP_UNLOCK(kernel_pmap); > UNLOCK_TABLE_WR(); > > if (error != 0 && error != ENOENT) > panic("moea64_kenter: failed to enter va %#zx pa %#zx: %d", va, > pa, error); > } > > void > moea64_kenter(mmu_t mmu, vm_offset_t va, vm_paddr_t pa) > { > > moea64_kenter_attr(mmu, va, pa, VM_MEMATTR_DEFAULT); > } > > /* > * Extract the physical page address associated with the given kernel virtual > * address. > */ > vm_paddr_t > moea64_kextract(mmu_t mmu, vm_offset_t va) > { > struct pvo_entry *pvo; > vm_paddr_t pa; > > /* > * Shortcut the direct-mapped case when applicable. We never put > * anything but 1:1 mappings below VM_MIN_KERNEL_ADDRESS. 
> */ > if (va < VM_MIN_KERNEL_ADDRESS) > return (va); > > PMAP_LOCK(kernel_pmap); > pvo = moea64_pvo_find_va(kernel_pmap, va); > KASSERT(pvo != NULL, ("moea64_kextract: no addr found for %#" PRIxPTR, > va)); > pa = (pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) | (va - PVO_VADDR(pvo)); > PMAP_UNLOCK(kernel_pmap); > return (pa); > } > > /* > * Remove a wired page from kernel virtual address space. > */ > void > moea64_kremove(mmu_t mmu, vm_offset_t va) > { > moea64_remove(mmu, kernel_pmap, va, va + PAGE_SIZE); > } > > /* > * Map a range of physical addresses into kernel virtual address space. > * > * The value passed in *virt is a suggested virtual address for the mapping. > * Architectures which can support a direct-mapped physical to virtual region > * can return the appropriate address within that region, leaving '*virt' > * unchanged. We cannot and therefore do not; *virt is updated with the > * first usable address after the mapped region. > */ > vm_offset_t > moea64_map(mmu_t mmu, vm_offset_t *virt, vm_paddr_t pa_start, > vm_paddr_t pa_end, int prot) > { > vm_offset_t sva, va; > > sva = *virt; > va = sva; > for (; pa_start < pa_end; pa_start += PAGE_SIZE, va += PAGE_SIZE) > moea64_kenter(mmu, va, pa_start); > *virt = va; > > return (sva); > } > > /* > * Returns true if the pmap's pv is one of the first > * 16 pvs linked to from this page. This count may > * be changed upwards or downwards in the future; it > * is only necessary that true be returned for a small > * subset of pmaps for proper page aging. 
*/ > boolean_t > moea64_page_exists_quick(mmu_t mmu, pmap_t pmap, vm_page_t m) > { > int loops; > struct pvo_entry *pvo; > boolean_t rv; > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("moea64_page_exists_quick: page %p is not managed", m)); > loops = 0; > rv = FALSE; > LOCK_TABLE_RD(); > LIST_FOREACH(pvo, vm_page_to_pvoh(m), pvo_vlink) { > if (pvo->pvo_pmap == pmap) { > rv = TRUE; > break; > } > if (++loops >= 16) > break; > } > UNLOCK_TABLE_RD(); > return (rv); > } > > /* > * Return the number of managed mappings to the given physical page > * that are wired. > */ > int > moea64_page_wired_mappings(mmu_t mmu, vm_page_t m) > { > struct pvo_entry *pvo; > int count; > > count = 0; > if ((m->oflags & VPO_UNMANAGED) != 0) > return (count); > LOCK_TABLE_RD(); > LIST_FOREACH(pvo, vm_page_to_pvoh(m), pvo_vlink) > if ((pvo->pvo_vaddr & PVO_WIRED) != 0) > count++; > UNLOCK_TABLE_RD(); > return (count); > } > > static uintptr_t moea64_vsidcontext; > > uintptr_t > moea64_get_unique_vsid(void) { > u_int entropy; > register_t hash; > uint32_t mask; > int i; > > entropy = 0; > __asm __volatile("mftb %0" : "=r"(entropy)); > > mtx_lock(&moea64_slb_mutex); > for (i = 0; i < NVSIDS; i += VSID_NBPW) { > u_int n; > > /* > * Create a new value by multiplying by a prime and adding in > * entropy from the timebase register. This is to make the > * VSID more random so that the PT hash function collides > * less often. (Note that the prime causes gcc to do shifts > * instead of a multiply.) > */ > moea64_vsidcontext = (moea64_vsidcontext * 0x1105) + entropy; > hash = moea64_vsidcontext & (NVSIDS - 1); > if (hash == 0) /* 0 is special, avoid it */ > continue; > n = hash >> 5; > mask = 1 << (hash & (VSID_NBPW - 1)); > hash = (moea64_vsidcontext & VSID_HASHMASK); > if (moea64_vsid_bitmap[n] & mask) { /* collision? */ > /* anything free in this bucket? 
*/ > if (moea64_vsid_bitmap[n] == 0xffffffff) { > entropy = (moea64_vsidcontext >> 20); > continue; > } > i = ffs(~moea64_vsid_bitmap[n]) - 1; > mask = 1 << i; > hash &= VSID_HASHMASK & ~(VSID_NBPW - 1); > hash |= i; > } > KASSERT(!(moea64_vsid_bitmap[n] & mask), > ("Allocating in-use VSID %#zx\n", hash)); > moea64_vsid_bitmap[n] |= mask; > mtx_unlock(&moea64_slb_mutex); > return (hash); > } > > mtx_unlock(&moea64_slb_mutex); > panic("%s: out of segments",__func__); > } > > #ifdef __powerpc64__ > void > moea64_pinit(mmu_t mmu, pmap_t pmap) > { > > RB_INIT(&pmap->pmap_pvo); > > pmap->pm_slb_tree_root = slb_alloc_tree(); > pmap->pm_slb = slb_alloc_user_cache(); > pmap->pm_slb_len = 0; > } > #else > void > moea64_pinit(mmu_t mmu, pmap_t pmap) > { > int i; > uint32_t hash; > > RB_INIT(&pmap->pmap_pvo); > > if (pmap_bootstrapped) > pmap->pmap_phys = (pmap_t)moea64_kextract(mmu, > (vm_offset_t)pmap); > else > pmap->pmap_phys = pmap; > > /* > * Allocate some segment registers for this pmap. > */ > hash = moea64_get_unique_vsid(); > > for (i = 0; i < 16; i++) > pmap->pm_sr[i] = VSID_MAKE(i, hash); > > KASSERT(pmap->pm_sr[0] != 0, ("moea64_pinit: pm_sr[0] = 0")); > } > #endif > > /* > * Initialize the pmap associated with process 0. > */ > void > moea64_pinit0(mmu_t mmu, pmap_t pm) > { > > PMAP_LOCK_INIT(pm); > moea64_pinit(mmu, pm); > bzero(&pm->pm_stats, sizeof(pm->pm_stats)); > } > > /* > * Set the physical protection on the specified range of this map as requested. > */ > static void > moea64_pvo_protect(mmu_t mmu, pmap_t pm, struct pvo_entry *pvo, vm_prot_t prot) > { > uintptr_t pt; > struct vm_page *pg; > uint64_t oldlo; > > PMAP_LOCK_ASSERT(pm, MA_OWNED); > > /* > * Grab the PTE pointer before we diddle with the cached PTE > * copy. > */ > pt = MOEA64_PVO_TO_PTE(mmu, pvo); > > /* > * Change the protection of the page. 
> */ > oldlo = pvo->pvo_pte.lpte.pte_lo; > pvo->pvo_pte.lpte.pte_lo &= ~LPTE_PP; > pvo->pvo_pte.lpte.pte_lo &= ~LPTE_NOEXEC; > if ((prot & VM_PROT_EXECUTE) == 0) > pvo->pvo_pte.lpte.pte_lo |= LPTE_NOEXEC; > if (prot & VM_PROT_WRITE) > pvo->pvo_pte.lpte.pte_lo |= LPTE_BW; > else > pvo->pvo_pte.lpte.pte_lo |= LPTE_BR; > > pg = PHYS_TO_VM_PAGE(pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN); > > /* > * If the PVO is in the page table, update that pte as well. > */ > if (pt != -1) > MOEA64_PTE_CHANGE(mmu, pt, &pvo->pvo_pte.lpte, > pvo->pvo_vpn); > if (pm != kernel_pmap && pg != NULL && !(pg->aflags & PGA_EXECUTABLE) && > (pvo->pvo_pte.lpte.pte_lo & (LPTE_I | LPTE_G | LPTE_NOEXEC)) == 0) { > if ((pg->oflags & VPO_UNMANAGED) == 0) > vm_page_aflag_set(pg, PGA_EXECUTABLE); > moea64_syncicache(mmu, pm, PVO_VADDR(pvo), > pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN, PAGE_SIZE); > } > > /* > * Update vm about the REF/CHG bits if the page is managed and we have > * removed write access. > */ > if ((pvo->pvo_vaddr & PVO_MANAGED) == PVO_MANAGED && > (oldlo & LPTE_PP) != LPTE_BR && !(prot & VM_PROT_WRITE)) { > if (pg != NULL) { > if (pvo->pvo_pte.lpte.pte_lo & LPTE_CHG) > vm_page_dirty(pg); > if (pvo->pvo_pte.lpte.pte_lo & LPTE_REF) > vm_page_aflag_set(pg, PGA_REFERENCED); > } > } > } > > void > moea64_protect(mmu_t mmu, pmap_t pm, vm_offset_t sva, vm_offset_t eva, > vm_prot_t prot) > { > struct pvo_entry *pvo, *tpvo, key; > > CTR4(KTR_PMAP, "moea64_protect: pm=%p sva=%#x eva=%#x prot=%#x", pm, > sva, eva, prot); > > KASSERT(pm == &curproc->p_vmspace->vm_pmap || pm == kernel_pmap, > ("moea64_protect: non current pmap")); > > if ((prot & VM_PROT_READ) == VM_PROT_NONE) { > moea64_remove(mmu, pm, sva, eva); > return; > } > > LOCK_TABLE_RD(); > PMAP_LOCK(pm); > key.pvo_vaddr = sva; > for (pvo = RB_NFIND(pvo_tree, &pm->pmap_pvo, &key); > pvo != NULL && PVO_VADDR(pvo) < eva; pvo = tpvo) { > tpvo = RB_NEXT(pvo_tree, &pm->pmap_pvo, pvo); > moea64_pvo_protect(mmu, pm, pvo, prot); > } > UNLOCK_TABLE_RD(); > 
PMAP_UNLOCK(pm); > } > > /* > * Map a list of wired pages into kernel virtual address space. This is > * intended for temporary mappings which do not need page modification or > * references recorded. Existing mappings in the region are overwritten. > */ > void > moea64_qenter(mmu_t mmu, vm_offset_t va, vm_page_t *m, int count) > { > while (count-- > 0) { > moea64_kenter(mmu, va, VM_PAGE_TO_PHYS(*m)); > va += PAGE_SIZE; > m++; > } > } > > /* > * Remove page mappings from kernel virtual address space. Intended for > * temporary mappings entered by moea64_qenter. > */ > void > moea64_qremove(mmu_t mmu, vm_offset_t va, int count) > { > while (count-- > 0) { > moea64_kremove(mmu, va); > va += PAGE_SIZE; > } > } > > void > moea64_release_vsid(uint64_t vsid) > { > int idx, mask; > > mtx_lock(&moea64_slb_mutex); > idx = vsid & (NVSIDS-1); > mask = 1 << (idx % VSID_NBPW); > idx /= VSID_NBPW; > KASSERT(moea64_vsid_bitmap[idx] & mask, > ("Freeing unallocated VSID %#jx", vsid)); > moea64_vsid_bitmap[idx] &= ~mask; > mtx_unlock(&moea64_slb_mutex); > } > > > void > moea64_release(mmu_t mmu, pmap_t pmap) > { > > /* > * Free segment registers' VSIDs > */ > #ifdef __powerpc64__ > slb_free_tree(pmap); > slb_free_user_cache(pmap->pm_slb); > #else > KASSERT(pmap->pm_sr[0] != 0, ("moea64_release: pm_sr[0] = 0")); > > moea64_release_vsid(VSID_TO_HASH(pmap->pm_sr[0])); > #endif > } > > /* > * Remove all pages mapped by the specified pmap > */ > void > moea64_remove_pages(mmu_t mmu, pmap_t pm) > { > struct pvo_entry *pvo, *tpvo; > > LOCK_TABLE_WR(); > PMAP_LOCK(pm); > RB_FOREACH_SAFE(pvo, pvo_tree, &pm->pmap_pvo, tpvo) { > if (!(pvo->pvo_vaddr & PVO_WIRED)) > moea64_pvo_remove(mmu, pvo); > } > UNLOCK_TABLE_WR(); > PMAP_UNLOCK(pm); > } > > /* > * Remove the given range of addresses from the specified map. > */ > void > moea64_remove(mmu_t mmu, pmap_t pm, vm_offset_t sva, vm_offset_t eva) > { > struct pvo_entry *pvo, *tpvo, key; > > /* > * Perform an unsynchronized read. 
This is, however, safe. > */ > if (pm->pm_stats.resident_count == 0) > return; > > LOCK_TABLE_WR(); > PMAP_LOCK(pm); > key.pvo_vaddr = sva; > for (pvo = RB_NFIND(pvo_tree, &pm->pmap_pvo, &key); > pvo != NULL && PVO_VADDR(pvo) < eva; pvo = tpvo) { > tpvo = RB_NEXT(pvo_tree, &pm->pmap_pvo, pvo); > moea64_pvo_remove(mmu, pvo); > } > UNLOCK_TABLE_WR(); > PMAP_UNLOCK(pm); > } > > /* > * Remove physical page from all pmaps in which it resides. moea64_pvo_remove() > * will reflect changes in pte's back to the vm_page. > */ > void > moea64_remove_all(mmu_t mmu, vm_page_t m) > { > struct pvo_entry *pvo, *next_pvo; > pmap_t pmap; > > LOCK_TABLE_WR(); > LIST_FOREACH_SAFE(pvo, vm_page_to_pvoh(m), pvo_vlink, next_pvo) { > pmap = pvo->pvo_pmap; > PMAP_LOCK(pmap); > moea64_pvo_remove(mmu, pvo); > PMAP_UNLOCK(pmap); > } > UNLOCK_TABLE_WR(); > if ((m->aflags & PGA_WRITEABLE) && moea64_is_modified(mmu, m)) > vm_page_dirty(m); > vm_page_aflag_clear(m, PGA_WRITEABLE); > vm_page_aflag_clear(m, PGA_EXECUTABLE); > } > > /* > * Allocate a physical page of memory directly from the phys_avail map. > * Can only be called from moea64_bootstrap before avail start and end are > * calculated. 
> */ > vm_offset_t > moea64_bootstrap_alloc(vm_size_t size, u_int align) > { > vm_offset_t s, e; > int i, j; > > size = round_page(size); > for (i = 0; phys_avail[i + 1] != 0; i += 2) { > if (align != 0) > s = (phys_avail[i] + align - 1) & ~(align - 1); > else > s = phys_avail[i]; > e = s + size; > > if (s < phys_avail[i] || e > phys_avail[i + 1]) > continue; > > if (s + size > platform_real_maxaddr()) > continue; > > if (s == phys_avail[i]) { > phys_avail[i] += size; > } else if (e == phys_avail[i + 1]) { > phys_avail[i + 1] -= size; > } else { > for (j = phys_avail_count * 2; j > i; j -= 2) { > phys_avail[j] = phys_avail[j - 2]; > phys_avail[j + 1] = phys_avail[j - 1]; > } > > phys_avail[i + 3] = phys_avail[i + 1]; > phys_avail[i + 1] = s; > phys_avail[i + 2] = e; > phys_avail_count++; > } > > return (s); > } > panic("moea64_bootstrap_alloc: could not allocate memory"); > } > > static int > moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t zone, > struct pvo_head *pvo_head, vm_offset_t va, vm_offset_t pa, > uint64_t pte_lo, int flags, int8_t psind __unused) > { > struct pvo_entry *pvo; > uintptr_t pt; > uint64_t vsid; > int first; > u_int ptegidx; > int i; > int bootstrap; > > /* > * One nasty thing that can happen here is that the UMA calls to > * allocate new PVOs need to map more memory, which calls pvo_enter(), > * which calls UMA... > * > * We break the loop by detecting recursion and allocating out of > * the bootstrap pool. > */ > > first = 0; > bootstrap = (flags & PVO_BOOTSTRAP); > > if (!moea64_initialized) > bootstrap = 1; > > PMAP_LOCK_ASSERT(pm, MA_OWNED); > rw_assert(&moea64_table_lock, RA_WLOCKED); > > /* > * Compute the PTE Group index. > */ > va &= ~ADDR_POFF; > vsid = va_to_vsid(pm, va); > ptegidx = va_to_pteg(vsid, va, flags & PVO_LARGE); > > /* > * Remove any existing mapping for this page. Reuse the pvo entry if > * there is a mapping. 
> */ > moea64_pvo_enter_calls++; > > LIST_FOREACH(pvo, &moea64_pvo_table[ptegidx], pvo_olink) { > if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) { > if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa && > (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP)) > == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) { > /* > * The physical page and protection are not > * changing. Instead, this may be a request > * to change the mapping's wired attribute. > */ > pt = -1; > if ((flags & PVO_WIRED) != 0 && > (pvo->pvo_vaddr & PVO_WIRED) == 0) { > pt = MOEA64_PVO_TO_PTE(mmu, pvo); > pvo->pvo_vaddr |= PVO_WIRED; > pvo->pvo_pte.lpte.pte_hi |= LPTE_WIRED; > pm->pm_stats.wired_count++; > } else if ((flags & PVO_WIRED) == 0 && > (pvo->pvo_vaddr & PVO_WIRED) != 0) { > pt = MOEA64_PVO_TO_PTE(mmu, pvo); > pvo->pvo_vaddr &= ~PVO_WIRED; > pvo->pvo_pte.lpte.pte_hi &= ~LPTE_WIRED; > pm->pm_stats.wired_count--; > } > if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) { > KASSERT(pt == -1, > ("moea64_pvo_enter: valid pt")); > /* Re-insert if spilled */ > i = MOEA64_PTE_INSERT(mmu, ptegidx, > &pvo->pvo_pte.lpte); > if (i >= 0) > PVO_PTEGIDX_SET(pvo, i); > moea64_pte_overflow--; > } else if (pt != -1) { > /* > * The PTE's wired attribute is not a > * hardware feature, so there is no > * need to invalidate any TLB entries. > */ > MOEA64_PTE_CHANGE(mmu, pt, > &pvo->pvo_pte.lpte, pvo->pvo_vpn); > } > return (0); > } > moea64_pvo_remove(mmu, pvo); > break; > } > } > > /* > * If we aren't overwriting a mapping, try to allocate. 
> */ > if (bootstrap) { > if (moea64_bpvo_pool_index >= BPVO_POOL_SIZE) { > panic("moea64_enter: bpvo pool exhausted, %d, %d, %zd", > moea64_bpvo_pool_index, BPVO_POOL_SIZE, > BPVO_POOL_SIZE * sizeof(struct pvo_entry)); > } > pvo = &moea64_bpvo_pool[moea64_bpvo_pool_index]; > moea64_bpvo_pool_index++; > bootstrap = 1; > } else { > pvo = uma_zalloc(zone, M_NOWAIT); > } > > if (pvo == NULL) > return (ENOMEM); > > moea64_pvo_entries++; > pvo->pvo_vaddr = va; > pvo->pvo_vpn = (uint64_t)((va & ADDR_PIDX) >> ADDR_PIDX_SHFT) > | (vsid << 16); > pvo->pvo_pmap = pm; > LIST_INSERT_HEAD(&moea64_pvo_table[ptegidx], pvo, pvo_olink); > pvo->pvo_vaddr &= ~ADDR_POFF; > > if (flags & PVO_WIRED) > pvo->pvo_vaddr |= PVO_WIRED; > if (pvo_head != NULL) > pvo->pvo_vaddr |= PVO_MANAGED; > if (bootstrap) > pvo->pvo_vaddr |= PVO_BOOTSTRAP; > if (flags & PVO_LARGE) > pvo->pvo_vaddr |= PVO_LARGE; > > moea64_pte_create(&pvo->pvo_pte.lpte, vsid, va, > (uint64_t)(pa) | pte_lo, flags); > > /* > * Add to pmap list > */ > RB_INSERT(pvo_tree, &pm->pmap_pvo, pvo); > > /* > * Remember if the list was empty and therefore will be the first > * item. > */ > if (pvo_head != NULL) { > if (LIST_FIRST(pvo_head) == NULL) > first = 1; > LIST_INSERT_HEAD(pvo_head, pvo, pvo_vlink); > } > > if (pvo->pvo_vaddr & PVO_WIRED) { > pvo->pvo_pte.lpte.pte_hi |= LPTE_WIRED; > pm->pm_stats.wired_count++; > } > pm->pm_stats.resident_count++; > > /* > * We hope this succeeds but it isn't required. > */ > i = MOEA64_PTE_INSERT(mmu, ptegidx, &pvo->pvo_pte.lpte); > if (i >= 0) { > PVO_PTEGIDX_SET(pvo, i); > } else { > panic("moea64_pvo_enter: overflow"); > moea64_pte_overflow++; > } > > if (pm == kernel_pmap) > isync(); > > #ifdef __powerpc64__ > /* > * Make sure all our bootstrap mappings are in the SLB as soon > * as virtual memory is switched on. > */ > if (!pmap_bootstrapped) > moea64_bootstrap_slb_prefault(va, flags & PVO_LARGE); > #endif > > return (first ? 
ENOENT : 0); > } > > static void > moea64_pvo_remove(mmu_t mmu, struct pvo_entry *pvo) > { > struct vm_page *pg; > uintptr_t pt; > > PMAP_LOCK_ASSERT(pvo->pvo_pmap, MA_OWNED); > rw_assert(&moea64_table_lock, RA_WLOCKED); > > /* > * If there is an active pte entry, we need to deactivate it (and > * save the ref & chg bits). > */ > pt = MOEA64_PVO_TO_PTE(mmu, pvo); > if (pt != -1) { > MOEA64_PTE_UNSET(mmu, pt, &pvo->pvo_pte.lpte, pvo->pvo_vpn); > PVO_PTEGIDX_CLR(pvo); > } else { > moea64_pte_overflow--; > } > > /* > * Update our statistics. > */ > pvo->pvo_pmap->pm_stats.resident_count--; > if (pvo->pvo_vaddr & PVO_WIRED) > pvo->pvo_pmap->pm_stats.wired_count--; > > /* > * Remove this PVO from the pmap list. > */ > RB_REMOVE(pvo_tree, &pvo->pvo_pmap->pmap_pvo, pvo); > > /* > * Remove this from the overflow list and return it to the pool > * if we aren't going to reuse it. > */ > LIST_REMOVE(pvo, pvo_olink); > > /* > * Update vm about the REF/CHG bits if the page is managed. > */ > pg = PHYS_TO_VM_PAGE(pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN); > > if ((pvo->pvo_vaddr & PVO_MANAGED) == PVO_MANAGED && pg != NULL) { > LIST_REMOVE(pvo, pvo_vlink); > if ((pvo->pvo_pte.lpte.pte_lo & LPTE_PP) != LPTE_BR) { > if (pvo->pvo_pte.lpte.pte_lo & LPTE_CHG) > vm_page_dirty(pg); > if (pvo->pvo_pte.lpte.pte_lo & LPTE_REF) > vm_page_aflag_set(pg, PGA_REFERENCED); > if (LIST_EMPTY(vm_page_to_pvoh(pg))) > vm_page_aflag_clear(pg, PGA_WRITEABLE); > } > if (LIST_EMPTY(vm_page_to_pvoh(pg))) > vm_page_aflag_clear(pg, PGA_EXECUTABLE); > } > > moea64_pvo_entries--; > moea64_pvo_remove_calls++; > > if (!(pvo->pvo_vaddr & PVO_BOOTSTRAP)) > uma_zfree((pvo->pvo_vaddr & PVO_MANAGED) ? 
moea64_mpvo_zone : > moea64_upvo_zone, pvo); > } > > static struct pvo_entry * > moea64_pvo_find_va(pmap_t pm, vm_offset_t va) > { > struct pvo_entry key; > > key.pvo_vaddr = va & ~ADDR_POFF; > return (RB_FIND(pvo_tree, &pm->pmap_pvo, &key)); > } > > static boolean_t > moea64_query_bit(mmu_t mmu, vm_page_t m, u_int64_t ptebit) > { > struct pvo_entry *pvo; > uintptr_t pt; > > LOCK_TABLE_RD(); > LIST_FOREACH(pvo, vm_page_to_pvoh(m), pvo_vlink) { > /* > * See if we saved the bit off. If so, return success. > */ > if (pvo->pvo_pte.lpte.pte_lo & ptebit) { > UNLOCK_TABLE_RD(); > return (TRUE); > } > } > > /* > * No luck, now go through the hard part of looking at the PTEs > * themselves. Sync so that any pending REF/CHG bits are flushed to > * the PTEs. > */ > powerpc_sync(); > LIST_FOREACH(pvo, vm_page_to_pvoh(m), pvo_vlink) { > > /* > * See if this pvo has a valid PTE. if so, fetch the > * REF/CHG bits from the valid PTE. If the appropriate > * ptebit is set, return success. > */ > PMAP_LOCK(pvo->pvo_pmap); > pt = MOEA64_PVO_TO_PTE(mmu, pvo); > if (pt != -1) { > MOEA64_PTE_SYNCH(mmu, pt, &pvo->pvo_pte.lpte); > if (pvo->pvo_pte.lpte.pte_lo & ptebit) { > PMAP_UNLOCK(pvo->pvo_pmap); > UNLOCK_TABLE_RD(); > return (TRUE); > } > } > PMAP_UNLOCK(pvo->pvo_pmap); > } > > UNLOCK_TABLE_RD(); > return (FALSE); > } > > static u_int > moea64_clear_bit(mmu_t mmu, vm_page_t m, u_int64_t ptebit) > { > u_int count; > struct pvo_entry *pvo; > uintptr_t pt; > > /* > * Sync so that any pending REF/CHG bits are flushed to the PTEs (so > * we can reset the right ones). note that since the pvo entries and > * list heads are accessed via BAT0 and are never placed in the page > * table, we don't have to worry about further accesses setting the > * REF/CHG bits. > */ > powerpc_sync(); > > /* > * For each pvo entry, clear the pvo's ptebit. If this pvo has a > * valid pte clear the ptebit from the valid pte. 
> */ > count = 0; > LOCK_TABLE_RD(); > LIST_FOREACH(pvo, vm_page_to_pvoh(m), pvo_vlink) { > PMAP_LOCK(pvo->pvo_pmap); > pt = MOEA64_PVO_TO_PTE(mmu, pvo); > if (pt != -1) { > MOEA64_PTE_SYNCH(mmu, pt, &pvo->pvo_pte.lpte); > if (pvo->pvo_pte.lpte.pte_lo & ptebit) { > count++; > MOEA64_PTE_CLEAR(mmu, pt, &pvo->pvo_pte.lpte, > pvo->pvo_vpn, ptebit); > } > } > pvo->pvo_pte.lpte.pte_lo &= ~ptebit; > PMAP_UNLOCK(pvo->pvo_pmap); > } > > UNLOCK_TABLE_RD(); > return (count); > } > > boolean_t > moea64_dev_direct_mapped(mmu_t mmu, vm_paddr_t pa, vm_size_t size) > { > struct pvo_entry *pvo, key; > vm_offset_t ppa; > int error = 0; > > PMAP_LOCK(kernel_pmap); > key.pvo_vaddr = ppa = pa & ~ADDR_POFF; > for (pvo = RB_FIND(pvo_tree, &kernel_pmap->pmap_pvo, &key); > ppa < pa + size; ppa += PAGE_SIZE, > pvo = RB_NEXT(pvo_tree, &kernel_pmap->pmap_pvo, pvo)) { > if (pvo == NULL || > (pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) != ppa) { > error = EFAULT; > break; > } > } > PMAP_UNLOCK(kernel_pmap); > > return (error); > } > > /* > * Map a set of physical memory pages into the kernel virtual > * address space. Return a pointer to where it is mapped. This > * routine is intended to be used for mapping device memory, > * NOT real memory. 
> */ > void * > moea64_mapdev_attr(mmu_t mmu, vm_offset_t pa, vm_size_t size, vm_memattr_t ma) > { > vm_offset_t va, tmpva, ppa, offset; > > ppa = trunc_page(pa); > offset = pa & PAGE_MASK; > size = roundup2(offset + size, PAGE_SIZE); > > va = kva_alloc(size); > > if (!va) > panic("moea64_mapdev: Couldn't alloc kernel virtual memory"); > > for (tmpva = va; size > 0;) { > moea64_kenter_attr(mmu, tmpva, ppa, ma); > size -= PAGE_SIZE; > tmpva += PAGE_SIZE; > ppa += PAGE_SIZE; > } > > return ((void *)(va + offset)); > } > > void * > moea64_mapdev(mmu_t mmu, vm_paddr_t pa, vm_size_t size) > { > > return moea64_mapdev_attr(mmu, pa, size, VM_MEMATTR_DEFAULT); > } > > void > moea64_unmapdev(mmu_t mmu, vm_offset_t va, vm_size_t size) > { > vm_offset_t base, offset; > > base = trunc_page(va); > offset = va & PAGE_MASK; > size = roundup2(offset + size, PAGE_SIZE); > > kva_free(base, size); > } > > void > moea64_sync_icache(mmu_t mmu, pmap_t pm, vm_offset_t va, vm_size_t sz) > { > struct pvo_entry *pvo; > vm_offset_t lim; > vm_paddr_t pa; > vm_size_t len; > > PMAP_LOCK(pm); > while (sz > 0) { > lim = round_page(va); > len = MIN(lim - va, sz); > pvo = moea64_pvo_find_va(pm, va & ~ADDR_POFF); > if (pvo != NULL && !(pvo->pvo_pte.lpte.pte_lo & LPTE_I)) { > pa = (pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) | > (va & ADDR_POFF); > moea64_syncicache(mmu, pm, va, pa, len); > } > va += len; > sz -= len; > } > PMAP_UNLOCK(pm); > } > >-vm_offset_t >-moea64_dumpsys_map(mmu_t mmu, struct pmap_md *md, vm_size_t ofs, >- vm_size_t *sz) >+void >+moea64_dumpsys_map(mmu_t mmu, vm_paddr_t pa, size_t sz, void **va) > { >- if (md->md_vaddr == ~0UL) >- return (md->md_paddr + ofs); >- else >- return (md->md_vaddr + ofs); >+ >+ *va = (void *)pa; > } > >-struct pmap_md * >-moea64_scan_md(mmu_t mmu, struct pmap_md *prev) >+extern struct dump_pa dump_map[PHYS_AVAIL_SZ + 1]; >+ >+void >+moea64_scan_init(mmu_t mmu) > { >- static struct pmap_md md; > struct pvo_entry *pvo; > vm_offset_t va; >- >- if 
(dumpsys_minidump) { >- md.md_paddr = ~0UL; /* Minidumps use virtual addresses. */ >- if (prev == NULL) { >- /* 1st: kernel .data and .bss. */ >- md.md_index = 1; >- md.md_vaddr = trunc_page((uintptr_t)_etext); >- md.md_size = round_page((uintptr_t)_end) - md.md_vaddr; >- return (&md); >+ int i; >+ >+ if (!do_minidump) { >+ /* Initialize phys. segments for dumpsys(). */ >+ memset(&dump_map, 0, sizeof(dump_map)); >+ mem_regions(&pregions, &pregions_sz, ®ions, ®ions_sz); >+ for (i = 0; i < pregions_sz; i++) { >+ dump_map[i].md_start = pregions[i].mr_start; >+ dump_map[i].md_size = pregions[i].mr_size; > } >- switch (prev->md_index) { >- case 1: >- /* 2nd: msgbuf and tables (see pmap_bootstrap()). */ >- md.md_index = 2; >- md.md_vaddr = (vm_offset_t)msgbufp->msg_ptr; >- md.md_size = round_page(msgbufp->msg_size); >+ return; >+ } >+ >+ /* Virtual segments for minidumps: */ >+ memset(&dump_map, 0, sizeof(dump_map)); >+ >+ /* 1st: kernel .data and .bss. */ >+ dump_map[0].md_start = trunc_page((uintptr_t)_etext); >+ dump_map[0].md_size = round_page((uintptr_t)_end) - dump_map[0].md_start; >+ >+ /* 2nd: msgbuf and tables (see pmap_bootstrap()). */ >+ dump_map[1].md_start = (vm_paddr_t)msgbufp->msg_ptr; >+ dump_map[1].md_size = round_page(msgbufp->msg_size); >+ >+ /* 3rd: kernel VM. */ >+ va = dump_map[1].md_start + dump_map[1].md_size; >+ /* Find start of next chunk (from va). */ >+ while (va < virtual_end) { >+ /* Don't dump the buffer cache. */ >+ if (va >= kmi.buffer_sva && va < kmi.buffer_eva) { >+ va = kmi.buffer_eva; >+ continue; >+ } >+ pvo = moea64_pvo_find_va(kernel_pmap, va & ~ADDR_POFF); >+ if (pvo != NULL && (pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) > break; >- case 2: >- /* 3rd: kernel VM. */ >- va = prev->md_vaddr + prev->md_size; >- /* Find start of next chunk (from va). */ >- while (va < virtual_end) { >- /* Don't dump the buffer cache. 
*/ >- if (va >= kmi.buffer_sva && >- va < kmi.buffer_eva) { >- va = kmi.buffer_eva; >- continue; >- } >- pvo = moea64_pvo_find_va(kernel_pmap, >- va & ~ADDR_POFF); >- if (pvo != NULL && >- (pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) >- break; >- va += PAGE_SIZE; >- } >- if (va < virtual_end) { >- md.md_vaddr = va; >- va += PAGE_SIZE; >- /* Find last page in chunk. */ >- while (va < virtual_end) { >- /* Don't run into the buffer cache. */ >- if (va == kmi.buffer_sva) >- break; >- pvo = moea64_pvo_find_va(kernel_pmap, >- va & ~ADDR_POFF); >- if (pvo == NULL || >- !(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) >- break; >- va += PAGE_SIZE; >- } >- md.md_size = va - md.md_vaddr; >+ va += PAGE_SIZE; >+ } >+ if (va < virtual_end) { >+ dump_map[2].md_start = va; >+ va += PAGE_SIZE; >+ /* Find last page in chunk. */ >+ while (va < virtual_end) { >+ /* Don't run into the buffer cache. */ >+ if (va == kmi.buffer_sva) > break; >- } >- md.md_index = 3; >- /* FALLTHROUGH */ >- default: >- return (NULL); >- } >- } else { /* minidumps */ >- if (prev == NULL) { >- /* first physical chunk. */ >- md.md_paddr = pregions[0].mr_start; >- md.md_size = pregions[0].mr_size; >- md.md_vaddr = ~0UL; >- md.md_index = 1; >- } else if (md.md_index < pregions_sz) { >- md.md_paddr = pregions[md.md_index].mr_start; >- md.md_size = pregions[md.md_index].mr_size; >- md.md_vaddr = ~0UL; >- md.md_index++; >- } else { >- /* There's no next physical chunk. 
*/ >- return (NULL); >+ pvo = moea64_pvo_find_va(kernel_pmap, va & ~ADDR_POFF); >+ if (pvo == NULL || >+ !(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) >+ break; >+ va += PAGE_SIZE; > } >+ dump_map[2].md_size = va - dump_map[2].md_start; > } >- >- return (&md); > } >diff --git a/sys/powerpc/booke/pmap.c b/sys/powerpc/booke/pmap.c >index 942ad12..26096c7 100644 >--- a/sys/powerpc/booke/pmap.c >+++ b/sys/powerpc/booke/pmap.c >@@ -1,3326 +1,3311 @@ > /*- > * Copyright (C) 2007-2009 Semihalf, Rafal Jaworowski <raj@semihalf.com> > * Copyright (C) 2006 Semihalf, Marian Balakowicz <m8@semihalf.com> > * All rights reserved. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * > * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR > * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES > * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN > * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED > * TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR > * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF > * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING > * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS > * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > * > * Some hw specific parts of this pmap were derived or influenced > * by NetBSD's ibm4xx pmap module. 
More generic code is shared with > * a few other pmap modules from the FreeBSD tree. > */ > > /* > * VM layout notes: > * > * Kernel and user threads run within one common virtual address space > * defined by AS=0. > * > * Virtual address space layout: > * ----------------------------- > * 0x0000_0000 - 0xafff_ffff : user process > * 0xb000_0000 - 0xbfff_ffff : pmap_mapdev()-ed area (PCI/PCIE etc.) > * 0xc000_0000 - 0xc0ff_ffff : kernel reserved > * 0xc000_0000 - data_end : kernel code+data, env, metadata etc. > * 0xc100_0000 - 0xfeef_ffff : KVA > * 0xc100_0000 - 0xc100_3fff : reserved for page zero/copy > * 0xc100_4000 - 0xc200_3fff : reserved for ptbl bufs > * 0xc200_4000 - 0xc200_8fff : guard page + kstack0 > * 0xc200_9000 - 0xfeef_ffff : actual free KVA space > * 0xfef0_0000 - 0xffff_ffff : I/O devices region > */ > > #include <sys/cdefs.h> > __FBSDID("$FreeBSD$"); > > #include <sys/param.h> >+#include <sys/conf.h> > #include <sys/malloc.h> > #include <sys/ktr.h> > #include <sys/proc.h> > #include <sys/user.h> > #include <sys/queue.h> > #include <sys/systm.h> > #include <sys/kernel.h> >+#include <sys/kerneldump.h> > #include <sys/linker.h> > #include <sys/msgbuf.h> > #include <sys/lock.h> > #include <sys/mutex.h> > #include <sys/rwlock.h> > #include <sys/sched.h> > #include <sys/smp.h> > #include <sys/vmmeter.h> > > #include <vm/vm.h> > #include <vm/vm_page.h> > #include <vm/vm_kern.h> > #include <vm/vm_pageout.h> > #include <vm/vm_extern.h> > #include <vm/vm_object.h> > #include <vm/vm_param.h> > #include <vm/vm_map.h> > #include <vm/vm_pager.h> > #include <vm/uma.h> > > #include <machine/cpu.h> > #include <machine/pcb.h> > #include <machine/platform.h> > > #include <machine/tlb.h> > #include <machine/spr.h> > #include <machine/md_var.h> > #include <machine/mmuvar.h> > #include <machine/pmap.h> > #include <machine/pte.h> > > #include "mmu_if.h" > > #ifdef DEBUG > #define debugf(fmt, args...) printf(fmt, ##args) > #else > #define debugf(fmt, args...) 
> #endif > > #define TODO panic("%s: not implemented", __func__); > >-extern int dumpsys_minidump; >- > extern unsigned char _etext[]; > extern unsigned char _end[]; > > extern uint32_t *bootinfo; > > #ifdef SMP > extern uint32_t bp_ntlb1s; > #endif > > vm_paddr_t kernload; > vm_offset_t kernstart; > vm_size_t kernsize; > > /* Message buffer and tables. */ > static vm_offset_t data_start; > static vm_size_t data_end; > > /* Phys/avail memory regions. */ > static struct mem_region *availmem_regions; > static int availmem_regions_sz; > static struct mem_region *physmem_regions; > static int physmem_regions_sz; > > /* Reserved KVA space and mutex for mmu_booke_zero_page. */ > static vm_offset_t zero_page_va; > static struct mtx zero_page_mutex; > > static struct mtx tlbivax_mutex; > > /* > * Reserved KVA space for mmu_booke_zero_page_idle. This is used > * by idle thread only, no lock required. > */ > static vm_offset_t zero_page_idle_va; > > /* Reserved KVA space and mutex for mmu_booke_copy_page. */ > static vm_offset_t copy_page_src_va; > static vm_offset_t copy_page_dst_va; > static struct mtx copy_page_mutex; > > /**************************************************************************/ > /* PMAP */ > /**************************************************************************/ > > static int mmu_booke_enter_locked(mmu_t, pmap_t, vm_offset_t, vm_page_t, > vm_prot_t, u_int flags, int8_t psind); > > unsigned int kptbl_min; /* Index of the first kernel ptbl. */ > unsigned int kernel_ptbls; /* Number of KVA ptbls. */ > > /* > * If user pmap is processed with mmu_booke_remove and the resident count > * drops to 0, there are no more pages to remove, so we need not continue. 
> */ > #define PMAP_REMOVE_DONE(pmap) \ > ((pmap) != kernel_pmap && (pmap)->pm_stats.resident_count == 0) > > extern void tid_flush(tlbtid_t); > > /**************************************************************************/ > /* TLB and TID handling */ > /**************************************************************************/ > > /* Translation ID busy table */ > static volatile pmap_t tidbusy[MAXCPU][TID_MAX + 1]; > > /* > * TLB0 capabilities (entry, way numbers etc.). These can vary between e500 > * core revisions and should be read from h/w registers during early config. > */ > uint32_t tlb0_entries; > uint32_t tlb0_ways; > uint32_t tlb0_entries_per_way; > > #define TLB0_ENTRIES (tlb0_entries) > #define TLB0_WAYS (tlb0_ways) > #define TLB0_ENTRIES_PER_WAY (tlb0_entries_per_way) > > #define TLB1_ENTRIES 16 > > /* In-ram copy of the TLB1 */ > static tlb_entry_t tlb1[TLB1_ENTRIES]; > > /* Next free entry in the TLB1 */ > static unsigned int tlb1_idx; > static vm_offset_t tlb1_map_base = VM_MAX_KERNEL_ADDRESS; > > static tlbtid_t tid_alloc(struct pmap *); > > static void tlb_print_entry(int, uint32_t, uint32_t, uint32_t, uint32_t); > > static int tlb1_set_entry(vm_offset_t, vm_offset_t, vm_size_t, uint32_t); > static void tlb1_write_entry(unsigned int); > static int tlb1_iomapped(int, vm_paddr_t, vm_size_t, vm_offset_t *); > static vm_size_t tlb1_mapin_region(vm_offset_t, vm_paddr_t, vm_size_t); > > static vm_size_t tsize2size(unsigned int); > static unsigned int size2tsize(vm_size_t); > static unsigned int ilog2(unsigned int); > > static void set_mas4_defaults(void); > > static inline void tlb0_flush_entry(vm_offset_t); > static inline unsigned int tlb0_tableidx(vm_offset_t, unsigned int); > > /**************************************************************************/ > /* Page table management */ > /**************************************************************************/ > > static struct rwlock_padalign pvh_global_lock; > > /* Data for the pv entry 
allocation mechanism */ > static uma_zone_t pvzone; > static int pv_entry_count = 0, pv_entry_max = 0, pv_entry_high_water = 0; > > #define PV_ENTRY_ZONE_MIN 2048 /* min pv entries in uma zone */ > > #ifndef PMAP_SHPGPERPROC > #define PMAP_SHPGPERPROC 200 > #endif > > static void ptbl_init(void); > static struct ptbl_buf *ptbl_buf_alloc(void); > static void ptbl_buf_free(struct ptbl_buf *); > static void ptbl_free_pmap_ptbl(pmap_t, pte_t *); > > static pte_t *ptbl_alloc(mmu_t, pmap_t, unsigned int, boolean_t); > static void ptbl_free(mmu_t, pmap_t, unsigned int); > static void ptbl_hold(mmu_t, pmap_t, unsigned int); > static int ptbl_unhold(mmu_t, pmap_t, unsigned int); > > static vm_paddr_t pte_vatopa(mmu_t, pmap_t, vm_offset_t); > static pte_t *pte_find(mmu_t, pmap_t, vm_offset_t); > static int pte_enter(mmu_t, pmap_t, vm_page_t, vm_offset_t, uint32_t, boolean_t); > static int pte_remove(mmu_t, pmap_t, vm_offset_t, uint8_t); > > static pv_entry_t pv_alloc(void); > static void pv_free(pv_entry_t); > static void pv_insert(pmap_t, vm_offset_t, vm_page_t); > static void pv_remove(pmap_t, vm_offset_t, vm_page_t); > > /* Number of kva ptbl buffers, each covering one ptbl (PTBL_PAGES). */ > #define PTBL_BUFS (128 * 16) > > struct ptbl_buf { > TAILQ_ENTRY(ptbl_buf) link; /* list link */ > vm_offset_t kva; /* va of mapping */ > }; > > /* ptbl free list and a lock used for access synchronization. */ > static TAILQ_HEAD(, ptbl_buf) ptbl_buf_freelist; > static struct mtx ptbl_buf_freelist_lock; > > /* Base address of kva space allocated for ptbl bufs. */ > static vm_offset_t ptbl_buf_pool_vabase; > > /* Pointer to ptbl_buf structures. 
*/ > static struct ptbl_buf *ptbl_bufs; > > void pmap_bootstrap_ap(volatile uint32_t *); > > /* > * Kernel MMU interface > */ > static void mmu_booke_clear_modify(mmu_t, vm_page_t); > static void mmu_booke_copy(mmu_t, pmap_t, pmap_t, vm_offset_t, > vm_size_t, vm_offset_t); > static void mmu_booke_copy_page(mmu_t, vm_page_t, vm_page_t); > static void mmu_booke_copy_pages(mmu_t, vm_page_t *, > vm_offset_t, vm_page_t *, vm_offset_t, int); > static int mmu_booke_enter(mmu_t, pmap_t, vm_offset_t, vm_page_t, > vm_prot_t, u_int flags, int8_t psind); > static void mmu_booke_enter_object(mmu_t, pmap_t, vm_offset_t, vm_offset_t, > vm_page_t, vm_prot_t); > static void mmu_booke_enter_quick(mmu_t, pmap_t, vm_offset_t, vm_page_t, > vm_prot_t); > static vm_paddr_t mmu_booke_extract(mmu_t, pmap_t, vm_offset_t); > static vm_page_t mmu_booke_extract_and_hold(mmu_t, pmap_t, vm_offset_t, > vm_prot_t); > static void mmu_booke_init(mmu_t); > static boolean_t mmu_booke_is_modified(mmu_t, vm_page_t); > static boolean_t mmu_booke_is_prefaultable(mmu_t, pmap_t, vm_offset_t); > static boolean_t mmu_booke_is_referenced(mmu_t, vm_page_t); > static int mmu_booke_ts_referenced(mmu_t, vm_page_t); > static vm_offset_t mmu_booke_map(mmu_t, vm_offset_t *, vm_paddr_t, vm_paddr_t, > int); > static int mmu_booke_mincore(mmu_t, pmap_t, vm_offset_t, > vm_paddr_t *); > static void mmu_booke_object_init_pt(mmu_t, pmap_t, vm_offset_t, > vm_object_t, vm_pindex_t, vm_size_t); > static boolean_t mmu_booke_page_exists_quick(mmu_t, pmap_t, vm_page_t); > static void mmu_booke_page_init(mmu_t, vm_page_t); > static int mmu_booke_page_wired_mappings(mmu_t, vm_page_t); > static void mmu_booke_pinit(mmu_t, pmap_t); > static void mmu_booke_pinit0(mmu_t, pmap_t); > static void mmu_booke_protect(mmu_t, pmap_t, vm_offset_t, vm_offset_t, > vm_prot_t); > static void mmu_booke_qenter(mmu_t, vm_offset_t, vm_page_t *, int); > static void mmu_booke_qremove(mmu_t, vm_offset_t, int); > static void mmu_booke_release(mmu_t, 
pmap_t); > static void mmu_booke_remove(mmu_t, pmap_t, vm_offset_t, vm_offset_t); > static void mmu_booke_remove_all(mmu_t, vm_page_t); > static void mmu_booke_remove_write(mmu_t, vm_page_t); > static void mmu_booke_unwire(mmu_t, pmap_t, vm_offset_t, vm_offset_t); > static void mmu_booke_zero_page(mmu_t, vm_page_t); > static void mmu_booke_zero_page_area(mmu_t, vm_page_t, int, int); > static void mmu_booke_zero_page_idle(mmu_t, vm_page_t); > static void mmu_booke_activate(mmu_t, struct thread *); > static void mmu_booke_deactivate(mmu_t, struct thread *); > static void mmu_booke_bootstrap(mmu_t, vm_offset_t, vm_offset_t); > static void *mmu_booke_mapdev(mmu_t, vm_paddr_t, vm_size_t); > static void *mmu_booke_mapdev_attr(mmu_t, vm_paddr_t, vm_size_t, vm_memattr_t); > static void mmu_booke_unmapdev(mmu_t, vm_offset_t, vm_size_t); > static vm_paddr_t mmu_booke_kextract(mmu_t, vm_offset_t); > static void mmu_booke_kenter(mmu_t, vm_offset_t, vm_paddr_t); > static void mmu_booke_kenter_attr(mmu_t, vm_offset_t, vm_paddr_t, vm_memattr_t); > static void mmu_booke_kremove(mmu_t, vm_offset_t); > static boolean_t mmu_booke_dev_direct_mapped(mmu_t, vm_paddr_t, vm_size_t); > static void mmu_booke_sync_icache(mmu_t, pmap_t, vm_offset_t, > vm_size_t); >-static vm_offset_t mmu_booke_dumpsys_map(mmu_t, struct pmap_md *, >- vm_size_t, vm_size_t *); >-static void mmu_booke_dumpsys_unmap(mmu_t, struct pmap_md *, >- vm_size_t, vm_offset_t); >-static struct pmap_md *mmu_booke_scan_md(mmu_t, struct pmap_md *); >+static void mmu_booke_dumpsys_map(mmu_t, vm_paddr_t pa, size_t, >+ void **); >+static void mmu_booke_dumpsys_unmap(mmu_t, vm_paddr_t pa, size_t, >+ void *); >+static void mmu_booke_scan_init(mmu_t); > > static mmu_method_t mmu_booke_methods[] = { > /* pmap dispatcher interface */ > MMUMETHOD(mmu_clear_modify, mmu_booke_clear_modify), > MMUMETHOD(mmu_copy, mmu_booke_copy), > MMUMETHOD(mmu_copy_page, mmu_booke_copy_page), > MMUMETHOD(mmu_copy_pages, mmu_booke_copy_pages), > 
MMUMETHOD(mmu_enter, mmu_booke_enter), > MMUMETHOD(mmu_enter_object, mmu_booke_enter_object), > MMUMETHOD(mmu_enter_quick, mmu_booke_enter_quick), > MMUMETHOD(mmu_extract, mmu_booke_extract), > MMUMETHOD(mmu_extract_and_hold, mmu_booke_extract_and_hold), > MMUMETHOD(mmu_init, mmu_booke_init), > MMUMETHOD(mmu_is_modified, mmu_booke_is_modified), > MMUMETHOD(mmu_is_prefaultable, mmu_booke_is_prefaultable), > MMUMETHOD(mmu_is_referenced, mmu_booke_is_referenced), > MMUMETHOD(mmu_ts_referenced, mmu_booke_ts_referenced), > MMUMETHOD(mmu_map, mmu_booke_map), > MMUMETHOD(mmu_mincore, mmu_booke_mincore), > MMUMETHOD(mmu_object_init_pt, mmu_booke_object_init_pt), > MMUMETHOD(mmu_page_exists_quick,mmu_booke_page_exists_quick), > MMUMETHOD(mmu_page_init, mmu_booke_page_init), > MMUMETHOD(mmu_page_wired_mappings, mmu_booke_page_wired_mappings), > MMUMETHOD(mmu_pinit, mmu_booke_pinit), > MMUMETHOD(mmu_pinit0, mmu_booke_pinit0), > MMUMETHOD(mmu_protect, mmu_booke_protect), > MMUMETHOD(mmu_qenter, mmu_booke_qenter), > MMUMETHOD(mmu_qremove, mmu_booke_qremove), > MMUMETHOD(mmu_release, mmu_booke_release), > MMUMETHOD(mmu_remove, mmu_booke_remove), > MMUMETHOD(mmu_remove_all, mmu_booke_remove_all), > MMUMETHOD(mmu_remove_write, mmu_booke_remove_write), > MMUMETHOD(mmu_sync_icache, mmu_booke_sync_icache), > MMUMETHOD(mmu_unwire, mmu_booke_unwire), > MMUMETHOD(mmu_zero_page, mmu_booke_zero_page), > MMUMETHOD(mmu_zero_page_area, mmu_booke_zero_page_area), > MMUMETHOD(mmu_zero_page_idle, mmu_booke_zero_page_idle), > MMUMETHOD(mmu_activate, mmu_booke_activate), > MMUMETHOD(mmu_deactivate, mmu_booke_deactivate), > > /* Internal interfaces */ > MMUMETHOD(mmu_bootstrap, mmu_booke_bootstrap), > MMUMETHOD(mmu_dev_direct_mapped,mmu_booke_dev_direct_mapped), > MMUMETHOD(mmu_mapdev, mmu_booke_mapdev), > MMUMETHOD(mmu_mapdev_attr, mmu_booke_mapdev_attr), > MMUMETHOD(mmu_kenter, mmu_booke_kenter), > MMUMETHOD(mmu_kenter_attr, mmu_booke_kenter_attr), > MMUMETHOD(mmu_kextract, mmu_booke_kextract), 
> /* MMUMETHOD(mmu_kremove, mmu_booke_kremove), */ > MMUMETHOD(mmu_unmapdev, mmu_booke_unmapdev), > > /* dumpsys() support */ > MMUMETHOD(mmu_dumpsys_map, mmu_booke_dumpsys_map), > MMUMETHOD(mmu_dumpsys_unmap, mmu_booke_dumpsys_unmap), >- MMUMETHOD(mmu_scan_md, mmu_booke_scan_md), >+ MMUMETHOD(mmu_scan_init, mmu_booke_scan_init), > > { 0, 0 } > }; > > MMU_DEF(booke_mmu, MMU_TYPE_BOOKE, mmu_booke_methods, 0); > > static __inline uint32_t > tlb_calc_wimg(vm_offset_t pa, vm_memattr_t ma) > { > uint32_t attrib; > int i; > > if (ma != VM_MEMATTR_DEFAULT) { > switch (ma) { > case VM_MEMATTR_UNCACHEABLE: > return (PTE_I | PTE_G); > case VM_MEMATTR_WRITE_COMBINING: > case VM_MEMATTR_WRITE_BACK: > case VM_MEMATTR_PREFETCHABLE: > return (PTE_I); > case VM_MEMATTR_WRITE_THROUGH: > return (PTE_W | PTE_M); > } > } > > /* > * Assume the page is cache inhibited and access is guarded unless > * it's in our available memory array. > */ > attrib = _TLB_ENTRY_IO; > for (i = 0; i < physmem_regions_sz; i++) { > if ((pa >= physmem_regions[i].mr_start) && > (pa < (physmem_regions[i].mr_start + > physmem_regions[i].mr_size))) { > attrib = _TLB_ENTRY_MEM; > break; > } > } > > return (attrib); > } > > static inline void > tlb_miss_lock(void) > { > #ifdef SMP > struct pcpu *pc; > > if (!smp_started) > return; > > STAILQ_FOREACH(pc, &cpuhead, pc_allcpu) { > if (pc != pcpup) { > > CTR3(KTR_PMAP, "%s: tlb miss LOCK of CPU=%d, " > "tlb_lock=%p", __func__, pc->pc_cpuid, pc->pc_booke_tlb_lock); > > KASSERT((pc->pc_cpuid != PCPU_GET(cpuid)), > ("tlb_miss_lock: tried to lock self")); > > tlb_lock(pc->pc_booke_tlb_lock); > > CTR1(KTR_PMAP, "%s: locked", __func__); > } > } > #endif > } > > static inline void > tlb_miss_unlock(void) > { > #ifdef SMP > struct pcpu *pc; > > if (!smp_started) > return; > > STAILQ_FOREACH(pc, &cpuhead, pc_allcpu) { > if (pc != pcpup) { > CTR2(KTR_PMAP, "%s: tlb miss UNLOCK of CPU=%d", > __func__, pc->pc_cpuid); > > tlb_unlock(pc->pc_booke_tlb_lock); > > CTR1(KTR_PMAP, "%s: 
unlocked", __func__); > } > } > #endif > } > > /* Return number of entries in TLB0. */ > static __inline void > tlb0_get_tlbconf(void) > { > uint32_t tlb0_cfg; > > tlb0_cfg = mfspr(SPR_TLB0CFG); > tlb0_entries = tlb0_cfg & TLBCFG_NENTRY_MASK; > tlb0_ways = (tlb0_cfg & TLBCFG_ASSOC_MASK) >> TLBCFG_ASSOC_SHIFT; > tlb0_entries_per_way = tlb0_entries / tlb0_ways; > } > > /* Initialize pool of kva ptbl buffers. */ > static void > ptbl_init(void) > { > int i; > > CTR3(KTR_PMAP, "%s: s (ptbl_bufs = 0x%08x size 0x%08x)", __func__, > (uint32_t)ptbl_bufs, sizeof(struct ptbl_buf) * PTBL_BUFS); > CTR3(KTR_PMAP, "%s: s (ptbl_buf_pool_vabase = 0x%08x size = 0x%08x)", > __func__, ptbl_buf_pool_vabase, PTBL_BUFS * PTBL_PAGES * PAGE_SIZE); > > mtx_init(&ptbl_buf_freelist_lock, "ptbl bufs lock", NULL, MTX_DEF); > TAILQ_INIT(&ptbl_buf_freelist); > > for (i = 0; i < PTBL_BUFS; i++) { > ptbl_bufs[i].kva = ptbl_buf_pool_vabase + i * PTBL_PAGES * PAGE_SIZE; > TAILQ_INSERT_TAIL(&ptbl_buf_freelist, &ptbl_bufs[i], link); > } > } > > /* Get a ptbl_buf from the freelist. */ > static struct ptbl_buf * > ptbl_buf_alloc(void) > { > struct ptbl_buf *buf; > > mtx_lock(&ptbl_buf_freelist_lock); > buf = TAILQ_FIRST(&ptbl_buf_freelist); > if (buf != NULL) > TAILQ_REMOVE(&ptbl_buf_freelist, buf, link); > mtx_unlock(&ptbl_buf_freelist_lock); > > CTR2(KTR_PMAP, "%s: buf = %p", __func__, buf); > > return (buf); > } > > /* Return ptbl buff to free pool. 
*/ > static void > ptbl_buf_free(struct ptbl_buf *buf) > { > > CTR2(KTR_PMAP, "%s: buf = %p", __func__, buf); > > mtx_lock(&ptbl_buf_freelist_lock); > TAILQ_INSERT_TAIL(&ptbl_buf_freelist, buf, link); > mtx_unlock(&ptbl_buf_freelist_lock); > } > > /* > * Search the list of allocated ptbl bufs and find on list of allocated ptbls > */ > static void > ptbl_free_pmap_ptbl(pmap_t pmap, pte_t *ptbl) > { > struct ptbl_buf *pbuf; > > CTR2(KTR_PMAP, "%s: ptbl = %p", __func__, ptbl); > > PMAP_LOCK_ASSERT(pmap, MA_OWNED); > > TAILQ_FOREACH(pbuf, &pmap->pm_ptbl_list, link) > if (pbuf->kva == (vm_offset_t)ptbl) { > /* Remove from pmap ptbl buf list. */ > TAILQ_REMOVE(&pmap->pm_ptbl_list, pbuf, link); > > /* Free corresponding ptbl buf. */ > ptbl_buf_free(pbuf); > break; > } > } > > /* Allocate page table. */ > static pte_t * > ptbl_alloc(mmu_t mmu, pmap_t pmap, unsigned int pdir_idx, boolean_t nosleep) > { > vm_page_t mtbl[PTBL_PAGES]; > vm_page_t m; > struct ptbl_buf *pbuf; > unsigned int pidx; > pte_t *ptbl; > int i, j; > > CTR4(KTR_PMAP, "%s: pmap = %p su = %d pdir_idx = %d", __func__, pmap, > (pmap == kernel_pmap), pdir_idx); > > KASSERT((pdir_idx <= (VM_MAXUSER_ADDRESS / PDIR_SIZE)), > ("ptbl_alloc: invalid pdir_idx")); > KASSERT((pmap->pm_pdir[pdir_idx] == NULL), > ("pte_alloc: valid ptbl entry exists!")); > > pbuf = ptbl_buf_alloc(); > if (pbuf == NULL) > panic("pte_alloc: couldn't alloc kernel virtual memory"); > > ptbl = (pte_t *)pbuf->kva; > > CTR2(KTR_PMAP, "%s: ptbl kva = %p", __func__, ptbl); > > /* Allocate ptbl pages, this will sleep! 
*/ > for (i = 0; i < PTBL_PAGES; i++) { > pidx = (PTBL_PAGES * pdir_idx) + i; > while ((m = vm_page_alloc(NULL, pidx, > VM_ALLOC_NOOBJ | VM_ALLOC_WIRED)) == NULL) { > PMAP_UNLOCK(pmap); > rw_wunlock(&pvh_global_lock); > if (nosleep) { > ptbl_free_pmap_ptbl(pmap, ptbl); > for (j = 0; j < i; j++) > vm_page_free(mtbl[j]); > atomic_subtract_int(&vm_cnt.v_wire_count, i); > return (NULL); > } > VM_WAIT; > rw_wlock(&pvh_global_lock); > PMAP_LOCK(pmap); > } > mtbl[i] = m; > } > > /* Map allocated pages into kernel_pmap. */ > mmu_booke_qenter(mmu, (vm_offset_t)ptbl, mtbl, PTBL_PAGES); > > /* Zero whole ptbl. */ > bzero((caddr_t)ptbl, PTBL_PAGES * PAGE_SIZE); > > /* Add pbuf to the pmap ptbl bufs list. */ > TAILQ_INSERT_TAIL(&pmap->pm_ptbl_list, pbuf, link); > > return (ptbl); > } > > /* Free ptbl pages and invalidate pdir entry. */ > static void > ptbl_free(mmu_t mmu, pmap_t pmap, unsigned int pdir_idx) > { > pte_t *ptbl; > vm_paddr_t pa; > vm_offset_t va; > vm_page_t m; > int i; > > CTR4(KTR_PMAP, "%s: pmap = %p su = %d pdir_idx = %d", __func__, pmap, > (pmap == kernel_pmap), pdir_idx); > > KASSERT((pdir_idx <= (VM_MAXUSER_ADDRESS / PDIR_SIZE)), > ("ptbl_free: invalid pdir_idx")); > > ptbl = pmap->pm_pdir[pdir_idx]; > > CTR2(KTR_PMAP, "%s: ptbl = %p", __func__, ptbl); > > KASSERT((ptbl != NULL), ("ptbl_free: null ptbl")); > > /* > * Invalidate the pdir entry as soon as possible, so that other CPUs > * don't attempt to look up the page tables we are releasing. 
> */ > mtx_lock_spin(&tlbivax_mutex); > tlb_miss_lock(); > > pmap->pm_pdir[pdir_idx] = NULL; > > tlb_miss_unlock(); > mtx_unlock_spin(&tlbivax_mutex); > > for (i = 0; i < PTBL_PAGES; i++) { > va = ((vm_offset_t)ptbl + (i * PAGE_SIZE)); > pa = pte_vatopa(mmu, kernel_pmap, va); > m = PHYS_TO_VM_PAGE(pa); > vm_page_free_zero(m); > atomic_subtract_int(&vm_cnt.v_wire_count, 1); > mmu_booke_kremove(mmu, va); > } > > ptbl_free_pmap_ptbl(pmap, ptbl); > } > > /* > * Decrement ptbl pages hold count and attempt to free ptbl pages. > * Called when removing pte entry from ptbl. > * > * Return 1 if ptbl pages were freed. > */ > static int > ptbl_unhold(mmu_t mmu, pmap_t pmap, unsigned int pdir_idx) > { > pte_t *ptbl; > vm_paddr_t pa; > vm_page_t m; > int i; > > CTR4(KTR_PMAP, "%s: pmap = %p su = %d pdir_idx = %d", __func__, pmap, > (pmap == kernel_pmap), pdir_idx); > > KASSERT((pdir_idx <= (VM_MAXUSER_ADDRESS / PDIR_SIZE)), > ("ptbl_unhold: invalid pdir_idx")); > KASSERT((pmap != kernel_pmap), > ("ptbl_unhold: unholding kernel ptbl!")); > > ptbl = pmap->pm_pdir[pdir_idx]; > > //debugf("ptbl_unhold: ptbl = 0x%08x\n", (u_int32_t)ptbl); > KASSERT(((vm_offset_t)ptbl >= VM_MIN_KERNEL_ADDRESS), > ("ptbl_unhold: non kva ptbl")); > > /* decrement hold count */ > for (i = 0; i < PTBL_PAGES; i++) { > pa = pte_vatopa(mmu, kernel_pmap, > (vm_offset_t)ptbl + (i * PAGE_SIZE)); > m = PHYS_TO_VM_PAGE(pa); > m->wire_count--; > } > > /* > * Free ptbl pages if there are no pte entries in this ptbl. > * wire_count has the same value for all ptbl pages, so check the last > * page. > */ > if (m->wire_count == 0) { > ptbl_free(mmu, pmap, pdir_idx); > > //debugf("ptbl_unhold: e (freed ptbl)\n"); > return (1); > } > > return (0); > } > > /* > * Increment hold count for ptbl pages. This routine is used when a new pte > * entry is being inserted into the ptbl. 
> */ > static void > ptbl_hold(mmu_t mmu, pmap_t pmap, unsigned int pdir_idx) > { > vm_paddr_t pa; > pte_t *ptbl; > vm_page_t m; > int i; > > CTR3(KTR_PMAP, "%s: pmap = %p pdir_idx = %d", __func__, pmap, > pdir_idx); > > KASSERT((pdir_idx <= (VM_MAXUSER_ADDRESS / PDIR_SIZE)), > ("ptbl_hold: invalid pdir_idx")); > KASSERT((pmap != kernel_pmap), > ("ptbl_hold: holding kernel ptbl!")); > > ptbl = pmap->pm_pdir[pdir_idx]; > > KASSERT((ptbl != NULL), ("ptbl_hold: null ptbl")); > > for (i = 0; i < PTBL_PAGES; i++) { > pa = pte_vatopa(mmu, kernel_pmap, > (vm_offset_t)ptbl + (i * PAGE_SIZE)); > m = PHYS_TO_VM_PAGE(pa); > m->wire_count++; > } > } > > /* Allocate pv_entry structure. */ > pv_entry_t > pv_alloc(void) > { > pv_entry_t pv; > > pv_entry_count++; > if (pv_entry_count > pv_entry_high_water) > pagedaemon_wakeup(); > pv = uma_zalloc(pvzone, M_NOWAIT); > > return (pv); > } > > /* Free pv_entry structure. */ > static __inline void > pv_free(pv_entry_t pve) > { > > pv_entry_count--; > uma_zfree(pvzone, pve); > } > > > /* Allocate and initialize pv_entry structure. */ > static void > pv_insert(pmap_t pmap, vm_offset_t va, vm_page_t m) > { > pv_entry_t pve; > > //int su = (pmap == kernel_pmap); > //debugf("pv_insert: s (su = %d pmap = 0x%08x va = 0x%08x m = 0x%08x)\n", su, > // (u_int32_t)pmap, va, (u_int32_t)m); > > pve = pv_alloc(); > if (pve == NULL) > panic("pv_insert: no pv entries!"); > > pve->pv_pmap = pmap; > pve->pv_va = va; > > /* add to pv_list */ > PMAP_LOCK_ASSERT(pmap, MA_OWNED); > rw_assert(&pvh_global_lock, RA_WLOCKED); > > TAILQ_INSERT_TAIL(&m->md.pv_list, pve, pv_link); > > //debugf("pv_insert: e\n"); > } > > /* Destroy pv entry. 
*/ > static void > pv_remove(pmap_t pmap, vm_offset_t va, vm_page_t m) > { > pv_entry_t pve; > > //int su = (pmap == kernel_pmap); > //debugf("pv_remove: s (su = %d pmap = 0x%08x va = 0x%08x)\n", su, (u_int32_t)pmap, va); > > PMAP_LOCK_ASSERT(pmap, MA_OWNED); > rw_assert(&pvh_global_lock, RA_WLOCKED); > > /* find pv entry */ > TAILQ_FOREACH(pve, &m->md.pv_list, pv_link) { > if ((pmap == pve->pv_pmap) && (va == pve->pv_va)) { > /* remove from pv_list */ > TAILQ_REMOVE(&m->md.pv_list, pve, pv_link); > if (TAILQ_EMPTY(&m->md.pv_list)) > vm_page_aflag_clear(m, PGA_WRITEABLE); > > /* free pv entry struct */ > pv_free(pve); > break; > } > } > > //debugf("pv_remove: e\n"); > } > > /* > * Clean pte entry, try to free page table page if requested. > * > * Return 1 if ptbl pages were freed, otherwise return 0. > */ > static int > pte_remove(mmu_t mmu, pmap_t pmap, vm_offset_t va, uint8_t flags) > { > unsigned int pdir_idx = PDIR_IDX(va); > unsigned int ptbl_idx = PTBL_IDX(va); > vm_page_t m; > pte_t *ptbl; > pte_t *pte; > > //int su = (pmap == kernel_pmap); > //debugf("pte_remove: s (su = %d pmap = 0x%08x va = 0x%08x flags = %d)\n", > // su, (u_int32_t)pmap, va, flags); > > ptbl = pmap->pm_pdir[pdir_idx]; > KASSERT(ptbl, ("pte_remove: null ptbl")); > > pte = &ptbl[ptbl_idx]; > > if (pte == NULL || !PTE_ISVALID(pte)) > return (0); > > if (PTE_ISWIRED(pte)) > pmap->pm_stats.wired_count--; > > /* Handle managed entry. */ > if (PTE_ISMANAGED(pte)) { > /* Get vm_page_t for mapped pte. 
*/ > m = PHYS_TO_VM_PAGE(PTE_PA(pte)); > > if (PTE_ISMODIFIED(pte)) > vm_page_dirty(m); > > if (PTE_ISREFERENCED(pte)) > vm_page_aflag_set(m, PGA_REFERENCED); > > pv_remove(pmap, va, m); > } > > mtx_lock_spin(&tlbivax_mutex); > tlb_miss_lock(); > > tlb0_flush_entry(va); > pte->flags = 0; > pte->rpn = 0; > > tlb_miss_unlock(); > mtx_unlock_spin(&tlbivax_mutex); > > pmap->pm_stats.resident_count--; > > if (flags & PTBL_UNHOLD) { > //debugf("pte_remove: e (unhold)\n"); > return (ptbl_unhold(mmu, pmap, pdir_idx)); > } > > //debugf("pte_remove: e\n"); > return (0); > } > > /* > * Insert PTE for a given page and virtual address. > */ > static int > pte_enter(mmu_t mmu, pmap_t pmap, vm_page_t m, vm_offset_t va, uint32_t flags, > boolean_t nosleep) > { > unsigned int pdir_idx = PDIR_IDX(va); > unsigned int ptbl_idx = PTBL_IDX(va); > pte_t *ptbl, *pte; > > CTR4(KTR_PMAP, "%s: su = %d pmap = %p va = %p", __func__, > pmap == kernel_pmap, pmap, va); > > /* Get the page table pointer. */ > ptbl = pmap->pm_pdir[pdir_idx]; > > if (ptbl == NULL) { > /* Allocate page table pages. */ > ptbl = ptbl_alloc(mmu, pmap, pdir_idx, nosleep); > if (ptbl == NULL) { > KASSERT(nosleep, ("nosleep and NULL ptbl")); > return (ENOMEM); > } > } else { > /* > * Check if there is valid mapping for requested > * va, if there is, remove it. > */ > pte = &pmap->pm_pdir[pdir_idx][ptbl_idx]; > if (PTE_ISVALID(pte)) { > pte_remove(mmu, pmap, va, PTBL_HOLD); > } else { > /* > * pte is not used, increment hold count > * for ptbl pages. > */ > if (pmap != kernel_pmap) > ptbl_hold(mmu, pmap, pdir_idx); > } > } > > /* > * Insert pv_entry into pv_list for mapped page if part of managed > * memory. > */ > if ((m->oflags & VPO_UNMANAGED) == 0) { > flags |= PTE_MANAGED; > > /* Create and insert pv entry. 
*/ > pv_insert(pmap, va, m); > } > > pmap->pm_stats.resident_count++; > > mtx_lock_spin(&tlbivax_mutex); > tlb_miss_lock(); > > tlb0_flush_entry(va); > if (pmap->pm_pdir[pdir_idx] == NULL) { > /* > * If we just allocated a new page table, hook it in > * the pdir. > */ > pmap->pm_pdir[pdir_idx] = ptbl; > } > pte = &(pmap->pm_pdir[pdir_idx][ptbl_idx]); > pte->rpn = VM_PAGE_TO_PHYS(m) & ~PTE_PA_MASK; > pte->flags |= (PTE_VALID | flags); > > tlb_miss_unlock(); > mtx_unlock_spin(&tlbivax_mutex); > return (0); > } > > /* Return the pa for the given pmap/va. */ > static vm_paddr_t > pte_vatopa(mmu_t mmu, pmap_t pmap, vm_offset_t va) > { > vm_paddr_t pa = 0; > pte_t *pte; > > pte = pte_find(mmu, pmap, va); > if ((pte != NULL) && PTE_ISVALID(pte)) > pa = (PTE_PA(pte) | (va & PTE_PA_MASK)); > return (pa); > } > > /* Get a pointer to a PTE in a page table. */ > static pte_t * > pte_find(mmu_t mmu, pmap_t pmap, vm_offset_t va) > { > unsigned int pdir_idx = PDIR_IDX(va); > unsigned int ptbl_idx = PTBL_IDX(va); > > KASSERT((pmap != NULL), ("pte_find: invalid pmap")); > > if (pmap->pm_pdir[pdir_idx]) > return (&(pmap->pm_pdir[pdir_idx][ptbl_idx])); > > return (NULL); > } > > /**************************************************************************/ > /* PMAP related */ > /**************************************************************************/ > > /* > * This is called during booke_init, before the system is really initialized. > */ > static void > mmu_booke_bootstrap(mmu_t mmu, vm_offset_t start, vm_offset_t kernelend) > { > vm_offset_t phys_kernelend; > struct mem_region *mp, *mp1; > int cnt, i, j; > u_int s, e, sz; > u_int phys_avail_count; > vm_size_t physsz, hwphyssz, kstack0_sz; > vm_offset_t kernel_pdir, kstack0, va; > vm_paddr_t kstack0_phys; > void *dpcpu; > pte_t *pte; > > debugf("mmu_booke_bootstrap: entered\n"); > > /* Initialize invalidation mutex */ > mtx_init(&tlbivax_mutex, "tlbivax", NULL, MTX_SPIN); > > /* Read TLB0 size and associativity. 
*/ > tlb0_get_tlbconf(); > > /* > * Align kernel start and end address (kernel image). > * Note that kernel end does not necessarily relate to kernsize. > * kernsize is the size of the kernel that is actually mapped. > * Also note that "start - 1" is deliberate. With SMP, the > * entry point is exactly a page from the actual load address. > * As such, trunc_page() has no effect and we're off by a page. > * Since we always have the ELF header between the load address > * and the entry point, we can safely subtract 1 to compensate. > */ > kernstart = trunc_page(start - 1); > data_start = round_page(kernelend); > data_end = data_start; > > /* > * Addresses of preloaded modules (like file systems) use > * physical addresses. Make sure we relocate those into > * virtual addresses. > */ > preload_addr_relocate = kernstart - kernload; > > /* Allocate the dynamic per-cpu area. */ > dpcpu = (void *)data_end; > data_end += DPCPU_SIZE; > > /* Allocate space for the message buffer. */ > msgbufp = (struct msgbuf *)data_end; > data_end += msgbufsize; > debugf(" msgbufp at 0x%08x end = 0x%08x\n", (uint32_t)msgbufp, > data_end); > > data_end = round_page(data_end); > > /* Allocate space for ptbl_bufs. */ > ptbl_bufs = (struct ptbl_buf *)data_end; > data_end += sizeof(struct ptbl_buf) * PTBL_BUFS; > debugf(" ptbl_bufs at 0x%08x end = 0x%08x\n", (uint32_t)ptbl_bufs, > data_end); > > data_end = round_page(data_end); > > /* Allocate PTE tables for kernel KVA. 
*/ > kernel_pdir = data_end; > kernel_ptbls = (VM_MAX_KERNEL_ADDRESS - VM_MIN_KERNEL_ADDRESS + > PDIR_SIZE - 1) / PDIR_SIZE; > data_end += kernel_ptbls * PTBL_PAGES * PAGE_SIZE; > debugf(" kernel ptbls: %d\n", kernel_ptbls); > debugf(" kernel pdir at 0x%08x end = 0x%08x\n", kernel_pdir, data_end); > > debugf(" data_end: 0x%08x\n", data_end); > if (data_end - kernstart > kernsize) { > kernsize += tlb1_mapin_region(kernstart + kernsize, > kernload + kernsize, (data_end - kernstart) - kernsize); > } > data_end = kernstart + kernsize; > debugf(" updated data_end: 0x%08x\n", data_end); > > /* > * Clear the structures - note we can only do it safely after the > * possible additional TLB1 translations are in place (above) so that > * the whole range up to the currently calculated 'data_end' is covered. > */ > dpcpu_init(dpcpu, 0); > memset((void *)ptbl_bufs, 0, sizeof(struct ptbl_buf) * PTBL_BUFS); > memset((void *)kernel_pdir, 0, kernel_ptbls * PTBL_PAGES * PAGE_SIZE); > > /*******************************************************/ > /* Set the start and end of kva. */ > /*******************************************************/ > virtual_avail = round_page(data_end); > virtual_end = VM_MAX_KERNEL_ADDRESS; > > /* Allocate KVA space for page zero/copy operations. */ > zero_page_va = virtual_avail; > virtual_avail += PAGE_SIZE; > zero_page_idle_va = virtual_avail; > virtual_avail += PAGE_SIZE; > copy_page_src_va = virtual_avail; > virtual_avail += PAGE_SIZE; > copy_page_dst_va = virtual_avail; > virtual_avail += PAGE_SIZE; > debugf("zero_page_va = 0x%08x\n", zero_page_va); > debugf("zero_page_idle_va = 0x%08x\n", zero_page_idle_va); > debugf("copy_page_src_va = 0x%08x\n", copy_page_src_va); > debugf("copy_page_dst_va = 0x%08x\n", copy_page_dst_va); > > /* Initialize page zero/copy mutexes. */ > mtx_init(&zero_page_mutex, "mmu_booke_zero_page", NULL, MTX_DEF); > mtx_init(&copy_page_mutex, "mmu_booke_copy_page", NULL, MTX_DEF); > > /* Allocate KVA space for ptbl bufs. 
*/ > ptbl_buf_pool_vabase = virtual_avail; > virtual_avail += PTBL_BUFS * PTBL_PAGES * PAGE_SIZE; > debugf("ptbl_buf_pool_vabase = 0x%08x end = 0x%08x\n", > ptbl_buf_pool_vabase, virtual_avail); > > /* Calculate corresponding physical addresses for the kernel region. */ > phys_kernelend = kernload + kernsize; > debugf("kernel image and allocated data:\n"); > debugf(" kernload = 0x%08x\n", kernload); > debugf(" kernstart = 0x%08x\n", kernstart); > debugf(" kernsize = 0x%08x\n", kernsize); > > if (sizeof(phys_avail) / sizeof(phys_avail[0]) < availmem_regions_sz) > panic("mmu_booke_bootstrap: phys_avail too small"); > > /* > * Remove kernel physical address range from avail regions list. Page > * align all regions. Non-page aligned memory isn't very interesting > * to us. Also, sort the entries for ascending addresses. > */ > > /* Retrieve phys/avail mem regions */ > mem_regions(&physmem_regions, &physmem_regions_sz, > &availmem_regions, &availmem_regions_sz); > sz = 0; > cnt = availmem_regions_sz; > debugf("processing avail regions:\n"); > for (mp = availmem_regions; mp->mr_size; mp++) { > s = mp->mr_start; > e = mp->mr_start + mp->mr_size; > debugf(" %08x-%08x -> ", s, e); > /* Check whether this region holds all of the kernel. */ > if (s < kernload && e > phys_kernelend) { > availmem_regions[cnt].mr_start = phys_kernelend; > availmem_regions[cnt++].mr_size = e - phys_kernelend; > e = kernload; > } > /* Check whether this region starts within the kernel. */ > if (s >= kernload && s < phys_kernelend) { > if (e <= phys_kernelend) > goto empty; > s = phys_kernelend; > } > /* Now check whether this region ends within the kernel. */ > if (e > kernload && e <= phys_kernelend) { > if (s >= kernload) > goto empty; > e = kernload; > } > /* Now page align the start and size of the region. */ > s = round_page(s); > e = trunc_page(e); > if (e < s) > e = s; > sz = e - s; > debugf("%08x-%08x = %x\n", s, e, sz); > > /* Check whether some memory is left here. 
*/ > if (sz == 0) { > empty: > memmove(mp, mp + 1, > (cnt - (mp - availmem_regions)) * sizeof(*mp)); > cnt--; > mp--; > continue; > } > > /* Do an insertion sort. */ > for (mp1 = availmem_regions; mp1 < mp; mp1++) > if (s < mp1->mr_start) > break; > if (mp1 < mp) { > memmove(mp1 + 1, mp1, (char *)mp - (char *)mp1); > mp1->mr_start = s; > mp1->mr_size = sz; > } else { > mp->mr_start = s; > mp->mr_size = sz; > } > } > availmem_regions_sz = cnt; > > /*******************************************************/ > /* Steal physical memory for kernel stack from the end */ > /* of the first avail region */ > /*******************************************************/ > kstack0_sz = KSTACK_PAGES * PAGE_SIZE; > kstack0_phys = availmem_regions[0].mr_start + > availmem_regions[0].mr_size; > kstack0_phys -= kstack0_sz; > availmem_regions[0].mr_size -= kstack0_sz; > > /*******************************************************/ > /* Fill in phys_avail table, based on availmem_regions */ > /*******************************************************/ > phys_avail_count = 0; > physsz = 0; > hwphyssz = 0; > TUNABLE_ULONG_FETCH("hw.physmem", (u_long *) &hwphyssz); > > debugf("fill in phys_avail:\n"); > for (i = 0, j = 0; i < availmem_regions_sz; i++, j += 2) { > > debugf(" region: 0x%08x - 0x%08x (0x%08x)\n", > availmem_regions[i].mr_start, > availmem_regions[i].mr_start + > availmem_regions[i].mr_size, > availmem_regions[i].mr_size); > > if (hwphyssz != 0 && > (physsz + availmem_regions[i].mr_size) >= hwphyssz) { > debugf(" hw.physmem adjust\n"); > if (physsz < hwphyssz) { > phys_avail[j] = availmem_regions[i].mr_start; > phys_avail[j + 1] = > availmem_regions[i].mr_start + > hwphyssz - physsz; > physsz = hwphyssz; > phys_avail_count++; > } > break; > } > > phys_avail[j] = availmem_regions[i].mr_start; > phys_avail[j + 1] = availmem_regions[i].mr_start + > availmem_regions[i].mr_size; > phys_avail_count++; > physsz += availmem_regions[i].mr_size; > } > physmem = btoc(physsz); > > /* Calculate 
the last available physical address. */ > for (i = 0; phys_avail[i + 2] != 0; i += 2) > ; > Maxmem = powerpc_btop(phys_avail[i + 1]); > > debugf("Maxmem = 0x%08lx\n", Maxmem); > debugf("phys_avail_count = %d\n", phys_avail_count); > debugf("physsz = 0x%08x physmem = %ld (0x%08lx)\n", physsz, physmem, > physmem); > > /*******************************************************/ > /* Initialize (statically allocated) kernel pmap. */ > /*******************************************************/ > PMAP_LOCK_INIT(kernel_pmap); > kptbl_min = VM_MIN_KERNEL_ADDRESS / PDIR_SIZE; > > debugf("kernel_pmap = 0x%08x\n", (uint32_t)kernel_pmap); > debugf("kptbl_min = %d, kernel_ptbls = %d\n", kptbl_min, kernel_ptbls); > debugf("kernel pdir range: 0x%08x - 0x%08x\n", > kptbl_min * PDIR_SIZE, (kptbl_min + kernel_ptbls) * PDIR_SIZE - 1); > > /* Initialize kernel pdir */ > for (i = 0; i < kernel_ptbls; i++) > kernel_pmap->pm_pdir[kptbl_min + i] = > (pte_t *)(kernel_pdir + (i * PAGE_SIZE * PTBL_PAGES)); > > for (i = 0; i < MAXCPU; i++) { > kernel_pmap->pm_tid[i] = TID_KERNEL; > > /* Initialize each CPU's tidbusy entry 0 with kernel_pmap */ > tidbusy[i][0] = kernel_pmap; > } > > /* > * Fill in PTEs covering kernel code and data. They are not required > * for address translation, as this area is covered by static TLB1 > * entries, but for pte_vatopa() to work correctly with kernel area > * addresses. > */ > for (va = kernstart; va < data_end; va += PAGE_SIZE) { > pte = &(kernel_pmap->pm_pdir[PDIR_IDX(va)][PTBL_IDX(va)]); > pte->rpn = kernload + (va - kernstart); > pte->flags = PTE_M | PTE_SR | PTE_SW | PTE_SX | PTE_WIRED | > PTE_VALID; > } > /* Mark kernel_pmap active on all CPUs */ > CPU_FILL(&kernel_pmap->pm_active); > > /* > * Initialize the global pv list lock. 
> */ > rw_init(&pvh_global_lock, "pmap pv global"); > > /*******************************************************/ > /* Final setup */ > /*******************************************************/ > > /* Enter kstack0 into kernel map, provide guard page */ > kstack0 = virtual_avail + KSTACK_GUARD_PAGES * PAGE_SIZE; > thread0.td_kstack = kstack0; > thread0.td_kstack_pages = KSTACK_PAGES; > > debugf("kstack_sz = 0x%08x\n", kstack0_sz); > debugf("kstack0_phys at 0x%08x - 0x%08x\n", > kstack0_phys, kstack0_phys + kstack0_sz); > debugf("kstack0 at 0x%08x - 0x%08x\n", kstack0, kstack0 + kstack0_sz); > > virtual_avail += KSTACK_GUARD_PAGES * PAGE_SIZE + kstack0_sz; > for (i = 0; i < KSTACK_PAGES; i++) { > mmu_booke_kenter(mmu, kstack0, kstack0_phys); > kstack0 += PAGE_SIZE; > kstack0_phys += PAGE_SIZE; > } > > debugf("virtual_avail = %08x\n", virtual_avail); > debugf("virtual_end = %08x\n", virtual_end); > > debugf("mmu_booke_bootstrap: exit\n"); > } > > void > pmap_bootstrap_ap(volatile uint32_t *trcp __unused) > { > int i; > > /* > * Finish TLB1 configuration: the BSP already set up its TLB1 and we > * have the snapshot of its contents in the s/w tlb1[] table, so use > * these values directly to (re)program AP's TLB1 hardware. > */ > for (i = bp_ntlb1s; i < tlb1_idx; i++) { > /* Skip invalid entries */ > if (!(tlb1[i].mas1 & MAS1_VALID)) > continue; > > tlb1_write_entry(i); > } > > set_mas4_defaults(); > } > > /* > * Get the physical page address for the given pmap/virtual address. > */ > static vm_paddr_t > mmu_booke_extract(mmu_t mmu, pmap_t pmap, vm_offset_t va) > { > vm_paddr_t pa; > > PMAP_LOCK(pmap); > pa = pte_vatopa(mmu, pmap, va); > PMAP_UNLOCK(pmap); > > return (pa); > } > > /* > * Extract the physical page address associated with the given > * kernel virtual address. 
> */ > static vm_paddr_t > mmu_booke_kextract(mmu_t mmu, vm_offset_t va) > { > int i; > > /* Check TLB1 mappings */ > for (i = 0; i < tlb1_idx; i++) { > if (!(tlb1[i].mas1 & MAS1_VALID)) > continue; > if (va >= tlb1[i].virt && va < tlb1[i].virt + tlb1[i].size) > return (tlb1[i].phys + (va - tlb1[i].virt)); > } > > return (pte_vatopa(mmu, kernel_pmap, va)); > } > > /* > * Initialize the pmap module. > * Called by vm_init, to initialize any structures that the pmap > * system needs to map virtual memory. > */ > static void > mmu_booke_init(mmu_t mmu) > { > int shpgperproc = PMAP_SHPGPERPROC; > > /* > * Initialize the address space (zone) for the pv entries. Set a > * high water mark so that the system can recover from excessive > * numbers of pv entries. > */ > pvzone = uma_zcreate("PV ENTRY", sizeof(struct pv_entry), NULL, NULL, > NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_VM | UMA_ZONE_NOFREE); > > TUNABLE_INT_FETCH("vm.pmap.shpgperproc", &shpgperproc); > pv_entry_max = shpgperproc * maxproc + vm_cnt.v_page_count; > > TUNABLE_INT_FETCH("vm.pmap.pv_entries", &pv_entry_max); > pv_entry_high_water = 9 * (pv_entry_max / 10); > > uma_zone_reserve_kva(pvzone, pv_entry_max); > > /* Pre-fill pvzone with initial number of pv entries. */ > uma_prealloc(pvzone, PV_ENTRY_ZONE_MIN); > > /* Initialize ptbl allocation. */ > ptbl_init(); > } > > /* > * Map a list of wired pages into kernel virtual address space. This is > * intended for temporary mappings which do not need page modification or > * references recorded. Existing mappings in the region are overwritten. > */ > static void > mmu_booke_qenter(mmu_t mmu, vm_offset_t sva, vm_page_t *m, int count) > { > vm_offset_t va; > > va = sva; > while (count-- > 0) { > mmu_booke_kenter(mmu, va, VM_PAGE_TO_PHYS(*m)); > va += PAGE_SIZE; > m++; > } > } > > /* > * Remove page mappings from kernel virtual address space. Intended for > * temporary mappings entered by mmu_booke_qenter. 
> */ > static void > mmu_booke_qremove(mmu_t mmu, vm_offset_t sva, int count) > { > vm_offset_t va; > > va = sva; > while (count-- > 0) { > mmu_booke_kremove(mmu, va); > va += PAGE_SIZE; > } > } > > /* > * Map a wired page into kernel virtual address space. > */ > static void > mmu_booke_kenter(mmu_t mmu, vm_offset_t va, vm_paddr_t pa) > { > > mmu_booke_kenter_attr(mmu, va, pa, VM_MEMATTR_DEFAULT); > } > > static void > mmu_booke_kenter_attr(mmu_t mmu, vm_offset_t va, vm_paddr_t pa, vm_memattr_t ma) > { > unsigned int pdir_idx = PDIR_IDX(va); > unsigned int ptbl_idx = PTBL_IDX(va); > uint32_t flags; > pte_t *pte; > > KASSERT(((va >= VM_MIN_KERNEL_ADDRESS) && > (va <= VM_MAX_KERNEL_ADDRESS)), ("mmu_booke_kenter: invalid va")); > > flags = PTE_SR | PTE_SW | PTE_SX | PTE_WIRED | PTE_VALID; > flags |= tlb_calc_wimg(pa, ma); > > pte = &(kernel_pmap->pm_pdir[pdir_idx][ptbl_idx]); > > mtx_lock_spin(&tlbivax_mutex); > tlb_miss_lock(); > > if (PTE_ISVALID(pte)) { > > CTR1(KTR_PMAP, "%s: replacing entry!", __func__); > > /* Flush entry from TLB0 */ > tlb0_flush_entry(va); > } > > pte->rpn = pa & ~PTE_PA_MASK; > pte->flags = flags; > > //debugf("mmu_booke_kenter: pdir_idx = %d ptbl_idx = %d va=0x%08x " > // "pa=0x%08x rpn=0x%08x flags=0x%08x\n", > // pdir_idx, ptbl_idx, va, pa, pte->rpn, pte->flags); > > /* Flush the real memory from the instruction cache. */ > if ((flags & (PTE_I | PTE_G)) == 0) { > __syncicache((void *)va, PAGE_SIZE); > } > > tlb_miss_unlock(); > mtx_unlock_spin(&tlbivax_mutex); > } > > /* > * Remove a page from kernel page table. 
> */ > static void > mmu_booke_kremove(mmu_t mmu, vm_offset_t va) > { > unsigned int pdir_idx = PDIR_IDX(va); > unsigned int ptbl_idx = PTBL_IDX(va); > pte_t *pte; > > // CTR2(KTR_PMAP,("%s: s (va = 0x%08x)\n", __func__, va)); > > KASSERT(((va >= VM_MIN_KERNEL_ADDRESS) && > (va <= VM_MAX_KERNEL_ADDRESS)), > ("mmu_booke_kremove: invalid va")); > > pte = &(kernel_pmap->pm_pdir[pdir_idx][ptbl_idx]); > > if (!PTE_ISVALID(pte)) { > > CTR1(KTR_PMAP, "%s: invalid pte", __func__); > > return; > } > > mtx_lock_spin(&tlbivax_mutex); > tlb_miss_lock(); > > /* Invalidate entry in TLB0, update PTE. */ > tlb0_flush_entry(va); > pte->flags = 0; > pte->rpn = 0; > > tlb_miss_unlock(); > mtx_unlock_spin(&tlbivax_mutex); > } > > /* > * Initialize pmap associated with process 0. > */ > static void > mmu_booke_pinit0(mmu_t mmu, pmap_t pmap) > { > > PMAP_LOCK_INIT(pmap); > mmu_booke_pinit(mmu, pmap); > PCPU_SET(curpmap, pmap); > } > > /* > * Initialize a preallocated and zeroed pmap structure, > * such as one in a vmspace structure. > */ > static void > mmu_booke_pinit(mmu_t mmu, pmap_t pmap) > { > int i; > > CTR4(KTR_PMAP, "%s: pmap = %p, proc %d '%s'", __func__, pmap, > curthread->td_proc->p_pid, curthread->td_proc->p_comm); > > KASSERT((pmap != kernel_pmap), ("pmap_pinit: initializing kernel_pmap")); > > for (i = 0; i < MAXCPU; i++) > pmap->pm_tid[i] = TID_NONE; > CPU_ZERO(&pmap->pm_active); > bzero(&pmap->pm_stats, sizeof(pmap->pm_stats)); > bzero(&pmap->pm_pdir, sizeof(pte_t *) * PDIR_NENTRIES); > TAILQ_INIT(&pmap->pm_ptbl_list); > } > > /* > * Release any resources held by the given physical map. > * Called when a pmap initialized by mmu_booke_pinit is being released. > * Should only be called if the map contains no valid mappings. 
> */ > static void > mmu_booke_release(mmu_t mmu, pmap_t pmap) > { > > KASSERT(pmap->pm_stats.resident_count == 0, > ("pmap_release: pmap resident count %ld != 0", > pmap->pm_stats.resident_count)); > } > > /* > * Insert the given physical page at the specified virtual address in the > * target physical map with the protection requested. If specified the page > * will be wired down. > */ > static int > mmu_booke_enter(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_page_t m, > vm_prot_t prot, u_int flags, int8_t psind) > { > int error; > > rw_wlock(&pvh_global_lock); > PMAP_LOCK(pmap); > error = mmu_booke_enter_locked(mmu, pmap, va, m, prot, flags, psind); > rw_wunlock(&pvh_global_lock); > PMAP_UNLOCK(pmap); > return (error); > } > > static int > mmu_booke_enter_locked(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_page_t m, > vm_prot_t prot, u_int pmap_flags, int8_t psind __unused) > { > pte_t *pte; > vm_paddr_t pa; > uint32_t flags; > int error, su, sync; > > pa = VM_PAGE_TO_PHYS(m); > su = (pmap == kernel_pmap); > sync = 0; > > //debugf("mmu_booke_enter_locked: s (pmap=0x%08x su=%d tid=%d m=0x%08x va=0x%08x " > // "pa=0x%08x prot=0x%08x flags=%#x)\n", > // (u_int32_t)pmap, su, pmap->pm_tid, > // (u_int32_t)m, va, pa, prot, flags); > > if (su) { > KASSERT(((va >= virtual_avail) && > (va <= VM_MAX_KERNEL_ADDRESS)), > ("mmu_booke_enter_locked: kernel pmap, non kernel va")); > } else { > KASSERT((va <= VM_MAXUSER_ADDRESS), > ("mmu_booke_enter_locked: user pmap, non user va")); > } > if ((m->oflags & VPO_UNMANAGED) == 0 && !vm_page_xbusied(m)) > VM_OBJECT_ASSERT_LOCKED(m->object); > > PMAP_LOCK_ASSERT(pmap, MA_OWNED); > > /* > * If there is an existing mapping, and the physical address has not > * changed, must be protection or wiring change. > */ > if (((pte = pte_find(mmu, pmap, va)) != NULL) && > (PTE_ISVALID(pte)) && (PTE_PA(pte) == pa)) { > > /* > * Before actually updating pte->flags we calculate and > * prepare its new value in a helper var. 
> */ > flags = pte->flags; > flags &= ~(PTE_UW | PTE_UX | PTE_SW | PTE_SX | PTE_MODIFIED); > > /* Wiring change, just update stats. */ > if ((pmap_flags & PMAP_ENTER_WIRED) != 0) { > if (!PTE_ISWIRED(pte)) { > flags |= PTE_WIRED; > pmap->pm_stats.wired_count++; > } > } else { > if (PTE_ISWIRED(pte)) { > flags &= ~PTE_WIRED; > pmap->pm_stats.wired_count--; > } > } > > if (prot & VM_PROT_WRITE) { > /* Add write permissions. */ > flags |= PTE_SW; > if (!su) > flags |= PTE_UW; > > if ((flags & PTE_MANAGED) != 0) > vm_page_aflag_set(m, PGA_WRITEABLE); > } else { > /* Handle modified pages, sense modify status. */ > > /* > * The PTE_MODIFIED flag could be set by underlying > * TLB misses since we last read it (above), possibly > * other CPUs could update it so we check in the PTE > * directly rather than rely on that saved local flags > * copy. > */ > if (PTE_ISMODIFIED(pte)) > vm_page_dirty(m); > } > > if (prot & VM_PROT_EXECUTE) { > flags |= PTE_SX; > if (!su) > flags |= PTE_UX; > > /* > * Check existing flags for execute permissions: if we > * are turning execute permissions on, icache should > * be flushed. > */ > if ((pte->flags & (PTE_UX | PTE_SX)) == 0) > sync++; > } > > flags &= ~PTE_REFERENCED; > > /* > * The new flags value is all calculated -- only now actually > * update the PTE. > */ > mtx_lock_spin(&tlbivax_mutex); > tlb_miss_lock(); > > tlb0_flush_entry(va); > pte->flags = flags; > > tlb_miss_unlock(); > mtx_unlock_spin(&tlbivax_mutex); > > } else { > /* > * If there is an existing mapping, but it's for a different > * physical address, pte_enter() will delete the old mapping. > */ > //if ((pte != NULL) && PTE_ISVALID(pte)) > // debugf("mmu_booke_enter_locked: replace\n"); > //else > // debugf("mmu_booke_enter_locked: new\n"); > > /* Now set up the flags and install the new mapping. 
*/ > flags = (PTE_SR | PTE_VALID); > flags |= PTE_M; > > if (!su) > flags |= PTE_UR; > > if (prot & VM_PROT_WRITE) { > flags |= PTE_SW; > if (!su) > flags |= PTE_UW; > > if ((m->oflags & VPO_UNMANAGED) == 0) > vm_page_aflag_set(m, PGA_WRITEABLE); > } > > if (prot & VM_PROT_EXECUTE) { > flags |= PTE_SX; > if (!su) > flags |= PTE_UX; > } > > /* If it's wired, update stats. */ > if ((pmap_flags & PMAP_ENTER_WIRED) != 0) > flags |= PTE_WIRED; > > error = pte_enter(mmu, pmap, m, va, flags, > (pmap_flags & PMAP_ENTER_NOSLEEP) != 0); > if (error != 0) > return (KERN_RESOURCE_SHORTAGE); > > if ((pmap_flags & PMAP_ENTER_WIRED) != 0) > pmap->pm_stats.wired_count++; > > /* Flush the real memory from the instruction cache. */ > if (prot & VM_PROT_EXECUTE) > sync++; > } > > if (sync && (su || pmap == PCPU_GET(curpmap))) { > __syncicache((void *)va, PAGE_SIZE); > sync = 0; > } > > return (KERN_SUCCESS); > } > > /* > * Maps a sequence of resident pages belonging to the same object. > * The sequence begins with the given page m_start. This page is > * mapped at the given virtual address start. Each subsequent page is > * mapped at a virtual address that is offset from start by the same > * amount as the page is offset from m_start within the object. The > * last page in the sequence is the page with the largest offset from > * m_start that can be mapped at a virtual address less than the given > * virtual address end. Not every virtual page between start and end > * is mapped; only those for which a resident page exists with the > * corresponding offset from m_start are mapped. 
> */ > static void > mmu_booke_enter_object(mmu_t mmu, pmap_t pmap, vm_offset_t start, > vm_offset_t end, vm_page_t m_start, vm_prot_t prot) > { > vm_page_t m; > vm_pindex_t diff, psize; > > VM_OBJECT_ASSERT_LOCKED(m_start->object); > > psize = atop(end - start); > m = m_start; > rw_wlock(&pvh_global_lock); > PMAP_LOCK(pmap); > while (m != NULL && (diff = m->pindex - m_start->pindex) < psize) { > mmu_booke_enter_locked(mmu, pmap, start + ptoa(diff), m, > prot & (VM_PROT_READ | VM_PROT_EXECUTE), > PMAP_ENTER_NOSLEEP, 0); > m = TAILQ_NEXT(m, listq); > } > rw_wunlock(&pvh_global_lock); > PMAP_UNLOCK(pmap); > } > > static void > mmu_booke_enter_quick(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_page_t m, > vm_prot_t prot) > { > > rw_wlock(&pvh_global_lock); > PMAP_LOCK(pmap); > mmu_booke_enter_locked(mmu, pmap, va, m, > prot & (VM_PROT_READ | VM_PROT_EXECUTE), PMAP_ENTER_NOSLEEP, > 0); > rw_wunlock(&pvh_global_lock); > PMAP_UNLOCK(pmap); > } > > /* > * Remove the given range of addresses from the specified map. > * > * It is assumed that the start and end are properly rounded to the page size. 
> */ > static void > mmu_booke_remove(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_offset_t endva) > { > pte_t *pte; > uint8_t hold_flag; > > int su = (pmap == kernel_pmap); > > //debugf("mmu_booke_remove: s (su = %d pmap=0x%08x tid=%d va=0x%08x endva=0x%08x)\n", > // su, (u_int32_t)pmap, pmap->pm_tid, va, endva); > > if (su) { > KASSERT(((va >= virtual_avail) && > (va <= VM_MAX_KERNEL_ADDRESS)), > ("mmu_booke_remove: kernel pmap, non kernel va")); > } else { > KASSERT((va <= VM_MAXUSER_ADDRESS), > ("mmu_booke_remove: user pmap, non user va")); > } > > if (PMAP_REMOVE_DONE(pmap)) { > //debugf("mmu_booke_remove: e (empty)\n"); > return; > } > > hold_flag = PTBL_HOLD_FLAG(pmap); > //debugf("mmu_booke_remove: hold_flag = %d\n", hold_flag); > > rw_wlock(&pvh_global_lock); > PMAP_LOCK(pmap); > for (; va < endva; va += PAGE_SIZE) { > pte = pte_find(mmu, pmap, va); > if ((pte != NULL) && PTE_ISVALID(pte)) > pte_remove(mmu, pmap, va, hold_flag); > } > PMAP_UNLOCK(pmap); > rw_wunlock(&pvh_global_lock); > > //debugf("mmu_booke_remove: e\n"); > } > > /* > * Remove physical page from all pmaps in which it resides. > */ > static void > mmu_booke_remove_all(mmu_t mmu, vm_page_t m) > { > pv_entry_t pv, pvn; > uint8_t hold_flag; > > rw_wlock(&pvh_global_lock); > for (pv = TAILQ_FIRST(&m->md.pv_list); pv != NULL; pv = pvn) { > pvn = TAILQ_NEXT(pv, pv_link); > > PMAP_LOCK(pv->pv_pmap); > hold_flag = PTBL_HOLD_FLAG(pv->pv_pmap); > pte_remove(mmu, pv->pv_pmap, pv->pv_va, hold_flag); > PMAP_UNLOCK(pv->pv_pmap); > } > vm_page_aflag_clear(m, PGA_WRITEABLE); > rw_wunlock(&pvh_global_lock); > } > > /* > * Map a range of physical addresses into kernel virtual address space. 
> */ > static vm_offset_t > mmu_booke_map(mmu_t mmu, vm_offset_t *virt, vm_paddr_t pa_start, > vm_paddr_t pa_end, int prot) > { > vm_offset_t sva = *virt; > vm_offset_t va = sva; > > //debugf("mmu_booke_map: s (sva = 0x%08x pa_start = 0x%08x pa_end = 0x%08x)\n", > // sva, pa_start, pa_end); > > while (pa_start < pa_end) { > mmu_booke_kenter(mmu, va, pa_start); > va += PAGE_SIZE; > pa_start += PAGE_SIZE; > } > *virt = va; > > //debugf("mmu_booke_map: e (va = 0x%08x)\n", va); > return (sva); > } > > /* > * The pmap must be activated before its address space can be accessed in any > * way. > */ > static void > mmu_booke_activate(mmu_t mmu, struct thread *td) > { > pmap_t pmap; > u_int cpuid; > > pmap = &td->td_proc->p_vmspace->vm_pmap; > > CTR5(KTR_PMAP, "%s: s (td = %p, proc = '%s', id = %d, pmap = 0x%08x)", > __func__, td, td->td_proc->p_comm, td->td_proc->p_pid, pmap); > > KASSERT((pmap != kernel_pmap), ("mmu_booke_activate: kernel_pmap!")); > > sched_pin(); > > cpuid = PCPU_GET(cpuid); > CPU_SET_ATOMIC(cpuid, &pmap->pm_active); > PCPU_SET(curpmap, pmap); > > if (pmap->pm_tid[cpuid] == TID_NONE) > tid_alloc(pmap); > > /* Load PID0 register with pmap tid value. */ > mtspr(SPR_PID0, pmap->pm_tid[cpuid]); > __asm __volatile("isync"); > > sched_unpin(); > > CTR3(KTR_PMAP, "%s: e (tid = %d for '%s')", __func__, > pmap->pm_tid[PCPU_GET(cpuid)], td->td_proc->p_comm); > } > > /* > * Deactivate the specified process's address space. > */ > static void > mmu_booke_deactivate(mmu_t mmu, struct thread *td) > { > pmap_t pmap; > > pmap = &td->td_proc->p_vmspace->vm_pmap; > > CTR5(KTR_PMAP, "%s: td=%p, proc = '%s', id = %d, pmap = 0x%08x", > __func__, td, td->td_proc->p_comm, td->td_proc->p_pid, pmap); > > CPU_CLR_ATOMIC(PCPU_GET(cpuid), &pmap->pm_active); > PCPU_SET(curpmap, NULL); > } > > /* > * Copy the range specified by src_addr/len > * from the source map to the range dst_addr/len > * in the destination map. > * > * This routine is only advisory and need not do anything. 
> */ > static void > mmu_booke_copy(mmu_t mmu, pmap_t dst_pmap, pmap_t src_pmap, > vm_offset_t dst_addr, vm_size_t len, vm_offset_t src_addr) > { > > } > > /* > * Set the physical protection on the specified range of this map as requested. > */ > static void > mmu_booke_protect(mmu_t mmu, pmap_t pmap, vm_offset_t sva, vm_offset_t eva, > vm_prot_t prot) > { > vm_offset_t va; > vm_page_t m; > pte_t *pte; > > if ((prot & VM_PROT_READ) == VM_PROT_NONE) { > mmu_booke_remove(mmu, pmap, sva, eva); > return; > } > > if (prot & VM_PROT_WRITE) > return; > > PMAP_LOCK(pmap); > for (va = sva; va < eva; va += PAGE_SIZE) { > if ((pte = pte_find(mmu, pmap, va)) != NULL) { > if (PTE_ISVALID(pte)) { > m = PHYS_TO_VM_PAGE(PTE_PA(pte)); > > mtx_lock_spin(&tlbivax_mutex); > tlb_miss_lock(); > > /* Handle modified pages. */ > if (PTE_ISMODIFIED(pte) && PTE_ISMANAGED(pte)) > vm_page_dirty(m); > > tlb0_flush_entry(va); > pte->flags &= ~(PTE_UW | PTE_SW | PTE_MODIFIED); > > tlb_miss_unlock(); > mtx_unlock_spin(&tlbivax_mutex); > } > } > } > PMAP_UNLOCK(pmap); > } > > /* > * Clear the write and modified bits in each of the given page's mappings. > */ > static void > mmu_booke_remove_write(mmu_t mmu, vm_page_t m) > { > pv_entry_t pv; > pte_t *pte; > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("mmu_booke_remove_write: page %p is not managed", m)); > > /* > * If the page is not exclusive busied, then PGA_WRITEABLE cannot be > * set by another thread while the object is locked. Thus, > * if PGA_WRITEABLE is clear, no page table entries need updating. > */ > VM_OBJECT_ASSERT_WLOCKED(m->object); > if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0) > return; > rw_wlock(&pvh_global_lock); > TAILQ_FOREACH(pv, &m->md.pv_list, pv_link) { > PMAP_LOCK(pv->pv_pmap); > if ((pte = pte_find(mmu, pv->pv_pmap, pv->pv_va)) != NULL) { > if (PTE_ISVALID(pte)) { > m = PHYS_TO_VM_PAGE(PTE_PA(pte)); > > mtx_lock_spin(&tlbivax_mutex); > tlb_miss_lock(); > > /* Handle modified pages. 
*/ > if (PTE_ISMODIFIED(pte)) > vm_page_dirty(m); > > /* Flush mapping from TLB0. */ > pte->flags &= ~(PTE_UW | PTE_SW | PTE_MODIFIED); > > tlb_miss_unlock(); > mtx_unlock_spin(&tlbivax_mutex); > } > } > PMAP_UNLOCK(pv->pv_pmap); > } > vm_page_aflag_clear(m, PGA_WRITEABLE); > rw_wunlock(&pvh_global_lock); > } > > static void > mmu_booke_sync_icache(mmu_t mmu, pmap_t pm, vm_offset_t va, vm_size_t sz) > { > pte_t *pte; > pmap_t pmap; > vm_page_t m; > vm_offset_t addr; > vm_paddr_t pa = 0; > int active, valid; > > va = trunc_page(va); > sz = round_page(sz); > > rw_wlock(&pvh_global_lock); > pmap = PCPU_GET(curpmap); > active = (pm == kernel_pmap || pm == pmap) ? 1 : 0; > while (sz > 0) { > PMAP_LOCK(pm); > pte = pte_find(mmu, pm, va); > valid = (pte != NULL && PTE_ISVALID(pte)) ? 1 : 0; > if (valid) > pa = PTE_PA(pte); > PMAP_UNLOCK(pm); > if (valid) { > if (!active) { > /* Create a mapping in the active pmap. */ > addr = 0; > m = PHYS_TO_VM_PAGE(pa); > PMAP_LOCK(pmap); > pte_enter(mmu, pmap, m, addr, > PTE_SR | PTE_VALID | PTE_UR, FALSE); > __syncicache((void *)addr, PAGE_SIZE); > pte_remove(mmu, pmap, addr, PTBL_UNHOLD); > PMAP_UNLOCK(pmap); > } else > __syncicache((void *)va, PAGE_SIZE); > } > va += PAGE_SIZE; > sz -= PAGE_SIZE; > } > rw_wunlock(&pvh_global_lock); > } > > /* > * Atomically extract and hold the physical page with the given > * pmap and virtual address pair if that mapping permits the given > * protection. 
> */ > static vm_page_t > mmu_booke_extract_and_hold(mmu_t mmu, pmap_t pmap, vm_offset_t va, > vm_prot_t prot) > { > pte_t *pte; > vm_page_t m; > uint32_t pte_wbit; > vm_paddr_t pa; > > m = NULL; > pa = 0; > PMAP_LOCK(pmap); > retry: > pte = pte_find(mmu, pmap, va); > if ((pte != NULL) && PTE_ISVALID(pte)) { > if (pmap == kernel_pmap) > pte_wbit = PTE_SW; > else > pte_wbit = PTE_UW; > > if ((pte->flags & pte_wbit) || ((prot & VM_PROT_WRITE) == 0)) { > if (vm_page_pa_tryrelock(pmap, PTE_PA(pte), &pa)) > goto retry; > m = PHYS_TO_VM_PAGE(PTE_PA(pte)); > vm_page_hold(m); > } > } > > PA_UNLOCK_COND(pa); > PMAP_UNLOCK(pmap); > return (m); > } > > /* > * Initialize a vm_page's machine-dependent fields. > */ > static void > mmu_booke_page_init(mmu_t mmu, vm_page_t m) > { > > TAILQ_INIT(&m->md.pv_list); > } > > /* > * mmu_booke_zero_page_area zeros the specified hardware page by > * mapping it into virtual memory and using bzero to clear > * its contents. > * > * off and size must reside within a single page. > */ > static void > mmu_booke_zero_page_area(mmu_t mmu, vm_page_t m, int off, int size) > { > vm_offset_t va; > > /* XXX KASSERT off and size are within a single page? */ > > mtx_lock(&zero_page_mutex); > va = zero_page_va; > > mmu_booke_kenter(mmu, va, VM_PAGE_TO_PHYS(m)); > bzero((caddr_t)va + off, size); > mmu_booke_kremove(mmu, va); > > mtx_unlock(&zero_page_mutex); > } > > /* > * mmu_booke_zero_page zeros the specified hardware page. > */ > static void > mmu_booke_zero_page(mmu_t mmu, vm_page_t m) > { > > mmu_booke_zero_page_area(mmu, m, 0, PAGE_SIZE); > } > > /* > * mmu_booke_copy_page copies the specified (machine independent) page by > * mapping the page into virtual memory and using memcopy to copy the page, > * one machine dependent page at a time. 
> */ > static void > mmu_booke_copy_page(mmu_t mmu, vm_page_t sm, vm_page_t dm) > { > vm_offset_t sva, dva; > > sva = copy_page_src_va; > dva = copy_page_dst_va; > > mtx_lock(©_page_mutex); > mmu_booke_kenter(mmu, sva, VM_PAGE_TO_PHYS(sm)); > mmu_booke_kenter(mmu, dva, VM_PAGE_TO_PHYS(dm)); > memcpy((caddr_t)dva, (caddr_t)sva, PAGE_SIZE); > mmu_booke_kremove(mmu, dva); > mmu_booke_kremove(mmu, sva); > mtx_unlock(©_page_mutex); > } > > static inline void > mmu_booke_copy_pages(mmu_t mmu, vm_page_t *ma, vm_offset_t a_offset, > vm_page_t *mb, vm_offset_t b_offset, int xfersize) > { > void *a_cp, *b_cp; > vm_offset_t a_pg_offset, b_pg_offset; > int cnt; > > mtx_lock(©_page_mutex); > while (xfersize > 0) { > a_pg_offset = a_offset & PAGE_MASK; > cnt = min(xfersize, PAGE_SIZE - a_pg_offset); > mmu_booke_kenter(mmu, copy_page_src_va, > VM_PAGE_TO_PHYS(ma[a_offset >> PAGE_SHIFT])); > a_cp = (char *)copy_page_src_va + a_pg_offset; > b_pg_offset = b_offset & PAGE_MASK; > cnt = min(cnt, PAGE_SIZE - b_pg_offset); > mmu_booke_kenter(mmu, copy_page_dst_va, > VM_PAGE_TO_PHYS(mb[b_offset >> PAGE_SHIFT])); > b_cp = (char *)copy_page_dst_va + b_pg_offset; > bcopy(a_cp, b_cp, cnt); > mmu_booke_kremove(mmu, copy_page_dst_va); > mmu_booke_kremove(mmu, copy_page_src_va); > a_offset += cnt; > b_offset += cnt; > xfersize -= cnt; > } > mtx_unlock(©_page_mutex); > } > > /* > * mmu_booke_zero_page_idle zeros the specified hardware page by mapping it > * into virtual memory and using bzero to clear its contents. This is intended > * to be called from the vm_pagezero process only and outside of Giant. No > * lock is required. > */ > static void > mmu_booke_zero_page_idle(mmu_t mmu, vm_page_t m) > { > vm_offset_t va; > > va = zero_page_idle_va; > mmu_booke_kenter(mmu, va, VM_PAGE_TO_PHYS(m)); > bzero((caddr_t)va, PAGE_SIZE); > mmu_booke_kremove(mmu, va); > } > > /* > * Return whether or not the specified physical page was modified > * in any of physical maps. 
> */ > static boolean_t > mmu_booke_is_modified(mmu_t mmu, vm_page_t m) > { > pte_t *pte; > pv_entry_t pv; > boolean_t rv; > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("mmu_booke_is_modified: page %p is not managed", m)); > rv = FALSE; > > /* > * If the page is not exclusive busied, then PGA_WRITEABLE cannot be > * concurrently set while the object is locked. Thus, if PGA_WRITEABLE > * is clear, no PTEs can be modified. > */ > VM_OBJECT_ASSERT_WLOCKED(m->object); > if (!vm_page_xbusied(m) && (m->aflags & PGA_WRITEABLE) == 0) > return (rv); > rw_wlock(&pvh_global_lock); > TAILQ_FOREACH(pv, &m->md.pv_list, pv_link) { > PMAP_LOCK(pv->pv_pmap); > if ((pte = pte_find(mmu, pv->pv_pmap, pv->pv_va)) != NULL && > PTE_ISVALID(pte)) { > if (PTE_ISMODIFIED(pte)) > rv = TRUE; > } > PMAP_UNLOCK(pv->pv_pmap); > if (rv) > break; > } > rw_wunlock(&pvh_global_lock); > return (rv); > } > > /* > * Return whether or not the specified virtual address is eligible > * for prefault. > */ > static boolean_t > mmu_booke_is_prefaultable(mmu_t mmu, pmap_t pmap, vm_offset_t addr) > { > > return (FALSE); > } > > /* > * Return whether or not the specified physical page was referenced > * in any physical maps. > */ > static boolean_t > mmu_booke_is_referenced(mmu_t mmu, vm_page_t m) > { > pte_t *pte; > pv_entry_t pv; > boolean_t rv; > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("mmu_booke_is_referenced: page %p is not managed", m)); > rv = FALSE; > rw_wlock(&pvh_global_lock); > TAILQ_FOREACH(pv, &m->md.pv_list, pv_link) { > PMAP_LOCK(pv->pv_pmap); > if ((pte = pte_find(mmu, pv->pv_pmap, pv->pv_va)) != NULL && > PTE_ISVALID(pte)) { > if (PTE_ISREFERENCED(pte)) > rv = TRUE; > } > PMAP_UNLOCK(pv->pv_pmap); > if (rv) > break; > } > rw_wunlock(&pvh_global_lock); > return (rv); > } > > /* > * Clear the modify bits on the specified physical page. 
> */ > static void > mmu_booke_clear_modify(mmu_t mmu, vm_page_t m) > { > pte_t *pte; > pv_entry_t pv; > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("mmu_booke_clear_modify: page %p is not managed", m)); > VM_OBJECT_ASSERT_WLOCKED(m->object); > KASSERT(!vm_page_xbusied(m), > ("mmu_booke_clear_modify: page %p is exclusive busied", m)); > > /* > * If the page is not PGA_WRITEABLE, then no PTEs can be modified. > * If the object containing the page is locked and the page is not > * exclusive busied, then PGA_WRITEABLE cannot be concurrently set. > */ > if ((m->aflags & PGA_WRITEABLE) == 0) > return; > rw_wlock(&pvh_global_lock); > TAILQ_FOREACH(pv, &m->md.pv_list, pv_link) { > PMAP_LOCK(pv->pv_pmap); > if ((pte = pte_find(mmu, pv->pv_pmap, pv->pv_va)) != NULL && > PTE_ISVALID(pte)) { > mtx_lock_spin(&tlbivax_mutex); > tlb_miss_lock(); > > if (pte->flags & (PTE_SW | PTE_UW | PTE_MODIFIED)) { > tlb0_flush_entry(pv->pv_va); > pte->flags &= ~(PTE_SW | PTE_UW | PTE_MODIFIED | > PTE_REFERENCED); > } > > tlb_miss_unlock(); > mtx_unlock_spin(&tlbivax_mutex); > } > PMAP_UNLOCK(pv->pv_pmap); > } > rw_wunlock(&pvh_global_lock); > } > > /* > * Return a count of reference bits for a page, clearing those bits. > * It is not necessary for every reference bit to be cleared, but it > * is necessary that 0 only be returned when there are truly no > * reference bits set. > * > * XXX: The exact number of bits to check and clear is a matter that > * should be tested and standardized at some point in the future for > * optimal aging of shared pages. 
> */ > static int > mmu_booke_ts_referenced(mmu_t mmu, vm_page_t m) > { > pte_t *pte; > pv_entry_t pv; > int count; > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("mmu_booke_ts_referenced: page %p is not managed", m)); > count = 0; > rw_wlock(&pvh_global_lock); > TAILQ_FOREACH(pv, &m->md.pv_list, pv_link) { > PMAP_LOCK(pv->pv_pmap); > if ((pte = pte_find(mmu, pv->pv_pmap, pv->pv_va)) != NULL && > PTE_ISVALID(pte)) { > if (PTE_ISREFERENCED(pte)) { > mtx_lock_spin(&tlbivax_mutex); > tlb_miss_lock(); > > tlb0_flush_entry(pv->pv_va); > pte->flags &= ~PTE_REFERENCED; > > tlb_miss_unlock(); > mtx_unlock_spin(&tlbivax_mutex); > > if (++count > 4) { > PMAP_UNLOCK(pv->pv_pmap); > break; > } > } > } > PMAP_UNLOCK(pv->pv_pmap); > } > rw_wunlock(&pvh_global_lock); > return (count); > } > > /* > * Clear the wired attribute from the mappings for the specified range of > * addresses in the given pmap. Every valid mapping within that range must > * have the wired attribute set. In contrast, invalid mappings cannot have > * the wired attribute set, so they are ignored. > * > * The wired attribute of the page table entry is not a hardware feature, so > * there is no need to invalidate any TLB entries. > */ > static void > mmu_booke_unwire(mmu_t mmu, pmap_t pmap, vm_offset_t sva, vm_offset_t eva) > { > vm_offset_t va; > pte_t *pte; > > PMAP_LOCK(pmap); > for (va = sva; va < eva; va += PAGE_SIZE) { > if ((pte = pte_find(mmu, pmap, va)) != NULL && > PTE_ISVALID(pte)) { > if (!PTE_ISWIRED(pte)) > panic("mmu_booke_unwire: pte %p isn't wired", > pte); > pte->flags &= ~PTE_WIRED; > pmap->pm_stats.wired_count--; > } > } > PMAP_UNLOCK(pmap); > > } > > /* > * Return true if the pmap's pv is one of the first 16 pvs linked to from this > * page. This count may be changed upwards or downwards in the future; it is > * only necessary that true be returned for a small subset of pmaps for proper > * page aging. 
> */ > static boolean_t > mmu_booke_page_exists_quick(mmu_t mmu, pmap_t pmap, vm_page_t m) > { > pv_entry_t pv; > int loops; > boolean_t rv; > > KASSERT((m->oflags & VPO_UNMANAGED) == 0, > ("mmu_booke_page_exists_quick: page %p is not managed", m)); > loops = 0; > rv = FALSE; > rw_wlock(&pvh_global_lock); > TAILQ_FOREACH(pv, &m->md.pv_list, pv_link) { > if (pv->pv_pmap == pmap) { > rv = TRUE; > break; > } > if (++loops >= 16) > break; > } > rw_wunlock(&pvh_global_lock); > return (rv); > } > > /* > * Return the number of managed mappings to the given physical page that are > * wired. > */ > static int > mmu_booke_page_wired_mappings(mmu_t mmu, vm_page_t m) > { > pv_entry_t pv; > pte_t *pte; > int count = 0; > > if ((m->oflags & VPO_UNMANAGED) != 0) > return (count); > rw_wlock(&pvh_global_lock); > TAILQ_FOREACH(pv, &m->md.pv_list, pv_link) { > PMAP_LOCK(pv->pv_pmap); > if ((pte = pte_find(mmu, pv->pv_pmap, pv->pv_va)) != NULL) > if (PTE_ISVALID(pte) && PTE_ISWIRED(pte)) > count++; > PMAP_UNLOCK(pv->pv_pmap); > } > rw_wunlock(&pvh_global_lock); > return (count); > } > > static int > mmu_booke_dev_direct_mapped(mmu_t mmu, vm_paddr_t pa, vm_size_t size) > { > int i; > vm_offset_t va; > > /* > * This currently does not work for entries that > * overlap TLB1 entries. > */ > for (i = 0; i < tlb1_idx; i ++) { > if (tlb1_iomapped(i, pa, size, &va) == 0) > return (0); > } > > return (EFAULT); > } > >-vm_offset_t >-mmu_booke_dumpsys_map(mmu_t mmu, struct pmap_md *md, vm_size_t ofs, >- vm_size_t *sz) >+void >+mmu_booke_dumpsys_map(mmu_t mmu, vm_paddr_t pa, size_t sz, void **va) > { >- vm_paddr_t pa, ppa; >- vm_offset_t va; >+ vm_paddr_t ppa; >+ vm_offset_t ofs; > vm_size_t gran; > >- /* Raw physical memory dumps don't have a virtual address. */ >- if (md->md_vaddr == ~0UL) { >- /* We always map a 256MB page at 256M. 
*/ >- gran = 256 * 1024 * 1024; >- pa = md->md_paddr + ofs; >- ppa = pa & ~(gran - 1); >- ofs = pa - ppa; >- va = gran; >- tlb1_set_entry(va, ppa, gran, _TLB_ENTRY_IO); >- if (*sz > (gran - ofs)) >- *sz = gran - ofs; >- return (va + ofs); >- } >- > /* Minidumps are based on virtual memory addresses. */ >- va = md->md_vaddr + ofs; >- if (va >= kernstart + kernsize) { >- gran = PAGE_SIZE - (va & PAGE_MASK); >- if (*sz > gran) >- *sz = gran; >+ if (do_minidump) { >+ *va = (void *)pa; >+ return; > } >- return (va); >+ >+ /* Raw physical memory dumps don't have a virtual address. */ >+ /* We always map a 256MB page at 256M. */ >+ gran = 256 * 1024 * 1024; >+ ppa = pa & ~(gran - 1); >+ ofs = pa - ppa; >+ *va = (void *)gran; >+ tlb1_set_entry((vm_offset_t)*va, ppa, gran, _TLB_ENTRY_IO); >+ >+ if (sz > (gran - ofs)) >+ tlb1_set_entry((vm_offset_t)*va + gran, ppa + gran, gran, >+ _TLB_ENTRY_IO); > } > > void >-mmu_booke_dumpsys_unmap(mmu_t mmu, struct pmap_md *md, vm_size_t ofs, >- vm_offset_t va) >+mmu_booke_dumpsys_unmap(mmu_t mmu, vm_paddr_t pa, size_t sz, void *va) > { >+ vm_paddr_t ppa; >+ vm_offset_t ofs; >+ vm_size_t gran; >+ >+ /* Minidumps are based on virtual memory addresses. */ >+ /* Nothing to do... */ >+ if (do_minidump) >+ return; > > /* Raw physical memory dumps don't have a virtual address. */ >- if (md->md_vaddr == ~0UL) { >+ tlb1_idx--; >+ tlb1[tlb1_idx].mas1 = 0; >+ tlb1[tlb1_idx].mas2 = 0; >+ tlb1[tlb1_idx].mas3 = 0; >+ tlb1_write_entry(tlb1_idx); >+ >+ gran = 256 * 1024 * 1024; >+ ppa = pa & ~(gran - 1); >+ ofs = pa - ppa; >+ if (sz > (gran - ofs)) { > tlb1_idx--; > tlb1[tlb1_idx].mas1 = 0; > tlb1[tlb1_idx].mas2 = 0; > tlb1[tlb1_idx].mas3 = 0; > tlb1_write_entry(tlb1_idx); >- return; > } >- >- /* Minidumps are based on virtual memory addresses. */ >- /* Nothing to do... 
*/ > } > >-struct pmap_md * >-mmu_booke_scan_md(mmu_t mmu, struct pmap_md *prev) >+extern struct dump_pa dump_map[PHYS_AVAIL_SZ + 1]; >+ >+void >+mmu_booke_scan_init(mmu_t mmu) > { >- static struct pmap_md md; >- pte_t *pte; > vm_offset_t va; >- >- if (dumpsys_minidump) { >- md.md_paddr = ~0UL; /* Minidumps use virtual addresses. */ >- if (prev == NULL) { >- /* 1st: kernel .data and .bss. */ >- md.md_index = 1; >- md.md_vaddr = trunc_page((uintptr_t)_etext); >- md.md_size = round_page((uintptr_t)_end) - md.md_vaddr; >- return (&md); >+ pte_t *pte; >+ int i; >+ >+ if (!do_minidump) { >+ /* Initialize phys. segments for dumpsys(). */ >+ memset(&dump_map, 0, sizeof(dump_map)); >+ mem_regions(&physmem_regions, &physmem_regions_sz, &availmem_regions, >+ &availmem_regions_sz); >+ for (i = 0; i < physmem_regions_sz; i++) { >+ dump_map[i].md_start = physmem_regions[i].mr_start; >+ dump_map[i].md_size = physmem_regions[i].mr_size; >+ } >+ return; >+ } >+ >+ /* Virtual segments for minidumps: */ >+ memset(&dump_map, 0, sizeof(dump_map)); >+ >+ /* 1st: kernel .data and .bss. */ >+ dump_map[0].md_start = trunc_page((uintptr_t)_etext); >+ dump_map[0].md_size = round_page((uintptr_t)_end) - dump_map[0].md_start; >+ >+ /* 2nd: msgbuf and tables (see pmap_bootstrap()). */ >+ dump_map[1].md_start = data_start; >+ dump_map[1].md_size = data_end - data_start; >+ >+ /* 3rd: kernel VM. */ >+ va = dump_map[1].md_start + dump_map[1].md_size; >+ /* Find start of next chunk (from va). */ >+ while (va < virtual_end) { >+ /* Don't dump the buffer cache. */ >+ if (va >= kmi.buffer_sva && va < kmi.buffer_eva) { >+ va = kmi.buffer_eva; >+ continue; > } >- switch (prev->md_index) { >- case 1: >- /* 2nd: msgbuf and tables (see pmap_bootstrap()). */ >- md.md_index = 2; >- md.md_vaddr = data_start; >- md.md_size = data_end - data_start; >+ pte = pte_find(mmu, kernel_pmap, va); >+ if (pte != NULL && PTE_ISVALID(pte)) > break; >- case 2: >- /* 3rd: kernel VM. 
*/ >- va = prev->md_vaddr + prev->md_size; >- /* Find start of next chunk (from va). */ >- while (va < virtual_end) { >- /* Don't dump the buffer cache. */ >- if (va >= kmi.buffer_sva && >- va < kmi.buffer_eva) { >- va = kmi.buffer_eva; >- continue; >- } >- pte = pte_find(mmu, kernel_pmap, va); >- if (pte != NULL && PTE_ISVALID(pte)) >- break; >- va += PAGE_SIZE; >- } >- if (va < virtual_end) { >- md.md_vaddr = va; >- va += PAGE_SIZE; >- /* Find last page in chunk. */ >- while (va < virtual_end) { >- /* Don't run into the buffer cache. */ >- if (va == kmi.buffer_sva) >- break; >- pte = pte_find(mmu, kernel_pmap, va); >- if (pte == NULL || !PTE_ISVALID(pte)) >- break; >- va += PAGE_SIZE; >- } >- md.md_size = va - md.md_vaddr; >+ va += PAGE_SIZE; >+ } >+ if (va < virtual_end) { >+ dump_map[2].md_start = va; >+ va += PAGE_SIZE; >+ /* Find last page in chunk. */ >+ while (va < virtual_end) { >+ /* Don't run into the buffer cache. */ >+ if (va == kmi.buffer_sva) > break; >- } >- md.md_index = 3; >- /* FALLTHROUGH */ >- default: >- return (NULL); >- } >- } else { /* minidumps */ >- mem_regions(&physmem_regions, &physmem_regions_sz, >- &availmem_regions, &availmem_regions_sz); >- >- if (prev == NULL) { >- /* first physical chunk. */ >- md.md_paddr = physmem_regions[0].mr_start; >- md.md_size = physmem_regions[0].mr_size; >- md.md_vaddr = ~0UL; >- md.md_index = 1; >- } else if (md.md_index < physmem_regions_sz) { >- md.md_paddr = physmem_regions[md.md_index].mr_start; >- md.md_size = physmem_regions[md.md_index].mr_size; >- md.md_vaddr = ~0UL; >- md.md_index++; >- } else { >- /* There's no next physical chunk. */ >- return (NULL); >+ pte = pte_find(mmu, kernel_pmap, va); >+ if (pte == NULL || !PTE_ISVALID(pte)) >+ break; >+ va += PAGE_SIZE; > } >+ dump_map[2].md_size = va - dump_map[2].md_start; > } >- >- return (&md); > } > > /* > * Map a set of physical memory pages into the kernel virtual address space. > * Return a pointer to where it is mapped. 
This routine is intended to be used > * for mapping device memory, NOT real memory. > */ > static void * > mmu_booke_mapdev(mmu_t mmu, vm_paddr_t pa, vm_size_t size) > { > > return (mmu_booke_mapdev_attr(mmu, pa, size, VM_MEMATTR_DEFAULT)); > } > > static void * > mmu_booke_mapdev_attr(mmu_t mmu, vm_paddr_t pa, vm_size_t size, vm_memattr_t ma) > { > void *res; > uintptr_t va; > vm_size_t sz; > int i; > > /* > * Check if this is premapped in TLB1. Note: this should probably also > * check whether a sequence of TLB1 entries exist that match the > * requirement, but now only checks the easy case. > */ > if (ma == VM_MEMATTR_DEFAULT) { > for (i = 0; i < tlb1_idx; i++) { > if (!(tlb1[i].mas1 & MAS1_VALID)) > continue; > if (pa >= tlb1[i].phys && > (pa + size) <= (tlb1[i].phys + tlb1[i].size)) > return (void *)(tlb1[i].virt + > (pa - tlb1[i].phys)); > } > } > > size = roundup(size, PAGE_SIZE); > > /* > * We leave a hole for device direct mapping between the maximum user > * address (0x80000000) and the minimum KVA address (0xc0000000). If > * devices are in there, just map them 1:1. If not, map them to the > * device mapping area above VM_MAX_KERNEL_ADDRESS. These mapped > * addresses should be pulled from an allocator, but since we do not > * ever free TLB1 entries, it is safe just to increment a counter. > * Note that there isn't a lot of address space here (128 MB) and it > * is not at all difficult to imagine running out, since that is a 4:1 > * compression from the 0xc0000000 - 0xf0000000 address space that gets > * mapped there. 
> */ > if (pa >= (VM_MAXUSER_ADDRESS + PAGE_SIZE) && > (pa + size - 1) < VM_MIN_KERNEL_ADDRESS) > va = pa; > else > va = atomic_fetchadd_int(&tlb1_map_base, size); > res = (void *)va; > > do { > sz = 1 << (ilog2(size) & ~1); > if (bootverbose) > printf("Wiring VA=%x to PA=%x (size=%x), " > "using TLB1[%d]\n", va, pa, sz, tlb1_idx); > tlb1_set_entry(va, pa, sz, tlb_calc_wimg(pa, ma)); > size -= sz; > pa += sz; > va += sz; > } while (size > 0); > > return (res); > } > > /* > * 'Unmap' a range mapped by mmu_booke_mapdev(). > */ > static void > mmu_booke_unmapdev(mmu_t mmu, vm_offset_t va, vm_size_t size) > { > #ifdef SUPPORTS_SHRINKING_TLB1 > vm_offset_t base, offset; > > /* > * Unmap only if this is inside kernel virtual space. > */ > if ((va >= VM_MIN_KERNEL_ADDRESS) && (va <= VM_MAX_KERNEL_ADDRESS)) { > base = trunc_page(va); > offset = va & PAGE_MASK; > size = roundup(offset + size, PAGE_SIZE); > kva_free(base, size); > } > #endif > } > > /* > * mmu_booke_object_init_pt preloads the ptes for a given object into the > * specified pmap. This eliminates the blast of soft faults on process startup > * and immediately after an mmap. > */ > static void > mmu_booke_object_init_pt(mmu_t mmu, pmap_t pmap, vm_offset_t addr, > vm_object_t object, vm_pindex_t pindex, vm_size_t size) > { > > VM_OBJECT_ASSERT_WLOCKED(object); > KASSERT(object->type == OBJT_DEVICE || object->type == OBJT_SG, > ("mmu_booke_object_init_pt: non-device object")); > } > > /* > * Perform the pmap work for mincore. > */ > static int > mmu_booke_mincore(mmu_t mmu, pmap_t pmap, vm_offset_t addr, > vm_paddr_t *locked_pa) > { > > /* XXX: this should be implemented at some point */ > return (0); > } > > /**************************************************************************/ > /* TID handling */ > /**************************************************************************/ > > /* > * Allocate a TID. If necessary, steal one from someone else. > * The new TID is flushed from the TLB before returning. 
> */ > static tlbtid_t > tid_alloc(pmap_t pmap) > { > tlbtid_t tid; > int thiscpu; > > KASSERT((pmap != kernel_pmap), ("tid_alloc: kernel pmap")); > > CTR2(KTR_PMAP, "%s: s (pmap = %p)", __func__, pmap); > > thiscpu = PCPU_GET(cpuid); > > tid = PCPU_GET(tid_next); > if (tid > TID_MAX) > tid = TID_MIN; > PCPU_SET(tid_next, tid + 1); > > /* If we are stealing TID then clear the relevant pmap's field */ > if (tidbusy[thiscpu][tid] != NULL) { > > CTR2(KTR_PMAP, "%s: warning: stealing tid %d", __func__, tid); > > tidbusy[thiscpu][tid]->pm_tid[thiscpu] = TID_NONE; > > /* Flush all entries from TLB0 matching this TID. */ > tid_flush(tid); > } > > tidbusy[thiscpu][tid] = pmap; > pmap->pm_tid[thiscpu] = tid; > __asm __volatile("msync; isync"); > > CTR3(KTR_PMAP, "%s: e (%02d next = %02d)", __func__, tid, > PCPU_GET(tid_next)); > > return (tid); > } > > /**************************************************************************/ > /* TLB0 handling */ > /**************************************************************************/ > > static void > tlb_print_entry(int i, uint32_t mas1, uint32_t mas2, uint32_t mas3, > uint32_t mas7) > { > int as; > char desc[3]; > tlbtid_t tid; > vm_size_t size; > unsigned int tsize; > > desc[2] = '\0'; > if (mas1 & MAS1_VALID) > desc[0] = 'V'; > else > desc[0] = ' '; > > if (mas1 & MAS1_IPROT) > desc[1] = 'P'; > else > desc[1] = ' '; > > as = (mas1 & MAS1_TS_MASK) ? 1 : 0; > tid = MAS1_GETTID(mas1); > > tsize = (mas1 & MAS1_TSIZE_MASK) >> MAS1_TSIZE_SHIFT; > size = 0; > if (tsize) > size = tsize2size(tsize); > > debugf("%3d: (%s) [AS=%d] " > "sz = 0x%08x tsz = %d tid = %d mas1 = 0x%08x " > "mas2(va) = 0x%08x mas3(pa) = 0x%08x mas7 = 0x%08x\n", > i, desc, as, size, tsize, tid, mas1, mas2, mas3, mas7); > } > > /* Convert TLB0 va and way number to tlb0[] table index. 
*/ > static inline unsigned int > tlb0_tableidx(vm_offset_t va, unsigned int way) > { > unsigned int idx; > > idx = (way * TLB0_ENTRIES_PER_WAY); > idx += (va & MAS2_TLB0_ENTRY_IDX_MASK) >> MAS2_TLB0_ENTRY_IDX_SHIFT; > return (idx); > } > > /* > * Invalidate TLB0 entry. > */ > static inline void > tlb0_flush_entry(vm_offset_t va) > { > > CTR2(KTR_PMAP, "%s: s va=0x%08x", __func__, va); > > mtx_assert(&tlbivax_mutex, MA_OWNED); > > __asm __volatile("tlbivax 0, %0" :: "r"(va & MAS2_EPN_MASK)); > __asm __volatile("isync; msync"); > __asm __volatile("tlbsync; msync"); > > CTR1(KTR_PMAP, "%s: e", __func__); > } > > /* Print out contents of the MAS registers for each TLB0 entry */ > void > tlb0_print_tlbentries(void) > { > uint32_t mas0, mas1, mas2, mas3, mas7; > int entryidx, way, idx; > > debugf("TLB0 entries:\n"); > for (way = 0; way < TLB0_WAYS; way ++) > for (entryidx = 0; entryidx < TLB0_ENTRIES_PER_WAY; entryidx++) { > > mas0 = MAS0_TLBSEL(0) | MAS0_ESEL(way); > mtspr(SPR_MAS0, mas0); > __asm __volatile("isync"); > > mas2 = entryidx << MAS2_TLB0_ENTRY_IDX_SHIFT; > mtspr(SPR_MAS2, mas2); > > __asm __volatile("isync; tlbre"); > > mas1 = mfspr(SPR_MAS1); > mas2 = mfspr(SPR_MAS2); > mas3 = mfspr(SPR_MAS3); > mas7 = mfspr(SPR_MAS7); > > idx = tlb0_tableidx(mas2, way); > tlb_print_entry(idx, mas1, mas2, mas3, mas7); > } > } > > /**************************************************************************/ > /* TLB1 handling */ > /**************************************************************************/ > > /* > * TLB1 mapping notes: > * > * TLB1[0] Kernel text and data. > * TLB1[1-15] Additional kernel text and data mappings (if required), PCI > * windows, other devices mappings. > */ > > /* > * Write given entry to TLB1 hardware. > * Use 32 bit pa, clear 4 high-order bits of RPN (mas7). 
> */ > static void > tlb1_write_entry(unsigned int idx) > { > uint32_t mas0, mas7; > > //debugf("tlb1_write_entry: s\n"); > > /* Clear high order RPN bits */ > mas7 = 0; > > /* Select entry */ > mas0 = MAS0_TLBSEL(1) | MAS0_ESEL(idx); > //debugf("tlb1_write_entry: mas0 = 0x%08x\n", mas0); > > mtspr(SPR_MAS0, mas0); > __asm __volatile("isync"); > mtspr(SPR_MAS1, tlb1[idx].mas1); > __asm __volatile("isync"); > mtspr(SPR_MAS2, tlb1[idx].mas2); > __asm __volatile("isync"); > mtspr(SPR_MAS3, tlb1[idx].mas3); > __asm __volatile("isync"); > mtspr(SPR_MAS7, mas7); > __asm __volatile("isync; tlbwe; isync; msync"); > > //debugf("tlb1_write_entry: e\n"); > } > > /* > * Return the largest uint value log such that 2^log <= num. > */ > static unsigned int > ilog2(unsigned int num) > { > int lz; > > __asm ("cntlzw %0, %1" : "=r" (lz) : "r" (num)); > return (31 - lz); > } > > /* > * Convert TLB TSIZE value to mapped region size. > */ > static vm_size_t > tsize2size(unsigned int tsize) > { > > /* > * size = 4^tsize KB > * size = 4^tsize * 2^10 = 2^(2 * tsize - 10) > */ > > return ((1 << (2 * tsize)) * 1024); > } > > /* > * Convert region size (must be power of 4) to TLB TSIZE value. > */ > static unsigned int > size2tsize(vm_size_t size) > { > > return (ilog2(size) / 2 - 5); > } > > /* > * Register permanent kernel mapping in TLB1. > * > * Entries are created starting from index 0 (current free entry is > * kept in tlb1_idx) and are not supposed to be invalidated. 
> */ > static int > tlb1_set_entry(vm_offset_t va, vm_offset_t pa, vm_size_t size, > uint32_t flags) > { > uint32_t ts, tid; > int tsize, index; > > index = atomic_fetchadd_int(&tlb1_idx, 1); > if (index >= TLB1_ENTRIES) { > printf("tlb1_set_entry: TLB1 full!\n"); > return (-1); > } > > /* Convert size to TSIZE */ > tsize = size2tsize(size); > > tid = (TID_KERNEL << MAS1_TID_SHIFT) & MAS1_TID_MASK; > /* XXX TS is hard coded to 0 for now as we only use single address space */ > ts = (0 << MAS1_TS_SHIFT) & MAS1_TS_MASK; > > /* > * Atomicity is preserved by the atomic increment above since nothing > * is ever removed from tlb1. > */ > > tlb1[index].phys = pa; > tlb1[index].virt = va; > tlb1[index].size = size; > tlb1[index].mas1 = MAS1_VALID | MAS1_IPROT | ts | tid; > tlb1[index].mas1 |= ((tsize << MAS1_TSIZE_SHIFT) & MAS1_TSIZE_MASK); > tlb1[index].mas2 = (va & MAS2_EPN_MASK) | flags; > > /* Set supervisor RWX permission bits */ > tlb1[index].mas3 = (pa & MAS3_RPN) | MAS3_SR | MAS3_SW | MAS3_SX; > > tlb1_write_entry(index); > > /* > * XXX in general TLB1 updates should be propagated between CPUs, > * since current design assumes to have the same TLB1 set-up on all > * cores. > */ > return (0); > } > > /* > * Map in contiguous RAM region into the TLB1 using maximum of > * KERNEL_REGION_MAX_TLB_ENTRIES entries. > * > * If necessary round up last entry size and return total size > * used by all allocated entries. > */ > vm_size_t > tlb1_mapin_region(vm_offset_t va, vm_paddr_t pa, vm_size_t size) > { > vm_size_t pgs[KERNEL_REGION_MAX_TLB_ENTRIES]; > vm_size_t mapped, pgsz, base, mask; > int idx, nents; > > /* Round up to the next 1M */ > size = (size + (1 << 20) - 1) & ~((1 << 20) - 1); > > mapped = 0; > idx = 0; > base = va; > pgsz = 64*1024*1024; > while (mapped < size) { > while (mapped < size && idx < KERNEL_REGION_MAX_TLB_ENTRIES) { > while (pgsz > (size - mapped)) > pgsz >>= 2; > pgs[idx++] = pgsz; > mapped += pgsz; > } > > /* We under-map. Correct for this. 
*/ > if (mapped < size) { > while (pgs[idx - 1] == pgsz) { > idx--; > mapped -= pgsz; > } > /* XXX We may increase beyond our starting point. */ > pgsz <<= 2; > pgs[idx++] = pgsz; > mapped += pgsz; > } > } > > nents = idx; > mask = pgs[0] - 1; > /* Align address to the boundary */ > if (va & mask) { > va = (va + mask) & ~mask; > pa = (pa + mask) & ~mask; > } > > for (idx = 0; idx < nents; idx++) { > pgsz = pgs[idx]; > debugf("%u: %x -> %x, size=%x\n", idx, pa, va, pgsz); > tlb1_set_entry(va, pa, pgsz, _TLB_ENTRY_MEM); > pa += pgsz; > va += pgsz; > } > > mapped = (va - base); > printf("mapped size 0x%08x (wasted space 0x%08x)\n", > mapped, mapped - size); > return (mapped); > } > > /* > * TLB1 initialization routine, to be called after the very first > * assembler level setup done in locore.S. > */ > void > tlb1_init() > { > uint32_t mas0, mas1, mas2, mas3; > uint32_t tsz; > u_int i; > > if (bootinfo != NULL && bootinfo[0] != 1) { > tlb1_idx = *((uint16_t *)(bootinfo + 8)); > } else > tlb1_idx = 1; > > /* The first entry/entries are used to map the kernel. */ > for (i = 0; i < tlb1_idx; i++) { > mas0 = MAS0_TLBSEL(1) | MAS0_ESEL(i); > mtspr(SPR_MAS0, mas0); > __asm __volatile("isync; tlbre"); > > mas1 = mfspr(SPR_MAS1); > if ((mas1 & MAS1_VALID) == 0) > continue; > > mas2 = mfspr(SPR_MAS2); > mas3 = mfspr(SPR_MAS3); > > tlb1[i].mas1 = mas1; > tlb1[i].mas2 = mfspr(SPR_MAS2); > tlb1[i].mas3 = mas3; > tlb1[i].virt = mas2 & MAS2_EPN_MASK; > tlb1[i].phys = mas3 & MAS3_RPN; > > if (i == 0) > kernload = mas3 & MAS3_RPN; > > tsz = (mas1 & MAS1_TSIZE_MASK) >> MAS1_TSIZE_SHIFT; > tlb1[i].size = (tsz > 0) ? 
tsize2size(tsz) : 0; > kernsize += tlb1[i].size; > } > > #ifdef SMP > bp_ntlb1s = tlb1_idx; > #endif > > /* Purge the remaining entries */ > for (i = tlb1_idx; i < TLB1_ENTRIES; i++) > tlb1_write_entry(i); > > /* Setup TLB miss defaults */ > set_mas4_defaults(); > } > > vm_offset_t > pmap_early_io_map(vm_paddr_t pa, vm_size_t size) > { > vm_paddr_t pa_base; > vm_offset_t va, sz; > int i; > > KASSERT(!pmap_bootstrapped, ("Do not use after PMAP is up!")); > > for (i = 0; i < tlb1_idx; i++) { > if (!(tlb1[i].mas1 & MAS1_VALID)) > continue; > if (pa >= tlb1[i].phys && (pa + size) <= > (tlb1[i].phys + tlb1[i].size)) > return (tlb1[i].virt + (pa - tlb1[i].phys)); > } > > pa_base = trunc_page(pa); > size = roundup(size + (pa - pa_base), PAGE_SIZE); > tlb1_map_base = roundup2(tlb1_map_base, 1 << (ilog2(size) & ~1)); > va = tlb1_map_base + (pa - pa_base); > > do { > sz = 1 << (ilog2(size) & ~1); > tlb1_set_entry(tlb1_map_base, pa_base, sz, _TLB_ENTRY_IO); > size -= sz; > pa_base += sz; > tlb1_map_base += sz; > } while (size > 0); > > #ifdef SMP > bp_ntlb1s = tlb1_idx; > #endif > > return (va); > } > > /* > * Setup MAS4 defaults. > * These values are loaded to MAS0-2 on a TLB miss. 
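[Editorial aside, not part of the patch: the TLB1 code above relies on the Book-E convention that TLB1 page sizes are powers of four, encoded in the MAS1 TSIZE field; on e500 cores TSIZE n is assumed here to select a 4^n KB page. A minimal userland sketch of the tsize2size()/size2tsize() conversions and of the greedy power-of-four chunking performed by tlb1_mapin_region(); all names with a trailing underscore are hypothetical stand-ins, and the KERNEL_REGION_MAX_TLB_ENTRIES overflow correction is deliberately omitted for clarity.]

```c
#include <assert.h>
#include <stdint.h>

/* Integer log2 (index of the highest set bit); v must be nonzero. */
static int
ilog2_(uint64_t v)
{
	int b;

	for (b = 63; b > 0; b--)
		if (v & (1ULL << b))
			break;
	return (b);
}

/* Assumed e500 MAS1 TSIZE encoding: TSIZE n selects a 4^n KB page. */
static uint64_t
tsize2size_(unsigned int tsize)
{

	return (1024ULL << (2 * tsize));
}

static unsigned int
size2tsize_(uint64_t size)
{

	return ((ilog2_(size) - 10) / 2);
}

/*
 * Count the TLB1 entries tlb1_mapin_region() would consume for a
 * region of `size` bytes (already rounded up to 1 MB): page sizes are
 * powers of four, starting from a 64 MB maximum and shrinking until a
 * page fits in the remainder.
 */
static int
count_tlb1_chunks_(uint64_t size)
{
	uint64_t mapped, pgsz;
	int n;

	mapped = 0;
	n = 0;
	pgsz = 64ULL * 1024 * 1024;
	while (mapped < size) {
		while (pgsz > size - mapped)
			pgsz >>= 2;
		mapped += pgsz;
		n++;
	}
	return (n);
}
```

For example, a 21 MB region maps as 16 MB + 4 MB + 1 MB, i.e. three TLB1 entries, while a 64 MB region needs only one.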
> */ > static void > set_mas4_defaults(void) > { > uint32_t mas4; > > /* Defaults: TLB0, PID0, TSIZED=4K */ > mas4 = MAS4_TLBSELD0; > mas4 |= (TLB_SIZE_4K << MAS4_TSIZED_SHIFT) & MAS4_TSIZED_MASK; > #ifdef SMP > mas4 |= MAS4_MD; > #endif > mtspr(SPR_MAS4, mas4); > __asm __volatile("isync"); > } > > /* > * Print out contents of the MAS registers for each TLB1 entry > */ > void > tlb1_print_tlbentries(void) > { > uint32_t mas0, mas1, mas2, mas3, mas7; > int i; > > debugf("TLB1 entries:\n"); > for (i = 0; i < TLB1_ENTRIES; i++) { > > mas0 = MAS0_TLBSEL(1) | MAS0_ESEL(i); > mtspr(SPR_MAS0, mas0); > > __asm __volatile("isync; tlbre"); > > mas1 = mfspr(SPR_MAS1); > mas2 = mfspr(SPR_MAS2); > mas3 = mfspr(SPR_MAS3); > mas7 = mfspr(SPR_MAS7); > > tlb_print_entry(i, mas1, mas2, mas3, mas7); > } > } > > /* > * Print out contents of the in-ram tlb1 table. > */ > void > tlb1_print_entries(void) > { > int i; > > debugf("tlb1[] table entries:\n"); > for (i = 0; i < TLB1_ENTRIES; i++) > tlb_print_entry(i, tlb1[i].mas1, tlb1[i].mas2, tlb1[i].mas3, 0); > } > > /* > * Return 0 if the physical IO range is encompassed by one of the > * TLB1 entries, otherwise return related error code. > */ > static int > tlb1_iomapped(int i, vm_paddr_t pa, vm_size_t size, vm_offset_t *va) > { > uint32_t prot; > vm_paddr_t pa_start; > vm_paddr_t pa_end; > unsigned int entry_tsize; > vm_size_t entry_size; > > *va = (vm_offset_t)NULL; > > /* Skip invalid entries */ > if (!(tlb1[i].mas1 & MAS1_VALID)) > return (EINVAL); > > /* > * The entry must be cache-inhibited, guarded, and r/w > * so it can function as an i/o page > */ > prot = tlb1[i].mas2 & (MAS2_I | MAS2_G); > if (prot != (MAS2_I | MAS2_G)) > return (EPERM); > > prot = tlb1[i].mas3 & (MAS3_SR | MAS3_SW); > if (prot != (MAS3_SR | MAS3_SW)) > return (EPERM); > > /* The address should be within the entry range.
*/ > entry_tsize = (tlb1[i].mas1 & MAS1_TSIZE_MASK) >> MAS1_TSIZE_SHIFT; > KASSERT((entry_tsize), ("tlb1_iomapped: invalid entry tsize")); > > entry_size = tsize2size(entry_tsize); > pa_start = tlb1[i].mas3 & MAS3_RPN; > pa_end = pa_start + entry_size - 1; > > if ((pa < pa_start) || ((pa + size) > pa_end)) > return (ERANGE); > > /* Return virtual address of this mapping. */ > *va = (tlb1[i].mas2 & MAS2_EPN_MASK) + (pa - pa_start); > return (0); > } >diff --git a/sys/powerpc/include/dump.h b/sys/powerpc/include/dump.h >new file mode 100644 >index 0000000..2ce5bb2 >--- /dev/null >+++ b/sys/powerpc/include/dump.h >@@ -0,0 +1,69 @@ >+/*- >+ * Copyright (c) 2014 EMC Corp. >+ * Copyright (c) 2014 Conrad Meyer <conrad.meyer@isilon.com> >+ * All rights reserved. >+ * >+ * Redistribution and use in source and binary forms, with or without >+ * modification, are permitted provided that the following conditions >+ * are met: >+ * 1. Redistributions of source code must retain the above copyright >+ * notice, this list of conditions and the following disclaimer. >+ * 2. Redistributions in binary form must reproduce the above copyright >+ * notice, this list of conditions and the following disclaimer in the >+ * documentation and/or other materials provided with the distribution. >+ * >+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND >+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE >+ * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE >+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL >+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS >+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) >+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT >+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY >+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF >+ * SUCH DAMAGE. >+ * >+ * $FreeBSD$ >+ */ >+ >+#ifndef _MACHINE_DUMP_H_ >+#define _MACHINE_DUMP_H_ >+ >+#define KERNELDUMP_VERSION KERNELDUMP_POWERPC_VERSION >+#define EM_VALUE ELF_ARCH /* Defined in powerpc/include/elf.h */ >+#define DUMPSYS_MD_PA_NPAIRS (PHYS_AVAIL_SZ + 1) >+#define DUMPSYS_NUM_AUX_HDRS 0 >+ >+void dumpsys_md_pa_init(void); >+void dumpsys_unmap_chunk(vm_paddr_t, size_t, void *); >+ >+static inline struct dump_pa * >+dumpsys_md_pa_next(struct dump_pa *p) >+{ >+ >+ return dumpsys_gen_md_pa_next(p); >+} >+ >+static inline void >+dumpsys_wbinv_all(void) >+{ >+ >+ dumpsys_gen_wbinv_all(); >+} >+ >+static inline int >+dumpsys_write_aux_headers(struct dumperinfo *di) >+{ >+ >+ return (dumpsys_gen_write_aux_headers(di)); >+} >+ >+static inline int >+dumpsys(struct dumperinfo *di) >+{ >+ >+ return (dumpsys_generic(di)); >+} >+ >+#endif /* !_MACHINE_DUMP_H_ */ >diff --git a/sys/powerpc/include/pmap.h b/sys/powerpc/include/pmap.h >index 663cd1a..9a853f6 100644 >--- a/sys/powerpc/include/pmap.h >+++ b/sys/powerpc/include/pmap.h >@@ -1,264 +1,252 @@ > /*- > * Copyright (C) 2006 Semihalf, Marian Balakowicz <m8@semihalf.com> > * All rights reserved. > * > * Adapted for Freescale's e500 core CPUs. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * 1. 
Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * 3. The name of the author may not be used to endorse or promote products > * derived from this software without specific prior written permission. > * > * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR > * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES > * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN > * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED > * TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR > * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF > * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING > * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS > * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > * > * $FreeBSD$ > */ > /*- > * Copyright (C) 1995, 1996 Wolfgang Solfrank. > * Copyright (C) 1995, 1996 TooLs GmbH. > * All rights reserved. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * 3. 
All advertising materials mentioning features or use of this software > * must display the following acknowledgement: > * This product includes software developed by TooLs GmbH. > * 4. The name of TooLs GmbH may not be used to endorse or promote products > * derived from this software without specific prior written permission. > * > * THIS SOFTWARE IS PROVIDED BY TOOLS GMBH ``AS IS'' AND ANY EXPRESS OR > * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES > * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. > * IN NO EVENT SHALL TOOLS GMBH BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, > * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; > * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, > * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR > * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF > * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
> * > * from: $NetBSD: pmap.h,v 1.17 2000/03/30 16:18:24 jdolecek Exp $ > */ > > #ifndef _MACHINE_PMAP_H_ > #define _MACHINE_PMAP_H_ > > #include <sys/queue.h> > #include <sys/tree.h> > #include <sys/_cpuset.h> > #include <sys/_lock.h> > #include <sys/_mutex.h> > #include <machine/sr.h> > #include <machine/pte.h> > #include <machine/slb.h> > #include <machine/tlb.h> > >-struct pmap_md { >- u_int md_index; >- vm_paddr_t md_paddr; >- vm_offset_t md_vaddr; >- vm_size_t md_size; >-}; >- > #if defined(AIM) > > #if !defined(NPMAPS) > #define NPMAPS 32768 > #endif /* !defined(NPMAPS) */ > > struct slbtnode; > struct pmap; > typedef struct pmap *pmap_t; > > struct pvo_entry { > LIST_ENTRY(pvo_entry) pvo_vlink; /* Link to common virt page */ > LIST_ENTRY(pvo_entry) pvo_olink; /* Link to overflow entry */ > RB_ENTRY(pvo_entry) pvo_plink; /* Link to pmap entries */ > union { > struct pte pte; /* 32 bit PTE */ > struct lpte lpte; /* 64 bit PTE */ > } pvo_pte; > pmap_t pvo_pmap; /* Owning pmap */ > vm_offset_t pvo_vaddr; /* VA of entry */ > uint64_t pvo_vpn; /* Virtual page number */ > }; > LIST_HEAD(pvo_head, pvo_entry); > RB_HEAD(pvo_tree, pvo_entry); > int pvo_vaddr_compare(struct pvo_entry *, struct pvo_entry *); > RB_PROTOTYPE(pvo_tree, pvo_entry, pvo_plink, pvo_vaddr_compare); > > #define PVO_PTEGIDX_MASK 0x007UL /* which PTEG slot */ > #define PVO_PTEGIDX_VALID 0x008UL /* slot is valid */ > #define PVO_WIRED 0x010UL /* PVO entry is wired */ > #define PVO_MANAGED 0x020UL /* PVO entry is managed */ > #define PVO_BOOTSTRAP 0x080UL /* PVO entry allocated during > bootstrap */ > #define PVO_LARGE 0x200UL /* large page */ > #define PVO_VADDR(pvo) ((pvo)->pvo_vaddr & ~ADDR_POFF) > #define PVO_PTEGIDX_GET(pvo) ((pvo)->pvo_vaddr & PVO_PTEGIDX_MASK) > #define PVO_PTEGIDX_ISSET(pvo) ((pvo)->pvo_vaddr & PVO_PTEGIDX_VALID) > #define PVO_PTEGIDX_CLR(pvo) \ > ((void)((pvo)->pvo_vaddr &= ~(PVO_PTEGIDX_VALID|PVO_PTEGIDX_MASK))) > #define PVO_PTEGIDX_SET(pvo, i) \ > 
((void)((pvo)->pvo_vaddr |= (i)|PVO_PTEGIDX_VALID)) > #define PVO_VSID(pvo) ((pvo)->pvo_vpn >> 16) > > struct pmap { > struct mtx pm_mtx; > > #ifdef __powerpc64__ > struct slbtnode *pm_slb_tree_root; > struct slb **pm_slb; > int pm_slb_len; > #else > register_t pm_sr[16]; > #endif > cpuset_t pm_active; > > struct pmap *pmap_phys; > struct pmap_statistics pm_stats; > struct pvo_tree pmap_pvo; > }; > > struct md_page { > u_int64_t mdpg_attrs; > vm_memattr_t mdpg_cache_attrs; > struct pvo_head mdpg_pvoh; > }; > > #define pmap_page_get_memattr(m) ((m)->md.mdpg_cache_attrs) > #define pmap_page_is_mapped(m) (!LIST_EMPTY(&(m)->md.mdpg_pvoh)) > > /* > * Return the VSID corresponding to a given virtual address. > * If no VSID is currently defined, it will allocate one, and add > * it to a free slot if available. > * > * NB: The PMAP MUST be locked already. > */ > uint64_t va_to_vsid(pmap_t pm, vm_offset_t va); > > /* Lock-free, non-allocating lookup routines */ > uint64_t kernel_va_to_slbv(vm_offset_t va); > struct slb *user_va_to_slb_entry(pmap_t pm, vm_offset_t va); > > uint64_t allocate_user_vsid(pmap_t pm, uint64_t esid, int large); > void free_vsid(pmap_t pm, uint64_t esid, int large); > void slb_insert_user(pmap_t pm, struct slb *slb); > void slb_insert_kernel(uint64_t slbe, uint64_t slbv); > > struct slbtnode *slb_alloc_tree(void); > void slb_free_tree(pmap_t pm); > struct slb **slb_alloc_user_cache(void); > void slb_free_user_cache(struct slb **); > > #else > > struct pmap { > struct mtx pm_mtx; /* pmap mutex */ > tlbtid_t pm_tid[MAXCPU]; /* TID to identify this pmap entries in TLB */ > cpuset_t pm_active; /* active on cpus */ > struct pmap_statistics pm_stats; /* pmap statistics */ > > /* Page table directory, array of pointers to page tables. */ > pte_t *pm_pdir[PDIR_NENTRIES]; > > /* List of allocated ptbl bufs (ptbl kva regions). 
*/ > TAILQ_HEAD(, ptbl_buf) pm_ptbl_list; > }; > typedef struct pmap *pmap_t; > > struct pv_entry { > pmap_t pv_pmap; > vm_offset_t pv_va; > TAILQ_ENTRY(pv_entry) pv_link; > }; > typedef struct pv_entry *pv_entry_t; > > struct md_page { > TAILQ_HEAD(, pv_entry) pv_list; > }; > > #define pmap_page_get_memattr(m) VM_MEMATTR_DEFAULT > #define pmap_page_is_mapped(m) (!TAILQ_EMPTY(&(m)->md.pv_list)) > > #endif /* AIM */ > > extern struct pmap kernel_pmap_store; > #define kernel_pmap (&kernel_pmap_store) > > #ifdef _KERNEL > > #define PMAP_LOCK(pmap) mtx_lock(&(pmap)->pm_mtx) > #define PMAP_LOCK_ASSERT(pmap, type) \ > mtx_assert(&(pmap)->pm_mtx, (type)) > #define PMAP_LOCK_DESTROY(pmap) mtx_destroy(&(pmap)->pm_mtx) > #define PMAP_LOCK_INIT(pmap) mtx_init(&(pmap)->pm_mtx, \ > (pmap == kernel_pmap) ? "kernelpmap" : \ > "pmap", NULL, MTX_DEF) > #define PMAP_LOCKED(pmap) mtx_owned(&(pmap)->pm_mtx) > #define PMAP_MTX(pmap) (&(pmap)->pm_mtx) > #define PMAP_TRYLOCK(pmap) mtx_trylock(&(pmap)->pm_mtx) > #define PMAP_UNLOCK(pmap) mtx_unlock(&(pmap)->pm_mtx) > > #define pmap_page_is_write_mapped(m) (((m)->aflags & PGA_WRITEABLE) != 0) > > void pmap_bootstrap(vm_offset_t, vm_offset_t); > void pmap_kenter(vm_offset_t va, vm_paddr_t pa); > void pmap_kenter_attr(vm_offset_t va, vm_offset_t pa, vm_memattr_t); > void pmap_kremove(vm_offset_t); > void *pmap_mapdev(vm_paddr_t, vm_size_t); > void *pmap_mapdev_attr(vm_offset_t, vm_size_t, vm_memattr_t); > void pmap_unmapdev(vm_offset_t, vm_size_t); > void pmap_page_set_memattr(vm_page_t, vm_memattr_t); > void pmap_deactivate(struct thread *); > vm_paddr_t pmap_kextract(vm_offset_t); > int pmap_dev_direct_mapped(vm_paddr_t, vm_size_t); > boolean_t pmap_mmu_install(char *name, int prio); > > #define vtophys(va) pmap_kextract((vm_offset_t)(va)) > > #define PHYS_AVAIL_SZ 256 /* Allows up to 16GB Ram on pSeries with > * logical memory block size of 64MB. > * For more Ram increase the lmb or this value. 
> */ > > extern vm_offset_t phys_avail[PHYS_AVAIL_SZ]; > extern vm_offset_t virtual_avail; > extern vm_offset_t virtual_end; > > extern vm_offset_t msgbuf_phys; > > extern int pmap_bootstrapped; > >-extern vm_offset_t pmap_dumpsys_map(struct pmap_md *, vm_size_t, vm_size_t *); >-extern void pmap_dumpsys_unmap(struct pmap_md *, vm_size_t, vm_offset_t); >- >-extern struct pmap_md *pmap_scan_md(struct pmap_md *); >- > vm_offset_t pmap_early_io_map(vm_paddr_t pa, vm_size_t size); > > #endif > > #endif /* !_MACHINE_PMAP_H_ */ >diff --git a/sys/powerpc/powerpc/dump_machdep.c b/sys/powerpc/powerpc/dump_machdep.c >index 14e2f0f..573ed1c 100644 >--- a/sys/powerpc/powerpc/dump_machdep.c >+++ b/sys/powerpc/powerpc/dump_machdep.c >@@ -1,315 +1,38 @@ > /*- > * Copyright (c) 2002 Marcel Moolenaar > * All rights reserved. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * > * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR > * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES > * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
> * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, > * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT > * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF > * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > */ > > #include <sys/cdefs.h> > __FBSDID("$FreeBSD$"); > >-#include "opt_watchdog.h" >- > #include <sys/param.h> > #include <sys/systm.h> > #include <sys/conf.h> >-#include <sys/cons.h> >-#include <sys/kernel.h> > #include <sys/kerneldump.h> > #include <sys/sysctl.h> >-#ifdef SW_WATCHDOG >-#include <sys/watchdog.h> >-#endif >-#include <vm/vm.h> >-#include <vm/pmap.h> >-#include <machine/elf.h> >-#include <machine/md_var.h> >- >-CTASSERT(sizeof(struct kerneldumpheader) == 512); >- >-/* >- * Don't touch the first SIZEOF_METADATA bytes on the dump device. This >- * is to protect us from metadata and to protect metadata from us. >- */ >-#define SIZEOF_METADATA (64*1024) >- >-#define MD_ALIGN(x) (((off_t)(x) + PAGE_MASK) & ~PAGE_MASK) >-#define DEV_ALIGN(x) (((off_t)(x) + (DEV_BSIZE-1)) & ~(DEV_BSIZE-1)) >- >-typedef int callback_t(struct pmap_md *, int, void *); >- >-static struct kerneldumpheader kdh; >-static off_t dumplo, fileofs; >- >-/* Handle buffered writes. 
*/ >-static char buffer[DEV_BSIZE]; >-static size_t fragsz; >- >-int dumpsys_minidump = 1; >-SYSCTL_INT(_debug, OID_AUTO, minidump, CTLFLAG_RD, &dumpsys_minidump, 0, >- "Kernel makes compressed crash dumps"); >- >-static int >-buf_write(struct dumperinfo *di, char *ptr, size_t sz) >-{ >- size_t len; >- int error; >- >- while (sz) { >- len = DEV_BSIZE - fragsz; >- if (len > sz) >- len = sz; >- bcopy(ptr, buffer + fragsz, len); >- fragsz += len; >- ptr += len; >- sz -= len; >- if (fragsz == DEV_BSIZE) { >- error = di->dumper(di->priv, buffer, 0, dumplo, >- DEV_BSIZE); >- if (error) >- return error; >- dumplo += DEV_BSIZE; >- fragsz = 0; >- } >- } >- >- return (0); >-} >- >-static int >-buf_flush(struct dumperinfo *di) >-{ >- int error; >- >- if (fragsz == 0) >- return (0); >- >- error = di->dumper(di->priv, buffer, 0, dumplo, DEV_BSIZE); >- dumplo += DEV_BSIZE; >- fragsz = 0; >- return (error); >-} >- >-static int >-cb_dumpdata(struct pmap_md *md, int seqnr, void *arg) >-{ >- struct dumperinfo *di = (struct dumperinfo*)arg; >- vm_offset_t va; >- size_t counter, ofs, resid, sz, maxsz; >- int c, error, twiddle; >- >- error = 0; >- counter = 0; /* Update twiddle every 16MB */ >- twiddle = 0; >- >- ofs = 0; /* Logical offset within the chunk */ >- resid = md->md_size; >- maxsz = min(DFLTPHYS, di->maxiosize); >- >- printf(" chunk %d: %lu bytes ", seqnr, (u_long)resid); >- >- while (resid) { >- sz = min(resid, maxsz); >- va = pmap_dumpsys_map(md, ofs, &sz); >- counter += sz; >- if (counter >> 24) { >- printf("%c\b", "|/-\\"[twiddle++ & 3]); >- counter &= (1<<24) - 1; >- } >-#ifdef SW_WATCHDOG >- wdog_kern_pat(WD_LASTVAL); >-#endif >- error = di->dumper(di->priv, (void*)va, 0, dumplo, sz); >- pmap_dumpsys_unmap(md, ofs, va); >- if (error) >- break; >- dumplo += sz; >- resid -= sz; >- ofs += sz; >- >- /* Check for user abort. */ >- c = cncheckc(); >- if (c == 0x03) >- return (ECANCELED); >- if (c != -1) >- printf("(CTRL-C to abort) "); >- } >- printf("... %s\n", (error) ? 
"fail" : "ok"); >- return (error); >-} >- >-static int >-cb_dumphdr(struct pmap_md *md, int seqnr, void *arg) >-{ >- struct dumperinfo *di = (struct dumperinfo*)arg; >- Elf_Phdr phdr; >- int error; >- >- bzero(&phdr, sizeof(phdr)); >- phdr.p_type = PT_LOAD; >- phdr.p_flags = PF_R; /* XXX */ >- phdr.p_offset = fileofs; >- phdr.p_vaddr = md->md_vaddr; >- phdr.p_paddr = md->md_paddr; >- phdr.p_filesz = md->md_size; >- phdr.p_memsz = md->md_size; >- phdr.p_align = PAGE_SIZE; >- >- error = buf_write(di, (char*)&phdr, sizeof(phdr)); >- fileofs += phdr.p_filesz; >- return (error); >-} >- >-static int >-cb_size(struct pmap_md *md, int seqnr, void *arg) >-{ >- uint32_t *sz = (uint32_t*)arg; >- >- *sz += md->md_size; >- return (0); >-} >- >-static int >-foreach_chunk(callback_t cb, void *arg) >-{ >- struct pmap_md *md; >- int error, seqnr; >- >- seqnr = 0; >- md = pmap_scan_md(NULL); >- while (md != NULL) { >- error = (*cb)(md, seqnr++, arg); >- if (error) >- return (-error); >- md = pmap_scan_md(md); >- } >- return (seqnr); >-} >- >-int >-dumpsys(struct dumperinfo *di) >-{ >- Elf_Ehdr ehdr; >- uint32_t dumpsize; >- off_t hdrgap; >- size_t hdrsz; >- int error; >- >- bzero(&ehdr, sizeof(ehdr)); >- ehdr.e_ident[EI_MAG0] = ELFMAG0; >- ehdr.e_ident[EI_MAG1] = ELFMAG1; >- ehdr.e_ident[EI_MAG2] = ELFMAG2; >- ehdr.e_ident[EI_MAG3] = ELFMAG3; >- ehdr.e_ident[EI_CLASS] = ELF_TARG_CLASS; >-#if BYTE_ORDER == LITTLE_ENDIAN >- ehdr.e_ident[EI_DATA] = ELFDATA2LSB; >-#else >- ehdr.e_ident[EI_DATA] = ELFDATA2MSB; >-#endif >- ehdr.e_ident[EI_VERSION] = EV_CURRENT; >- ehdr.e_ident[EI_OSABI] = ELFOSABI_STANDALONE; /* XXX big picture? */ >- ehdr.e_type = ET_CORE; >- ehdr.e_machine = ELF_ARCH; /* Defined in powerpc/include/elf.h */ >- ehdr.e_phoff = sizeof(ehdr); >- ehdr.e_ehsize = sizeof(ehdr); >- ehdr.e_phentsize = sizeof(Elf_Phdr); >- ehdr.e_shentsize = sizeof(Elf_Shdr); >- >- /* Calculate dump size. 
*/ >- dumpsize = 0L; >- ehdr.e_phnum = foreach_chunk(cb_size, &dumpsize); >- hdrsz = ehdr.e_phoff + ehdr.e_phnum * ehdr.e_phentsize; >- fileofs = MD_ALIGN(hdrsz); >- dumpsize += fileofs; >- hdrgap = fileofs - DEV_ALIGN(hdrsz); >- >- /* For block devices, determine the dump offset on the device. */ >- if (di->mediasize > 0) { >- if (di->mediasize < >- SIZEOF_METADATA + dumpsize + sizeof(kdh) * 2) { >- error = ENOSPC; >- goto fail; >- } >- dumplo = di->mediaoffset + di->mediasize - dumpsize; >- dumplo -= sizeof(kdh) * 2; >- } else >- dumplo = 0; >- >- mkdumpheader(&kdh, KERNELDUMPMAGIC, KERNELDUMP_POWERPC_VERSION, dumpsize, >- di->blocksize); >- >- printf("Dumping %u MB (%d chunks)\n", dumpsize >> 20, >- ehdr.e_phnum); >- >- /* Dump leader */ >- error = dump_write(di, &kdh, 0, dumplo, sizeof(kdh)); >- if (error) >- goto fail; >- dumplo += sizeof(kdh); >- >- /* Dump ELF header */ >- error = buf_write(di, (char*)&ehdr, sizeof(ehdr)); >- if (error) >- goto fail; >- >- /* Dump program headers */ >- error = foreach_chunk(cb_dumphdr, di); >- if (error < 0) >- goto fail; >- buf_flush(di); >- >- /* >- * All headers are written using blocked I/O, so we know the >- * current offset is (still) block aligned. Skip the alignement >- * in the file to have the segment contents aligned at page >- * boundary. We cannot use MD_ALIGN on dumplo, because we don't >- * care and may very well be unaligned within the dump device. >- */ >- dumplo += hdrgap; >- >- /* Dump memory chunks (updates dumplo) */ >- error = foreach_chunk(cb_dumpdata, di); >- if (error < 0) >- goto fail; >- >- /* Dump trailer */ >- error = dump_write(di, &kdh, 0, dumplo, sizeof(kdh)); >- if (error) >- goto fail; >- >- /* Signal completion, signoff and exit stage left. */ >- dump_write(di, NULL, 0, 0, 0); >- printf("\nDump complete\n"); >- return (0); >- >- fail: >- if (error < 0) >- error = -error; > >- if (error == ECANCELED) >- printf("\nDump aborted\n"); >- else if (error == ENOSPC) >- printf("\nDump failed. 
Partition too small.\n"); >- else >- printf("\n** DUMP FAILED (ERROR %d) **\n", error); >- return (error); >-} >+int do_minidump = 1; >+SYSCTL_INT(_debug, OID_AUTO, minidump, CTLFLAG_RWTUN, &do_minidump, 0, >+ "Enable mini crash dumps"); >diff --git a/sys/powerpc/powerpc/mmu_if.m b/sys/powerpc/powerpc/mmu_if.m >index 5c44b71..caf0fb0 100644 >--- a/sys/powerpc/powerpc/mmu_if.m >+++ b/sys/powerpc/powerpc/mmu_if.m >@@ -1,950 +1,935 @@ > #- > # Copyright (c) 2005 Peter Grehan > # All rights reserved. > # > # Redistribution and use in source and binary forms, with or without > # modification, are permitted provided that the following conditions > # are met: > # 1. Redistributions of source code must retain the above copyright > # notice, this list of conditions and the following disclaimer. > # 2. Redistributions in binary form must reproduce the above copyright > # notice, this list of conditions and the following disclaimer in the > # documentation and/or other materials provided with the distribution. > # > # THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND > # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE > # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE > # ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE > # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL > # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS > # OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) > # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT > # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY > # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF > # SUCH DAMAGE. 
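[Editorial aside, not part of the patch: the buf_write()/buf_flush() pair deleted above, whose logic now lives in the generic kern_dump.c, implements a common pattern: callers hand in arbitrarily sized fragments, full DEV_BSIZE (512-byte) blocks are pushed to the dump device as they fill, and the remainder is held back until a final flush. A simplified, self-contained sketch with the device write replaced by a counter; all names with a trailing underscore are hypothetical.]

```c
#include <stddef.h>
#include <string.h>

#define	DEV_BSIZE_	512

struct blkbuf_ {
	char	buf[DEV_BSIZE_];
	size_t	fragsz;		/* bytes buffered, always < DEV_BSIZE_ */
	int	blocks_out;	/* full blocks "written" so far */
};

/*
 * Buffer `sz` bytes from `ptr`, emitting a block each time the buffer
 * fills.  Mirrors the structure of the removed buf_write(); the
 * blocks_out increment stands in for the di->dumper() call.
 */
static void
buf_write_(struct blkbuf_ *b, const char *ptr, size_t sz)
{
	size_t len;

	while (sz > 0) {
		len = DEV_BSIZE_ - b->fragsz;
		if (len > sz)
			len = sz;
		memcpy(b->buf + b->fragsz, ptr, len);
		b->fragsz += len;
		ptr += len;
		sz -= len;
		if (b->fragsz == DEV_BSIZE_) {
			b->blocks_out++;	/* di->dumper(...) here */
			b->fragsz = 0;
		}
	}
}

/* Emit any trailing partial block, as the removed buf_flush() did. */
static void
buf_flush_(struct blkbuf_ *b)
{

	if (b->fragsz == 0)
		return;
	b->blocks_out++;
	b->fragsz = 0;
}
```

Writing a 100-byte fragment followed by a 1000-byte fragment produces two full blocks plus a 76-byte tail that only buf_flush_() pushes out; this is why every header path in the old dumpsys() ended with a buf_flush() call before switching to direct block I/O.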
> # > # $FreeBSD$ > # > > #include <sys/param.h> > #include <sys/lock.h> > #include <sys/mutex.h> > #include <sys/systm.h> > > #include <vm/vm.h> > #include <vm/vm_page.h> > > #include <machine/mmuvar.h> > > /** > * @defgroup MMU mmu - KObj methods for PowerPC MMU implementations > * @brief A set of methods required by all MMU implementations. These > * are basically direct call-thru's from the pmap machine-dependent > * code. > * Thanks to Bruce M Simpson's pmap man pages for routine descriptions. > *@{ > */ > > INTERFACE mmu; > > # > # Default implementations of some methods > # > CODE { > static void mmu_null_copy(mmu_t mmu, pmap_t dst_pmap, pmap_t src_pmap, > vm_offset_t dst_addr, vm_size_t len, vm_offset_t src_addr) > { > return; > } > > static void mmu_null_growkernel(mmu_t mmu, vm_offset_t addr) > { > return; > } > > static void mmu_null_init(mmu_t mmu) > { > return; > } > > static boolean_t mmu_null_is_prefaultable(mmu_t mmu, pmap_t pmap, > vm_offset_t va) > { > return (FALSE); > } > > static void mmu_null_object_init_pt(mmu_t mmu, pmap_t pmap, > vm_offset_t addr, vm_object_t object, vm_pindex_t index, > vm_size_t size) > { > return; > } > > static void mmu_null_page_init(mmu_t mmu, vm_page_t m) > { > return; > } > > static void mmu_null_remove_pages(mmu_t mmu, pmap_t pmap) > { > return; > } > > static int mmu_null_mincore(mmu_t mmu, pmap_t pmap, vm_offset_t addr, > vm_paddr_t *locked_pa) > { > return (0); > } > > static void mmu_null_deactivate(struct thread *td) > { > return; > } > > static void mmu_null_align_superpage(mmu_t mmu, vm_object_t object, > vm_ooffset_t offset, vm_offset_t *addr, vm_size_t size) > { > return; > } > >- static struct pmap_md *mmu_null_scan_md(mmu_t mmu, struct pmap_md *p) >- { >- return (NULL); >- } >- > static void *mmu_null_mapdev_attr(mmu_t mmu, vm_offset_t pa, > vm_size_t size, vm_memattr_t ma) > { > return MMU_MAPDEV(mmu, pa, size); > } > > static void mmu_null_kenter_attr(mmu_t mmu, vm_offset_t va, > vm_offset_t pa, 
vm_memattr_t ma) > { > MMU_KENTER(mmu, va, pa); > } > > static void mmu_null_page_set_memattr(mmu_t mmu, vm_page_t m, > vm_memattr_t ma) > { > return; > } > }; > > > /** > * @brief Apply the given advice to the specified range of addresses within > * the given pmap. Depending on the advice, clear the referenced and/or > * modified flags in each mapping and set the mapped page's dirty field. > * > * @param _pmap physical map > * @param _start virtual range start > * @param _end virtual range end > * @param _advice advice to apply > */ > METHOD void advise { > mmu_t _mmu; > pmap_t _pmap; > vm_offset_t _start; > vm_offset_t _end; > int _advice; > }; > > > /** > * @brief Clear the 'modified' bit on the given physical page > * > * @param _pg physical page > */ > METHOD void clear_modify { > mmu_t _mmu; > vm_page_t _pg; > }; > > > /** > * @brief Clear the write and modified bits in each of the given > * physical page's mappings > * > * @param _pg physical page > */ > METHOD void remove_write { > mmu_t _mmu; > vm_page_t _pg; > }; > > > /** > * @brief Copy the address range given by the source physical map, virtual > * address and length to the destination physical map and virtual address. > * This routine is optional (xxx default null implementation ?) 
> * > * @param _dst_pmap destination physical map > * @param _src_pmap source physical map > * @param _dst_addr destination virtual address > * @param _len size of range > * @param _src_addr source virtual address > */ > METHOD void copy { > mmu_t _mmu; > pmap_t _dst_pmap; > pmap_t _src_pmap; > vm_offset_t _dst_addr; > vm_size_t _len; > vm_offset_t _src_addr; > } DEFAULT mmu_null_copy; > > > /** > * @brief Copy the source physical page to the destination physical page > * > * @param _src source physical page > * @param _dst destination physical page > */ > METHOD void copy_page { > mmu_t _mmu; > vm_page_t _src; > vm_page_t _dst; > }; > > METHOD void copy_pages { > mmu_t _mmu; > vm_page_t *_ma; > vm_offset_t _a_offset; > vm_page_t *_mb; > vm_offset_t _b_offset; > int _xfersize; > }; > > /** > * @brief Create a mapping between a virtual/physical address pair in the > * passed physical map with the specified protection and wiring > * > * @param _pmap physical map > * @param _va mapping virtual address > * @param _p mapping physical page > * @param _prot mapping page protection > * @param _flags pmap_enter flags > * @param _psind superpage size index > */ > METHOD int enter { > mmu_t _mmu; > pmap_t _pmap; > vm_offset_t _va; > vm_page_t _p; > vm_prot_t _prot; > u_int _flags; > int8_t _psind; > }; > > > /** > * @brief Maps a sequence of resident pages belonging to the same object. > * > * @param _pmap physical map > * @param _start virtual range start > * @param _end virtual range end > * @param _m_start physical page mapped at start > * @param _prot mapping page protection > */ > METHOD void enter_object { > mmu_t _mmu; > pmap_t _pmap; > vm_offset_t _start; > vm_offset_t _end; > vm_page_t _m_start; > vm_prot_t _prot; > }; > > > /** > * @brief A faster entry point for page mapping where it is possible > * to short-circuit some of the tests in pmap_enter. 
> * > * @param _pmap physical map (and also currently active pmap) > * @param _va mapping virtual address > * @param _pg mapping physical page > * @param _prot new page protection - used to see if page is exec. > */ > METHOD void enter_quick { > mmu_t _mmu; > pmap_t _pmap; > vm_offset_t _va; > vm_page_t _pg; > vm_prot_t _prot; > }; > > > /** > * @brief Reverse map the given virtual address, returning the physical > * page associated with the address if a mapping exists. > * > * @param _pmap physical map > * @param _va mapping virtual address > * > * @retval 0 No mapping found > * @retval addr The mapping physical address > */ > METHOD vm_paddr_t extract { > mmu_t _mmu; > pmap_t _pmap; > vm_offset_t _va; > }; > > > /** > * @brief Reverse map the given virtual address, returning the > * physical page if found. The page must be held (by calling > * vm_page_hold) if the page protection matches the given protection > * > * @param _pmap physical map > * @param _va mapping virtual address > * @param _prot protection used to determine if physical page > * should be locked > * > * @retval NULL No mapping found > * @retval page Pointer to physical page. Held if protections match > */ > METHOD vm_page_t extract_and_hold { > mmu_t _mmu; > pmap_t _pmap; > vm_offset_t _va; > vm_prot_t _prot; > }; > > > /** > * @brief Increase kernel virtual address space to the given virtual address. > * Not really required for PowerPC, so optional unless the MMU implementation > * can use it. > * > * @param _va new upper limit for kernel virtual address space > */ > METHOD void growkernel { > mmu_t _mmu; > vm_offset_t _va; > } DEFAULT mmu_null_growkernel; > > > /** > * @brief Called from vm_mem_init. Zone allocation is available at > * this stage so a convenient time to create zones. This routine is > * for MMU-implementation convenience and is optional. 
> */
> METHOD void init {
> mmu_t _mmu;
> } DEFAULT mmu_null_init;
>
>
> /**
> * @brief Return if the page has been marked by MMU hardware to have been
> * modified
> *
> * @param _pg physical page to test
> *
> * @retval boolean TRUE if page has been modified
> */
> METHOD boolean_t is_modified {
> mmu_t _mmu;
> vm_page_t _pg;
> };
>
>
> /**
> * @brief Return whether the specified virtual address is a candidate to be
> * prefaulted in. This routine is optional.
> *
> * @param _pmap physical map
> * @param _va virtual address to test
> *
> * @retval boolean TRUE if the address is a candidate.
> */
> METHOD boolean_t is_prefaultable {
> mmu_t _mmu;
> pmap_t _pmap;
> vm_offset_t _va;
> } DEFAULT mmu_null_is_prefaultable;
>
>
> /**
> * @brief Return whether or not the specified physical page was referenced
> * in any physical maps.
> *
> * @param _pg physical page
> *
> * @retval boolean TRUE if page has been referenced
> */
> METHOD boolean_t is_referenced {
> mmu_t _mmu;
> vm_page_t _pg;
> };
>
>
> /**
> * @brief Return a count of referenced bits for a page, clearing those bits.
> * Not all referenced bits need to be cleared, but it is necessary that 0
> * only be returned when there are none set.
> *
> * @param _m physical page
> *
> * @retval int count of referenced bits
> */
> METHOD int ts_referenced {
> mmu_t _mmu;
> vm_page_t _pg;
> };
>
>
> /**
> * @brief Map the requested physical address range into kernel virtual
> * address space. The value in _virt is taken as a hint. The virtual
> * address of the range is returned, or NULL if the mapping could not
> * be created. The range can be direct-mapped if that is supported.
> * > * @param *_virt Hint for start virtual address, and also return > * value > * @param _start physical address range start > * @param _end physical address range end > * @param _prot protection of range (currently ignored) > * > * @retval NULL could not map the area > * @retval addr, *_virt mapping start virtual address > */ > METHOD vm_offset_t map { > mmu_t _mmu; > vm_offset_t *_virt; > vm_paddr_t _start; > vm_paddr_t _end; > int _prot; > }; > > > /** > * @brief Used to create a contiguous set of read-only mappings for a > * given object to try and eliminate a cascade of on-demand faults as > * the object is accessed sequentially. This routine is optional. > * > * @param _pmap physical map > * @param _addr mapping start virtual address > * @param _object device-backed V.M. object to be mapped > * @param _pindex page-index within object of mapping start > * @param _size size in bytes of mapping > */ > METHOD void object_init_pt { > mmu_t _mmu; > pmap_t _pmap; > vm_offset_t _addr; > vm_object_t _object; > vm_pindex_t _pindex; > vm_size_t _size; > } DEFAULT mmu_null_object_init_pt; > > > /** > * @brief Used to determine if the specified page has a mapping for the > * given physical map, by scanning the list of reverse-mappings from the > * page. The list is scanned to a maximum of 16 entries. > * > * @param _pmap physical map > * @param _pg physical page > * > * @retval bool TRUE if the physical map was found in the first 16 > * reverse-map list entries off the physical page. > */ > METHOD boolean_t page_exists_quick { > mmu_t _mmu; > pmap_t _pmap; > vm_page_t _pg; > }; > > > /** > * @brief Initialise the machine-dependent section of the physical page > * data structure. This routine is optional. > * > * @param _pg physical page > */ > METHOD void page_init { > mmu_t _mmu; > vm_page_t _pg; > } DEFAULT mmu_null_page_init; > > > /** > * @brief Count the number of managed mappings to the given physical > * page that are wired. 
> * > * @param _pg physical page > * > * @retval int the number of wired, managed mappings to the > * given physical page > */ > METHOD int page_wired_mappings { > mmu_t _mmu; > vm_page_t _pg; > }; > > > /** > * @brief Initialise a physical map data structure > * > * @param _pmap physical map > */ > METHOD void pinit { > mmu_t _mmu; > pmap_t _pmap; > }; > > > /** > * @brief Initialise the physical map for process 0, the initial process > * in the system. > * XXX default to pinit ? > * > * @param _pmap physical map > */ > METHOD void pinit0 { > mmu_t _mmu; > pmap_t _pmap; > }; > > > /** > * @brief Set the protection for physical pages in the given virtual address > * range to the given value. > * > * @param _pmap physical map > * @param _start virtual range start > * @param _end virtual range end > * @param _prot new page protection > */ > METHOD void protect { > mmu_t _mmu; > pmap_t _pmap; > vm_offset_t _start; > vm_offset_t _end; > vm_prot_t _prot; > }; > > > /** > * @brief Create a mapping in kernel virtual address space for the given array > * of wired physical pages. > * > * @param _start mapping virtual address start > * @param *_m array of physical page pointers > * @param _count array elements > */ > METHOD void qenter { > mmu_t _mmu; > vm_offset_t _start; > vm_page_t *_pg; > int _count; > }; > > > /** > * @brief Remove the temporary mappings created by qenter. > * > * @param _start mapping virtual address start > * @param _count number of pages in mapping > */ > METHOD void qremove { > mmu_t _mmu; > vm_offset_t _start; > int _count; > }; > > > /** > * @brief Release per-pmap resources, e.g. mutexes, allocated memory etc. There > * should be no existing mappings for the physical map at this point > * > * @param _pmap physical map > */ > METHOD void release { > mmu_t _mmu; > pmap_t _pmap; > }; > > > /** > * @brief Remove all mappings in the given physical map for the start/end > * virtual address range. The range will be page-aligned. 
> *
> * @param _pmap physical map
> * @param _start mapping virtual address start
> * @param _end mapping virtual address end
> */
> METHOD void remove {
> mmu_t _mmu;
> pmap_t _pmap;
> vm_offset_t _start;
> vm_offset_t _end;
> };
>
>
> /**
> * @brief Traverse the reverse-map list off the given physical page and
> * remove all mappings. Clear the PGA_WRITEABLE attribute from the page.
> *
> * @param _pg physical page
> */
> METHOD void remove_all {
> mmu_t _mmu;
> vm_page_t _pg;
> };
>
>
> /**
> * @brief Remove all mappings in the given start/end virtual address range
> * for the given physical map. Similar to the remove method, but is used
> * when tearing down all mappings in an address space. This method is
> * optional, since pmap_remove will be called for each valid vm_map in
> * the address space later.
> *
> * @param _pmap physical map
> * @param _start mapping virtual address start
> * @param _end mapping virtual address end
> */
> METHOD void remove_pages {
> mmu_t _mmu;
> pmap_t _pmap;
> } DEFAULT mmu_null_remove_pages;
>
>
> /**
> * @brief Clear the wired attribute from the mappings for the specified range
> * of addresses in the given pmap.
> *
> * @param _pmap physical map
> * @param _start virtual range start
> * @param _end virtual range end
> */
> METHOD void unwire {
> mmu_t _mmu;
> pmap_t _pmap;
> vm_offset_t _start;
> vm_offset_t _end;
> };
>
>
> /**
> * @brief Zero a physical page. It is not assumed that the page is mapped,
> * so a temporary (or direct) mapping may need to be used.
> *
> * @param _pg physical page
> */
> METHOD void zero_page {
> mmu_t _mmu;
> vm_page_t _pg;
> };
>
>
> /**
> * @brief Zero a portion of a physical page, starting at a given offset and
> * for a given size (multiples of 512 bytes for 4k pages).
> *
> * @param _pg physical page
> * @param _off byte offset from start of page
> * @param _size size of area to zero
> */
> METHOD void zero_page_area {
> mmu_t _mmu;
> vm_page_t _pg;
> int _off;
> int _size;
> };
>
>
> /**
> * @brief Called from the idle loop to zero pages. XXX I think locking
> * constraints might be different here compared to zero_page.
> *
> * @param _pg physical page
> */
> METHOD void zero_page_idle {
> mmu_t _mmu;
> vm_page_t _pg;
> };
>
>
> /**
> * @brief Extract mincore(2) information from a mapping.
> *
> * @param _pmap physical map
> * @param _addr page virtual address
> * @param _locked_pa page physical address
> *
> * @retval 0 no result
> * @retval non-zero mincore(2) flag values
> */
> METHOD int mincore {
> mmu_t _mmu;
> pmap_t _pmap;
> vm_offset_t _addr;
> vm_paddr_t *_locked_pa;
> } DEFAULT mmu_null_mincore;
>
>
> /**
> * @brief Perform any operations required to allow a physical map to be used
> * before its address space is accessed.
> *
> * @param _td thread associated with physical map
> */
> METHOD void activate {
> mmu_t _mmu;
> struct thread *_td;
> };
>
> /**
> * @brief Perform any operations required to deactivate a physical map,
> * for instance as it is context-switched out.
> *
> * @param _td thread associated with physical map
> */
> METHOD void deactivate {
> mmu_t _mmu;
> struct thread *_td;
> } DEFAULT mmu_null_deactivate;
>
> /**
> * @brief Return a hint for the best virtual address to map a tentative
> * virtual address range in a given VM object. The default is to just
> * return the given tentative start address.
> *
> * @param _obj VM backing object
> * @param _offset starting offset within the VM object
> * @param _addr initial guess at virtual address
> * @param _size size of virtual address range
> */
> METHOD void align_superpage {
> mmu_t _mmu;
> vm_object_t _obj;
> vm_ooffset_t _offset;
> vm_offset_t *_addr;
> vm_size_t _size;
> } DEFAULT mmu_null_align_superpage;
>
>
>
>
> /**
> * INTERNAL INTERFACES
> */
>
> /**
> * @brief Bootstrap the VM system. At the completion of this routine, the
> * kernel will be running in its own address space with full control over
> * paging.
> *
> * @param _start start of reserved memory (obsolete ???)
> * @param _end end of reserved memory (obsolete ???)
> * XXX I think the intent of these was to allow
> * the memory used by kernel text+data+bss and
> * loader variables/load-time kld's to be carved out
> * of available physical mem.
> *
> */
> METHOD void bootstrap {
> mmu_t _mmu;
> vm_offset_t _start;
> vm_offset_t _end;
> };
>
> /**
> * @brief Set up the MMU on the current CPU. Only called by the PMAP layer
> * for alternate CPUs on SMP systems.
> *
> * @param _ap Set to 1 if the CPU being set up is an AP
> *
> */
> METHOD void cpu_bootstrap {
> mmu_t _mmu;
> int _ap;
> };
>
>
> /**
> * @brief Create a kernel mapping for a given physical address range.
> * Called by bus code on behalf of device drivers. The mapping does not
> * have to be a virtual address: it can be a direct-mapped physical address
> * if that is supported by the MMU.
> *
> * @param _pa start physical address
> * @param _size size in bytes of mapping
> *
> * @retval addr address of mapping.
> */
> METHOD void * mapdev {
> mmu_t _mmu;
> vm_paddr_t _pa;
> vm_size_t _size;
> };
>
> /**
> * @brief Create a kernel mapping for a given physical address range.
> * Called by bus code on behalf of device drivers. The mapping does not
> * have to be a virtual address: it can be a direct-mapped physical address
> * if that is supported by the MMU.
> * > * @param _pa start physical address > * @param _size size in bytes of mapping > * @param _attr cache attributes > * > * @retval addr address of mapping. > */ > METHOD void * mapdev_attr { > mmu_t _mmu; > vm_offset_t _pa; > vm_size_t _size; > vm_memattr_t _attr; > } DEFAULT mmu_null_mapdev_attr; > > /** > * @brief Change cache control attributes for a page. Should modify all > * mappings for that page. > * > * @param _m page to modify > * @param _ma new cache control attributes > */ > METHOD void page_set_memattr { > mmu_t _mmu; > vm_page_t _pg; > vm_memattr_t _ma; > } DEFAULT mmu_null_page_set_memattr; > > /** > * @brief Remove the mapping created by mapdev. Called when a driver > * is unloaded. > * > * @param _va Mapping address returned from mapdev > * @param _size size in bytes of mapping > */ > METHOD void unmapdev { > mmu_t _mmu; > vm_offset_t _va; > vm_size_t _size; > }; > > > /** > * @brief Reverse-map a kernel virtual address > * > * @param _va kernel virtual address to reverse-map > * > * @retval pa physical address corresponding to mapping > */ > METHOD vm_paddr_t kextract { > mmu_t _mmu; > vm_offset_t _va; > }; > > > /** > * @brief Map a wired page into kernel virtual address space > * > * @param _va mapping virtual address > * @param _pa mapping physical address > */ > METHOD void kenter { > mmu_t _mmu; > vm_offset_t _va; > vm_paddr_t _pa; > }; > > /** > * @brief Map a wired page into kernel virtual address space > * > * @param _va mapping virtual address > * @param _pa mapping physical address > * @param _ma mapping cache control attributes > */ > METHOD void kenter_attr { > mmu_t _mmu; > vm_offset_t _va; > vm_offset_t _pa; > vm_memattr_t _ma; > } DEFAULT mmu_null_kenter_attr; > > /** > * @brief Determine if the given physical address range has been direct-mapped. > * > * @param _pa physical address start > * @param _size physical address range size > * > * @retval bool TRUE if the range is direct-mapped. 
> */
> METHOD boolean_t dev_direct_mapped {
> mmu_t _mmu;
> vm_paddr_t _pa;
> vm_size_t _size;
> };
>
>
> /**
> * @brief Enforce instruction cache coherency. Typically called after a
> * region of memory has been modified and before execution of or within
> * that region is attempted. Setting breakpoints in a process through
> * ptrace(2) is one example of when the instruction cache needs to be
> * made coherent.
> *
> * @param _pm the physical map of the virtual address
> * @param _va the virtual address of the modified region
> * @param _sz the size of the modified region
> */
> METHOD void sync_icache {
> mmu_t _mmu;
> pmap_t _pm;
> vm_offset_t _va;
> vm_size_t _sz;
> };
>
>
> /**
> * @brief Create temporary memory mapping for use by dumpsys().
> *
>- * @param _md The memory chunk in which the mapping lies.
>- * @param _ofs The offset within the chunk of the mapping.
>+ * @param _pa The physical page to map.
> * @param _sz The requested size of the mapping.
>- *
>- * @retval vm_offset_t The virtual address of the mapping.
>- *
>- * The sz argument is modified to reflect the actual size of the
>- * mapping.
>+ * @param _va The virtual address of the mapping.
> */
>-METHOD vm_offset_t dumpsys_map {
>+METHOD void dumpsys_map {
> mmu_t _mmu;
>- struct pmap_md *_md;
>- vm_size_t _ofs;
>- vm_size_t *_sz;
>+ vm_paddr_t _pa;
>+ size_t _sz;
>+ void **_va;
> };
>
>
> /**
> * @brief Remove temporary dumpsys() mapping.
> *
>- * @param _md The memory chunk in which the mapping lies.
>- * @param _ofs The offset within the chunk of the mapping.
>+ * @param _pa The physical page to unmap.
>+ * @param _sz The requested size of the mapping.
> * @param _va The virtual address of the mapping.
> */
> METHOD void dumpsys_unmap {
> mmu_t _mmu;
>- struct pmap_md *_md;
>- vm_size_t _ofs;
>- vm_offset_t _va;
>+ vm_paddr_t _pa;
>+ size_t _sz;
>+ void *_va;
> };
>
>
> /**
>- * @brief Scan/iterate memory chunks.
>- *
>- * @param _prev The previously returned chunk or NULL.
>- * >- * @retval The next (or first when _prev is NULL) chunk. >+ * @brief Initialize memory chunks for dumpsys. > */ >-METHOD struct pmap_md * scan_md { >+METHOD void scan_init { > mmu_t _mmu; >- struct pmap_md *_prev; >-} DEFAULT mmu_null_scan_md; >+}; >diff --git a/sys/powerpc/powerpc/pmap_dispatch.c b/sys/powerpc/powerpc/pmap_dispatch.c >index 7f3f913..1e962cd 100644 >--- a/sys/powerpc/powerpc/pmap_dispatch.c >+++ b/sys/powerpc/powerpc/pmap_dispatch.c >@@ -1,579 +1,583 @@ > /*- > * Copyright (c) 2005 Peter Grehan > * All rights reserved. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * > * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND > * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE > * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE > * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE > * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL > * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS > * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) > * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT > * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY > * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF > * SUCH DAMAGE. 
> * > */ > > #include <sys/cdefs.h> > __FBSDID("$FreeBSD$"); > > /* > * Dispatch MI pmap calls to the appropriate MMU implementation > * through a previously registered kernel object. > * > * Before pmap_bootstrap() can be called, a CPU module must have > * called pmap_mmu_install(). This may be called multiple times: > * the highest priority call will be installed as the default > * MMU handler when pmap_bootstrap() is called. > * > * It is required that mutex_init() be called before pmap_bootstrap(), > * as the PMAP layer makes extensive use of mutexes. > */ > > #include <sys/param.h> > #include <sys/kernel.h> >+#include <sys/conf.h> > #include <sys/lock.h> >+#include <sys/kerneldump.h> > #include <sys/ktr.h> > #include <sys/mutex.h> > #include <sys/systm.h> > > #include <vm/vm.h> > #include <vm/vm_page.h> > >+#include <machine/dump.h> >+#include <machine/md_var.h> > #include <machine/mmuvar.h> > #include <machine/smp.h> > > #include "mmu_if.h" > > static mmu_def_t *mmu_def_impl; > static mmu_t mmu_obj; > static struct mmu_kobj mmu_kernel_obj; > static struct kobj_ops mmu_kernel_kops; > > /* > * pmap globals > */ > struct pmap kernel_pmap_store; > > struct msgbuf *msgbufp; > vm_offset_t msgbuf_phys; > > vm_offset_t kernel_vm_end; > vm_offset_t phys_avail[PHYS_AVAIL_SZ]; > vm_offset_t virtual_avail; > vm_offset_t virtual_end; > > int pmap_bootstrapped; > > #ifdef AIM > int > pvo_vaddr_compare(struct pvo_entry *a, struct pvo_entry *b) > { > if (PVO_VADDR(a) < PVO_VADDR(b)) > return (-1); > else if (PVO_VADDR(a) > PVO_VADDR(b)) > return (1); > return (0); > } > RB_GENERATE(pvo_tree, pvo_entry, pvo_plink, pvo_vaddr_compare); > #endif > > > void > pmap_advise(pmap_t pmap, vm_offset_t start, vm_offset_t end, int advice) > { > > CTR5(KTR_PMAP, "%s(%p, %#x, %#x, %d)", __func__, pmap, start, end, > advice); > MMU_ADVISE(mmu_obj, pmap, start, end, advice); > } > > void > pmap_clear_modify(vm_page_t m) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, m); > 
MMU_CLEAR_MODIFY(mmu_obj, m); > } > > void > pmap_copy(pmap_t dst_pmap, pmap_t src_pmap, vm_offset_t dst_addr, > vm_size_t len, vm_offset_t src_addr) > { > > CTR6(KTR_PMAP, "%s(%p, %p, %#x, %#x, %#x)", __func__, dst_pmap, > src_pmap, dst_addr, len, src_addr); > MMU_COPY(mmu_obj, dst_pmap, src_pmap, dst_addr, len, src_addr); > } > > void > pmap_copy_page(vm_page_t src, vm_page_t dst) > { > > CTR3(KTR_PMAP, "%s(%p, %p)", __func__, src, dst); > MMU_COPY_PAGE(mmu_obj, src, dst); > } > > void > pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset, vm_page_t mb[], > vm_offset_t b_offset, int xfersize) > { > > CTR6(KTR_PMAP, "%s(%p, %#x, %p, %#x, %#x)", __func__, ma, > a_offset, mb, b_offset, xfersize); > MMU_COPY_PAGES(mmu_obj, ma, a_offset, mb, b_offset, xfersize); > } > > int > pmap_enter(pmap_t pmap, vm_offset_t va, vm_page_t p, vm_prot_t prot, > u_int flags, int8_t psind) > { > > CTR6(KTR_PMAP, "pmap_enter(%p, %#x, %p, %#x, %x, %d)", pmap, va, > p, prot, flags, psind); > return (MMU_ENTER(mmu_obj, pmap, va, p, prot, flags, psind)); > } > > void > pmap_enter_object(pmap_t pmap, vm_offset_t start, vm_offset_t end, > vm_page_t m_start, vm_prot_t prot) > { > > CTR6(KTR_PMAP, "%s(%p, %#x, %#x, %p, %#x)", __func__, pmap, start, > end, m_start, prot); > MMU_ENTER_OBJECT(mmu_obj, pmap, start, end, m_start, prot); > } > > void > pmap_enter_quick(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot) > { > > CTR5(KTR_PMAP, "%s(%p, %#x, %p, %#x)", __func__, pmap, va, m, prot); > MMU_ENTER_QUICK(mmu_obj, pmap, va, m, prot); > } > > vm_paddr_t > pmap_extract(pmap_t pmap, vm_offset_t va) > { > > CTR3(KTR_PMAP, "%s(%p, %#x)", __func__, pmap, va); > return (MMU_EXTRACT(mmu_obj, pmap, va)); > } > > vm_page_t > pmap_extract_and_hold(pmap_t pmap, vm_offset_t va, vm_prot_t prot) > { > > CTR4(KTR_PMAP, "%s(%p, %#x, %#x)", __func__, pmap, va, prot); > return (MMU_EXTRACT_AND_HOLD(mmu_obj, pmap, va, prot)); > } > > void > pmap_growkernel(vm_offset_t va) > { > > CTR2(KTR_PMAP, "%s(%#x)", 
__func__, va); > MMU_GROWKERNEL(mmu_obj, va); > } > > void > pmap_init(void) > { > > CTR1(KTR_PMAP, "%s()", __func__); > MMU_INIT(mmu_obj); > } > > boolean_t > pmap_is_modified(vm_page_t m) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, m); > return (MMU_IS_MODIFIED(mmu_obj, m)); > } > > boolean_t > pmap_is_prefaultable(pmap_t pmap, vm_offset_t va) > { > > CTR3(KTR_PMAP, "%s(%p, %#x)", __func__, pmap, va); > return (MMU_IS_PREFAULTABLE(mmu_obj, pmap, va)); > } > > boolean_t > pmap_is_referenced(vm_page_t m) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, m); > return (MMU_IS_REFERENCED(mmu_obj, m)); > } > > boolean_t > pmap_ts_referenced(vm_page_t m) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, m); > return (MMU_TS_REFERENCED(mmu_obj, m)); > } > > vm_offset_t > pmap_map(vm_offset_t *virt, vm_paddr_t start, vm_paddr_t end, int prot) > { > > CTR5(KTR_PMAP, "%s(%p, %#x, %#x, %#x)", __func__, virt, start, end, > prot); > return (MMU_MAP(mmu_obj, virt, start, end, prot)); > } > > void > pmap_object_init_pt(pmap_t pmap, vm_offset_t addr, vm_object_t object, > vm_pindex_t pindex, vm_size_t size) > { > > CTR6(KTR_PMAP, "%s(%p, %#x, %p, %u, %#x)", __func__, pmap, addr, > object, pindex, size); > MMU_OBJECT_INIT_PT(mmu_obj, pmap, addr, object, pindex, size); > } > > boolean_t > pmap_page_exists_quick(pmap_t pmap, vm_page_t m) > { > > CTR3(KTR_PMAP, "%s(%p, %p)", __func__, pmap, m); > return (MMU_PAGE_EXISTS_QUICK(mmu_obj, pmap, m)); > } > > void > pmap_page_init(vm_page_t m) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, m); > MMU_PAGE_INIT(mmu_obj, m); > } > > int > pmap_page_wired_mappings(vm_page_t m) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, m); > return (MMU_PAGE_WIRED_MAPPINGS(mmu_obj, m)); > } > > int > pmap_pinit(pmap_t pmap) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, pmap); > MMU_PINIT(mmu_obj, pmap); > return (1); > } > > void > pmap_pinit0(pmap_t pmap) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, pmap); > MMU_PINIT0(mmu_obj, pmap); > } > > void > pmap_protect(pmap_t pmap, 
vm_offset_t start, vm_offset_t end, vm_prot_t prot) > { > > CTR5(KTR_PMAP, "%s(%p, %#x, %#x, %#x)", __func__, pmap, start, end, > prot); > MMU_PROTECT(mmu_obj, pmap, start, end, prot); > } > > void > pmap_qenter(vm_offset_t start, vm_page_t *m, int count) > { > > CTR4(KTR_PMAP, "%s(%#x, %p, %d)", __func__, start, m, count); > MMU_QENTER(mmu_obj, start, m, count); > } > > void > pmap_qremove(vm_offset_t start, int count) > { > > CTR3(KTR_PMAP, "%s(%#x, %d)", __func__, start, count); > MMU_QREMOVE(mmu_obj, start, count); > } > > void > pmap_release(pmap_t pmap) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, pmap); > MMU_RELEASE(mmu_obj, pmap); > } > > void > pmap_remove(pmap_t pmap, vm_offset_t start, vm_offset_t end) > { > > CTR4(KTR_PMAP, "%s(%p, %#x, %#x)", __func__, pmap, start, end); > MMU_REMOVE(mmu_obj, pmap, start, end); > } > > void > pmap_remove_all(vm_page_t m) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, m); > MMU_REMOVE_ALL(mmu_obj, m); > } > > void > pmap_remove_pages(pmap_t pmap) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, pmap); > MMU_REMOVE_PAGES(mmu_obj, pmap); > } > > void > pmap_remove_write(vm_page_t m) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, m); > MMU_REMOVE_WRITE(mmu_obj, m); > } > > void > pmap_unwire(pmap_t pmap, vm_offset_t start, vm_offset_t end) > { > > CTR4(KTR_PMAP, "%s(%p, %#x, %#x)", __func__, pmap, start, end); > MMU_UNWIRE(mmu_obj, pmap, start, end); > } > > void > pmap_zero_page(vm_page_t m) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, m); > MMU_ZERO_PAGE(mmu_obj, m); > } > > void > pmap_zero_page_area(vm_page_t m, int off, int size) > { > > CTR4(KTR_PMAP, "%s(%p, %d, %d)", __func__, m, off, size); > MMU_ZERO_PAGE_AREA(mmu_obj, m, off, size); > } > > void > pmap_zero_page_idle(vm_page_t m) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, m); > MMU_ZERO_PAGE_IDLE(mmu_obj, m); > } > > int > pmap_mincore(pmap_t pmap, vm_offset_t addr, vm_paddr_t *locked_pa) > { > > CTR3(KTR_PMAP, "%s(%p, %#x)", __func__, pmap, addr); > return (MMU_MINCORE(mmu_obj, 
pmap, addr, locked_pa)); > } > > void > pmap_activate(struct thread *td) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, td); > MMU_ACTIVATE(mmu_obj, td); > } > > void > pmap_deactivate(struct thread *td) > { > > CTR2(KTR_PMAP, "%s(%p)", __func__, td); > MMU_DEACTIVATE(mmu_obj, td); > } > > /* > * Increase the starting virtual address of the given mapping if a > * different alignment might result in more superpage mappings. > */ > void > pmap_align_superpage(vm_object_t object, vm_ooffset_t offset, > vm_offset_t *addr, vm_size_t size) > { > > CTR5(KTR_PMAP, "%s(%p, %#x, %p, %#x)", __func__, object, offset, addr, > size); > MMU_ALIGN_SUPERPAGE(mmu_obj, object, offset, addr, size); > } > > /* > * Routines used in machine-dependent code > */ > void > pmap_bootstrap(vm_offset_t start, vm_offset_t end) > { > mmu_obj = &mmu_kernel_obj; > > /* > * Take care of compiling the selected class, and > * then statically initialise the MMU object > */ > kobj_class_compile_static(mmu_def_impl, &mmu_kernel_kops); > kobj_init_static((kobj_t)mmu_obj, mmu_def_impl); > > MMU_BOOTSTRAP(mmu_obj, start, end); > } > > void > pmap_cpu_bootstrap(int ap) > { > /* > * No KTR here because our console probably doesn't work yet > */ > > return (MMU_CPU_BOOTSTRAP(mmu_obj, ap)); > } > > void * > pmap_mapdev(vm_paddr_t pa, vm_size_t size) > { > > CTR3(KTR_PMAP, "%s(%#x, %#x)", __func__, pa, size); > return (MMU_MAPDEV(mmu_obj, pa, size)); > } > > void * > pmap_mapdev_attr(vm_offset_t pa, vm_size_t size, vm_memattr_t attr) > { > > CTR4(KTR_PMAP, "%s(%#x, %#x, %#x)", __func__, pa, size, attr); > return (MMU_MAPDEV_ATTR(mmu_obj, pa, size, attr)); > } > > void > pmap_page_set_memattr(vm_page_t m, vm_memattr_t ma) > { > > CTR3(KTR_PMAP, "%s(%p, %#x)", __func__, m, ma); > return (MMU_PAGE_SET_MEMATTR(mmu_obj, m, ma)); > } > > void > pmap_unmapdev(vm_offset_t va, vm_size_t size) > { > > CTR3(KTR_PMAP, "%s(%#x, %#x)", __func__, va, size); > MMU_UNMAPDEV(mmu_obj, va, size); > } > > vm_paddr_t > 
pmap_kextract(vm_offset_t va) > { > > CTR2(KTR_PMAP, "%s(%#x)", __func__, va); > return (MMU_KEXTRACT(mmu_obj, va)); > } > > void > pmap_kenter(vm_offset_t va, vm_paddr_t pa) > { > > CTR3(KTR_PMAP, "%s(%#x, %#x)", __func__, va, pa); > MMU_KENTER(mmu_obj, va, pa); > } > > void > pmap_kenter_attr(vm_offset_t va, vm_offset_t pa, vm_memattr_t ma) > { > > CTR4(KTR_PMAP, "%s(%#x, %#x, %#x)", __func__, va, pa, ma); > MMU_KENTER_ATTR(mmu_obj, va, pa, ma); > } > > boolean_t > pmap_dev_direct_mapped(vm_paddr_t pa, vm_size_t size) > { > > CTR3(KTR_PMAP, "%s(%#x, %#x)", __func__, pa, size); > return (MMU_DEV_DIRECT_MAPPED(mmu_obj, pa, size)); > } > > void > pmap_sync_icache(pmap_t pm, vm_offset_t va, vm_size_t sz) > { > > CTR4(KTR_PMAP, "%s(%p, %#x, %#x)", __func__, pm, va, sz); > return (MMU_SYNC_ICACHE(mmu_obj, pm, va, sz)); > } > >-vm_offset_t >-pmap_dumpsys_map(struct pmap_md *md, vm_size_t ofs, vm_size_t *sz) >+void >+dumpsys_map_chunk(vm_paddr_t pa, size_t sz, void **va) > { > >- CTR4(KTR_PMAP, "%s(%p, %#x, %#x)", __func__, md, ofs, *sz); >- return (MMU_DUMPSYS_MAP(mmu_obj, md, ofs, sz)); >+ CTR4(KTR_PMAP, "%s(%#jx, %#zx, %p)", __func__, (uintmax_t)pa, sz, va); >+ return (MMU_DUMPSYS_MAP(mmu_obj, pa, sz, va)); > } > > void >-pmap_dumpsys_unmap(struct pmap_md *md, vm_size_t ofs, vm_offset_t va) >+dumpsys_unmap_chunk(vm_paddr_t pa, size_t sz, void *va) > { > >- CTR4(KTR_PMAP, "%s(%p, %#x, %#x)", __func__, md, ofs, va); >- return (MMU_DUMPSYS_UNMAP(mmu_obj, md, ofs, va)); >+ CTR4(KTR_PMAP, "%s(%#jx, %#zx, %p)", __func__, (uintmax_t)pa, sz, va); >+ return (MMU_DUMPSYS_UNMAP(mmu_obj, pa, sz, va)); > } > >-struct pmap_md * >-pmap_scan_md(struct pmap_md *prev) >+void >+dumpsys_md_pa_init(void) > { > >- CTR2(KTR_PMAP, "%s(%p)", __func__, prev); >- return (MMU_SCAN_MD(mmu_obj, prev)); >+ CTR1(KTR_PMAP, "%s()", __func__); >+ return (MMU_SCAN_INIT(mmu_obj)); > } > > /* > * MMU install routines. Highest priority wins, equal priority also > * overrides allowing last-set to win. 
> */ > SET_DECLARE(mmu_set, mmu_def_t); > > boolean_t > pmap_mmu_install(char *name, int prio) > { > mmu_def_t **mmupp, *mmup; > static int curr_prio = 0; > > /* > * Try and locate the MMU kobj corresponding to the name > */ > SET_FOREACH(mmupp, mmu_set) { > mmup = *mmupp; > > if (mmup->name && > !strcmp(mmup->name, name) && > (prio >= curr_prio || mmu_def_impl == NULL)) { > curr_prio = prio; > mmu_def_impl = mmup; > return (TRUE); > } > } > > return (FALSE); > } > > int unmapped_buf_allowed; >diff --git a/sys/sparc64/include/dump.h b/sys/sparc64/include/dump.h >new file mode 100644 >index 0000000..912c297 >--- /dev/null >+++ b/sys/sparc64/include/dump.h >@@ -0,0 +1,77 @@ >+/*- >+ * Copyright (c) 2014 EMC Corp. >+ * Copyright (c) 2014 Conrad Meyer <conrad.meyer@isilon.com> >+ * All rights reserved. >+ * >+ * Redistribution and use in source and binary forms, with or without >+ * modification, are permitted provided that the following conditions >+ * are met: >+ * 1. Redistributions of source code must retain the above copyright >+ * notice, this list of conditions and the following disclaimer. >+ * 2. Redistributions in binary form must reproduce the above copyright >+ * notice, this list of conditions and the following disclaimer in the >+ * documentation and/or other materials provided with the distribution. >+ * >+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND >+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE >+ * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE >+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL >+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS >+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) >+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT >+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY >+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF >+ * SUCH DAMAGE. >+ * >+ * $FreeBSD$ >+ */ >+ >+#ifndef _MACHINE_DUMP_H_ >+#define _MACHINE_DUMP_H_ >+ >+#define DUMPSYS_MD_PA_NPAIRS 128 >+#define DUMPSYS_NUM_AUX_HDRS 0 >+/* These are just dummy values: */ >+#define KERNELDUMP_VERSION 0 >+#define EM_VALUE 0 >+ >+void dumpsys_md_pa_init(void); >+int dumpsys(struct dumperinfo *); >+ >+static inline struct dump_pa * >+dumpsys_md_pa_next(struct dump_pa *p) >+{ >+ >+ return (dumpsys_gen_md_pa_next(p)); >+} >+ >+static inline void >+dumpsys_wbinv_all(void) >+{ >+ >+ dumpsys_gen_wbinv_all(); >+} >+ >+static inline void >+dumpsys_unmap_chunk(vm_paddr_t pa, size_t s, void *va) >+{ >+ >+ dumpsys_gen_unmap_chunk(pa, s, va); >+} >+ >+static inline int >+dumpsys_write_aux_headers(struct dumperinfo *di) >+{ >+ >+ return (dumpsys_gen_write_aux_headers(di)); >+} >+ >+static inline int >+minidumpsys(struct dumperinfo *di) >+{ >+ >+ return (-ENOSYS); >+} >+ >+#endif /* !_MACHINE_DUMP_H_ */ >diff --git a/sys/sparc64/sparc64/dump_machdep.c b/sys/sparc64/sparc64/dump_machdep.c >index 5af21cc..da65920 100644 >--- a/sys/sparc64/sparc64/dump_machdep.c >+++ b/sys/sparc64/sparc64/dump_machdep.c >@@ -1,227 +1,172 @@ > /*- > * Copyright (c) 2002 Marcel Moolenaar > * Copyright (c) 2002 Thomas Moestl > * All rights reserved. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * > * 1. 
Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * > * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR > * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES > * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. > * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, > * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT > * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF > * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > */ > > #include <sys/cdefs.h> > __FBSDID("$FreeBSD$"); > > #include <sys/param.h> > #include <sys/systm.h> > #include <sys/conf.h> > #include <sys/cons.h> > #include <sys/kernel.h> > #include <sys/kerneldump.h> > > #include <vm/vm.h> > #include <vm/vm_param.h> > #include <vm/pmap.h> > >+#include <machine/dump.h> > #include <machine/metadata.h> >+#include <machine/md_var.h> > #include <machine/kerneldump.h> > #include <machine/ofw_mem.h> > #include <machine/tsb.h> > #include <machine/tlb.h> > >-CTASSERT(sizeof(struct kerneldumpheader) == DEV_BSIZE); >+static off_t fileofs; > >-static struct kerneldumpheader kdh; >-static off_t dumplo, dumppos; >+extern off_t dumplo; >+extern struct dump_pa dump_map[DUMPSYS_MD_PA_NPAIRS]; > >-/* Handle buffered writes. 
*/ >-static char buffer[DEV_BSIZE]; >-static vm_size_t fragsz; >+int do_minidump = 0; > >-#define MAXDUMPSZ (MAXDUMPPGS << PAGE_SHIFT) >- >-static int >-buf_write(struct dumperinfo *di, char *ptr, size_t sz) >+void >+dumpsys_md_pa_init(void) > { >- size_t len; >- int error; >- >- while (sz) { >- len = DEV_BSIZE - fragsz; >- if (len > sz) >- len = sz; >- bcopy(ptr, buffer + fragsz, len); >- fragsz += len; >- ptr += len; >- sz -= len; >- if (fragsz == DEV_BSIZE) { >- error = dump_write(di, buffer, 0, dumplo, >- DEV_BSIZE); >- if (error) >- return error; >- dumplo += DEV_BSIZE; >- fragsz = 0; >- } >- } >+ int i; > >- return (0); >+ memset(dump_map, 0, sizeof(dump_map)); >+ for (i = 0; i < sparc64_nmemreg; i++) { >+ dump_map[i].md_start = sparc64_memreg[i].mr_start; >+ dump_map[i].md_size = sparc64_memreg[i].mr_size; >+ } > } > >-static int >-buf_flush(struct dumperinfo *di) >+void >+dumpsys_map_chunk(vm_paddr_t pa, size_t chunk __unused, void **va) > { >- int error; >- >- if (fragsz == 0) >- return (0); > >- error = dump_write(di, buffer, 0, dumplo, DEV_BSIZE); >- dumplo += DEV_BSIZE; >- fragsz = 0; >- return (error); >+ *va = (void*)TLB_PHYS_TO_DIRECT(pa); > } > > static int > reg_write(struct dumperinfo *di, vm_paddr_t pa, vm_size_t size) > { > struct sparc64_dump_reg r; > > r.dr_pa = pa; > r.dr_size = size; >- r.dr_offs = dumppos; >- dumppos += size; >- return (buf_write(di, (char *)&r, sizeof(r))); >-} >- >-static int >-blk_dump(struct dumperinfo *di, vm_paddr_t pa, vm_size_t size) >-{ >- vm_size_t pos, rsz; >- vm_offset_t va; >- int c, counter, error, twiddle; >- >- printf(" chunk at %#lx: %ld bytes ", (u_long)pa, (long)size); >- >- va = 0L; >- error = counter = twiddle = 0; >- for (pos = 0; pos < size; pos += MAXDUMPSZ, counter++) { >- if (counter % 128 == 0) >- printf("%c\b", "|/-\\"[twiddle++ & 3]); >- rsz = size - pos; >- rsz = (rsz > MAXDUMPSZ) ? 
MAXDUMPSZ : rsz; >- va = TLB_PHYS_TO_DIRECT(pa + pos); >- error = dump_write(di, (void *)va, 0, dumplo, rsz); >- if (error) >- break; >- dumplo += rsz; >- >- /* Check for user abort. */ >- c = cncheckc(); >- if (c == 0x03) >- return (ECANCELED); >- if (c != -1) >- printf("(CTRL-C to abort) "); >- } >- printf("... %s\n", (error) ? "fail" : "ok"); >- return (error); >+ r.dr_offs = fileofs; >+ fileofs += size; >+ return (dumpsys_buf_write(di, (char *)&r, sizeof(r))); > } > > int > dumpsys(struct dumperinfo *di) > { >+ static struct kerneldumpheader kdh; >+ > struct sparc64_dump_hdr hdr; > vm_size_t size, totsize, hdrsize; > int error, i, nreg; > > /* Calculate dump size. */ > size = 0; > nreg = sparc64_nmemreg; > for (i = 0; i < sparc64_nmemreg; i++) > size += sparc64_memreg[i].mr_size; > /* Account for the header size. */ > hdrsize = roundup2(sizeof(hdr) + sizeof(struct sparc64_dump_reg) * nreg, > DEV_BSIZE); > size += hdrsize; > > totsize = size + 2 * sizeof(kdh); > if (totsize > di->mediasize) { > printf("Insufficient space on device (need %ld, have %ld), " > "refusing to dump.\n", (long)totsize, > (long)di->mediasize); > error = ENOSPC; > goto fail; > } > > /* Determine dump offset on device. */ > dumplo = di->mediaoffset + di->mediasize - totsize; > > mkdumpheader(&kdh, KERNELDUMPMAGIC, KERNELDUMP_SPARC64_VERSION, size, > di->blocksize); > > printf("Dumping %lu MB (%d chunks)\n", (u_long)(size >> 20), nreg); > > /* Dump leader */ > error = dump_write(di, &kdh, 0, dumplo, sizeof(kdh)); > if (error) > goto fail; > dumplo += sizeof(kdh); > > /* Dump the private header. */ > hdr.dh_hdr_size = hdrsize; > hdr.dh_tsb_pa = tsb_kernel_phys; > hdr.dh_tsb_size = tsb_kernel_size; > hdr.dh_tsb_mask = tsb_kernel_mask; > hdr.dh_nregions = nreg; > >- if (buf_write(di, (char *)&hdr, sizeof(hdr)) != 0) >+ if (dumpsys_buf_write(di, (char *)&hdr, sizeof(hdr)) != 0) > goto fail; > >- dumppos = hdrsize; >+ fileofs = hdrsize; > /* Now, write out the region descriptors. 
*/ > for (i = 0; i < sparc64_nmemreg; i++) { > error = reg_write(di, sparc64_memreg[i].mr_start, > sparc64_memreg[i].mr_size); > if (error != 0) > goto fail; > } >- buf_flush(di); >+ dumpsys_buf_flush(di); > > /* Dump memory chunks. */ >- for (i = 0; i < sparc64_nmemreg; i++) { >- error = blk_dump(di, sparc64_memreg[i].mr_start, >- sparc64_memreg[i].mr_size); >- if (error != 0) >- goto fail; >- } >+ error = dumpsys_foreach_chunk(dumpsys_cb_dumpdata, di); >+ if (error < 0) >+ goto fail; > > /* Dump trailer */ > error = dump_write(di, &kdh, 0, dumplo, sizeof(kdh)); > if (error) > goto fail; > > /* Signal completion, signoff and exit stage left. */ > dump_write(di, NULL, 0, 0, 0); > printf("\nDump complete\n"); > return (0); > > fail: >+ if (error < 0) >+ error = -error; >+ > /* XXX It should look more like VMS :-) */ > printf("** DUMP FAILED (ERROR %d) **\n", error); > return (error); > } >diff --git a/sys/sys/conf.h b/sys/sys/conf.h >index 9d73d59..68e9d0f 100644 >--- a/sys/sys/conf.h >+++ b/sys/sys/conf.h >@@ -1,347 +1,346 @@ > /*- > * Copyright (c) 1990, 1993 > * The Regents of the University of California. All rights reserved. > * Copyright (c) 2000 > * Poul-Henning Kamp. All rights reserved. > * (c) UNIX System Laboratories, Inc. > * All or some portions of this file are derived from material licensed > * to the University of California by American Telephone and Telegraph > * Co. or Unix System Laboratories, Inc. and are reproduced herein with > * the permission of UNIX System Laboratories, Inc. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. 
Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * 4. Neither the name of the University nor the names of its contributors > * may be used to endorse or promote products derived from this software > * without specific prior written permission. > * > * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND > * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE > * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE > * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE > * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL > * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS > * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) > * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT > * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY > * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF > * SUCH DAMAGE. 
> * > * @(#)conf.h 8.5 (Berkeley) 1/9/95 > * $FreeBSD$ > */ > > #ifndef _SYS_CONF_H_ > #define _SYS_CONF_H_ > > #ifdef _KERNEL > #include <sys/eventhandler.h> > #else > #include <sys/queue.h> > #endif > > struct snapdata; > struct devfs_dirent; > struct cdevsw; > struct file; > > struct cdev { > void *si_spare0; > u_int si_flags; > #define SI_ETERNAL 0x0001 /* never destroyed */ > #define SI_ALIAS 0x0002 /* carrier of alias name */ > #define SI_NAMED 0x0004 /* make_dev{_alias} has been called */ > #define SI_CHEAPCLONE 0x0008 /* can be removed_dev'ed when vnode reclaims */ > #define SI_CHILD 0x0010 /* child of another struct cdev **/ > #define SI_DUMPDEV 0x0080 /* is kernel dumpdev */ > #define SI_CLONELIST 0x0200 /* on a clone list */ > #define SI_UNMAPPED 0x0400 /* can handle unmapped I/O */ > #define SI_NOSPLIT 0x0800 /* I/O should not be split up */ > struct timespec si_atime; > struct timespec si_ctime; > struct timespec si_mtime; > uid_t si_uid; > gid_t si_gid; > mode_t si_mode; > struct ucred *si_cred; /* cached clone-time credential */ > int si_drv0; > int si_refcount; > LIST_ENTRY(cdev) si_list; > LIST_ENTRY(cdev) si_clone; > LIST_HEAD(, cdev) si_children; > LIST_ENTRY(cdev) si_siblings; > struct cdev *si_parent; > struct mount *si_mountpt; > void *si_drv1, *si_drv2; > struct cdevsw *si_devsw; > int si_iosize_max; /* maximum I/O size (for physio &al) */ > u_long si_usecount; > u_long si_threadcount; > union { > struct snapdata *__sid_snapdata; > } __si_u; > char si_name[SPECNAMELEN + 1]; > }; > > #define si_snapdata __si_u.__sid_snapdata > > #ifdef _KERNEL > > /* > * Definitions of device driver entry switches > */ > > struct bio; > struct buf; > struct thread; > struct uio; > struct knote; > struct clonedevs; > struct vm_object; > struct vnode; > > /* > * Note: d_thread_t is provided as a transition aid for those drivers > * that treat struct proc/struct thread as an opaque data type and > * exist in substantially the same form in both 4.x and 5.x. 
Writers > * of drivers that dips into the d_thread_t structure should use > * struct thread or struct proc as appropriate for the version of the > * OS they are using. It is provided in lieu of each device driver > * inventing its own way of doing this. While it does violate style(9) > * in a number of ways, this violation is deemed to be less > * important than the benefits that a uniform API between releases > * gives. > * > * Users of struct thread/struct proc that aren't device drivers should > * not use d_thread_t. > */ > > typedef struct thread d_thread_t; > > typedef int d_open_t(struct cdev *dev, int oflags, int devtype, struct thread *td); > typedef int d_fdopen_t(struct cdev *dev, int oflags, struct thread *td, struct file *fp); > typedef int d_close_t(struct cdev *dev, int fflag, int devtype, struct thread *td); > typedef void d_strategy_t(struct bio *bp); > typedef int d_ioctl_t(struct cdev *dev, u_long cmd, caddr_t data, > int fflag, struct thread *td); > > typedef int d_read_t(struct cdev *dev, struct uio *uio, int ioflag); > typedef int d_write_t(struct cdev *dev, struct uio *uio, int ioflag); > typedef int d_poll_t(struct cdev *dev, int events, struct thread *td); > typedef int d_kqfilter_t(struct cdev *dev, struct knote *kn); > typedef int d_mmap_t(struct cdev *dev, vm_ooffset_t offset, vm_paddr_t *paddr, > int nprot, vm_memattr_t *memattr); > typedef int d_mmap_single_t(struct cdev *cdev, vm_ooffset_t *offset, > vm_size_t size, struct vm_object **object, int nprot); > typedef void d_purge_t(struct cdev *dev); > > typedef int dumper_t( > void *_priv, /* Private to the driver. */ > void *_virtual, /* Virtual (mapped) address. */ > vm_offset_t _physical, /* Physical address of virtual. */ > off_t _offset, /* Byte-offset to write at. */ > size_t _length); /* Number of bytes to dump. */ > > #endif /* _KERNEL */ > > /* > * Types for d_flags. 
> */ > #define D_TAPE 0x0001 > #define D_DISK 0x0002 > #define D_TTY 0x0004 > #define D_MEM 0x0008 > > #ifdef _KERNEL > > #define D_TYPEMASK 0xffff > > /* > * Flags for d_flags which the drivers can set. > */ > #define D_TRACKCLOSE 0x00080000 /* track all closes */ > #define D_MMAP_ANON 0x00100000 /* special treatment in vm_mmap.c */ > #define D_NEEDGIANT 0x00400000 /* driver want Giant */ > #define D_NEEDMINOR 0x00800000 /* driver uses clone_create() */ > > /* > * Version numbers. > */ > #define D_VERSION_00 0x20011966 > #define D_VERSION_01 0x17032005 /* Add d_uid,gid,mode & kind */ > #define D_VERSION_02 0x28042009 /* Add d_mmap_single */ > #define D_VERSION_03 0x17122009 /* d_mmap takes memattr,vm_ooffset_t */ > #define D_VERSION D_VERSION_03 > > /* > * Flags used for internal housekeeping > */ > #define D_INIT 0x80000000 /* cdevsw initialized */ > > /* > * Character device switch table > */ > struct cdevsw { > int d_version; > u_int d_flags; > const char *d_name; > d_open_t *d_open; > d_fdopen_t *d_fdopen; > d_close_t *d_close; > d_read_t *d_read; > d_write_t *d_write; > d_ioctl_t *d_ioctl; > d_poll_t *d_poll; > d_mmap_t *d_mmap; > d_strategy_t *d_strategy; > dumper_t *d_dump; > d_kqfilter_t *d_kqfilter; > d_purge_t *d_purge; > d_mmap_single_t *d_mmap_single; > > int32_t d_spare0[3]; > void *d_spare1[3]; > > /* These fields should not be messed with by drivers */ > LIST_HEAD(, cdev) d_devs; > int d_spare2; > union { > struct cdevsw *gianttrick; > SLIST_ENTRY(cdevsw) postfree_list; > } __d_giant; > }; > #define d_gianttrick __d_giant.gianttrick > #define d_postfree_list __d_giant.postfree_list > > struct module; > > struct devsw_module_data { > int (*chainevh)(struct module *, int, void *); /* next handler */ > void *chainarg; /* arg for next event handler */ > /* Do not initialize fields hereafter */ > }; > > #define DEV_MODULE_ORDERED(name, evh, arg, ord) \ > static moduledata_t name##_mod = { \ > #name, \ > evh, \ > arg \ > }; \ > DECLARE_MODULE(name, 
name##_mod, SI_SUB_DRIVERS, ord) > > #define DEV_MODULE(name, evh, arg) \ > DEV_MODULE_ORDERED(name, evh, arg, SI_ORDER_MIDDLE) > > void clone_setup(struct clonedevs **cdp); > void clone_cleanup(struct clonedevs **); > #define CLONE_UNITMASK 0xfffff > #define CLONE_FLAG0 (CLONE_UNITMASK + 1) > int clone_create(struct clonedevs **, struct cdevsw *, int *unit, struct cdev **dev, int extra); > > int count_dev(struct cdev *_dev); > void destroy_dev(struct cdev *_dev); > int destroy_dev_sched(struct cdev *dev); > int destroy_dev_sched_cb(struct cdev *dev, void (*cb)(void *), void *arg); > void destroy_dev_drain(struct cdevsw *csw); > void drain_dev_clone_events(void); > struct cdevsw *dev_refthread(struct cdev *_dev, int *_ref); > struct cdevsw *devvn_refthread(struct vnode *vp, struct cdev **devp, int *_ref); > void dev_relthread(struct cdev *_dev, int _ref); > void dev_depends(struct cdev *_pdev, struct cdev *_cdev); > void dev_ref(struct cdev *dev); > void dev_refl(struct cdev *dev); > void dev_rel(struct cdev *dev); > void dev_strategy(struct cdev *dev, struct buf *bp); > void dev_strategy_csw(struct cdev *dev, struct cdevsw *csw, struct buf *bp); > struct cdev *make_dev(struct cdevsw *_devsw, int _unit, uid_t _uid, gid_t _gid, > int _perms, const char *_fmt, ...) __printflike(6, 7); > struct cdev *make_dev_cred(struct cdevsw *_devsw, int _unit, > struct ucred *_cr, uid_t _uid, gid_t _gid, int _perms, > const char *_fmt, ...) __printflike(7, 8); > #define MAKEDEV_REF 0x01 > #define MAKEDEV_WHTOUT 0x02 > #define MAKEDEV_NOWAIT 0x04 > #define MAKEDEV_WAITOK 0x08 > #define MAKEDEV_ETERNAL 0x10 > #define MAKEDEV_CHECKNAME 0x20 > struct cdev *make_dev_credf(int _flags, > struct cdevsw *_devsw, int _unit, > struct ucred *_cr, uid_t _uid, gid_t _gid, int _mode, > const char *_fmt, ...) __printflike(8, 9); > int make_dev_p(int _flags, struct cdev **_cdev, struct cdevsw *_devsw, > struct ucred *_cr, uid_t _uid, gid_t _gid, int _mode, > const char *_fmt, ...) 
__printflike(8, 9); > struct cdev *make_dev_alias(struct cdev *_pdev, const char *_fmt, ...) > __printflike(2, 3); > int make_dev_alias_p(int _flags, struct cdev **_cdev, struct cdev *_pdev, > const char *_fmt, ...) __printflike(4, 5); > int make_dev_physpath_alias(int _flags, struct cdev **_cdev, > struct cdev *_pdev, struct cdev *_old_alias, > const char *_physpath); > void dev_lock(void); > void dev_unlock(void); > void setconf(void); > > #ifdef KLD_MODULE > #define MAKEDEV_ETERNAL_KLD 0 > #else > #define MAKEDEV_ETERNAL_KLD MAKEDEV_ETERNAL > #endif > > #define dev2unit(d) ((d)->si_drv0) > > typedef void (*cdevpriv_dtr_t)(void *data); > int devfs_get_cdevpriv(void **datap); > int devfs_set_cdevpriv(void *priv, cdevpriv_dtr_t dtr); > void devfs_clear_cdevpriv(void); > void devfs_fpdrop(struct file *fp); /* XXX This is not public KPI */ > > ino_t devfs_alloc_cdp_inode(void); > void devfs_free_cdp_inode(ino_t ino); > > #define UID_ROOT 0 > #define UID_BIN 3 > #define UID_UUCP 66 > #define UID_NOBODY 65534 > > #define GID_WHEEL 0 > #define GID_KMEM 2 > #define GID_TTY 4 > #define GID_OPERATOR 5 > #define GID_BIN 7 > #define GID_GAMES 13 > #define GID_DIALER 68 > #define GID_NOBODY 65534 > > typedef void (*dev_clone_fn)(void *arg, struct ucred *cred, char *name, > int namelen, struct cdev **result); > > int dev_stdclone(char *_name, char **_namep, const char *_stem, int *_unit); > EVENTHANDLER_DECLARE(dev_clone, dev_clone_fn); > > /* Stuff relating to kernel-dump */ > > struct dumperinfo { > dumper_t *dumper; /* Dumping function. */ > void *priv; /* Private parts. */ > u_int blocksize; /* Size of block in bytes. */ > u_int maxiosize; /* Max size allowed for an individual I/O */ > off_t mediaoffset; /* Initial offset in bytes. */ > off_t mediasize; /* Space available in bytes. 
*/ > }; > > int set_dumper(struct dumperinfo *, const char *_devname, struct thread *td); > int dump_write(struct dumperinfo *, void *, vm_offset_t, off_t, size_t); >-int dumpsys(struct dumperinfo *); > int doadump(boolean_t); > extern int dumping; /* system is dumping */ > > #endif /* _KERNEL */ > > #endif /* !_SYS_CONF_H_ */ >diff --git a/sys/sys/kerneldump.h b/sys/sys/kerneldump.h >index a148736..8e1370a 100644 >--- a/sys/sys/kerneldump.h >+++ b/sys/sys/kerneldump.h >@@ -1,107 +1,130 @@ > /*- > * Copyright (c) 2002 Poul-Henning Kamp > * Copyright (c) 2002 Networks Associates Technology, Inc. > * All rights reserved. > * > * This software was developed for the FreeBSD Project by Poul-Henning Kamp > * and NAI Labs, the Security Research Division of Network Associates, Inc. > * under DARPA/SPAWAR contract N66001-01-C-8035 ("CBOSS"), as part of the > * DARPA CHATS research program. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. > * 3. The names of the authors may not be used to endorse or promote > * products derived from this software without specific prior written > * permission. > * > * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND > * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE > * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE > * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE > * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL > * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS > * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) > * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT > * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY > * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF > * SUCH DAMAGE. > * > * $FreeBSD$ > */ > > #ifndef _SYS_KERNELDUMP_H > #define _SYS_KERNELDUMP_H > > #include <machine/endian.h> > > #if BYTE_ORDER == LITTLE_ENDIAN > #define dtoh32(x) __bswap32(x) > #define dtoh64(x) __bswap64(x) > #define htod32(x) __bswap32(x) > #define htod64(x) __bswap64(x) > #elif BYTE_ORDER == BIG_ENDIAN > #define dtoh32(x) (x) > #define dtoh64(x) (x) > #define htod32(x) (x) > #define htod64(x) (x) > #endif > > /* > * All uintX_t fields are in dump byte order, which is the same as > * network byte order. Use the macros defined above to read or > * write the fields. > */ > struct kerneldumpheader { > char magic[20]; > #define KERNELDUMPMAGIC "FreeBSD Kernel Dump" > #define TEXTDUMPMAGIC "FreeBSD Text Dump" > #define KERNELDUMPMAGIC_CLEARED "Cleared Kernel Dump" > char architecture[12]; > uint32_t version; > #define KERNELDUMPVERSION 1 > uint32_t architectureversion; > #define KERNELDUMP_ALPHA_VERSION 1 > #define KERNELDUMP_AMD64_VERSION 2 > #define KERNELDUMP_ARM_VERSION 1 > #define KERNELDUMP_I386_VERSION 2 > #define KERNELDUMP_MIPS_VERSION 1 > #define KERNELDUMP_POWERPC_VERSION 1 > #define KERNELDUMP_SPARC64_VERSION 1 > #define KERNELDUMP_TEXT_VERSION 1 > uint64_t dumplength; /* excl headers */ > uint64_t dumptime; > uint32_t blocksize; > char hostname[64]; > char versionstring[192]; > char panicstring[192]; > uint32_t parity; > }; > > /* > * Parity calculation is endian insensitive. 
> */ > static __inline u_int32_t > kerneldump_parity(struct kerneldumpheader *kdhp) > { > uint32_t *up, parity; > u_int i; > > up = (uint32_t *)kdhp; > parity = 0; > for (i = 0; i < sizeof *kdhp; i += sizeof *up) > parity ^= *up++; > return (parity); > } > > #ifdef _KERNEL >+struct dump_pa { >+ vm_paddr_t md_start; >+ vm_paddr_t md_size; >+}; >+ > void mkdumpheader(struct kerneldumpheader *kdh, char *magic, uint32_t archver, > uint64_t dumplen, uint32_t blksz); >+ >+int dumpsys_generic(struct dumperinfo *); >+ >+void dumpsys_map_chunk(vm_paddr_t, size_t, void **); >+typedef int dumpsys_callback_t(struct dump_pa *, int, void *); >+int dumpsys_foreach_chunk(dumpsys_callback_t, void *); >+int dumpsys_cb_dumpdata(struct dump_pa *, int, void *); >+int dumpsys_buf_write(struct dumperinfo *, char *, size_t); >+int dumpsys_buf_flush(struct dumperinfo *); >+ >+void dumpsys_gen_md_pa_init(void); >+struct dump_pa *dumpsys_gen_md_pa_next(struct dump_pa *); >+void dumpsys_gen_wbinv_all(void); >+void dumpsys_gen_unmap_chunk(vm_paddr_t, size_t, void *); >+int dumpsys_gen_write_aux_headers(struct dumperinfo *); >+ >+extern int do_minidump; >+ > #endif > > #endif /* _SYS_KERNELDUMP_H */ >diff --git a/sys/x86/x86/dump_machdep.c b/sys/x86/x86/dump_machdep.c >index 4e048bf..89c197e 100644 >--- a/sys/x86/x86/dump_machdep.c >+++ b/sys/x86/x86/dump_machdep.c >@@ -1,378 +1,54 @@ > /*- > * Copyright (c) 2002 Marcel Moolenaar > * All rights reserved. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > * are met: > * > * 1. Redistributions of source code must retain the above copyright > * notice, this list of conditions and the following disclaimer. > * 2. Redistributions in binary form must reproduce the above copyright > * notice, this list of conditions and the following disclaimer in the > * documentation and/or other materials provided with the distribution. 
> * > * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR > * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES > * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. > * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, > * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT > * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF > * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > */ > > #include <sys/cdefs.h> > __FBSDID("$FreeBSD$"); > > #include "opt_watchdog.h" > > #include <sys/param.h> >-#include <sys/systm.h> > #include <sys/conf.h> >-#include <sys/cons.h> >-#include <sys/sysctl.h> >-#include <sys/kernel.h> > #include <sys/kerneldump.h> >-#include <sys/watchdog.h> >+#include <sys/sysctl.h> >+#include <sys/systm.h> > #include <vm/vm.h> > #include <vm/pmap.h> >-#include <machine/elf.h> >-#include <machine/md_var.h> >- >-#ifdef __amd64__ >-#define KERNELDUMP_VERSION KERNELDUMP_AMD64_VERSION >-#define EM_VALUE EM_X86_64 >-#else >-#define KERNELDUMP_VERSION KERNELDUMP_I386_VERSION >-#define EM_VALUE EM_386 >-#endif >- >-CTASSERT(sizeof(struct kerneldumpheader) == 512); > > int do_minidump = 1; > SYSCTL_INT(_debug, OID_AUTO, minidump, CTLFLAG_RWTUN, &do_minidump, 0, > "Enable mini crash dumps"); > >-/* >- * Don't touch the first SIZEOF_METADATA bytes on the dump device. This >- * is to protect us from metadata and to protect metadata from us. 
>- */ >-#define SIZEOF_METADATA (64*1024) >- >-#define MD_ALIGN(x) (((off_t)(x) + PAGE_MASK) & ~PAGE_MASK) >-#define DEV_ALIGN(x) (((off_t)(x) + (DEV_BSIZE-1)) & ~(DEV_BSIZE-1)) >- >-struct md_pa { >- vm_paddr_t md_start; >- vm_paddr_t md_size; >-}; >- >-typedef int callback_t(struct md_pa *, int, void *); >- >-static struct kerneldumpheader kdh; >-static off_t dumplo, fileofs; >- >-/* Handle buffered writes. */ >-static char buffer[DEV_BSIZE]; >-static size_t fragsz; >- >-/* 20 phys_avail entry pairs correspond to 10 md_pa's */ >-static struct md_pa dump_map[10]; >- >-static void >-md_pa_init(void) >-{ >- int n, idx; >- >- bzero(dump_map, sizeof(dump_map)); >- for (n = 0; n < sizeof(dump_map) / sizeof(dump_map[0]); n++) { >- idx = n * 2; >- if (dump_avail[idx] == 0 && dump_avail[idx + 1] == 0) >- break; >- dump_map[n].md_start = dump_avail[idx]; >- dump_map[n].md_size = dump_avail[idx + 1] - dump_avail[idx]; >- } >-} >- >-static struct md_pa * >-md_pa_first(void) >-{ >- >- return (&dump_map[0]); >-} >- >-static struct md_pa * >-md_pa_next(struct md_pa *mdp) >-{ >- >- mdp++; >- if (mdp->md_size == 0) >- mdp = NULL; >- return (mdp); >-} >- >-static int >-buf_write(struct dumperinfo *di, char *ptr, size_t sz) >-{ >- size_t len; >- int error; >- >- while (sz) { >- len = DEV_BSIZE - fragsz; >- if (len > sz) >- len = sz; >- bcopy(ptr, buffer + fragsz, len); >- fragsz += len; >- ptr += len; >- sz -= len; >- if (fragsz == DEV_BSIZE) { >- error = dump_write(di, buffer, 0, dumplo, >- DEV_BSIZE); >- if (error) >- return error; >- dumplo += DEV_BSIZE; >- fragsz = 0; >- } >- } >- >- return (0); >-} >- >-static int >-buf_flush(struct dumperinfo *di) >-{ >- int error; >- >- if (fragsz == 0) >- return (0); >- >- error = dump_write(di, buffer, 0, dumplo, DEV_BSIZE); >- dumplo += DEV_BSIZE; >- fragsz = 0; >- return (error); >-} >- >-#define PG2MB(pgs) ((pgs + (1 << 8) - 1) >> 8) >- >-static int >-cb_dumpdata(struct md_pa *mdp, int seqnr, void *arg) >+void 
>+dumpsys_map_chunk(vm_paddr_t pa, size_t chunk, void **va) > { >- struct dumperinfo *di = (struct dumperinfo*)arg; >- vm_paddr_t a, pa; >- void *va; >- uint64_t pgs; >- size_t counter, sz, chunk; >- int i, c, error, twiddle; >- u_int maxdumppgs; >+ int i; >+ vm_paddr_t a; > >- error = 0; /* catch case in which chunk size is 0 */ >- counter = 0; /* Update twiddle every 16MB */ >- twiddle = 0; >- va = 0; >- pgs = mdp->md_size / PAGE_SIZE; >- pa = mdp->md_start; >- maxdumppgs = min(di->maxiosize / PAGE_SIZE, MAXDUMPPGS); >- if (maxdumppgs == 0) /* seatbelt */ >- maxdumppgs = 1; >- >- printf(" chunk %d: %juMB (%ju pages)", seqnr, (uintmax_t)PG2MB(pgs), >- (uintmax_t)pgs); >- >- while (pgs) { >- chunk = pgs; >- if (chunk > maxdumppgs) >- chunk = maxdumppgs; >- sz = chunk << PAGE_SHIFT; >- counter += sz; >- if (counter >> 24) { >- printf(" %ju", (uintmax_t)PG2MB(pgs)); >- counter &= (1<<24) - 1; >- } >- for (i = 0; i < chunk; i++) { >- a = pa + i * PAGE_SIZE; >- va = pmap_kenter_temporary(trunc_page(a), i); >- } >- >- wdog_kern_pat(WD_LASTVAL); >- >- error = dump_write(di, va, 0, dumplo, sz); >- if (error) >- break; >- dumplo += sz; >- pgs -= chunk; >- pa += sz; >- >- /* Check for user abort. */ >- c = cncheckc(); >- if (c == 0x03) >- return (ECANCELED); >- if (c != -1) >- printf(" (CTRL-C to abort) "); >- } >- printf(" ... %s\n", (error) ? 
"fail" : "ok"); >- return (error); >-} >- >-static int >-cb_dumphdr(struct md_pa *mdp, int seqnr, void *arg) >-{ >- struct dumperinfo *di = (struct dumperinfo*)arg; >- Elf_Phdr phdr; >- uint64_t size; >- int error; >- >- size = mdp->md_size; >- bzero(&phdr, sizeof(phdr)); >- phdr.p_type = PT_LOAD; >- phdr.p_flags = PF_R; /* XXX */ >- phdr.p_offset = fileofs; >- phdr.p_vaddr = mdp->md_start; >- phdr.p_paddr = mdp->md_start; >- phdr.p_filesz = size; >- phdr.p_memsz = size; >- phdr.p_align = PAGE_SIZE; >- >- error = buf_write(di, (char*)&phdr, sizeof(phdr)); >- fileofs += phdr.p_filesz; >- return (error); >-} >- >-static int >-cb_size(struct md_pa *mdp, int seqnr, void *arg) >-{ >- uint64_t *sz = (uint64_t*)arg; >- >- *sz += (uint64_t)mdp->md_size; >- return (0); >-} >- >-static int >-foreach_chunk(callback_t cb, void *arg) >-{ >- struct md_pa *mdp; >- int error, seqnr; >- >- seqnr = 0; >- mdp = md_pa_first(); >- while (mdp != NULL) { >- error = (*cb)(mdp, seqnr++, arg); >- if (error) >- return (-error); >- mdp = md_pa_next(mdp); >+ for (i = 0; i < chunk; i++) { >+ a = pa + i * PAGE_SIZE; >+ *va = pmap_kenter_temporary(trunc_page(a), i); > } >- return (seqnr); >-} >- >-int >-dumpsys(struct dumperinfo *di) >-{ >- Elf_Ehdr ehdr; >- uint64_t dumpsize; >- off_t hdrgap; >- size_t hdrsz; >- int error; >- >- if (do_minidump) >- return (minidumpsys(di)); >- >- bzero(&ehdr, sizeof(ehdr)); >- ehdr.e_ident[EI_MAG0] = ELFMAG0; >- ehdr.e_ident[EI_MAG1] = ELFMAG1; >- ehdr.e_ident[EI_MAG2] = ELFMAG2; >- ehdr.e_ident[EI_MAG3] = ELFMAG3; >- ehdr.e_ident[EI_CLASS] = ELF_CLASS; >-#if BYTE_ORDER == LITTLE_ENDIAN >- ehdr.e_ident[EI_DATA] = ELFDATA2LSB; >-#else >- ehdr.e_ident[EI_DATA] = ELFDATA2MSB; >-#endif >- ehdr.e_ident[EI_VERSION] = EV_CURRENT; >- ehdr.e_ident[EI_OSABI] = ELFOSABI_STANDALONE; /* XXX big picture? 
*/ >- ehdr.e_type = ET_CORE; >- ehdr.e_machine = EM_VALUE; >- ehdr.e_phoff = sizeof(ehdr); >- ehdr.e_flags = 0; >- ehdr.e_ehsize = sizeof(ehdr); >- ehdr.e_phentsize = sizeof(Elf_Phdr); >- ehdr.e_shentsize = sizeof(Elf_Shdr); >- >- md_pa_init(); >- >- /* Calculate dump size. */ >- dumpsize = 0L; >- ehdr.e_phnum = foreach_chunk(cb_size, &dumpsize); >- hdrsz = ehdr.e_phoff + ehdr.e_phnum * ehdr.e_phentsize; >- fileofs = MD_ALIGN(hdrsz); >- dumpsize += fileofs; >- hdrgap = fileofs - DEV_ALIGN(hdrsz); >- >- /* Determine dump offset on device. */ >- if (di->mediasize < SIZEOF_METADATA + dumpsize + sizeof(kdh) * 2) { >- error = ENOSPC; >- goto fail; >- } >- dumplo = di->mediaoffset + di->mediasize - dumpsize; >- dumplo -= sizeof(kdh) * 2; >- >- mkdumpheader(&kdh, KERNELDUMPMAGIC, KERNELDUMP_VERSION, dumpsize, >- di->blocksize); >- >- printf("Dumping %llu MB (%d chunks)\n", (long long)dumpsize >> 20, >- ehdr.e_phnum); >- >- /* Dump leader */ >- error = dump_write(di, &kdh, 0, dumplo, sizeof(kdh)); >- if (error) >- goto fail; >- dumplo += sizeof(kdh); >- >- /* Dump ELF header */ >- error = buf_write(di, (char*)&ehdr, sizeof(ehdr)); >- if (error) >- goto fail; >- >- /* Dump program headers */ >- error = foreach_chunk(cb_dumphdr, di); >- if (error < 0) >- goto fail; >- buf_flush(di); >- >- /* >- * All headers are written using blocked I/O, so we know the >- * current offset is (still) block aligned. Skip the alignement >- * in the file to have the segment contents aligned at page >- * boundary. We cannot use MD_ALIGN on dumplo, because we don't >- * care and may very well be unaligned within the dump device. >- */ >- dumplo += hdrgap; >- >- /* Dump memory chunks (updates dumplo) */ >- error = foreach_chunk(cb_dumpdata, di); >- if (error < 0) >- goto fail; >- >- /* Dump trailer */ >- error = dump_write(di, &kdh, 0, dumplo, sizeof(kdh)); >- if (error) >- goto fail; >- >- /* Signal completion, signoff and exit stage left. 
*/ >- dump_write(di, NULL, 0, 0, 0); >- printf("\nDump complete\n"); >- return (0); >- >- fail: >- if (error < 0) >- error = -error; >- >- if (error == ECANCELED) >- printf("\nDump aborted\n"); >- else if (error == ENOSPC) >- printf("\nDump failed. Partition too small.\n"); >- else >- printf("\n** DUMP FAILED (ERROR %d) **\n", error); >- return (error); > } >-- >1.9.3 >