TARGET_ARCH=powerpc64 got a fix for bugzilla 205458. It turns out that TARGET_ARCH=powerpc needs to detect when it is running on a powerpc64 CPU and do the same thing. The issue is that on powerpc64 it is inappropriate to restore the sprg0 value to its openfirmware value, because FreeBSD's real-mode fault handling is in use instead of openfirmware's original virtual mode. Quoting Nathan W. from comment #4 of 205458:

"Where this explodes is if OF uses an unmapped SLB entry. The SLB fault handler runs in real mode and refers to the PCPU pointer in SPRG0, which blows up the kernel. Having a value of SPRG0 that works for the kernel is less fatal than preserving OF's value in this case."

I know that part of the code already detects the powerpc64 context and does things differently when emulating powerpc on powerpc64 (such as limiting RAM use as a consequence). The powerpc64-vs-not status needs to be recorded and used to control the sprg0 restoration choice: avoid restoring openfirmware's value on powerpc64; otherwise restore it.

https://lists.freebsd.org/pipermail/freebsd-ppc/2017-May/008891.html has details of the periodic/random panics that can occur as things are. (I have other reports on the lists as well, but the above has the best details, such as backtraces and other information.)
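The record-then-branch approach described above can be sketched with plain C. This is only an illustrative model, not FreeBSD kernel code: the helper names are made up, and the value used for PPC_FEATURE_64 is assumed for the demo.

```c
#include <assert.h>

/* Illustrative sketch, not FreeBSD code: record once whether the 32-bit
 * kernel is running on a 64-bit CPU, then consult that record when
 * deciding whether to restore openfirmware's SPRG0 value.
 * The PPC_FEATURE_64 value here is assumed for illustration. */
#define PPC_FEATURE_64 0x40000000u

static unsigned int cpu_features;   /* stands in for the kernel's feature mask */
static int cpu_is_powerpc64;        /* the recorded powerpc64-vs-not status */

static void record_cpu_kind(unsigned int features)
{
    cpu_features = features;
    cpu_is_powerpc64 = (features & PPC_FEATURE_64) != 0;
}

/* Restore-side choice: skip restoring openfirmware's value on powerpc64. */
static int should_restore_ofw_sprg0(void)
{
    return !cpu_is_powerpc64;
}
```

On a PowerMac G5 the feature mask would carry the 64-bit bit, so the restore is skipped; on a true 32-bit CPU it proceeds as before.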
(In reply to Mark Millard from comment #0) I expect that the following patch would fix the problem:

# svnlite diff /usr/src/sys/powerpc/ofw/ofw_machdep.c
Index: /usr/src/sys/powerpc/ofw/ofw_machdep.c
===================================================================
--- /usr/src/sys/powerpc/ofw/ofw_machdep.c	(revision 317820)
+++ /usr/src/sys/powerpc/ofw/ofw_machdep.c	(working copy)
@@ -147,7 +147,8 @@
 	 * PCPU data cannot be used until this routine is called !
 	 */
 #ifndef __powerpc64__
-	__asm __volatile("mtsprg0 %0" :: "r"(ofw_sprg0_save));
+	if (cpu_features & PPC_FEATURE_64 != PPC_FEATURE_64)
+		__asm __volatile("mtsprg0 %0" :: "r"(ofw_sprg0_save));
 #endif
 }
 #endif

This is based on cpu_features already having had PPC_FEATURE_64 masked in before this point when things are running on a PowerMac G5 or other powerpc64.
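The condition in this patch has a C operator-precedence pitfall, which the next comment corrects: `!=` binds tighter than `&`, so `cpu_features & PPC_FEATURE_64 != PPC_FEATURE_64` parses as `cpu_features & (PPC_FEATURE_64 != PPC_FEATURE_64)`, which is always 0. A small standalone illustration (the feature-bit value is made up for the demo):

```c
#include <assert.h>

#define PPC_FEATURE_64 0x40000000u   /* illustrative bit value */

/* Parses as: features & (PPC_FEATURE_64 != PPC_FEATURE_64),
 * i.e. features & 0, so this is false for every input. */
static int buggy_is_32bit(unsigned int features)
{
    return features & PPC_FEATURE_64 != PPC_FEATURE_64;
}

/* The corrected idiom from the follow-up patch. */
static int fixed_is_32bit(unsigned int features)
{
    return !(features & PPC_FEATURE_64);
}
```

With the buggy form the guarded mtsprg0 would never execute, even on a genuine 32-bit CPU.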
(In reply to Mark Millard from comment #1) It helps to type all my ()'s. Trying again, using the idiom from elsewhere in the file:

# svnlite diff /usr/src/sys/powerpc/ofw/ofw_machdep.c
Index: /usr/src/sys/powerpc/ofw/ofw_machdep.c
===================================================================
--- /usr/src/sys/powerpc/ofw/ofw_machdep.c	(revision 317820)
+++ /usr/src/sys/powerpc/ofw/ofw_machdep.c	(working copy)
@@ -147,7 +147,8 @@
 	 * PCPU data cannot be used until this routine is called !
 	 */
 #ifndef __powerpc64__
-	__asm __volatile("mtsprg0 %0" :: "r"(ofw_sprg0_save));
+	if (!(cpu_features & PPC_FEATURE_64))
+		__asm __volatile("mtsprg0 %0" :: "r"(ofw_sprg0_save));
 #endif
 }
 #endif
(In reply to Mark Millard from comment #2) Testing shows that early boot hangs when the patch has been applied:

kernel entry at <?> . . .

is the last thing displayed. Maybe there is a later stage before which the check is not appropriate?
(In reply to Mark Millard from comment #3) I forgot to deal with the prepare side of things. So, now with both the prepare and restore code fixed, this code boots.

# svnlite diff /usr/src/sys/powerpc/ofw/ofw_machdep.c
Index: /usr/src/sys/powerpc/ofw/ofw_machdep.c
===================================================================
--- /usr/src/sys/powerpc/ofw/ofw_machdep.c	(revision 317820)
+++ /usr/src/sys/powerpc/ofw/ofw_machdep.c	(working copy)
@@ -111,26 +111,27 @@
 	 * Assume that interrupt are disabled at this point, or
 	 * SPRG1-3 could be trashed
 	 */
-#ifdef __powerpc64__
-	__asm __volatile("mtsprg1 %0\n\t"
-	    "mtsprg2 %1\n\t"
-	    "mtsprg3 %2\n\t"
-	    :
-	    : "r"(ofmsr[2]),
-	    "r"(ofmsr[3]),
-	    "r"(ofmsr[4]));
-#else
-	__asm __volatile("mfsprg0 %0\n\t"
-	    "mtsprg0 %1\n\t"
-	    "mtsprg1 %2\n\t"
-	    "mtsprg2 %3\n\t"
-	    "mtsprg3 %4\n\t"
-	    : "=&r"(ofw_sprg0_save)
-	    : "r"(ofmsr[1]),
-	    "r"(ofmsr[2]),
-	    "r"(ofmsr[3]),
-	    "r"(ofmsr[4]));
+#ifndef __powerpc64__
+	if (!(cpu_features & PPC_FEATURE_64))
+		__asm __volatile("mfsprg0 %0\n\t"
+		    "mtsprg0 %1\n\t"
+		    "mtsprg1 %2\n\t"
+		    "mtsprg2 %3\n\t"
+		    "mtsprg3 %4\n\t"
+		    : "=&r"(ofw_sprg0_save)
+		    : "r"(ofmsr[1]),
+		    "r"(ofmsr[2]),
+		    "r"(ofmsr[3]),
+		    "r"(ofmsr[4]));
+	else
 #endif
+	__asm __volatile("mtsprg1 %0\n\t"
+	    "mtsprg2 %1\n\t"
+	    "mtsprg3 %2\n\t"
+	    :
+	    : "r"(ofmsr[2]),
+	    "r"(ofmsr[3]),
+	    "r"(ofmsr[4]));
 }
 
 static __inline void
@@ -147,7 +148,8 @@
 	 * PCPU data cannot be used until this routine is called !
 	 */
 #ifndef __powerpc64__
-	__asm __volatile("mtsprg0 %0" :: "r"(ofw_sprg0_save));
+	if (!(cpu_features & PPC_FEATURE_64))
+		__asm __volatile("mtsprg0 %0" :: "r"(ofw_sprg0_save));
 #endif
 }
 #endif
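The lesson from this comment is that the prepare (save) side and the restore side must be gated by the same runtime test, or the restore writes back a value that was never saved. That pairing can be modeled with plain variables. This is a toy model, not kernel code; the names only echo the real ones, and the values are arbitrary.

```c
#include <assert.h>

/* Toy model of the SPRG0 handling in the patch above: prepare saves the
 * kernel's value and hands the register to openfirmware only on a true
 * 32-bit CPU; restore undoes exactly what prepare did. On powerpc64 the
 * kernel value stays in place so real-mode fault handlers still find it.
 * All names and values here are illustrative. */

static unsigned long sprg0;        /* stands in for the real register */
static unsigned long sprg0_save;   /* like ofw_sprg0_save */

static void model_prepare(int cpu_is_64bit, unsigned long of_value)
{
    if (!cpu_is_64bit) {
        sprg0_save = sprg0;   /* save the kernel's value */
        sprg0 = of_value;     /* give SPRG0 to openfirmware */
    }
}

static void model_restore(int cpu_is_64bit)
{
    if (!cpu_is_64bit)
        sprg0 = sprg0_save;   /* put the kernel's value back */
}
```

Gating only the restore (as in the first patch) would correspond to calling model_restore with a stale sprg0_save, which is the mismatch that made the earlier kernel hang at boot.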
(In reply to Mark Millard from comment #4) While the patch is probably still appropriate, testing has shown that it is not sufficient to prevent the periodic/random panics that I have been getting.
Created attachment 185360 a