Bug 120515

Summary: [acpi] [patch] acpi_alloc_wakeup_handler: can't alloc wake memory
Product: Base System
Reporter: Arthur Hartwig <arthur.hartwig>
Component: kern
Assignee: freebsd-acpi (Nobody) <acpi>
Status: Closed FIXED    
Severity: Affects Only Me    
Priority: Normal    
Version: Unspecified   
Hardware: Any   
OS: Any   

Description Arthur Hartwig 2008-02-11 01:40:02 UTC
The following message appears during startup:

acpi_alloc_wakeup_handler: can't alloc wake memory

Investigation shows the problem to be that there is no free memory with
sufficiently low physical address.

(This message has not been seen during startup of a generic kernel, but
Nokia has applied a number of tweaks to significantly increase the kernel
virtual address space, including setting KERNBASE to 0x60000000.)

Fix: In vm/vm_page.c, the function vm_page_startup() loops over the available
memory blocks in order of increasing address, calling
vm_pageq_add_new_page(pa), which calls vm_pageq_enqueue(), which adds each
page to the tail of the appropriate free queue. Thus, on return from
vm_page_startup(), the free queues are ordered by increasing physical
address. Removal from these queues is from the queue head, so the pages with
the lowest physical addresses are allocated first.

The following change steps through the available memory blocks in order
of decreasing base address (still adding each block's pages to the queues
in order of increasing address), thus relegating the pages with low
physical addresses to the end of the queues and increasing the likelihood
that suitable pages will remain to satisfy the request made by
acpi_alloc_wakeup_handler().



Patch attached as "file.diff":

*** 329,342 ****

        /*
         * Construct the free queue(s) in descending order (by physical
!        * address) so that the first 16MB of physical memory is allocated
         * last rather than first.  On large-memory machines, this avoids
         * the exhaustion of low physical memory before isa_dma_init has run.
         */
        cnt.v_page_count = 0;
        cnt.v_free_count = 0;
        list = getenv("vm.blacklist");
!       for (i = 0; phys_avail[i + 1] && npages > 0; i += 2) {
                pa = phys_avail[i];
                last_pa = phys_avail[i + 1];
                while (pa < last_pa && npages-- > 0) {
--- 330,344 ----

        /*
         * Construct the free queue(s) in descending order (by physical
!        * address) of base address of memory block so that the first
!        * 16MB of physical memory is allocated
         * last rather than first.  On large-memory machines, this avoids
         * the exhaustion of low physical memory before isa_dma_init has run.
         */
        cnt.v_page_count = 0;
        cnt.v_free_count = 0;
        list = getenv("vm.blacklist");
!       for (i = nblocks*2-2; i >= 0 && npages > 0; i -= 2) {
                pa = phys_avail[i];
                last_pa = phys_avail[i + 1];
                while (pa < last_pa && npages-- > 0) {
Comment 1 Remko Lodder freebsd_committer freebsd_triage 2008-02-11 06:43:19 UTC
Responsible Changed
From-To: freebsd-bugs->freebsd-acpi

Over to maintainer.
Comment 2 Dan Lukes 2008-02-11 15:34:58 UTC
Duplicate of kern/119356, but with a different fix.
Comment 3 Arthur Hartwig 2008-02-13 06:58:47 UTC
ext Dan Lukes wrote:
> Duplicate of kern/119356 but with different fix
G'day Dan,
    Thanks for the pointer to the other PR.

Based on the comments preceding the code I suggested be changed, I 
suspect the vm subsystem originally added pages to the head of the 
free queues and removed them from the head, so that the first pages 
added to the free queues were the last actually allocated. 
Unfortunately, this scheme has the undesirable consequence that, once 
the system is up and running, a free page is more likely to be reused 
soon, obliterating information that might be useful to someone dredging 
a crash dump for clues about why a panic occurred. But I haven't done 
the research.

Your proposed change in PR 119356 looks OK to me, apart from the 
disadvantages you mention, AND it doesn't help anything else that might 
need to allocate memory at low physical addresses (not that I know of 
anything).

Arthur
Comment 4 Alan Cox 2008-06-21 07:28:29 UTC
This patch does not apply to HEAD or RELENG_7 in two senses of "apply".  
The code affected by the patch has changed and so the patch will not 
mechanically apply.  More importantly, the new physical memory allocator 
in HEAD and RELENG_7 already addresses this problem in a systematic way.

I see no reason not to apply this patch to RELENG_6.

Alan
Comment 5 Dan Lukes 2008-06-22 13:49:02 UTC
Alan Cox wrote:
>  new physical memory allocator in HEAD and RELENG_7 already addresses this problem in a systematic way.
>  
>  I see no reason not to apply this patch to RELENG_6.

	At first, I analyzed the problem for myself. My 6.x-based installations 
affected by the problem are already patched, as I decided not to wait 
several months for the next release.

	I then offered the analysis and the hack to the public as well.

	Whether or not to apply it to RELENG_6 is the committer's decision. 
I have no problem with either outcome.

					Dan
Comment 6 Volker Werth freebsd_committer freebsd_triage 2008-10-17 15:22:32 UTC
According to Joerg, one or two pages should be allocated at module load
time (boot-time). This should be sufficient.
Comment 7 John Baldwin freebsd_committer freebsd_triage 2010-06-02 14:08:14 UTC
Can you verify that this issue is resolved in 7.2 or later?

-- 
John Baldwin
Comment 8 Andriy Gapon freebsd_committer freebsd_triage 2010-12-05 15:06:38 UTC
State Changed
From-To: open->closed

Based on the lack of recent feedback, I conclude that the problem 
is solved in currently supported versions.