mm/hugetlb.c: avoid bogus counter of surplus huge page
author:    Hillf Danton <dhillf@gmail.com>
           Tue, 10 Jan 2012 23:08:30 +0000 (15:08 -0800)
committer: Linus Torvalds <torvalds@linux-foundation.org>
           Wed, 11 Jan 2012 00:30:45 +0000 (16:30 -0800)
If the newly allocated huge page has to be handed back to the page allocator
for any reason, the counters that were incremented in anticipation of the
allocation must be decremented again.

This affects only s390 at present, since it is the only architecture whose
arch_prepare_hugepage() can fail.

Signed-off-by: Hillf Danton <dhillf@gmail.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bb7dc405634ff38bb25a4765618a9e295d3069d7..ea8c3a4cd2ae8acdf52a7a4e862e277f2390c265 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -800,7 +800,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
 
        if (page && arch_prepare_hugepage(page)) {
                __free_pages(page, huge_page_order(h));
-               return NULL;
+               page = NULL;
        }
 
        spin_lock(&hugetlb_lock);
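
For context, a rough sketch of the surrounding alloc_buddy_huge_page() logic
(paraphrased and simplified, not a verbatim copy of the file) shows why
falling through with page = NULL is the right fix: the surplus counters are
incremented optimistically before the buddy allocation, and the locked
section after this hunk decrements them again when page ends up NULL. The
early return skipped that rollback, leaving nr_huge_pages and
surplus_huge_pages inflated whenever arch_prepare_hugepage() failed.

	/*
	 * Sketch of the counter handling in alloc_buddy_huge_page();
	 * simplified (NUMA node handling and gfp details omitted), not a
	 * verbatim copy of mm/hugetlb.c.
	 */
	static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
	{
		struct page *page;

		spin_lock(&hugetlb_lock);
		if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages) {
			spin_unlock(&hugetlb_lock);
			return NULL;
		}
		/* Optimistically assume the allocation will succeed. */
		h->nr_huge_pages++;
		h->surplus_huge_pages++;
		spin_unlock(&hugetlb_lock);

		page = alloc_pages(htlb_alloc_mask | __GFP_COMP,
				   huge_page_order(h));

		if (page && arch_prepare_hugepage(page)) {
			__free_pages(page, huge_page_order(h));
			page = NULL;	/* fall through so the counters are undone */
		}

		spin_lock(&hugetlb_lock);
		if (page) {
			set_compound_page_dtor(page, free_huge_page);
			h->nr_huge_pages_node[page_to_nid(page)]++;
			h->surplus_huge_pages_node[page_to_nid(page)]++;
		} else {
			/* Allocation (or arch preparation) failed: roll back. */
			h->nr_huge_pages--;
			h->surplus_huge_pages--;
		}
		spin_unlock(&hugetlb_lock);

		return page;
	}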