From cd02cf1aceeac5b0bbbb2d6fbf614c987dcd396f Mon Sep 17 00:00:00 2001
From: Qian Cai <cai@lca.pw>
Date: Tue, 5 Mar 2019 15:49:57 -0800
Subject: [PATCH] mm/hotplug: fix an imbalance with DEBUG_PAGEALLOC

When onlining a memory block with DEBUG_PAGEALLOC, the kernel unmaps
the pages in the block as they are freed into the page allocator.
However, offlining does not map those pages back beforehand.  As a
result, onlining triggers the panic below on ppc64le, because ppc64le
checks that a page is mapped before unmapping it.  The imbalance
exists for all arches, though, where double-unmappings could happen.
Therefore, let the kernel map those pages in generic_online_page()
before they are freed into the page allocator for the first time,
where the page count will be set to one.

On the other hand, booting works fine because, at least on IBM POWER8,
it goes through

early_setup
  early_init_mmu
    hash__early_init_mmu
      htab_initialize [1]
        htab_bolt_mapping [2]

which effectively maps all memblock regions just like
kernel_map_linear_page() does, so the later mem_init() ->
memblock_free_all() will unmap them just fine without any imbalance.
On other arches without this imbalance check, pages are still unmapped
at most once.

[1] for_each_memblock(memory, reg) {
	base = (unsigned long)__va(reg->base);
	size = reg->size;

	DBG("creating mapping for region: %lx..%lx (prot: %lx)\n",
	    base, size, prot);

	BUG_ON(htab_bolt_mapping(base, base + size, __pa(base),
				 prot, mmu_linear_psize, mmu_kernel_ssize));
    }

[2] linear_map_hash_slots[paddr >> PAGE_SHIFT] = ret | 0x80;

kernel BUG at arch/powerpc/mm/hash_utils_64.c:1815!
Oops: Exception in kernel mode, sig: 5 [#1]
LE SMP NR_CPUS=256 DEBUG_PAGEALLOC NUMA pSeries
CPU: 2 PID: 4298 Comm: bash Not tainted 5.0.0-rc7+ #15
NIP:  c000000000062670 LR: c00000000006265c CTR: 0000000000000000
REGS: c0000005bf8a75b0 TRAP: 0700   Not tainted  (5.0.0-rc7+)
MSR:  800000000282b033  CR: 28422842  XER: 00000000
CFAR: c000000000804f44 IRQMASK: 1
NIP [c000000000062670] __kernel_map_pages+0x2e0/0x4f0
LR [c00000000006265c] __kernel_map_pages+0x2cc/0x4f0
Call Trace:
  __kernel_map_pages+0x2cc/0x4f0
  free_unref_page_prepare+0x2f0/0x4d0
  free_unref_page+0x44/0x90
  __online_page_free+0x84/0x110
  online_pages_range+0xc0/0x150
  walk_system_ram_range+0xc8/0x120
  online_pages+0x280/0x5a0
  memory_subsys_online+0x1b4/0x270
  device_online+0xc0/0xf0
  state_store+0xc0/0x180
  dev_attr_store+0x3c/0x60
  sysfs_kf_write+0x70/0xb0
  kernfs_fop_write+0x10c/0x250
  __vfs_write+0x48/0x240
  vfs_write+0xd8/0x210
  ksys_write+0x70/0x120
  system_call+0x5c/0x70

Link: http://lkml.kernel.org/r/20190301220814.97339-1-cai@lca.pw
Signed-off-by: Qian Cai <cai@lca.pw>
Reviewed-by: Andrew Morton
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman [powerpc]
Cc: Michal Hocko
Cc: Souptick Joarder
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 mm/memory_hotplug.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index ebc8c1e75355..6b05576fb4ec 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -658,6 +658,7 @@ EXPORT_SYMBOL_GPL(__online_page_free);
 
 static void generic_online_page(struct page *page, unsigned int order)
 {
+	kernel_map_pages(page, 1 << order, 1);
 	__free_pages_core(page, order);
 	totalram_pages_add(1UL << order);
 #ifdef CONFIG_HIGHMEM
-- 
2.30.2
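
P.S. To see the imbalance in isolation, here is a minimal user-space
model; all the names in it are made up for illustration and it is not
kernel code.  slots[] stands in for linear_map_hash_slots[], assert()
for the BUG_ON() at hash_utils_64.c:1815, and free_page_to_allocator()
for DEBUG_PAGEALLOC unmapping a page as it is freed.

#include <assert.h>
#include <stdio.h>

#define MAPPED	0x80	/* the "mapped" bit, as in [2] above */

static unsigned char slots[3];

static void map_page(int i)
{
	slots[i] |= MAPPED;
}

static void unmap_page(int i)
{
	/* models ppc64le checking a page is mapped before unmapping */
	assert(slots[i] & MAPPED);
	slots[i] &= ~MAPPED;
}

/* with DEBUG_PAGEALLOC, freeing a page into the allocator unmaps it */
static void free_page_to_allocator(int i)
{
	unmap_page(i);
}

int main(void)
{
	/* boot: htab_bolt_mapping() maps the region [1], then
	 * memblock_free_all() frees it, so map/unmap balance out */
	map_page(0);
	free_page_to_allocator(0);

	/* online with this patch: kernel_map_pages(page, 1 << order, 1)
	 * runs before the first free, keeping the balance */
	map_page(1);
	free_page_to_allocator(1);

	/* online without this patch: nothing mapped the page after the
	 * earlier offline, so this unmap finds it already unmapped and
	 * trips the check */
	free_page_to_allocator(2);

	printf("not reached without the fix\n");
	return 0;
}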