mm/memory_hotplug.c: simplify calculation of number of pages in __remove_pages()
author David Hildenbrand <david@redhat.com>
Tue, 7 Apr 2020 03:06:53 +0000 (20:06 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Tue, 7 Apr 2020 17:43:40 +0000 (10:43 -0700)
In commit 52fb87c81f11 ("mm/memory_hotplug: cleanup __remove_pages()"), we
cleaned up __remove_pages(), and introduced a shorter variant to calculate
the number of pages to the next section boundary.

Turns out we can make this calculation easier to read.  We always want to
have the number of pages (> 0) to the next section boundary, starting from
the current pfn.

We'll clean up __remove_pages() in a follow-up patch and directly make use
of this computation.

Suggested-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Link: http://lkml.kernel.org/r/20200228095819.10750-2-david@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/memory_hotplug.c

index d6f813cc12c1f503a31793f21901fc2b123c797a..df748b7834188dd6e1f6631360553efd93224e2d 100644
@@ -534,7 +534,8 @@ void __remove_pages(unsigned long pfn, unsigned long nr_pages,
        for (; pfn < end_pfn; pfn += cur_nr_pages) {
                cond_resched();
                /* Select all remaining pages up to the next section boundary */
-               cur_nr_pages = min(end_pfn - pfn, -(pfn | PAGE_SECTION_MASK));
+               cur_nr_pages = min(end_pfn - pfn,
+                                  SECTION_ALIGN_UP(pfn + 1) - pfn);
                __remove_section(pfn, cur_nr_pages, map_offset, altmap);
                map_offset = 0;
        }