While reserving KVA for a node's lmem_map, we have to make sure that
node_remap_start_pfn[] is aligned to a proper pmd boundary.
(node_remap_start_pfn[] gets its value from node_end_pfn[].)
Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Shai Fultheim <shai@scalex86.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
/* now the roundup is correct, convert to PAGE_SIZE pages */
size = size * PTRS_PER_PTE;
+ if (node_end_pfn[nid] & (PTRS_PER_PTE-1)) {
+ /*
+ * Adjust size if node_end_pfn is not on a proper
+ * pmd boundary. remap_numa_kva will barf otherwise.
+ */
+ size += node_end_pfn[nid] & (PTRS_PER_PTE-1);
+ }
+
/*
* Validate the region we are allocating only contains valid
* pages.