This patch fixes a bug in ioremap_nocache(). ioremap_nocache() calls
__ioremap() with flags != 0 to do the real work, which in turn calls
change_page_attr_addr() if phys_addr + size - 1 < (end_pfn_map << PAGE_SHIFT).
But some pages between 0 and (end_pfn_map << PAGE_SHIFT) are not covered by
the identity mapping (for example, holes between NUMA nodes), so
change_page_attr_addr() fails for them.
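
To make the failure mode concrete, below is a minimal userspace sketch of
the guard this patch adds. It is illustrative only: end_pfn_map, the hole
location, and the one-argument lookup_address() are mocked stand-ins (the
real kernel helper takes a virtual address obtained via __va() plus a level
pointer), and a 64-bit build is assumed.

/*
 * Sketch, not kernel code: mirrors the decision __ioremap() makes about
 * calling change_page_attr_addr(). Compile with: cc sketch.c
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Mock: highest pfn covered by the identity map (4 GB here, assumed). */
static unsigned long end_pfn_map = 0x100000;

/* Mock of lookup_address(): pretend the identity map has a hole. */
static int lookup_address(unsigned long phys_addr)
{
	return !(phys_addr >= 0x8000000UL && phys_addr < 0x8100000UL);
}

static void check(unsigned long phys_addr, unsigned long size)
{
	if (phys_addr + size - 1 < (end_pfn_map << PAGE_SHIFT)) {
		if (!lookup_address(phys_addr)) {
			/* The new guard: no identity map, nothing to fix up. */
			printf("0x%09lx: in hole, skip change_page_attr_addr()\n",
			       phys_addr);
			return;
		}
		printf("0x%09lx: identity mapped, change_page_attr_addr()\n",
		       phys_addr);
	} else {
		printf("0x%09lx: above end_pfn_map, nothing to do\n",
		       phys_addr);
	}
}

int main(void)
{
	check(0x1000000UL, PAGE_SIZE);		/* ordinary mapped RAM */
	check(0x8000000UL, PAGE_SIZE);		/* hole: failed before this patch */
	check(0x200000000UL, PAGE_SIZE);	/* beyond end_pfn_map */
	return 0;
}

The real guard, visible in the hunk below, walks the identity mapping's page
tables via lookup_address(vaddr, &level) and simply returns err early, so the
attribute change is skipped for pages that have no identity mapping.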
This patch is based on the latest x86 git tree and has been tested on the
x86_64 platform.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 	if (phys_addr + size - 1 < (end_pfn_map << PAGE_SHIFT)) {
 		unsigned long npages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
 		unsigned long vaddr = (unsigned long) __va(phys_addr);
+		int level;
+
+		/*
+		 * If there is no identity map for this address,
+		 * change_page_attr_addr is unnecessary
+		 */
+		if (!lookup_address(vaddr, &level))
+			return err;
 		/*
 		 * Must use a address here and not struct page because the phys addr
 		 * can be a in hole between nodes and not have an memmap entry.