Intel recommends flushing the TLBs first and then the caches
when changing caching attributes. c_p_a() previously did it the
other way round. Reorder that.

The procedure is still not fully compliant with the Intel
documentation, because Intel recommends an all-CPU
synchronization step between the TLB flushes and the cache
flushes.

However, on all newer Intel CPUs this is now a moot point,
because they support Self-Snoop and can skip the cache flush
step anyway.
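
For reference, a minimal user-space sketch of the intended ordering.
The *_stub helpers are hypothetical stand-ins for __flush_tlb_all(),
wbinvd() and the clflush path, not the kernel implementations:

    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical stand-ins that only log what the real kernel
     * primitives would do. */
    static void flush_tlb_all_stub(void) { puts("flush TLBs"); }
    static void wbinvd_stub(void)        { puts("wbinvd: flush caches"); }
    static void clflush_range_stub(void) { puts("clflush affected range"); }

    /* Ordering after this patch: invalidate the TLBs first, then flush
     * the caches (Self-Snoop capable CPUs could skip step 2). */
    static void flush_kernel_map_sketch(bool has_clflush)
    {
            flush_tlb_all_stub();           /* 1. TLB flush first */

            if (has_clflush)                /* 2. then the cache flush */
                    clflush_range_stub();
            else
                    wbinvd_stub();
    }

    int main(void)
    {
            flush_kernel_map_sketch(false); /* wbinvd fallback path */
            flush_kernel_map_sketch(true);  /* clflush path */
            return 0;
    }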
[ mingo@elte.hu: decoupled from clflush and ported it to x86.git ]
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
struct list_head *lh = (struct list_head *)arg;
struct page *p;
+ /*
+ * Flush all to work around Errata in early athlons regarding
+ * large page flushing.
+ */
+ __flush_tlb_all();
+
	/* High level code is not ready for clflush yet */
	if (0 && cpu_has_clflush) {
		list_for_each_entry(p, lh, lru)
			cache_flush_page(p);
	} else {
		if (boot_cpu_data.x86_model >= 4)
			wbinvd();
	}
-
- /*
- * Flush all to work around Errata in early athlons regarding
- * large page flushing.
- */
- __flush_tlb_all();
}
static void set_pmd_pte(pte_t *kpte, unsigned long address, pte_t pte)
struct list_head *l = (struct list_head *)arg;
struct page *pg;
+ __flush_tlb_all();
+
	/* When clflush is available always use it because it is
	   much cheaper than WBINVD. */
	/* clflush is still broken. Disable for now. */
	if (1 || !cpu_has_clflush)
		asm volatile("wbinvd" ::: "memory");
	else list_for_each_entry(pg, l, lru) {
		void *addr = page_address(pg);
		clflush_cache_range(addr, PAGE_SIZE);
	}
-	__flush_tlb_all();
}
static inline void flush_map(struct list_head *l)