Based upon a bug report by Mariusz Kozlowski.

smp_tsb_sync() now uses smp_call_function_masked(), which requires that
the caller have preemption disabled, so wrap the call with
preempt_disable()/preempt_enable().
Signed-off-by: David S. Miller <davem@davemloft.net>
 /* arch/sparc64/mm/tsb.c
  *
- * Copyright (C) 2006 David S. Miller <davem@davemloft.net>
+ * Copyright (C) 2006, 2008 David S. Miller <davem@davemloft.net>
  */

 #include <linux/kernel.h>
+#include <linux/preempt.h>
 #include <asm/system.h>
 #include <asm/page.h>
 #include <asm/tlbflush.h>

...

 	tsb_context_switch(mm);

 	/* Now force other processors to do the same. */
+	preempt_disable();
 	smp_tsb_sync(mm);
+	preempt_enable();

 	/* Now it is safe to free the old tsb. */
 	kmem_cache_free(tsb_caches[old_cache_index], old_tsb);
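
The rule being honored here is not unique to this code path: the generic
cross-CPU call helpers also read smp_processor_id() and require the caller
to disable preemption. Below is a minimal, hypothetical sketch of that
caller-side pattern written against the generic smp_call_function_many()
interface rather than the smp_call_function_masked() path used by
smp_tsb_sync(); the function names are illustrative and not part of this
patch.

#include <linux/cpumask.h>
#include <linux/preempt.h>
#include <linux/smp.h>

/* Hypothetical callback: runs on each targeted remote CPU. */
static void example_remote_sync(void *info)
{
	/* e.g. reload some per-CPU state described by *info */
}

/* Hypothetical caller showing the preemption rule this patch enforces. */
static void example_sync_other_cpus(void *info)
{
	/*
	 * smp_call_function_many() must be called with preemption
	 * disabled, so pin this task to the current CPU for the
	 * duration of the cross call, just as the hunk above now
	 * does around smp_tsb_sync().
	 */
	preempt_disable();
	smp_call_function_many(cpu_online_mask, example_remote_sync,
			       info, true);
	preempt_enable();
}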