There are several tricky races involved with growing the TSB. So for
now, just use base-size TSBs for user contexts; we can revisit enabling
growth later.
One part of the SMP problem is that tsb_context_switch() can see
partially updated TSB configuration state if tsb_grow() is running in
parallel. That is easily solved with a seqlock: tsb_grow() takes it as
a writer, and tsb_context_switch() takes it as a reader to capture a
consistent snapshot of all the TSB config state, as sketched below.
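Roughly, the idea would look like this (a sketch only, not part of this
patch; the tsb_config_lock name is made up, and the real
tsb_context_switch() path is assembly, rendered here as C):

    #include <linux/seqlock.h>

    static seqlock_t tsb_config_lock = SEQLOCK_UNLOCKED;

    void tsb_grow(struct mm_struct *mm, unsigned long rss, gfp_t gfp_flags)
    {
        write_seqlock(&tsb_config_lock);
        /* ... allocate and install the new TSB, updating
         * mm->context.tsb, mm->context.tsb_nentries,
         * mm->context.tsb_rss_limit, etc. ...
         */
        write_sequnlock(&tsb_config_lock);
    }

    void tsb_context_switch(struct mm_struct *mm)
    {
        struct tsb *tsb;
        unsigned long nentries, seq;

        do {
            seq = read_seqbegin(&tsb_config_lock);
            tsb = mm->context.tsb;
            nentries = mm->context.tsb_nentries;
        } while (read_seqretry(&tsb_config_lock, seq));

        /* ... load the captured, consistent state into the MMU ... */
    }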
Then there is flush_tsb_user() running in parallel with a tsb_grow().
In theory we could take the seqlock as a reader there too, and just
resample the TSB pointer and reflush, but that looks really ugly; see
the sketch below.
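The rejected variant would be something like this (sketch only;
__flush_one_tsb() is a made-up helper standing in for the existing
flush loop, and flush_tsb_user() takes an mmu_gather here):

    void flush_tsb_user(struct mmu_gather *mp)
    {
        struct mm_struct *mm = mp->mm;
        unsigned long seq;

        do {
            seq = read_seqbegin(&tsb_config_lock);
            /* Resample the TSB pointer on each pass in case
             * tsb_grow() swapped in a new table under us, then
             * (re)flush the affected entries.
             */
            __flush_one_tsb(mm->context.tsb,
                            mm->context.tsb_nentries, mp);
        } while (read_seqretry(&tsb_config_lock, seq));
    }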
Lastly, I believe there is a case with threads that results in a TSB
entry lock bit being set spuriously, which will cause the next access
to that TSB entry to wedge the cpu (since the TSB entry lock bit will
never clear). It's either copy_tsb() or some bug elsewhere in the TSB
assembly; see the pseudo-C below for why a leaked lock bit wedges.
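For reference, the entry update protocol the assembly implements is
roughly the following (a pseudo-C approximation of the TSB_LOCK_TAG
and TSB_WRITE macros; the exact lock bit position and the cmpxchg()
rendering are assumptions here):

    #define TSB_TAG_LOCK    (1UL << 47)

    static void tsb_write_entry(struct tsb *ent, unsigned long tag,
                                unsigned long pte)
    {
        unsigned long old;

        /* Lock the entry by atomically setting the lock bit in
         * the tag word.  A spuriously-set lock bit makes every
         * later update of this entry spin here forever, which is
         * exactly the wedge described above.
         */
        do {
            old = ent->tag;
            while (old & TSB_TAG_LOCK)
                old = ent->tag;
        } while (cmpxchg(&ent->tag, old, old | TSB_TAG_LOCK) != old);

        ent->pte = pte;
        wmb();
        ent->tag = tag;    /* storing the real tag clears the lock bit */
    }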
Signed-off-by: David S. Miller <davem@davemloft.net>
     struct page *page;
     unsigned long pfn;
     unsigned long pg_flags;
-    unsigned long mm_rss;
 
     pfn = pte_pfn(pte);
     if (pfn_valid(pfn) &&

     }
 
     mm = vma->vm_mm;
-    mm_rss = get_mm_rss(mm);
-    if (mm_rss >= mm->context.tsb_rss_limit)
-        tsb_grow(mm, mm_rss, GFP_ATOMIC);
-
     if ((pte_val(pte) & _PAGE_ALL_SZ_BITS) == _PAGE_SZBITS) {
         struct tsb *tsb;
         unsigned long tag;
 int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 {
-    unsigned long initial_rss;
-
     mm->context.sparc64_ctx_val = 0UL;
 
     /* copy_mm() copies over the parent's mm_struct before calling
      * us, so we need to zero out the TSB pointer or else tsb_grow()
      * will be confused and think there is an older TSB to free up.
      */
     mm->context.tsb = NULL;
-
-    /* If this is fork, inherit the parent's TSB size.  We would
-     * grow it to that size on the first page fault anyways.
-     */
-    initial_rss = mm->context.tsb_nentries;
-    if (initial_rss)
-        initial_rss -= 1;
-
-    tsb_grow(mm, initial_rss, GFP_KERNEL);
+    tsb_grow(mm, 0, GFP_KERNEL);
 
     if (unlikely(!mm->context.tsb))
         return -ENOMEM;