In ll_fault0(), the 'fault' struct is mostly cleared before the call to
cl_io_loop(), but ft_flags is not reset. It is ordinarily set by the
call to filemap_fault() in vvp_io_kernel_fault(), but if Lustre returns
before filemap_fault() is called, ft_flags still holds whatever value
the previous fault left behind.
ll_fault0() then consumes the ft_flags field. If that stale value has
the VM_FAULT_RETRY bit set, it is used as the return value of
ll_fault0() and, in turn, of ll_fault().
This is a problem when VM_FAULT_RETRY is in ft_flags: when the ->fault
handler (here, filemap_fault()) returns with that flag set, it has
already released the mmap semaphore, so do_page_fault() skips the
release. Incorrectly returning this flag from ll_fault() when the
semaphore was never actually dropped therefore means mmap_sem is never
upped in the kernel's do_page_fault().
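To make the locking contract concrete, here is a minimal caller-side
sketch, assuming the 3.x-era handle_mm_fault() signature (a simplified
stand-in for the arch do_page_fault(), not the real function): a fault
path that returns VM_FAULT_RETRY has already dropped mmap_sem, so the
caller skips the up_read().

	#include <linux/mm.h>
	#include <linux/rwsem.h>

	/* Simplified caller-side sketch, not the actual do_page_fault(). */
	static void fault_retry_contract_sketch(struct mm_struct *mm,
						struct vm_area_struct *vma,
						unsigned long address,
						unsigned int flags)
	{
		int fault;

		down_read(&mm->mmap_sem);
		fault = handle_mm_fault(mm, vma, address, flags);
		if (fault & VM_FAULT_RETRY)
			return;	/* fault path already released mmap_sem */
		up_read(&mm->mmap_sem);
	}

A stale VM_FAULT_RETRY coming back through ll_fault() makes a caller
like this take the early return even though Lustre never released the
semaphore, leaking a read hold on mmap_sem.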
In addition to clearing ft_flags, this patch only uses it when it is
known to be valid: it is potentially misleading to return ft_flags in
"fault_ret" if ft_flags has not been set by filemap_fault().
This adds clarity, but does not change the current behavior: when
ft_flags is not valid, fault_ret keeps its initial value of zero, which
is also what the freshly cleared ft_flags would hold when
filemap_fault() was never reached.
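Pieced together from the hunks below (a simplified rendering using the
patch's identifiers, not compilable on its own; unrelated code and
error handling are elided), the fixed flow is:

	/* ll_fault0(): clear stale state before running the cl_io loop */
	vio->u.fault.fault.ft_flags = 0;
	vio->u.fault.fault.ft_flags_valid = 0;

	result = cl_io_loop(env, io);

	/* fault_ret starts at zero; only trust ft_flags once
	 * filemap_fault() has actually been called
	 */
	if (vio->u.fault.fault.ft_flags_valid)
		fault_ret = vio->u.fault.fault.ft_flags;

	/* vvp_io_kernel_fault(): the only place ft_flags becomes valid */
	cfio->fault.ft_flags = filemap_fault(cfio->ft_vma, cfio->fault.ft_vmf);
	cfio->fault.ft_flags_valid = 1;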
Signed-off-by: Patrick Farrell <paf@cray.com>
Reviewed-on: http://review.whamcloud.com/10956
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-5291
Reviewed-by: Bobi Jam <bobijam@gmail.com>
Reviewed-by: Jinshan Xiong <jinshan.xiong@intel.com>
Signed-off-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 				 * fault API used bitflags for return code.
 				 */
 				unsigned int ft_flags;
+				/**
+				 * check that flags are from filemap_fault
+				 */
+				bool ft_flags_valid;
 			} fault;
 		} fault;
 	} u;
 	vio->u.fault.ft_vma = vma;
 	vio->u.fault.ft_vmpage = NULL;
 	vio->u.fault.fault.ft_vmf = vmf;
+	vio->u.fault.fault.ft_flags = 0;
+	vio->u.fault.fault.ft_flags_valid = 0;
 	result = cl_io_loop(env, io);
-	fault_ret = vio->u.fault.fault.ft_flags;
+	/* ft_flags are only valid if we reached
+	 * the call to filemap_fault */
+	if (vio->u.fault.fault.ft_flags_valid)
+		fault_ret = vio->u.fault.fault.ft_flags;
+
 	vmpage = vio->u.fault.ft_vmpage;
 	if (result != 0 && vmpage != NULL) {
 		page_cache_release(vmpage);
 	struct vm_fault *vmf = cfio->fault.ft_vmf;
 	cfio->fault.ft_flags = filemap_fault(cfio->ft_vma, vmf);
+	cfio->fault.ft_flags_valid = 1;
 	if (vmf->page) {
 		CDEBUG(D_PAGE,