6 years agoIB/hfi1: Ensure VL index is within bounds
Kaike Wan [Thu, 31 May 2018 18:30:17 +0000 (11:30 -0700)]
IB/hfi1: Ensure VL index is within bounds

Improve the safety of the code and ensure the array cannot be indexed
out of bounds when picking the CPU for a given SDMA engine.

Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/hfi1: Fix user context tail allocation for DMA_RTAIL
Mike Marciniszyn [Thu, 31 May 2018 18:30:09 +0000 (11:30 -0700)]
IB/hfi1: Fix user context tail allocation for DMA_RTAIL

The following code fails to allocate a buffer for the
tail address that the hardware DMAs into when the user
context DMA_RTAIL is set.

	if (HFI1_CAP_KGET_MASK(rcd->flags, DMA_RTAIL)) {
		rcd->rcvhdrtail_kvaddr = dma_zalloc_coherent(
			&dd->pcidev->dev, PAGE_SIZE, &dma_hdrqtail,
			gfp_flags);
		if (!rcd->rcvhdrtail_kvaddr)
			goto bail_free;
		rcd->rcvhdrqtailaddr_dma = dma_hdrqtail;
	}

So the rcvhdrtail_kvaddr would then be NULL.

The mmap logic fails to check for a NULL rcvhdrtail_kvaddr.

The fix is to test for both the user and kernel DMA_RTAIL options
during the allocation, as well as to test for a NULL
rcvhdrtail_kvaddr during the mmap processing.

Additionally, all downstream testing of the capmask for DMA_RTAIL has
been eliminated in favor of testing rcvhdrtail_kvaddr.
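As a rough, illustrative sketch (not the exact driver diff), the resulting
shape is something like the following; HFI1_CAP_UGET_MASK is assumed here to
be the user-side counterpart of the kernel capmask macro:

	/* allocation: honor both the kernel and the user DMA_RTAIL caps */
	if (HFI1_CAP_KGET_MASK(rcd->flags, DMA_RTAIL) ||
	    HFI1_CAP_UGET_MASK(rcd->flags, DMA_RTAIL)) {
		rcd->rcvhdrtail_kvaddr = dma_zalloc_coherent(
			&dd->pcidev->dev, PAGE_SIZE, &dma_hdrqtail,
			gfp_flags);
		if (!rcd->rcvhdrtail_kvaddr)
			goto bail_free;
		rcd->rcvhdrqtailaddr_dma = dma_hdrqtail;
	}

	/* mmap path: never map a tail page that was not allocated */
	if (!rcd->rcvhdrtail_kvaddr)
		return -EINVAL;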

Cc: <stable@vger.kernel.org> # 4.9.x
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/hns: Use zeroing memory allocator instead of allocator/memset
YueHaibing [Sun, 3 Jun 2018 09:32:22 +0000 (17:32 +0800)]
IB/hns: Use zeroing memory allocator instead of allocator/memset

Use dma_zalloc_coherent for allocating zeroed memory and
remove the unnecessary memset calls.
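In generic terms the conversion looks like this (illustrative only, not the
hns diff itself):

	/* before: allocate, then clear by hand */
	buf = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	memset(buf, 0, size);

	/* after: the zeroing allocator does both in one call */
	buf = dma_zalloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;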

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoinfiniband: fix a possible use-after-free bug
Cong Wang [Fri, 1 Jun 2018 18:31:44 +0000 (11:31 -0700)]
infiniband: fix a possible use-after-free bug

ucma_process_join() will free the newly allocated "mc" struct
if there is any error after that point, in particular a failing
copy_to_user().

But in parallel, ucma_leave_multicast() could find this "mc"
through idr_find() before ucma_process_join() frees it, since it
is already published.

So "mc" could be used in ucma_leave_multicast() after it has been
allocated and freed in ucma_process_join(), since we don't refcount
it.

Fix this by separating "publish" from ID allocation, so that we
can get an ID first and publish it later after copy_to_user().
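One way to express "reserve the ID now, publish the pointer only after
copy_to_user() succeeds" with an IDR is sketched below; this is illustrative
and may differ from the exact ucma code:

	mutex_lock(&mut);
	mc->id = idr_alloc(&multicast_idr, NULL, 0, 0, GFP_KERNEL);
	mutex_unlock(&mut);
	if (mc->id < 0)
		goto err_free;		/* ID reserved, but not yet visible */

	/* ... join the multicast group, copy the response to user space ... */

	mutex_lock(&mut);
	idr_replace(&multicast_idr, mc, mc->id);  /* now lookups can find it */
	mutex_unlock(&mut);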

Fixes: c8f6a362bf3e ("RDMA/cma: Add multicast communication support")
Reported-by: Noam Rathaus <noamr@beyondsecurity.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoiw_cxgb4: add INFINIBAND_ADDR_TRANS dependency
Arnd Bergmann [Wed, 30 May 2018 21:58:18 +0000 (23:58 +0200)]
iw_cxgb4: add INFINIBAND_ADDR_TRANS dependency

The newly added fill_res_ep_entry function fails to link if
CONFIG_INFINIBAND_ADDR_TRANS is not set:

drivers/infiniband/hw/cxgb4/restrack.o: In function `fill_res_ep_entry':
restrack.c:(.text+0x3cc): undefined reference to `rdma_res_to_id'
restrack.c:(.text+0x3d0): undefined reference to `rdma_iw_cm_id'

This adds a Kconfig dependency for the driver.

Fixes: 116aeb887371 ("iw_cxgb4: provide detailed provider-specific CM_ID information")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Greg Thelen <gthelen@google.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/isert: use T10-PI check mask definitions from core layer
Max Gurtovoy [Thu, 31 May 2018 08:05:26 +0000 (11:05 +0300)]
IB/isert: use T10-PI check mask definitions from core layer

There is no reason to use hard-coded protection information checks in the
ib_isert driver. Use the check masks from the RDMA core driver.
Also, while we are here, reduce the number of instructions used to set
the check mask (there is no need to do a bitwise OR with 0, since we zero
the mask at the beginning of the function).

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/iser: use T10-PI check mask definitions from core layer
Max Gurtovoy [Thu, 31 May 2018 08:05:25 +0000 (11:05 +0300)]
IB/iser: use T10-PI check mask definitions from core layer

There is no reason to re-define the protection information checks in the
ib_iser driver. Use the check masks from the RDMA core driver.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoRDMA/core: introduce check masks for T10-PI offload
Max Gurtovoy [Thu, 31 May 2018 08:05:24 +0000 (11:05 +0300)]
RDMA/core: introduce check masks for T10-PI offload

T10-PI offload capability is currently supported in the iSER protocol only,
and the definitions of the HCA protection information checks are missing
from the core layer. Add those definitions to avoid code duplication in
other drivers (such as the iSER target and NVMeoF).
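As a sketch, the shared definitions amount to a small set of bit flags in
the core header (names believed to match this series; treat the exact
values as illustrative):

/* include/rdma/ib_verbs.h (sketch) */
enum {
	IB_PROT_T10DIF_GUARD   = 1 << 0,	/* guard tag (CRC) check */
	IB_PROT_T10DIF_REF_TAG = 1 << 1,	/* reference tag check */
	IB_PROT_T10DIF_APP_TAG = 1 << 2,	/* application tag check */
};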

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/isert: fix T10-pi check mask setting
Max Gurtovoy [Thu, 31 May 2018 08:05:23 +0000 (11:05 +0300)]
IB/isert: fix T10-pi check mask setting

A copy/paste bug (probably) caused the app_tag check mask to be set
in a case where a ref_tag check was needed.

Fixes: 38a2d0d429f1 ("IB/isert: convert to the generic RDMA READ/WRITE API")
Fixes: 9e961ae73c2c ("IB/isert: Support T10-PI protected transactions")
Cc: stable@vger.kernel.org
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoMerge tag 'verbs_flow_counters' of git://git.kernel.org/pub/scm/linux/kernel/git...
Jason Gunthorpe [Mon, 4 Jun 2018 14:48:11 +0000 (08:48 -0600)]
Merge tag 'verbs_flow_counters' of git://git./linux/kernel/git/leon/linux-rdma.git into for-next

Pull verbs counters series from Leon Romanovsky:

====================
Verbs flow counters support

This series allows user space applications to monitor the real time
traffic activity and events of the verbs objects they manage, e.g.: ibv_qp,
ibv_wq, ibv_flow.

The API enables generic counters creation and defines how they are
associated with a verbs object; the current mlx5 driver uses this API
for flow counters.

With this API, an application can monitor the entire life cycle of object
activity, defined here as a static counters attachment.  This API also
allows dynamic counters monitoring of measurement points for a partial
period in the verbs object life cycle.

In addition it presents the implementation of the generic counters
interface.

This is achieved by extending flow creation with a new flow count
specification type, which allows the user to associate previously
created flow counters (created via the generic verbs counters interface)
with the created flow. Once associated, the user can read statistics by
using the read function of the generic counters interface.

The API includes:
1. create and destroy verbs for the new counters objects
2. reading the counters values from HW

Note:
The attach API, which allows the application to define the measurement
points per object, is a user space only API; this data is passed to the
kernel when the counted object (e.g. flow) is created with the counters
object.
====================

* tag 'verbs_flow_counters':
  IB/mlx5: Add counters read support
  IB/mlx5: Add flow counters read support
  IB/mlx5: Add flow counters binding support
  IB/mlx5: Add counters create and destroy support
  IB/uverbs: Add support for flow counters
  IB/core: Add support for flow counters
  IB/core: Support passing uhw for create_flow
  IB/uverbs: Add read counters support
  IB/core: Introduce counters read verb
  IB/uverbs: Add create/destroy counters support
  IB/core: Introduce counters object and its create/destroy
  IB/uverbs: Add an ib_uobject getter to ioctl() infrastructure
  net/mlx5: Export flow counter related API
  net/mlx5: Use flow counter pointer as input to the query function
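From the application side, the intended flow maps onto the matching
rdma-core verbs roughly as follows. This is a hedged sketch: the user-space
names (ibv_create_counters(), ibv_attach_counters_point_flow(),
ibv_read_counters() and the IBV_COUNTER_* descriptions) come from the
companion rdma-core work, not from this kernel series, and the surrounding
setup (ctx, flow attributes) is omitted:

struct ibv_counters_init_attr init_attr = {};
struct ibv_counter_attach_attr attach = {};
struct ibv_counters *counters;
uint64_t vals[2];

counters = ibv_create_counters(ctx, &init_attr);

/* static attachment: describe the measurement points before flow creation */
attach.counter_desc = IBV_COUNTER_PACKETS;
attach.index = 0;
ibv_attach_counters_point_flow(counters, &attach, NULL);
attach.counter_desc = IBV_COUNTER_BYTES;
attach.index = 1;
ibv_attach_counters_point_flow(counters, &attach, NULL);

/* the counters object is then referenced by a count specification in the
 * ibv_flow attributes; once the flow is created, read on demand: */
ibv_read_counters(counters, vals, 2, IBV_READ_COUNTERS_ATTR_PREFER_CACHED);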

6 years agoIB/mlx5: Add counters read support
Raed Salem [Thu, 31 May 2018 13:43:41 +0000 (16:43 +0300)]
IB/mlx5: Add counters read support

This patch implements the uverbs counters read API; it uses the read
counters function specific to the given counters type to accomplish its
task.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Raed Salem <raeds@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/mlx5: Add flow counters read support
Raed Salem [Thu, 31 May 2018 13:43:40 +0000 (16:43 +0300)]
IB/mlx5: Add flow counters read support

Implements the flow counters read wrapper.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Raed Salem <raeds@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/mlx5: Add flow counters binding support
Raed Salem [Thu, 31 May 2018 13:43:39 +0000 (16:43 +0300)]
IB/mlx5: Add flow counters binding support

Associate a counters object with a flow when IB_FLOW_SPEC_ACTION_COUNT is
part of the flow specifications.

The user space placement of the counters, given as (index, description)
pairs, is passed as the private data of the counters flow
specification.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Raed Salem <raeds@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/mlx5: Add counters create and destroy support
Raed Salem [Thu, 31 May 2018 13:43:38 +0000 (16:43 +0300)]
IB/mlx5: Add counters create and destroy support

This patch implements the device counters create and destroy APIs and
introduces some internal management structures.

Downstream patches in this series will add the functionality to support
flow counters binding and reading.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Raed Salem <raeds@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/uverbs: Add support for flow counters
Raed Salem [Thu, 31 May 2018 13:43:37 +0000 (16:43 +0300)]
IB/uverbs: Add support for flow counters

The struct ib_uverbs_flow_spec_action_count associates a counters object
with the flow.

Post this association the flow counters can be read via the counters
object.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Raed Salem <raeds@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/core: Add support for flow counters
Raed Salem [Thu, 31 May 2018 13:43:36 +0000 (16:43 +0300)]
IB/core: Add support for flow counters

A counters object can be attached to a flow on creation by providing the
counters specification action.

General counters descriptions which count packets and bytes are introduced;
downstream patches from this series will use them as part of the flow
counters binding.

In addition, increase the number of supported flow specification layers to
10 to account for the newly added count specification and the previously
added drop specification.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Raed Salem <raeds@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/core: Support passing uhw for create_flow
Matan Barak [Thu, 31 May 2018 13:43:35 +0000 (16:43 +0300)]
IB/core: Support passing uhw for create_flow

This is required when user-space drivers need to pass extra information
regarding how to handle this flow steering specification.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/uverbs: Add read counters support
Raed Salem [Thu, 31 May 2018 13:43:34 +0000 (16:43 +0300)]
IB/uverbs: Add read counters support

This patch exposes the read counters verb to user space applications.  With
that verb the user can read the hardware counters which are associated
with the counters object.

The application needs to provide sufficient memory to hold the
statistics.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Raed Salem <raeds@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/core: Introduce counters read verb
Raed Salem [Thu, 31 May 2018 13:43:33 +0000 (16:43 +0300)]
IB/core: Introduce counters read verb

The user supplies a counters instance and a reference to an output array of
uint64_t.  The driver reads the hardware counters values and writes them
to the output index locations in the user supplied array.  All counters
values are represented as uint64_t types.

To be able to successfully read the data, the counters must first be bound
to an IB object.

Downstream patches will present the binding method for flow counters.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Raed Salem <raeds@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/uverbs: Add create/destroy counters support
Raed Salem [Thu, 31 May 2018 13:43:32 +0000 (16:43 +0300)]
IB/uverbs: Add create/destroy counters support

A user space application which uses the counters functionality is expected
to allocate/release the counters resources by calling the create/destroy
verbs, and in turn gets a unique handle that can be used to attach the
counters to their counted type.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Raed Salem <raeds@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/core: Introduce counters object and its create/destroy
Raed Salem [Thu, 31 May 2018 13:43:31 +0000 (16:43 +0300)]
IB/core: Introduce counters object and its create/destroy

A verbs application may need to get statistics and info on various aspects
of a verbs object (e.g. Flow, QP, ...). In the general case the application
states which object's counters it is interested in (we refer to this
action as attach), binds this new counters object to the appropriate verbs
object, and at a later stage reads their values using the counters object.

This series introduces a general API for a counters object that may
accumulate any IB object counters type, to be bound and read on demand.

Counters instance is allocated on an IB context and belongs to that
context.  Upon successful creation the counters can be bound to a verbs
object so that hardware counter instances can be created and read.

Downstream patches in this series will introduce the attach, bind and the
read functionality.

A counters instance can be de-allocated; upon successful destruction the
related hardware resources are released.

Prior to the destroy call, the user must first make sure that the counters
object is not being used by any IB object, i.e. not attached to any of its
counted types, otherwise an EBUSY error is returned.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Raed Salem <raeds@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/uverbs: Add an ib_uobject getter to ioctl() infrastructure
Matan Barak [Thu, 31 May 2018 13:43:28 +0000 (16:43 +0300)]
IB/uverbs: Add an ib_uobject getter to ioctl() infrastructure

Previously, the user had to dig inside the attribute to get the uobject.
Add a helper function that correctly extracts it (and does the required
checks) for them.
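The helper boils down to something like the following sketch (the in-tree
version may differ in detail):

static inline struct ib_uobject *
uverbs_attr_get_uobject(const struct uverbs_attr_bundle *attrs_bundle, u16 idx)
{
	const struct uverbs_attr *attr = uverbs_attr_get(attrs_bundle, idx);

	if (IS_ERR(attr))
		return ERR_CAST(attr);	/* attribute not supplied / wrong type */
	return attr->obj_attr.uobject;
}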

Signed-off-by: Matan Barak <matanb@mellanox.com>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agonet/mlx5: Export flow counter related API
Raed Salem [Thu, 31 May 2018 13:43:30 +0000 (16:43 +0300)]
net/mlx5: Export flow counter related API

Exports counters API to be used in both IB and EN.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Raed Salem <raeds@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agonet/mlx5: Use flow counter pointer as input to the query function
Or Gerlitz [Thu, 31 May 2018 13:43:29 +0000 (16:43 +0300)]
net/mlx5: Use flow counter pointer as input to the query function

This allows us to un-expose the details of struct mlx5_fc and keep it
internal to the core driver.

Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agomm: Remove return value of zap_vma_ptes()
Leon Romanovsky [Tue, 29 May 2018 12:14:07 +0000 (15:14 +0300)]
mm: Remove return value of zap_vma_ptes()

None of the callers of zap_vma_ptes() are interested in the return value
of that function, so let's simplify its interface and drop the return
value.
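The interface change is just the prototype (sketch):

/* before */
int zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
		 unsigned long size);

/* after: callers no longer have a return value to (not) check */
void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
		  unsigned long size);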

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/hns_roce: Don't check return value of zap_vma_ptes()
Doug Ledford [Fri, 1 Jun 2018 15:19:19 +0000 (11:19 -0400)]
RDMA/hns_roce: Don't check return value of zap_vma_ptes()

There is no need to check the return value of zap_vma_ptes()
because there is nothing that can be done with that knowledge.

Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/mlx4: Don't crash machine if zap_vma_ptes() fails
Leon Romanovsky [Tue, 29 May 2018 12:14:06 +0000 (15:14 +0300)]
RDMA/mlx4: Don't crash machine if zap_vma_ptes() fails

A failure reported by zap_vma_ptes() would mean that wrong VMA pages
were supplied, which is impossible for this type of address.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/mlx5: Don't check return value of zap_vma_ptes()
Leon Romanovsky [Tue, 29 May 2018 12:14:05 +0000 (15:14 +0300)]
RDMA/mlx5: Don't check return value of zap_vma_ptes()

There is no need to check the return value of zap_vma_ptes()
because there is nothing that can be done with that knowledge.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/mad: Convert BUG_ONs to error flows
Leon Romanovsky [Tue, 29 May 2018 11:56:19 +0000 (14:56 +0300)]
RDMA/mad: Convert BUG_ONs to error flows

Let's perform checks in-place instead of BUG_ONs.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/mad: Delete inaccessible BUG_ON
Leon Romanovsky [Tue, 29 May 2018 11:56:18 +0000 (14:56 +0300)]
RDMA/mad: Delete inaccessible BUG_ON

There is no need to check the existence of mad_queue, because we already
dereferenced the pointer before the call to dequeue_mad().

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/cma: Ignore unknown event
Leon Romanovsky [Tue, 29 May 2018 11:56:17 +0000 (14:56 +0300)]
RDMA/cma: Ignore unknown event

There is no need to bring down the whole machine just because an unknown
event was received. It is better to ignore it silently.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/cm: Abort loop in case of CM dequeue
Leon Romanovsky [Tue, 29 May 2018 11:56:16 +0000 (14:56 +0300)]
RDMA/cm: Abort loop in case of CM dequeue

In case the CM work list is empty, the work pointer will be NULL, so
instead of crashing the kernel it is better to abort processing of the
work items.
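In sketch form (illustrative, not the exact cm.c hunk), the defensive check
is simply:

	while (!list_empty(&cm_id_priv->work_list)) {
		work = cm_dequeue_work(cm_id_priv);
		if (!work)
			break;	/* nothing dequeued: stop instead of oopsing */
		/* ... process the dequeued work item ... */
	}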

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/cxgb3: Don't crash kernel just because IDR is full
Leon Romanovsky [Tue, 29 May 2018 11:56:15 +0000 (14:56 +0300)]
RDMA/cxgb3: Don't crash kernel just because IDR is full

The cxgb3 driver properly handles errors returned by the IDR, so there is
no need to have a special case (kernel crash) just because the IDR is full.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/mlx4: Discard unknown SQP work requests
Leon Romanovsky [Tue, 29 May 2018 11:56:14 +0000 (14:56 +0300)]
RDMA/mlx4: Discard unknown SQP work requests

There is no need to crash the machine if an unknown work request was
received in an SQP MAD.

Cc: <stable@vger.kernel.org> # 3.6
Fixes: 37bfc7c1e83f ("IB/mlx4: SR-IOV multiplex and demultiplex MADs")
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/mlx4: Catch FW<->SW misalignment without machine crash
Leon Romanovsky [Tue, 29 May 2018 11:56:13 +0000 (14:56 +0300)]
RDMA/mlx4: Catch FW<->SW misalignment without machine crash

Any steering QP is supposed to be above steering_qp_base, see the function
mlx4_ib_steer_qp_alloc(); however, in case of misalignment between SW and
FW this qp_base can be wrong.

Use WARN() to catch such a situation without killing the machine.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/qedr: fix spelling mistake: "adrresses" -> "addresses"
Colin Ian King [Wed, 30 May 2018 09:40:29 +0000 (10:40 +0100)]
RDMA/qedr: fix spelling mistake: "adrresses" -> "addresses"

Trivial fix to spelling mistake in DP_ERR error message

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoIB/iser: Do not reduce max_sectors
Sergey Gorenko [Mon, 21 May 2018 15:55:53 +0000 (18:55 +0300)]
IB/iser: Do not reduce max_sectors

The iSER driver reduces max_sectors. For example, if you load the
ib_iser module with max_sectors=1024, you will see that
/sys/class/block/<bdev>/queue/max_hw_sectors_kb is 508. It is an
incorrect value. The expected value is (max_sectors * sector_size) /
1024 = 512.

Reducing max_sectors can cause performance degradation due to
unnecessary splitting of IO requests.

The number of pages per MR has been fixed here, so there is no longer
any need to reduce max_sectors.

Fixes: 9c674815d346 ("IB/iser: Fix max_sectors calculation")
Signed-off-by: Sergey Gorenko <sergeygo@mellanox.com>
Reviewed-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
Acked-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoMerge branch 'wip/dl-ipoib' into wip/dl-for-next
Doug Ledford [Thu, 31 May 2018 16:05:01 +0000 (12:05 -0400)]
Merge branch 'wip/dl-ipoib' into wip/dl-for-next

Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/hns: Implement the disassociate_ucontext API
Wei Hu(Xavier) [Mon, 28 May 2018 11:39:27 +0000 (19:39 +0800)]
RDMA/hns: Implement the disassociate_ucontext API

This patch implements the IB core disassociate_ucontext API.

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/uverbs: Hoist the common process of disassociate_ucontext into ib core
Wei Hu(Xavier) [Mon, 28 May 2018 11:39:26 +0000 (19:39 +0800)]
RDMA/uverbs: Hoist the common process of disassociate_ucontext into ib core

This patch hoists the common processing of the disassociate_ucontext
callback function into the ib core code; this code is common to every
ib_device driver.

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/hns: Fix the illegal memory operation when cross page
Wei Hu(Xavier) [Mon, 28 May 2018 11:39:25 +0000 (19:39 +0800)]
RDMA/hns: Fix the illegal memory operation when cross page

This patch fixes a potential illegal memory operation when the extended
sge buffer crosses a page boundary in the post send operation. The bug
causes the call trace below.

[ 3302.922107] Unable to handle kernel paging request at virtual address ffff00003b3a0004
[ 3302.930009] Mem abort info:
[ 3302.932790]   Exception class = DABT (current EL), IL = 32 bits
[ 3302.938695]   SET = 0, FnV = 0
[ 3302.941735]   EA = 0, S1PTW = 0
[ 3302.944863] Data abort info:
[ 3302.947729]   ISV = 0, ISS = 0x00000047
[ 3302.951551]   CM = 0, WnR = 1
[ 3302.954506] swapper pgtable: 4k pages, 48-bit VAs, pgd = ffff000009ea5000
[ 3302.961279] [ffff00003b3a0004] *pgd=00000023dfffe003, *pud=00000023dfffd003, *pmd=00000022dc84c003, *pte=0000000000000000
[ 3302.972224] Internal error: Oops: 96000047 [#1] SMP
[ 3302.999509] CPU: 9 PID: 19628 Comm: roce_test_main Tainted: G           OE   4.14.10 #1
[ 3303.007498] task: ffff80234df78000 task.stack: ffff00000f640000
[ 3303.013412] PC is at hns_roce_v2_post_send+0x690/0xe20 [hns_roce_pci]
[ 3303.019843] LR is at hns_roce_v2_post_send+0x658/0xe20 [hns_roce_pci]
[ 3303.026269] pc : [<ffff0000020694f8>] lr : [<ffff0000020694c0>] pstate: 804001c9
[ 3303.033649] sp : ffff00000f643870
[ 3303.036951] x29: ffff00000f643870 x28: ffff80232bfa9c00
[ 3303.042250] x27: ffff80234d909380 x26: ffff00003b37f0c0
[ 3303.047549] x25: 0000000000000000 x24: 0000000000000003
[ 3303.052848] x23: 0000000000000000 x22: 0000000000000000
[ 3303.058148] x21: 0000000000000101 x20: 0000000000000001
[ 3303.063447] x19: ffff80236163f800 x18: 0000000000000000
[ 3303.068746] x17: 0000ffff86b76fc8 x16: ffff000008301600
[ 3303.074045] x15: 000020a51c000000 x14: 3128726464615f65
[ 3303.079344] x13: 746f6d6572202c29 x12: 303035312879656b
[ 3303.084643] x11: 723a6f666e692072 x10: 573a6f666e693a5d
[ 3303.089943] x9 : 0000000000000004 x8 : ffff8023ce38b000
[ 3303.095242] x7 : ffff8023ce38b320 x6 : 0000000000000418
[ 3303.100541] x5 : ffff80232bfa9cc8 x4 : 0000000000000030
[ 3303.105839] x3 : 0000000000000100 x2 : 0000000000000200
[ 3303.111138] x1 : 0000000000000320 x0 : ffff00003b3a0000
[ 3303.116438] Process roce_test_main (pid: 19628, stack limit = 0xffff00000f640000)
[ 3303.123906] Call trace:
[ 3303.126339] Exception stack(0xffff00000f643730 to 0xffff00000f643870)
[ 3303.215790] [<ffff0000020694f8>] hns_roce_v2_post_send+0x690/0xe20 [hns_roce_pci]
[ 3303.223293] [<ffff0000021c3750>] rt_ktest_post_send+0x5d0/0x8b8 [rdma_test]
[ 3303.230261] [<ffff0000021b3234>] exec_send_cmd+0x664/0x1350 [rdma_test]
[ 3303.236881] [<ffff0000021b8b30>] rt_ktest_dispatch_cmd_3+0x1510/0x3790 [rdma_test]
[ 3303.244455] [<ffff0000021bae54>] rt_ktest_dispatch_cmd_2+0xa4/0x118 [rdma_test]
[ 3303.251770] [<ffff0000021bafec>] rt_ktest_dispatch_cmd+0x124/0xaa8 [rdma_test]
[ 3303.258997] [<ffff0000021bbc3c>] rt_ktest_dev_write+0x2cc/0x568 [rdma_test]
[ 3303.265947] [<ffff0000082ad688>] __vfs_write+0x60/0x18c
[ 3303.271158] [<ffff0000082ad998>] vfs_write+0xa8/0x198
[ 3303.276196] [<ffff0000082adc7c>] SyS_write+0x6c/0xd4
[ 3303.281147] Exception stack(0xffff00000f643ec0 to 0xffff00000f644000)
[ 3303.287573] 3ec0: 0000000000000003 0000fffffc85faa8 0000000000004e60 0000000000000000
[ 3303.295388] 3ee0: 0000000021fb2000 000000000000ffff eff0e3efe4e58080 0000fffffcc724fe
[ 3303.303204] 3f00: 0000000000000040 1999999999999999 0101010101010101 0000000000000038
[ 3303.311019] 3f20: 0000000000000005 ffffffffffffffff 0d73757461747320 ffffffffffffffff
[ 3303.318835] 3f40: 0000000000000000 0000000000459b00 0000fffffc85e360 000000000043d788
[ 3303.326650] 3f60: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 3303.334465] 3f80: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 3303.342281] 3fa0: 0000000000000000 0000fffffc85e570 0000000000438804 0000fffffc85e570
[ 3303.350096] 3fc0: 0000ffff8553f618 0000000080000000 0000000000000003 0000000000000040
[ 3303.357911] 3fe0: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 3303.365729] [<ffff000008083808>] __sys_trace_return+0x0/0x4
[ 3303.371288] Code: b94008e9 34000129 b9400ce2 110006b5 (b9000402)
[ 3303.377377] ---[ end trace fd5ab98b3325cf9a ]---

Reported-by: Jie Chen <chenjie103@huawei.com>
Reported-by: Xiping Zhang (Francis) <zhangxiping3@huawei.com>
Fixes: b1c158350968 ("RDMA/hns: Get rid of virt_to_page and vmap calls after dma_alloc_coherent")
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/hns: Add reset process for RoCE in hip08
Wei Hu(Xavier) [Mon, 28 May 2018 11:39:24 +0000 (19:39 +0800)]
RDMA/hns: Add reset process for RoCE in hip08

This patch adds the reset process for RoCE in hip08.

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoMerge branch 'mini_cqe' into git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma...
Jason Gunthorpe [Tue, 29 May 2018 21:23:18 +0000 (15:23 -0600)]
Merge branch 'mini_cqe' into git://git./linux/kernel/git/rdma/rdma for-next

Leon Romanovsky says:

====================
Introduce a new internal-to-mlx5 CQE format - the mini-CQE. It is a CQE in
compressed form that holds the data needed to extract a single full CQE.

It consists of a stride index, byte count and packet checksum.
====================

* mini_cqe:
  IB/mlx5: Introduce a new mini-CQE format
  IB/mlx5: Refactor CQE compression response
  net/mlx5: Exposing a new mini-CQE format

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoRDMA/core: Remove indirection through ib_cache_setup()
Jason Gunthorpe [Tue, 29 May 2018 02:32:40 +0000 (20:32 -0600)]
RDMA/core: Remove indirection through ib_cache_setup()

This once might have made sense when cache.c was in a different module
from device.c, but today it is just obfuscation. Get rid of the wrappers
and call roce_gid_mgmt_init()/cleanup() directly.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
6 years agoIB/mlx5: Introduce a new mini-CQE format
Yonatan Cohen [Sun, 27 May 2018 10:42:34 +0000 (13:42 +0300)]
IB/mlx5: Introduce a new mini-CQE format

The new mini-CQE format includes the stride index, byte count and
packet checksum.
The stride index is needed for the striding WQ feature.
This patch exposes this capability and enables setting it
via mlx5 UHW data as part of query device and cq creation.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/mlx5: Refactor CQE compression response
Yonatan Cohen [Sun, 27 May 2018 10:42:33 +0000 (13:42 +0300)]
IB/mlx5: Refactor CQE compression response

Refactor the CQE compression response to be fully set only
when it is really supported. There is no change from the user's
perspective because resp.cqe_comp_caps.max_num was set to zero
anyway.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agonet/mlx5: Exposing a new mini-CQE format
Yonatan Cohen [Sun, 27 May 2018 10:42:32 +0000 (13:42 +0300)]
net/mlx5: Exposing a new mini-CQE format

The new mini-CQE format includes byte-count, checksum
and stride index.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years agoMerge branch 'mr_fix' into git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma...
Jason Gunthorpe [Mon, 28 May 2018 17:44:35 +0000 (11:44 -0600)]
Merge branch 'mr_fix' into git://git./linux/kernel/git/rdma/rdma for-next

Update mlx4 to support user MR creation against read-only memory;
previously it required the memory to be writable.

Based on rdma for-rc due to dependencies.

* mr_fix: (2 commits)
  IB/mlx4: Mark user MR as writable if actual virtual memory is writable
  IB/core: Make testing MR flags for writability a static inline function

6 years agoIB/mlx4: Mark user MR as writable if actual virtual memory is writable
Jack Morgenstein [Wed, 23 May 2018 12:30:31 +0000 (15:30 +0300)]
IB/mlx4: Mark user MR as writable if actual virtual memory is writable

To allow rereg_user_mr to modify the MR from read-only to writable without
using get_user_pages again, we needed to define the initial MR as writable.
However, this was originally done unconditionally, without taking into
account the writability of the underlying virtual memory.

As a result, any attempt to register a read-only MR over read-only
virtual memory failed.

To fix this, do not add the writable flag bit when the user virtual memory
is not writable (e.g. const memory).

However, when the underlying memory is NOT writable (and we therefore
do not define the initial MR as writable), the IB core adds a
"force writable" flag to its user-pages request. If this succeeds,
the reg_user_mr caller gets a writable copy of the original pages.

If the user-space caller then does a rereg_user_mr operation to enable
writability, this will succeed. This should not be allowed, since
the original virtual memory was not writable.
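A much-simplified sketch of the idea (checking only the first VMA; the real
patch has to cover the whole range and the mlx4-specific access flags):

	struct vm_area_struct *vma;

	down_read(&current->mm->mmap_sem);
	vma = find_vma(current->mm, start);
	if (!vma || !(vma->vm_flags & VM_WRITE))
		access_flags &= ~IB_ACCESS_LOCAL_WRITE;	/* memory is read-only */
	up_read(&current->mm->mmap_sem);

	umem = ib_umem_get(context, start, length, access_flags, 0);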

Cc: <stable@vger.kernel.org>
Fixes: 9376932d0c26 ("IB/mlx4_ib: Add support for user MR re-registration")
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
6 years agoIB/core: Make testing MR flags for writability a static inline function
Jack Morgenstein [Wed, 23 May 2018 12:30:30 +0000 (15:30 +0300)]
IB/core: Make testing MR flags for writability a static inline function

Make the MR writability flags check, which is performed in umem.c,
a static inline function in the file ib_verbs.h.

This allows the function to be used by low-level InfiniBand drivers.
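The helper amounts to roughly the following (sketch; flag set per the
writability semantics described above):

/* include/rdma/ib_verbs.h (sketch) */
static inline bool ib_access_writable(int access_flags)
{
	/*
	 * Any of these flags lets the HCA write to the memory, so the
	 * underlying pages must themselves be writable.
	 */
	return access_flags &
		(IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE |
		 IB_ACCESS_REMOTE_ATOMIC | IB_ACCESS_MW_BIND);
}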

Cc: <stable@vger.kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
6 years agoIB/rxe: avoid unnecessary export
Zhu Yanjun [Mon, 28 May 2018 09:03:41 +0000 (05:03 -0400)]
IB/rxe: avoid unnecessary export

The function rxe_remove_all is only used in this module. There are no
other modules that call this function, so it is not necessary to export
it.

Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoRDMA/hns: Increase checking CMQ status timeout value
Wei Hu(Xavier) [Wed, 23 May 2018 10:16:28 +0000 (18:16 +0800)]
RDMA/hns: Increase checking CMQ status timeout value

This patch increases the timeout value for checking the CMQ status and
uses the same value as the NIC driver to avoid running out of
time.

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoRDMA/hns: Modify uar allocation algorithm to avoid bitmap exhaust
Wei Hu(Xavier) [Wed, 23 May 2018 10:16:27 +0000 (18:16 +0800)]
RDMA/hns: Modify uar allocation algorithm to avoid bitmap exhaust

This patch modifies the uar allocation algorithm in the hns_roce_uar_alloc
function to avoid bitmap exhaustion.

Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoMerge tag 'mlx5-updates-2018-05-17' of git://git.kernel.org/pub/scm/linux/kernel...
Jason Gunthorpe [Thu, 24 May 2018 15:40:43 +0000 (09:40 -0600)]
Merge tag 'mlx5-updates-2018-05-17' of git://git./linux/kernel/git/mellanox/linux into for-next

mlx5-updates-2018-05-17

mlx5 core driver updates for both net-next and rdma-next branches.

From Christophe JAILLET, the first three patches use kvfree where needed.

From: Or Gerlitz <ogerlitz@mellanox.com>

The next six patches from Roi and Co add support for a merged
sriov e-switch, which serves cases where both PFs have VFs set
on them and both uplinks are to be used in a single v-switch SW model.
When merged e-switch is supported, the per-port e-switch is logically
merged into one e-switch that spans both physical ports and all the VFs.

This model allows offloading TC eswitch rules between VFs belonging
to different PFs (and hence having different eswitch affinity); it also
sets some of the foundations needed for uplink LAG support.

* tag 'mlx5-updates-2018-05-17' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux:
  net/mlx5e: Explicitly set source e-switch in offloaded TC rules
  net/mlx5: Add source e-switch owner
  net/mlx5e: Explicitly set destination e-switch in FDB rules
  net/mlx5: Add destination e-switch owner
  net/mlx5: Properly handle a vport destination when setting FTE
  net/mlx5: Add merged e-switch cap
  IB/mlx5: Use 'kvfree()' for memory allocated by 'kvzalloc()'
  net/mlx5: Eswitch, Use 'kvfree()' for memory allocated by 'kvzalloc()'
  net/mlx5: Vport, Use 'kvfree()' for memory allocated by 'kvzalloc()'

6 years agoIB/core: Introduce and use rdma_gid_table()
Parav Pandit [Tue, 22 May 2018 17:33:46 +0000 (20:33 +0300)]
IB/core: Introduce and use rdma_gid_table()

There are several places where a gid table is accessed.
Add a tiny helper function rdma_gid_table() to avoid code
duplication in such places.

Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/core: Reduce the places that use zgid
Parav Pandit [Tue, 22 May 2018 17:33:45 +0000 (20:33 +0300)]
IB/core: Reduce the places that use zgid

Instead of open coding memcmp() to check whether a given GID is zero or
not, use a helper function to do so, and replace instances of
memcpy(z,&zgid) with memset.
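The zero-GID check is essentially (sketch):

/* include/rdma/ib_verbs.h (sketch) */
static inline bool rdma_is_zero_gid(const union ib_gid *gid)
{
	return !memcmp(gid, &zgid, sizeof(*gid));
}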

Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/mlx5: Fetch soft WQE's on fatal error state
Erez Shitrit [Mon, 21 May 2018 08:41:01 +0000 (11:41 +0300)]
IB/mlx5: Fetch soft WQE's on fatal error state

On a fatal error the driver simulates CQE's for ULPs that rely on the
completion of all their posted work-requests.

For the GSI traffic, the mlx5 driver has its own mechanism that sends the
completions via software CQE's directly to the relevant CQ.

This should be kept in fatal error too, so the driver should simulate
such CQE's with the specified error state in order to complete GSI QP
work requests.

Without the fix the following deadlock might appear:
        schedule_timeout+0x274/0x350
        wait_for_common+0xec/0x240
        mcast_remove_one+0xd0/0x120 [ib_core]
        ib_unregister_device+0x12c/0x230 [ib_core]
        mlx5_ib_remove+0xc4/0x270 [mlx5_ib]
        mlx5_detach_device+0x184/0x1a0 [mlx5_core]
        mlx5_unload_one+0x308/0x340 [mlx5_core]
        mlx5_pci_err_detected+0x74/0xe0 [mlx5_core]

Cc: <stable@vger.kernel.org> # 4.7
Fixes: 89ea94a7b6c4 ("IB/mlx5: Reset flow support for IB kernel ULPs")
Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoRDMA/ucm: Mark UCM interface as BROKEN
Leon Romanovsky [Wed, 23 May 2018 05:22:11 +0000 (08:22 +0300)]
RDMA/ucm: Mark UCM interface as BROKEN

In commit 357d23c811a7 ("Remove the obsolete libibcm library")
in rdma-core [1], we removed obsolete library which used the
/dev/infiniband/ucmX interface.

Following multiple syzkaller reports about non-sanitized
user input in the UCMA module, a short audit reveals the same
issues in the UCM module too.

It is better to disable this interface in the kernel
before the syzkaller team invests time and energy to harden
this unused interface.

[1] https://github.com/linux-rdma/rdma-core/pull/279

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/core: Remove duplicate declaration of gid_cache_wq
Parav Pandit [Tue, 22 May 2018 05:34:09 +0000 (08:34 +0300)]
IB/core: Remove duplicate declaration of gid_cache_wq

Remove duplicate declaration of gid_cache_wq.

Fixes: d41861942 ("IB/core: Add generic function to extract IB speed from netdev")
Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoRDMA/mlx5: Remove debug prints of VMA pointers
Leon Romanovsky [Tue, 22 May 2018 05:31:03 +0000 (08:31 +0300)]
RDMA/mlx5: Remove debug prints of VMA pointers

Remove various prints of VMA pointers.

Reported-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Reviewed-by: Michal Kalderon <michal.kalderon@cavium.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoRDMA/hns: Rename the idx field of db
oulijun [Tue, 22 May 2018 12:47:16 +0000 (20:47 +0800)]
RDMA/hns: Rename the idx field of db

The lower 15 bits of the parameter of the db structure have different
meanings when the db type is sq, rq or srq.

Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/qib: Fix DMA api warning with debug kernel
Mike Marciniszyn [Sat, 19 May 2018 00:07:01 +0000 (17:07 -0700)]
IB/qib: Fix DMA api warning with debug kernel

The following error occurs in a debug build when running MPI PSM:

[  307.415911] WARNING: CPU: 4 PID: 23867 at lib/dma-debug.c:1158
check_unmap+0x4ee/0xa20
[  307.455661] ib_qib 0000:05:00.0: DMA-API: device driver failed to check map
error[device address=0x00000000df82b000] [size=4096 bytes] [mapped as page]
[  307.517494] Modules linked in:
[  307.531584]  ib_isert iscsi_target_mod ib_srpt target_core_mod rpcrdma
sunrpc ib_srp scsi_transport_srp scsi_tgt ib_iser libiscsi ib_ipoib
scsi_transport_iscsi rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm
ib_qib intel_powerclamp coretemp rdmavt intel_rapl iosf_mbi kvm_intel kvm
irqbypass crc32_pclmul ghash_clmulni_intel ipmi_ssif ib_core aesni_intel sg
ipmi_si lrw gf128mul dca glue_helper ipmi_devintf iTCO_wdt gpio_ich hpwdt
iTCO_vendor_support ablk_helper hpilo acpi_power_meter cryptd ipmi_msghandler
ie31200_edac shpchp pcc_cpufreq lpc_ich pcspkr ip_tables xfs libcrc32c sd_mod
crc_t10dif crct10dif_generic mgag200 i2c_algo_bit drm_kms_helper syscopyarea
sysfillrect sysimgblt fb_sys_fops ttm ahci crct10dif_pclmul crct10dif_common
drm crc32c_intel libahci tg3 libata serio_raw ptp i2c_core
[  307.846113]  pps_core dm_mirror dm_region_hash dm_log dm_mod
[  307.866505] CPU: 4 PID: 23867 Comm: mpitests-IMB-MP Kdump: loaded Not
tainted 3.10.0-862.el7.x86_64.debug #1
[  307.911178] Hardware name: HP ProLiant DL320e Gen8, BIOS J05 11/09/2013
[  307.944206] Call Trace:
[  307.956973]  [<ffffffffbd9e915b>] dump_stack+0x19/0x1b
[  307.982201]  [<ffffffffbd2a2f58>] __warn+0xd8/0x100
[  308.005999]  [<ffffffffbd2a2fdf>] warn_slowpath_fmt+0x5f/0x80
[  308.034260]  [<ffffffffbd5f667e>] check_unmap+0x4ee/0xa20
[  308.060801]  [<ffffffffbd41acaa>] ? page_add_file_rmap+0x2a/0x1d0
[  308.090689]  [<ffffffffbd5f6c4d>] debug_dma_unmap_page+0x9d/0xb0
[  308.120155]  [<ffffffffbd4082e0>] ? might_fault+0xa0/0xb0
[  308.146656]  [<ffffffffc07761a5>] qib_tid_free.isra.14+0x215/0x2a0 [ib_qib]
[  308.180739]  [<ffffffffc0776bf4>] qib_write+0x894/0x1280 [ib_qib]
[  308.210733]  [<ffffffffbd540b00>] ? __inode_security_revalidate+0x70/0x80
[  308.244837]  [<ffffffffbd53c2b7>] ? security_file_permission+0x27/0xb0
[  308.266025] qib_ib0.8006: multicast join failed for
ff12:401b:8006:0000:0000:0000:ffff:ffff, status -22
[  308.323421]  [<ffffffffbd46f5d3>] vfs_write+0xc3/0x1f0
[  308.347077]  [<ffffffffbd492a5c>] ? fget_light+0xfc/0x510
[  308.372533]  [<ffffffffbd47045a>] SyS_write+0x8a/0x100
[  308.396456]  [<ffffffffbd9ff355>] system_call_fastpath+0x1c/0x21

The code calls qib_map_page(), which has never correctly tested for a
mapping error.

Fix by testing for pci_dma_mapping_error() in all cases and properly
handling the failure in the caller.

Additionally, streamline qib_map_page() arguments to satisfy just
the single caller.
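The essential shape of the fix is the standard mapping-error check
(sketch, not the exact qib hunk):

	dma_addr_t daddr;

	daddr = pci_map_page(dd->pcidev, page, 0, PAGE_SIZE,
			     PCI_DMA_FROMDEVICE);
	if (pci_dma_mapping_error(dd->pcidev, daddr)) {
		put_page(page);		/* unpin and report the failure */
		return -ENOMEM;
	}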

Cc: <stable@vger.kernel.org>
Reviewed-by: Alex Estrin <alex.estrin@intel.com>
Tested-by: Don Dutile <ddutile@redhat.com>
Reviewed-by: Don Dutile <ddutile@redhat.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoIB/isert: Fix for lib/dma_debug check_sync warning
Alex Estrin [Wed, 16 May 2018 01:31:39 +0000 (18:31 -0700)]
IB/isert: Fix for lib/dma_debug check_sync warning

The following error message occurs on a target host in a debug build
during session login:

[ 3524.411874] WARNING: CPU: 5 PID: 12063 at lib/dma-debug.c:1207 check_sync+0x4ec/0x5b0
[ 3524.421057] infiniband hfi1_0: DMA-API: device driver tries to sync DMA memory it has not allocated [device address=0x0000000000000000] [size=76 bytes]
......snip .....

[ 3524.535846] CPU: 5 PID: 12063 Comm: iscsi_np Kdump: loaded Not tainted 3.10.0-862.el7.x86_64.debug #1
[ 3524.546764] Hardware name: Dell Inc. PowerEdge R430/03XKDV, BIOS 1.2.6 06/08/2015
[ 3524.555740] Call Trace:
[ 3524.559102]  [<ffffffffa5fe915b>] dump_stack+0x19/0x1b
[ 3524.565477]  [<ffffffffa58a2f58>] __warn+0xd8/0x100
[ 3524.571557]  [<ffffffffa58a2fdf>] warn_slowpath_fmt+0x5f/0x80
[ 3524.578610]  [<ffffffffa5bf5b8c>] check_sync+0x4ec/0x5b0
[ 3524.585177]  [<ffffffffa58efc3f>] ? set_cpus_allowed_ptr+0x5f/0x1c0
[ 3524.592812]  [<ffffffffa5bf5cd0>] debug_dma_sync_single_for_cpu+0x80/0x90
[ 3524.601029]  [<ffffffffa586add3>] ? x2apic_send_IPI_mask+0x13/0x20
[ 3524.608574]  [<ffffffffa585ee1b>] ? native_smp_send_reschedule+0x5b/0x80
[ 3524.616699]  [<ffffffffa58e9b76>] ? resched_curr+0xf6/0x140
[ 3524.623567]  [<ffffffffc0879af0>] isert_create_send_desc.isra.26+0xe0/0x110 [ib_isert]
[ 3524.633060]  [<ffffffffc087af95>] isert_put_login_tx+0x55/0x8b0 [ib_isert]
[ 3524.641383]  [<ffffffffa58ef114>] ? try_to_wake_up+0x1a4/0x430
[ 3524.648561]  [<ffffffffc098cfed>] iscsi_target_do_tx_login_io+0xdd/0x230 [iscsi_target_mod]
[ 3524.658557]  [<ffffffffc098d827>] iscsi_target_do_login+0x1a7/0x600 [iscsi_target_mod]
[ 3524.668084]  [<ffffffffa59f9bc9>] ? kstrdup+0x49/0x60
[ 3524.674420]  [<ffffffffc098e976>] iscsi_target_start_negotiation+0x56/0xc0 [iscsi_target_mod]
[ 3524.684656]  [<ffffffffc098c2ee>] __iscsi_target_login_thread+0x90e/0x1070 [iscsi_target_mod]
[ 3524.694901]  [<ffffffffc098ca50>] ? __iscsi_target_login_thread+0x1070/0x1070 [iscsi_target_mod]
[ 3524.705446]  [<ffffffffc098ca50>] ? __iscsi_target_login_thread+0x1070/0x1070 [iscsi_target_mod]
[ 3524.715976]  [<ffffffffc098ca78>] iscsi_target_login_thread+0x28/0x60 [iscsi_target_mod]
[ 3524.725739]  [<ffffffffa58d60ff>] kthread+0xef/0x100
[ 3524.732007]  [<ffffffffa58d6010>] ? insert_kthread_work+0x80/0x80
[ 3524.739540]  [<ffffffffa5fff1b7>] ret_from_fork_nospec_begin+0x21/0x21
[ 3524.747558]  [<ffffffffa58d6010>] ? insert_kthread_work+0x80/0x80
[ 3524.755088] ---[ end trace 23f8bf9238bd1ed8 ]---
[ 3595.510822] iSCSI/iqn.1994-05.com.redhat:537fa56299: Unsupported SCSI Opcode 0xa3, sending CHECK_CONDITION.

The code calls dma_sync on login_tx_desc->dma_addr prior to initializing
it with a dma-mapped address.
login_tx_desc is a part of the iser_conn structure and is used only once
during login negotiation, so the issue is fixed by eliminating the
dma_sync call for this buffer using a special case routine.

Cc: <stable@vger.kernel.org>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Don Dutile <ddutile@redhat.com>
Signed-off-by: Alex Estrin <alex.estrin@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoIB/{rdmavt,hfi1}: Change hrtimer add to use pinned version
Mike Marciniszyn [Wed, 16 May 2018 01:31:24 +0000 (18:31 -0700)]
IB/{rdmavt,hfi1}: Change hrtimer add to use pinned version

Given we are dealing with nano-second level timers, when the timer
pops, ensure it happens on the CPU which caused the timer to be set
in the first place.  This avoids excessive jitter from the desired
expiration time by avoiding the cost of switching our context to
another CPU that is cache cold for this given timer.
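Concretely, this is the difference between the relative and the
pinned-relative hrtimer modes (illustrative; the timer field name here is
an assumption):

	/* before: the expiry may be handled on whichever CPU the core picks */
	hrtimer_start(&priv->s_timer, ns_to_ktime(delay_ns), HRTIMER_MODE_REL);

	/* after: pin expiry to the CPU that armed the timer */
	hrtimer_start(&priv->s_timer, ns_to_ktime(delay_ns),
		      HRTIMER_MODE_REL_PINNED);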

Reviewed-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoIB/hfi1: Set port number for errorinfo MAD response
Michael J. Ruhl [Wed, 16 May 2018 01:31:17 +0000 (18:31 -0700)]
IB/hfi1: Set port number for errorinfo MAD response

For errorinfo MAD requests, the response has a 0 port number left over
from a memset. Instead we should always set the port number in the
response.

Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoIB/hfi1: Cleanup of exp_rcv
Mike Marciniszyn [Wed, 16 May 2018 01:31:09 +0000 (18:31 -0700)]
IB/hfi1: Cleanup of exp_rcv

The knowledge of the internal workings of the expected receive
mechanism is too distributed.

Fix by:
- right-sizing several rcd fields associated with
  expected receive
- adding an init entrance to init all the lists
- consolidating all the allocations into an array anchored
  in the rcd

Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Reviewed-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoIB/hfi1: Add 16B Management Packet trace support
Don Hiatt [Wed, 16 May 2018 01:28:22 +0000 (18:28 -0700)]
IB/hfi1: Add 16B Management Packet trace support

Add trace support for 16B Management Packets.

Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Don Hiatt <don.hiatt@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoIB/hfi1: Add support for 16B Management Packets
Don Hiatt [Wed, 16 May 2018 01:28:15 +0000 (18:28 -0700)]
IB/hfi1: Add support for 16B Management Packets

16B Management Packets (L4=0x08) replace the BTH and DETH
of normal MAD packets with a header containing the
source and destination queue pair numbers; fields that
were originally retrieved from the BTH/DETH are now populated
from this header as well as from the 16B LRH (e.g. pkey).

16B Management Packets are used as an optimized management
format on 16B fabrics.

These management packets have an opcode of IB_OPCODE_UD_SEND_ONLY,
a fixed 3Byte pad, and a header length of 24Bytes.

The decision as to when we send a management packet is based
upon either the source or destination queue pair number being
0 or 1.

Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Don Hiatt <don.hiatt@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoIB/hfi1: Define 16B Management Packets
Don Hiatt [Wed, 16 May 2018 01:28:07 +0000 (18:28 -0700)]
IB/hfi1: Define 16B Management Packets

Add 16B Management Packet definition. This optimized packet
format replaces the ib_other_headers and BTH with a source
and destination QP number.

To support these packets we introduce struct opa_16b_mgmt
into the struct hfi1_16b_header.
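In sketch form, the new L4 header is just the two QP numbers described
above (field names are an assumption here):

/* sketch of the 16B management L4 header carried in struct hfi1_16b_header */
struct opa_16b_mgmt {
	__be32 dest_qpn;
	__be32 src_qpn;
};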

This packet format is only used for MAD packets using the
IB_OPCODE_UD_SEND_ONLY opcode on QP0/1.

The original 16B implementation failed to use 16B management
packets so now we add their definition.

Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Don Hiatt <don.hiatt@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoiw_cxgb4: provide detailed driver-specific MR information
Steve Wise [Thu, 10 May 2018 14:32:01 +0000 (07:32 -0700)]
iw_cxgb4: provide detailed driver-specific MR information

Add a table of important fields from the fw_ri_tpte structure to the mr
resource tracking table.  This is helpful in debugging.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoiw_cxgb4: provide detailed driver-specific CQ information
Steve Wise [Thu, 10 May 2018 14:31:51 +0000 (07:31 -0700)]
iw_cxgb4: provide detailed driver-specific CQ information

Add a table of important fields from the c4iw_cq* structures to the cq
resource tracking table.  This is helpful in debugging.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoiw_cxgb4: provide detailed provider-specific CM_ID information
Steve Wise [Thu, 10 May 2018 14:31:43 +0000 (07:31 -0700)]
iw_cxgb4: provide detailed provider-specific CM_ID information

Add a table of important fields from the c4iw_ep* structures to the cm_id
resource tracking table.  This is helpful in debugging.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/hns: Move the location for initializing tmp_len
oulijun [Tue, 22 May 2018 12:47:15 +0000 (20:47 +0800)]
RDMA/hns: Move the location for initializing tmp_len

When posted work request, it need to compute the length of
all sges of every wr and fill it into the msg_len field of
send wqe. Thus, While posting multiple wr,
tmp_len should be reinitialized to zero.
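In sketch form, the point is simply to reset the accumulator inside the
per-WR loop (illustrative field names):

	for (nreq = 0; wr; ++nreq, wr = wr->next) {
		u32 tmp_len = 0;	/* must restart at zero for every wr */

		for (i = 0; i < wr->num_sge; i++)
			tmp_len += wr->sg_list[i].length;

		wqe->msg_len = cpu_to_le32(tmp_len);
	}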

Fixes: 8b9b8d143b46 ("RDMA/hns: Fix the endian problem for hns")
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoRDMA/hns: Bugfix for cq record db for kernel
oulijun [Tue, 22 May 2018 12:47:14 +0000 (20:47 +0800)]
RDMA/hns: Bugfix for cq record db for kernel

When using the cq record db for kernel space, the driver needs to set
hr_cq->db_en to 1 and configure the dma address of the cq record db in
the qp context.

Fixes: 86188a8810ed ("RDMA/hns: Support cq record doorbell for kernel space")
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/uverbs: Fix uverbs_attr_get_obj
Jason Gunthorpe [Tue, 22 May 2018 21:56:51 +0000 (15:56 -0600)]
IB/uverbs: Fix uverbs_attr_get_obj

The err pointer comes from uverbs_attr_get, not from the uobject member,
which does not store an ERR_PTR.

Fixes: be934cca9e98 ("IB/uverbs: Add device memory registration ioctl support")
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
6 years agoRDMA/qedr: Fix doorbell bar mapping for dpi > 1
Kalderon, Michal [Tue, 15 May 2018 12:13:33 +0000 (15:13 +0300)]
RDMA/qedr: Fix doorbell bar mapping for dpi > 1

Each user_context receives a separate dpi value and thus a different
address on the doorbell bar. The qedr_mmap function needs to validate
the address and map the doorbell bar accordingly.
The current implementation always checked against the dpi=0 doorbell
range, leading to a wrong mapping of the doorbell bar (it entered an else
case that mapped the address differently). qedr_mmap should only be used
for doorbells, so the else was actually wrong in the first place.
This only has an effect on the arm architecture and is not an issue on an
x86 based architecture.
This led to doorbells not occurring on arm based systems and caused
applications that use more than one dpi (or several applications
running simultaneously) to hang.

Fixes: ac1b36e55a51 ("qedr: Add support for user context verbs")
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoRDMA/ipoib: drop skb on path record lookup failure
Evgenii Smirnov [Tue, 22 May 2018 14:00:47 +0000 (16:00 +0200)]
RDMA/ipoib: drop skb on path record lookup failure

In the unicast_arp_send function there is an inconsistency in the error
handling of the path_rec_start call. If path_rec_start is called because of
an absent ah field, the skb will be dropped. But if it is called on the
creation of a new path, or if the path is invalid, the skb will be added to
the tail of the path queue.  In the case of a new path it will be dropped in
path_free, but in the case of an invalid path it can stay in the queue
forever.

This patch unifies the behavior, dropping the skb in all cases where
path_rec_start fails.

Signed-off-by: Evgenii Smirnov <evgenii.smirnov@profitbricks.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/CMA: add rdma_iw_cm_id() and rdma_res_to_id() helpers
Steve Wise [Thu, 10 May 2018 14:31:36 +0000 (07:31 -0700)]
RDMA/CMA: add rdma_iw_cm_id() and rdma_res_to_id() helpers

Add a helper function for iwarp drivers to be able to map an
rdma_cm_id to an iw_cm_id.  This is useful for dumping driver specific
NLDEV/RESTRACK connection state.

Add a helper to return the rdma_cm_id pointer from the rdma_restrack
pointer.  This is needed for rdma drivers to map a res entry back to
the public rdma_cm_id struct.
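
For example, a provider restrack callback can recover its endpoint roughly
as follows (the c4iw-style names are illustrative):

struct rdma_cm_id *cm_id = rdma_res_to_id(res);
struct iw_cm_id *iw_cm_id = rdma_iw_cm_id(cm_id);
struct c4iw_ep *ep = iw_cm_id ? iw_cm_id->provider_data : NULL;

if (!ep)
        return 0;       /* nothing driver-specific to dump */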

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoiw_cxgb4: always set iw_cm_id.provider_data
Steve Wise [Thu, 10 May 2018 14:31:28 +0000 (07:31 -0700)]
iw_cxgb4: always set iw_cm_id.provider_data

In active side connections, the provider_data field is not
getting set.  This will be used in a subsequent patch to dump
state, so always set it.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agoRDMA/ipoib: Update paths on CLIENT_REREG/SM_CHANGE events
Doug Ledford [Fri, 18 May 2018 15:36:09 +0000 (11:36 -0400)]
RDMA/ipoib: Update paths on CLIENT_REREG/SM_CHANGE events

We do a light flush on CLIENT_REREG and SM_CHANGE events.  This goes
through and marks paths invalid. But we weren't always checking for this
validity when we needed to, and so we could keep using a path marked
invalid.  What's more, once we establish a path with a valid ah, we put
a pointer to the ah in the neigh struct directly, so even if we mark the
path as invalid, as long as the neigh has a direct pointer to the ah, it
keeps using the old, outdated ah.

To fix this we do several things.

1) Put the valid flag in the ah instead of the path struct, so when we
put the ah pointer directly in the neigh struct, we can easily check the
validity of the ah on send events.
2) Check the neigh->ah and neigh->ah->valid elements in the needed
places, and if we have an ah, but it's invalid, then invoke a refresh of
the ah.
3) Fix the various places that check for path, but didn't check for
path->valid (now path->ah && path->ah->valid).
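
The resulting send-path check looks roughly like this (simplified sketch;
the refresh helper name is illustrative, not the exact diff):

if (neigh->ah && neigh->ah->valid) {
        /* cached address handle is still usable */
        ipoib_send(dev, skb, neigh->ah, IPOIB_QPN(daddr));
} else {
        /* ah missing or invalidated by CLIENT_REREG/SM_CHANGE: refresh it */
        neigh_refresh_path(neigh, daddr, dev);
}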

Reported-by: Evgenii Smirnov <evgenii.smirnov@profitbricks.com>
Fixes: ee1e2c82c245 ("IPoIB: Refresh paths instead of flushing them on SM change events")
Signed-off-by: Doug Ledford <dledford@redhat.com>
6 years agonet/mlx5e: Explicitly set source e-switch in offloaded TC rules
Shahar Klein [Sun, 18 Mar 2018 07:03:49 +0000 (09:03 +0200)]
net/mlx5e: Explicitly set source e-switch in offloaded TC rules

Set a specific source e-switch when setting a rule that matches on the
ingress port.

Signed-off-by: Shahar Klein <shahark@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years agonet/mlx5: Add source e-switch owner
Shahar Klein [Sun, 18 Mar 2018 07:02:06 +0000 (09:02 +0200)]
net/mlx5: Add source e-switch owner

The source e-switch owner allows a vport on one e-switch port to be
associated with a rule defined on the other port's e-switch.

The role of the source eswitch owner valid bit in the flow group is to
allow the firmware to fail driver attempts to wildcard the source eswitch
match field. If this bit is not set, the firmware ignores the source
eswitch owner field entirely.

Signed-off-by: Shahar Klein <shahark@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years agonet/mlx5e: Explicitly set destination e-switch in FDB rules
Rabie Loulou [Sun, 18 Mar 2018 06:29:04 +0000 (08:29 +0200)]
net/mlx5e: Explicitly set destination e-switch in FDB rules

Set a specific destination e-switch when setting a destination vport.

Signed-off-by: Rabie Loulou <rabiel@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Reviewed-by: Shahar Klein <shahark@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years agonet/mlx5: Add destination e-switch owner
Shahar Klein [Thu, 22 Mar 2018 10:32:12 +0000 (12:32 +0200)]
net/mlx5: Add destination e-switch owner

The destination e-switch owner allows a rule in the namespace of one
e-switch owner to point to a vport that is natively associated with another
e-switch owner.

Signed-off-by: Shahar Klein <shahark@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years agonet/mlx5: Properly handle a vport destination when setting FTE
Shahar Klein [Thu, 22 Mar 2018 09:56:51 +0000 (11:56 +0200)]
net/mlx5: Properly handle a vport destination when setting FTE

When creating an FTE, properly distinguish between a vport destination and
a TIR destination. The previous code only worked accidentally because both
destination types happen to sit at the same offset within a union.
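
Conceptually, the FTE setup has to branch on the destination type rather
than rely on the union layout (illustrative fragment; the exact field names
differ in the driver):

switch (dst->dest_attr.type) {
case MLX5_FLOW_DESTINATION_TYPE_VPORT:
        id = dst->dest_attr.vport_num;  /* vport destination */
        break;
case MLX5_FLOW_DESTINATION_TYPE_TIR:
        id = dst->dest_attr.tir_num;    /* TIR destination */
        break;
}
MLX5_SET(dest_format_struct, in_dests, destination_id, id);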

Signed-off-by: Shahar Klein <shahark@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years agonet/mlx5: Add merged e-switch cap
Roi Dayan [Tue, 5 Dec 2017 08:38:58 +0000 (10:38 +0200)]
net/mlx5: Add merged e-switch cap

When merged e-switch is supported, the per-port e-switch is logically
merged into one e-switch that spans both physical ports and all the VFs.
Under a merged eswitch, both matching on the source vport and setting the
destination vport can carry a second attribute, the vhca id of the
eswitch owner.

For example:
esw0: {match: <src vport=1 owner=0> action: fwd to <dst vport=7, owner=1>}
is a flow set on eswitch0 matching on source vport=1 of its own eswitch,
with the action being forwarding to dest vport=7 of eswitch1.

Signed-off-by: Roi Dayan <roid@mellanox.com>
Reviewed-by: Shahar Klein <shahark@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years agoIB/rxe: avoid calling WARN_ON_ONCE twice
Zhu Yanjun [Thu, 17 May 2018 08:55:22 +0000 (04:55 -0400)]
IB/rxe: avoid calling WARN_ON_ONCE twice

In the exit branch, WARN_ON_ONCE is called to show the stack, so it is
not necessary to call WARN_ON_ONCE again before jumping to exit.

Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoRDMA/hns: Add 64KB page size support for hip08
Yixian Liu [Fri, 11 May 2018 08:31:23 +0000 (16:31 +0800)]
RDMA/hns: Add 64KB page size support for hip08

This patch adds support for a 64KB page size for hip08 in the kernel.

Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/ipoib: replace local_irq_disable() with proper locking
Sebastian Andrzej Siewior [Wed, 16 May 2018 19:47:32 +0000 (21:47 +0200)]
IB/ipoib: replace local_irq_disable() with proper locking

In ipoib_mcast_restart_task() the netif_addr_lock() is invoked prior to
local_irq_save(). netif_addr_lock() should not be invoked in an
interrupt-disabled section, only in BH-disabled sections.
The priv->lock is always acquired with interrupts disabled. The only place
where netif_addr_lock() and priv->lock nest is ipoib_mcast_restart_task().

Drop the local_irq_save() and acquire priv->lock with spin_lock_irq() inside
the netif_addr locked section. It is safe to do so because the caller is
either a worker function or __ipoib_ib_dev_flush(), both of which are called
with interrupts enabled (and since BH is also enabled here,
netif_addr_lock_bh() needs to be used).
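
In outline, the locking in ipoib_mcast_restart_task() then becomes
(simplified sketch):

netif_addr_lock_bh(dev);                /* caller runs with BH enabled */
spin_lock_irq(&priv->lock);             /* priv->lock still taken with IRQs off */

/* walk the device multicast list and rebuild the join list here */

spin_unlock_irq(&priv->lock);
netif_addr_unlock_bh(dev);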

Cc: Doug Ledford <dledford@redhat.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/mlx5: Expose MPLS related tunneling offloads
Ariel Levkovich [Sun, 13 May 2018 11:33:35 +0000 (14:33 +0300)]
IB/mlx5: Expose MPLS related tunneling offloads

This patch reports the device's capabilities to offload
encapsulated MPLS tunnel protocols to user-space:
- Capability to offload MPLS over GRE.
- Capability to offload MPLS over UDP.

Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/mlx5: Add support for MPLS flow specification
Ariel Levkovich [Sun, 13 May 2018 11:33:34 +0000 (14:33 +0300)]
IB/mlx5: Add support for MPLS flow specification

This patch introduces support for the MPLS flow spec and
allows the creation of rules that match on the
MPLS label.

Applying the rule matching depends on the order of the flow specs and the
location of the MPLS spec in the spec list, as different device
configurations are required for MPLSoGRE and MPLSoUDP versus
non-encapsulated MPLS.

Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/mlx5: Add support for GRE flow specification
Ariel Levkovich [Sun, 13 May 2018 11:33:33 +0000 (14:33 +0300)]
IB/mlx5: Add support for GRE flow specification

This patch introduces support for the GRE flow spec, allowing the
creation of rules based on the protocol and key fields that are part of
the GRE protocol header.

Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/uverbs: Introduce a MPLS steering match filter
Ariel Levkovich [Sun, 13 May 2018 11:33:32 +0000 (14:33 +0300)]
IB/uverbs: Introduce a MPLS steering match filter

Add a new MPLS steering match filter that can match against
a single MPLS tag field.

Since the MPLS header can reside in different locations in the packet's
protocol stack as well as be encapsulated with a tunnel protocol, it
is required to know the exact location of the header in the protocol
stack.

Therefore, when including the MPLS protocol spec in the specs list,
it is mandatory to provide the list in an ordered manner, so
that it represents the actual header order in a matching packet.

Drivers that process the spec list and apply the matching rule
should treat the position of the MPLS spec in the spec list as the
actual location of the MPLS label in the packet's protocol stack.

Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/uverbs: Expose MPLS flow spec to the user-kernel ABI header
Ariel Levkovich [Sun, 13 May 2018 11:33:31 +0000 (14:33 +0300)]
IB/uverbs: Expose MPLS flow spec to the user-kernel ABI header

Add ib_uverbs_flow_spec_mpls to define a rule to match the MPLS
protocol.

The spec includes the generic specs header, type, size and reserved
fields while the filter itself is defined as ib_uverbs_flow_mpls_filter
and includes a single 32bit field named 'label' which consists of:
Bits 0:19  - The MPLS label.
Bits 20:22 - Traffic class field.
Bit  23    - Bottom of stack bit.
Bits 24:31 - Time to live (TTL) field.
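
A sketch of the corresponding uapi definitions, based on the layout
described above (see include/uapi/rdma/ib_user_verbs.h for the
authoritative version):

struct ib_uverbs_flow_mpls_filter {
        /* bits 0:19 label, 20:22 traffic class, 23 bottom of stack, 24:31 TTL */
        __be32 label;
};

struct ib_uverbs_flow_spec_mpls {
        union {
                struct ib_uverbs_flow_spec_hdr hdr;
                struct {
                        __u32 type;
                        __u16 size;
                        __u16 reserved;
                };
        };
        struct ib_uverbs_flow_mpls_filter val;
        struct ib_uverbs_flow_mpls_filter mask;
};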

Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/uverbs: Introduce a GRE steering match filter
Ariel Levkovich [Sun, 13 May 2018 11:33:30 +0000 (14:33 +0300)]
IB/uverbs: Introduce a GRE steering match filter

Add a new GRE steering match filter that can match against the
key and protocol fields.

Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/uverbs: Expose GRE flow spec to the user-kernel ABI header
Ariel Levkovich [Sun, 13 May 2018 11:33:29 +0000 (14:33 +0300)]
IB/uverbs: Expose GRE flow spec to the user-kernel ABI header

Add ib_uverbs_flow_spec_gre to define a rule to match the GRE
encapsulation protocol.

The spec includes the generic specs header, type, size and reserved
fields while the filter itself is defined as ib_uverbs_flow_gre_filter
and includes:
1. Checksum present bit, key present bit and version bits in a single
   16bit field.
2. Protocol type field - Indicates the ether protocol type of the
   encapsulated payload.
3. Key field - present if key bit is set and contains an application
   specific key value.
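
A sketch of the filter layout this describes (see the uapi header for the
authoritative definition):

struct ib_uverbs_flow_gre_filter {
        /* checksum-present bit, key-present bit and version in one 16-bit field */
        __be16 c_ks_res0_ver;
        /* ether protocol type of the encapsulated payload */
        __be16 protocol;
        /* application-specific key, valid when the key bit is set */
        __be32 key;
};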

Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
6 years agoIB/mlx5: Use 'kvfree()' for memory allocated by 'kvzalloc()'
Christophe JAILLET [Thu, 17 May 2018 00:50:19 +0000 (17:50 -0700)]
IB/mlx5: Use 'kvfree()' for memory allocated by 'kvzalloc()'

When 'kvzalloc()' is used to allocate memory, 'kvfree()' must be used to
free it.
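
The general pairing rule, as a minimal illustration:

buf = kvzalloc(size, GFP_KERNEL);
if (!buf)
        return -ENOMEM;
/* use buf */
kvfree(buf);    /* kfree() would be wrong if kvzalloc() fell back to vmalloc() */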

Fixes: 1cbe6fc86ccfe ("IB/mlx5: Add support for CQE compressing")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years agonet/mlx5: Eswitch, Use 'kvfree()' for memory allocated by 'kvzalloc()'
Christophe JAILLET [Thu, 17 May 2018 00:49:01 +0000 (17:49 -0700)]
net/mlx5: Eswitch, Use 'kvfree()' for memory allocated by 'kvzalloc()'

When 'kvzalloc()' is used to allocate memory, 'kvfree()' must be used to
free it.

Fixes: fed9ce22bf8ae ("net/mlx5: E-Switch, Add API to create vport rx rules")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years agonet/mlx5: Vport, Use 'kvfree()' for memory allocated by 'kvzalloc()'
Christophe JAILLET [Thu, 17 May 2018 00:46:45 +0000 (17:46 -0700)]
net/mlx5: Vport, Use 'kvfree()' for memory allocated by 'kvzalloc()'

When 'kvzalloc()' is used to allocate memory, 'kvfree()' must be used to
free it.

Fixes: 9efa75254593d ("net/mlx5_core: Introduce access functions to query vport RoCE fields")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years agoIB/cm: Store and restore ah_attr during CM message processing
Parav Pandit [Mon, 14 May 2018 08:11:09 +0000 (11:11 +0300)]
IB/cm: Store and restore ah_attr during CM message processing

During the CM request processing flow, ah_attr is initialized twice:
first based on the wc, and second based on the primary path record.
ah_attr initialization from the path record can fail, which leaves
ah_attr zeroed out.

Therefore, always initialize ah_attr on the stack during the
reinitialization phase. If the ah_attr init is successful, use the new
ah_attr by overwriting the old one. If the ah_attr init fails, continue
to use the last ah_attr.
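
In outline (simplified sketch; helper names follow the IB core of that era
but this is not the exact patch):

struct rdma_ah_attr new_ah_attr;
int ret;

/* build the candidate attributes on the stack first */
ret = ib_init_ah_attr_from_path(device, port_num, path, &new_ah_attr);
if (ret)
        return ret;             /* keep using the previously valid ah_attr */

/* commit the new attributes only once initialization succeeded */
rdma_move_ah_attr(&av->ah_attr, &new_ah_attr);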

Signed-off-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>