[PATCH rdma v2] RDMA: Add rdma_connect_locked()

Leon Romanovsky leonro at nvidia.com
Tue Oct 27 09:19:36 EDT 2020


On Tue, Oct 27, 2020 at 09:20:36AM -0300, Jason Gunthorpe wrote:
> There are two flows for handling RDMA_CM_EVENT_ROUTE_RESOLVED: either the
> handler triggers a completion and another thread does rdma_connect(), or
> the handler directly calls rdma_connect().
>
> In all cases rdma_connect() needs to hold the handler_mutex, but when
> handlers are invoked this is already held by the core code. This causes
> ULPs using the second method to deadlock.
>
> Provide an rdma_connect_locked() and have all ULPs call it from their
> handlers.
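
For reference, the second flow then looks roughly like this in a ULP
handler (a sketch only: my_ulp_cm_handler and the conn_param values are
made up for illustration, not taken from the patch):

	static int my_ulp_cm_handler(struct rdma_cm_id *id,
				     struct rdma_cm_event *event)
	{
		struct rdma_conn_param param = {
			.retry_count = 7,
			.rnr_retry_count = 7,
		};
		int ret;

		switch (event->event) {
		case RDMA_CM_EVENT_ROUTE_RESOLVED:
			/*
			 * The core already holds id_priv->handler_mutex
			 * while this handler runs, so calling plain
			 * rdma_connect() here would deadlock.
			 */
			ret = rdma_connect_locked(id, &param);
			if (ret)
				pr_err("rdma_connect_locked failed: %d\n",
				       ret);
			return ret;
		default:
			return 0;
		}
	}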
>
> Link: https://lore.kernel.org/r/0-v1-75e124dbad74+b05-rdma_connect_locking_jgg@nvidia.com
> Reported-and-tested-by: Guoqing Jiang <guoqing.jiang at cloud.ionos.com>
> Fixes: 2a7cec538169 ("RDMA/cma: Fix locking for the RDMA_CM_CONNECT state")
> Acked-by: Santosh Shilimkar <santosh.shilimkar at oracle.com>
> Acked-by: Jack Wang <jinpu.wang at cloud.ionos.com>
> Reviewed-by: Christoph Hellwig <hch at lst.de>
> Signed-off-by: Jason Gunthorpe <jgg at nvidia.com>
> ---
>  drivers/infiniband/core/cma.c            | 40 +++++++++++++++++++++---
>  drivers/infiniband/ulp/iser/iser_verbs.c |  2 +-
>  drivers/infiniband/ulp/rtrs/rtrs-clt.c   |  4 +--
>  drivers/nvme/host/rdma.c                 |  4 +--
>  include/rdma/rdma_cm.h                   | 14 ++-------
>  net/rds/ib_cm.c                          |  5 +--
>  6 files changed, 46 insertions(+), 23 deletions(-)
>
> v2:
>  - Remove extra code from nvme (Chao)
>  - Fix long lines (CH)
>
> I've applied this version to rdma-rc, expecting to get these ULPs unbroken
> for the rc2 release.
>
> Thanks,
> Jason
>
> diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
> index 7c2ab1f2fbea37..193c8902b9db26 100644
> --- a/drivers/infiniband/core/cma.c
> +++ b/drivers/infiniband/core/cma.c
> @@ -405,10 +405,10 @@ static int cma_comp_exch(struct rdma_id_private *id_priv,
>  	/*
>  	 * The FSM uses a funny double locking where state is protected by both
>  	 * the handler_mutex and the spinlock. State is not allowed to change
> -	 * away from a handler_mutex protected value without also holding
> +	 * to/from a handler_mutex protected value without also holding
>  	 * handler_mutex.
>  	 */
> -	if (comp == RDMA_CM_CONNECT)
> +	if (comp == RDMA_CM_CONNECT || exch == RDMA_CM_CONNECT)
>  		lockdep_assert_held(&id_priv->handler_mutex);
>
>  	spin_lock_irqsave(&id_priv->lock, flags);
> @@ -4038,13 +4038,21 @@ static int cma_connect_iw(struct rdma_id_private *id_priv,
>  	return ret;
>  }
>
> -int rdma_connect(struct rdma_cm_id *id, struct rdma_conn_param *conn_param)
> +/**
> + * rdma_connect_locked - Initiate an active connection request.
> + * @id: Connection identifier to connect.
> + * @conn_param: Connection information used for connected QPs.
> + *
> + * Same as rdma_connect() but can only be called from the
> + * RDMA_CM_EVENT_ROUTE_RESOLVED handler callback.
> + */
> +int rdma_connect_locked(struct rdma_cm_id *id,
> +			struct rdma_conn_param *conn_param)
>  {
>  	struct rdma_id_private *id_priv =
>  		container_of(id, struct rdma_id_private, id);
>  	int ret;
>
> -	mutex_lock(&id_priv->handler_mutex);
>  	if (!cma_comp_exch(id_priv, RDMA_CM_ROUTE_RESOLVED, RDMA_CM_CONNECT)) {
>  		ret = -EINVAL;
>  		goto err_unlock;

Not a big deal, but this label is no longer correct: with the
mutex_lock() moved out to rdma_connect(), nothing is unlocked on this
error path.
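
Something along these lines would match the code better (a rename
sketch only; err_state is just a suggested name):

	if (!cma_comp_exch(id_priv, RDMA_CM_ROUTE_RESOLVED, RDMA_CM_CONNECT)) {
		ret = -EINVAL;
		goto err_state;	/* was err_unlock; nothing is unlocked here */
	}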

Thanks


