[PATCH v2] nvme-tcp: fix a possible UAF when failing to allocate an io queue

Sagi Grimberg sagi at grimberg.me
Mon Mar 20 06:30:17 PDT 2023


>> @@ -1691,18 +1676,34 @@ static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid)
>>   static int nvme_tcp_start_queue(struct nvme_ctrl *nctrl, int idx)
>>   {
>>       struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
>> +    struct nvme_tcp_queue *queue = &ctrl->queues[idx];
>>       int ret;
>>
>> +    queue->rd_enabled = true;
>> +    nvme_tcp_init_recv_ctx(queue);
>> +    write_lock_bh(&queue->sock->sk->sk_callback_lock);
>> +    queue->sock->sk->sk_user_data = queue;
>> +    queue->state_change = queue->sock->sk->sk_state_change;
>> +    queue->data_ready = queue->sock->sk->sk_data_ready;
>> +    queue->write_space = queue->sock->sk->sk_write_space;
>> +    queue->sock->sk->sk_data_ready = nvme_tcp_data_ready;
>> +    queue->sock->sk->sk_state_change = nvme_tcp_state_change;
>> +    queue->sock->sk->sk_write_space = nvme_tcp_write_space;
>> +#ifdef CONFIG_NET_RX_BUSY_POLL
>> +    queue->sock->sk->sk_ll_usec = 1;
>> +#endif
>> +    write_unlock_bh(&queue->sock->sk->sk_callback_lock);
>> +
> Can't you put this into a separate function?
> (Will be needing that for TLS support anyway :-)

Sure.
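
Something along these lines (rough sketch only, the helper name
nvme_tcp_setup_sock_ops is just illustrative), so a TLS variant can
later reuse or replace it:

static void nvme_tcp_setup_sock_ops(struct nvme_tcp_queue *queue)
{
	/* install our socket callbacks and save the originals */
	write_lock_bh(&queue->sock->sk->sk_callback_lock);
	queue->sock->sk->sk_user_data = queue;
	queue->state_change = queue->sock->sk->sk_state_change;
	queue->data_ready = queue->sock->sk->sk_data_ready;
	queue->write_space = queue->sock->sk->sk_write_space;
	queue->sock->sk->sk_data_ready = nvme_tcp_data_ready;
	queue->sock->sk->sk_state_change = nvme_tcp_state_change;
	queue->sock->sk->sk_write_space = nvme_tcp_write_space;
#ifdef CONFIG_NET_RX_BUSY_POLL
	queue->sock->sk->sk_ll_usec = 1;
#endif
	write_unlock_bh(&queue->sock->sk->sk_callback_lock);
}

and nvme_tcp_start_queue() then simply does:

	queue->rd_enabled = true;
	nvme_tcp_init_recv_ctx(queue);
	nvme_tcp_setup_sock_ops(queue);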

> And shouldn't we consider 'rcu_write_lock_bh'

I don't know what that is...

> and 'rcu_assign_sk_user_data()' here?

I would like to understand what the possible race condition would be
if it is already only accessed under the rwlock (write_lock_bh on
sk_callback_lock).
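
For reference, the suggestion as I read it would look something like
the below (sketch only; whether nvme-tcp actually needs the RCU
publish semantics for sk_user_data is exactly what I'm asking about):

	write_lock_bh(&queue->sock->sk->sk_callback_lock);
	/* publish the queue pointer with RCU semantics instead of a plain store */
	rcu_assign_sk_user_data(queue->sock->sk, queue);
	queue->data_ready = queue->sock->sk->sk_data_ready;
	queue->sock->sk->sk_data_ready = nvme_tcp_data_ready;
	...
	write_unlock_bh(&queue->sock->sk->sk_callback_lock);

The callbacks themselves are still swapped under sk_callback_lock
either way; rcu_assign_sk_user_data() only matters for readers that
fetch the pointer via rcu_dereference_sk_user_data() outside the lock.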


