[PATCH 1/1] RDMA over Fibre Channel
Christoph Hellwig
hch at infradead.org
Thu Apr 19 02:39:46 PDT 2018
On Wed, Apr 18, 2018 at 10:23:45PM +0530, Anand Nataraja Sundaram wrote:
> Just wanted to understand your concerns about the mods done to Linux
> NVMe a bit better.
>
> The whole work was to tunnel the IB protocol over the existing NVMe
> protocol. To do this we first made sure the NVMe stack (host, target)
> is able to send block traffic and non-block (object-based) traffic.
> No changes were required in the NVMe protocol itself; only the target
> stack needed some modifications to vector
> (a) NVMe block traffic to the backend NVMe namespace block driver
> (b) non-block IB protocol traffic to the RFC transport layer
>
> The NVMe changes are restricted to the following:
> drivers/nvme/target/fc.c | 94 +-
> drivers/nvme/target/io-cmd.c | 44 +-
> include/linux/nvme-fc-driver.h | 6 +
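
[For concreteness, the vectoring described in (a)/(b) above might look
roughly like the minimal userspace sketch below. The opcode value
RFC_CMD_TUNNEL and the two handler names are invented for illustration
and are not taken from the patch; only the read/write opcodes match the
NVMe spec.]

/*
 * Sketch of the two-way dispatch in the target stack: standard NVM
 * I/O opcodes go to the namespace block backend, everything in a
 * hypothetical tunnel opcode goes to the RFC transport layer.
 */
#include <stdint.h>
#include <stdio.h>

enum {
	NVME_CMD_WRITE = 0x01,	/* standard NVM I/O opcodes */
	NVME_CMD_READ  = 0x02,
	RFC_CMD_TUNNEL = 0xc0,	/* hypothetical vendor-range opcode */
};

struct capsule {
	uint8_t opcode;
	/* payload elided */
};

/* (a) block traffic -> backend NVMe namespace block driver */
static void nvme_ns_submit(struct capsule *c)
{
	(void)c;
	puts("-> namespace block driver");
}

/* (b) tunnelled IB traffic -> RFC transport layer */
static void rfc_xport_queue(struct capsule *c)
{
	(void)c;
	puts("-> RFC transport layer");
}

static void target_dispatch(struct capsule *c)
{
	switch (c->opcode) {
	case NVME_CMD_WRITE:
	case NVME_CMD_READ:
		nvme_ns_submit(c);
		break;
	case RFC_CMD_TUNNEL:
		rfc_xport_queue(c);
		break;
	default:
		puts("unknown opcode");
	}
}

int main(void)
{
	struct capsule rd  = { NVME_CMD_READ };
	struct capsule tun = { RFC_CMD_TUNNEL };

	target_dispatch(&rd);
	target_dispatch(&tun);
	return 0;
}
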
You forgot the larger chunks of Linux NVMe code you copied while
stripping the copyright notices and incorrectly relicensing them under
a BSD-like license.
The point is that IFF you really want to do RDMA over NVMe you need to
define a new NVMe I/O command set for it and get it standardized. If
that is done we could build a proper upper-level protocol interface for
it, instead of just hacking it into the protocol and code through the
backdoor. But as said before, there is no upside to using NVMe. I can
see the interest in layering on top of FCP to reuse existing hardware
accelerations, similar to how NVMe layers on top of FCP for that reason,
but there isn't really any value in throwing in another NVMe layer.
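
[To make the "proper upper level protocol interface" point concrete:
with a standardized command set, the target core could dispatch by
command set rather than by special-cased opcodes in fc.c and io-cmd.c.
The userspace sketch below is purely illustrative; every name in it
(ulp_ops, the command-set identifiers, the handlers) is invented and
does not reflect any existing kernel or spec interface.]

/*
 * Sketch of an upper-level protocol hook: each command set registers
 * a handler, and the core routes requests to the matching handler.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

enum csi {
	CSI_NVM  = 0x0,	/* block I/O command set */
	CSI_RDMA = 0x5,	/* hypothetical, not standardized */
};

struct req {
	enum csi csi;
	uint8_t opcode;
};

struct ulp_ops {
	enum csi csi;
	void (*execute)(struct req *);
};

static void nvm_execute(struct req *r)
{
	(void)r;
	puts("NVM command set: block I/O path");
}

static void rdma_execute(struct req *r)
{
	(void)r;
	puts("RDMA command set: ULP handler");
}

static const struct ulp_ops registered[] = {
	{ CSI_NVM,  nvm_execute  },
	{ CSI_RDMA, rdma_execute },
};

static void core_dispatch(struct req *r)
{
	for (size_t i = 0; i < sizeof(registered) / sizeof(registered[0]); i++) {
		if (registered[i].csi == r->csi) {
			registered[i].execute(r);
			return;
		}
	}
	puts("unsupported command set");
}

int main(void)
{
	struct req blk  = { CSI_NVM,  0x02 };
	struct req rdma = { CSI_RDMA, 0x01 };

	core_dispatch(&blk);
	core_dispatch(&rdma);
	return 0;
}

[The point of the indirection is that a new command set registers a
handler with the core instead of being patched into the existing block
I/O paths through the backdoor.]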