[PATCH V1 3/6] xen/virtio: Add option to restrict memory access under Xen

Oleksandr olekstysh at gmail.com
Sun Apr 24 09:53:30 PDT 2022


On 23.04.22 19:40, Christoph Hellwig wrote:


Hello Christoph

> Please split this into one patch that creates grant-dma-ops, and another
> that sets up the virtio restricted access helpers.


Sounds reasonable, will do:

1. grant-dma-ops.c with config XEN_GRANT_DMA_OPS

2. arch_has_restricted_virtio_memory_access() with config XEN_VIRTIO


>
>> +
>> +#ifdef CONFIG_ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
>> +int arch_has_restricted_virtio_memory_access(void)
>> +{
>> +	return (xen_has_restricted_virtio_memory_access() ||
>> +			cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT));
>> +}
> So instead of hardcoding Xen here, this seems like a candidate for
> another cc_platform_has flag.


I have limited knowledge of x86 and Xen on x86.

Would the Xen-specific bits fit into the Confidential Computing Platform
checks? I will let Juergen/Boris comment on this.
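
Just so I understand the suggestion correctly, the check could then become
something like this (a sketch only; CC_ATTR_XEN_GRANT_DMA is a made-up
attribute name used purely for illustration, no such flag exists today):

#ifdef CONFIG_ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
int arch_has_restricted_virtio_memory_access(void)
{
	/* CC_ATTR_XEN_GRANT_DMA is hypothetical: a new cc_platform flag
	 * that a Xen guest wanting grant-based virtio would set. */
	return cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) ||
	       cc_platform_has(CC_ATTR_XEN_GRANT_DMA);
}
#endif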


>
>> +config XEN_VIRTIO
>> +	bool "Xen virtio support"
>> +	default n
> n is the default default, so no need to specify it.

OK, will drop.


>
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/******************************************************************************
> The all * line is not the usual kernel style, I'd suggest to drop it.

OK, will drop.


>
>> +static struct page *xen_grant_dma_alloc_pages(struct device *dev, size_t size,
>> +					      dma_addr_t *dma_handle,
>> +					      enum dma_data_direction dir,
>> +					      gfp_t gfp)
>> +{
>> +	WARN_ONCE(1, "xen_grant_dma_alloc_pages size %zu\n", size);
>> +	return NULL;
>> +}
>> +
>> +static void xen_grant_dma_free_pages(struct device *dev, size_t size,
>> +				     struct page *vaddr, dma_addr_t dma_handle,
>> +				     enum dma_data_direction dir)
>> +{
>> +	WARN_ONCE(1, "xen_grant_dma_free_pages size %zu\n", size);
>> +}
> Please just wire this up to the same implementation as .alloc and .free.

Got it, will implement.
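
Presumably something like this (a sketch on top of this patch, assuming the
.alloc/.free callbacks here are named xen_grant_dma_alloc() and
xen_grant_dma_free(); the signatures are dictated by struct dma_map_ops):

static struct page *xen_grant_dma_alloc_pages(struct device *dev, size_t size,
					      dma_addr_t *dma_handle,
					      enum dma_data_direction dir,
					      gfp_t gfp)
{
	void *vaddr = xen_grant_dma_alloc(dev, size, dma_handle, gfp, 0);

	return vaddr ? virt_to_page(vaddr) : NULL;
}

static void xen_grant_dma_free_pages(struct device *dev, size_t size,
				     struct page *vaddr, dma_addr_t dma_handle,
				     enum dma_data_direction dir)
{
	xen_grant_dma_free(dev, size, page_address(vaddr), dma_handle, 0);
}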


>
>> +	spin_lock(&xen_grant_dma_lock);
>> +	list_add(&data->list, &xen_grant_dma_devices);
>> +	spin_unlock(&xen_grant_dma_lock);
> Hmm, having to do this device lookup for every DMA operation is going
> to suck. It might make sense to add a private field (e.g. as a union
> with the iommu field) in struct device instead.


I was thinking about it, but decided not to alter the common struct device
just to add a Xen-specific field, and I haven't managed to come up with
anything better than that brute-force lookup ...
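
Just to check I understood the suggestion correctly, it would be something
like this in struct device (a sketch only; xen_grant_dma is a made-up field
name):

struct device {
	...
	union {
		struct dev_iommu *iommu;
		struct xen_grant_dma_data *xen_grant_dma; /* hypothetical */
	};
	...
};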


>
> But if not you probably want to switch to a more efficient data
> structure like the xarray at least.

... I think this is a good point, thank you. I have no idea how much faster
it is going to be, but the resulting code looks simpler (if, of course, I
correctly understood the usage of xarray):


diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index a512c0a..7ecc0b0 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -11,6 +11,7 @@
  #include <linux/dma-map-ops.h>
  #include <linux/of.h>
  #include <linux/pfn.h>
+#include <linux/xarray.h>
  #include <xen/xen.h>
  #include <xen/grant_table.h>

@@ -19,12 +20,9 @@ struct xen_grant_dma_data {
         domid_t dev_domid;
         /* Is device behaving sane? */
         bool broken;
-       struct device *dev;
-       struct list_head list;
  };

-static LIST_HEAD(xen_grant_dma_devices);
-static DEFINE_SPINLOCK(xen_grant_dma_lock);
+static DEFINE_XARRAY(xen_grant_dma_devices);

  #define XEN_GRANT_DMA_ADDR_OFF (1ULL << 63)

@@ -40,21 +38,13 @@ static inline grant_ref_t dma_to_grant(dma_addr_t dma)

  static struct xen_grant_dma_data *find_xen_grant_dma_data(struct device *dev)
  {
-       struct xen_grant_dma_data *data = NULL;
-       bool found = false;
-
-       spin_lock(&xen_grant_dma_lock);
-
-       list_for_each_entry(data, &xen_grant_dma_devices, list) {
-               if (data->dev == dev) {
-                       found = true;
-                       break;
-               }
-       }
+       struct xen_grant_dma_data *data;

-       spin_unlock(&xen_grant_dma_lock);
+       xa_lock(&xen_grant_dma_devices);
+       data = xa_load(&xen_grant_dma_devices, (unsigned long)dev);
+       xa_unlock(&xen_grant_dma_devices);

-       return found ? data : NULL;
+       return data;
  }

  /*
@@ -310,11 +300,12 @@ void xen_grant_setup_dma_ops(struct device *dev)
                 goto err;

         data->dev_domid = dev_domid;
-       data->dev = dev;

-       spin_lock(&xen_grant_dma_lock);
-       list_add(&data->list, &xen_grant_dma_devices);
-       spin_unlock(&xen_grant_dma_lock);
+       if (xa_err(xa_store(&xen_grant_dma_devices, (unsigned long)dev, data,
+                       GFP_KERNEL))) {
+               dev_err(dev, "Cannot store Xen grant DMA data\n");
+               goto err;
+       }

         dev->dma_ops = &xen_grant_dma_ops;
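
By the way, if I read the xarray API correctly, xa_load() is RCU-safe by
itself, so the xa_lock()/xa_unlock() pair around the pure lookup is probably
not even strictly needed. Also, xa_insert() could be used instead of
xa_store() to catch a double setup for the same device, since it fails with
-EBUSY if an entry already exists at that index (a sketch):

	int ret = xa_insert(&xen_grant_dma_devices, (unsigned long)dev, data,
			    GFP_KERNEL);

	if (ret) {
		dev_err(dev, "Cannot store Xen grant DMA data\n");
		goto err;
	}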


>
>> +EXPORT_SYMBOL_GPL(xen_grant_setup_dma_ops);
> I don't think this has any modular users, or did I miss something?

No, you didn't. Will drop here and in the next patch for 
xen_is_grant_dma_device() as well.


-- 
Regards,

Oleksandr Tyshchenko



