[PATCH v5 01/17] iommu/arm-smmu-v3: Make STE programming independent of the callers

Will Deacon <will at kernel.org>
Thu Feb 15 05:49:53 PST 2024


Hi Jason,

On Tue, Feb 06, 2024 at 11:12:38AM -0400, Jason Gunthorpe wrote:
> As the comment in arm_smmu_write_strtab_ent() explains, this routine has
> been limited to only work correctly in certain scenarios that the caller
> must ensure. Generally the caller must put the STE into ABORT or BYPASS
> before attempting to program it to something else.

This is looking pretty good now, but I have a few comments inline.

>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 330 ++++++++++++++++----
>  1 file changed, 263 insertions(+), 67 deletions(-)
> 
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index 0ffb1cf17e0b2e..f0b915567cbcdc 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -48,6 +48,21 @@ enum arm_smmu_msi_index {
>  	ARM_SMMU_MAX_MSIS,
>  };
>  
> +struct arm_smmu_entry_writer_ops;
> +struct arm_smmu_entry_writer {
> +	const struct arm_smmu_entry_writer_ops *ops;
> +	struct arm_smmu_master *master;
> +};
> +
> +struct arm_smmu_entry_writer_ops {
> +	unsigned int num_entry_qwords;
> +	__le64 v_bit;
> +	void (*get_used)(const __le64 *entry, __le64 *used);
> +	void (*sync)(struct arm_smmu_entry_writer *writer);
> +};

Can we avoid the indirection for now, please? I'm sure we'll want it later
when you extend this to CDs, but for the initial support it just makes it
more difficult to follow the flow. Should be a trivial thing to drop, I
hope.
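
To illustrate what I mean, something along the lines of the skeleton below
(completely untested; arm_smmu_write_ste() is just a name I made up and the
body is only the shape of the flow, not a drop-in replacement):

static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid,
                               __le64 *cur, const __le64 *target)
{
        __le64 cur_used[STRTAB_STE_DWORDS] = {};
        __le64 target_used[STRTAB_STE_DWORDS] = {};

        arm_smmu_get_ste_used(cur, cur_used);
        arm_smmu_get_ste_used(target, target_used);

        /*
         * Same update sequence as in your writer (tweak the unused bits,
         * then the critical qword/V bit, then the rest), but open-coded
         * for the STE case and calling
         * arm_smmu_sync_ste_for_sid(master->smmu, sid) directly rather
         * than bouncing through ops->sync().
         */
}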

> +static void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
>  {
> +	unsigned int cfg = FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent[0]));
> +
> +	used_bits[0] = cpu_to_le64(STRTAB_STE_0_V);
> +	if (!(ent[0] & cpu_to_le64(STRTAB_STE_0_V)))
> +		return;
> +
> +	/*
> +	 * See 13.5 Summary of attribute/permission configuration fields for the
> +	 * SHCFG behavior. It is only used for BYPASS, including S1DSS BYPASS,
> +	 * and S2 only.
> +	 */
> +	if (cfg == STRTAB_STE_0_CFG_BYPASS ||
> +	    cfg == STRTAB_STE_0_CFG_S2_TRANS ||
> +	    (cfg == STRTAB_STE_0_CFG_S1_TRANS &&
> +	     FIELD_GET(STRTAB_STE_1_S1DSS, le64_to_cpu(ent[1])) ==
> +		     STRTAB_STE_1_S1DSS_BYPASS))
> +		used_bits[1] |= cpu_to_le64(STRTAB_STE_1_SHCFG);

Huh, SHCFG is really getting in the way here, isn't it? I think it also
means we don't have a "hitless" transition from stage-2 translation ->
bypass. I'm inclined to leave it set to "use incoming" all the time; the
only difference I can see is if you have stage-2 translation and a
master emitting outer-shareable transactions, in which case they'd now
be outer-shareable instead of inner-shareable, which I think is harmless.

Additionally, it looks like there's an existing buglet here in that we
shouldn't set SHCFG if SMMU_IDR1.ATTR_TYPES_OVR == 0.
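
i.e. when constructing the bypass/S2 STEs, something along these lines
(sketch only: ARM_SMMU_FEAT_ATTR_TYPES_OVR is a feature bit we'd have to
add and probe from SMMU_IDR1.ATTR_TYPES_OVR first):

        /* Only meaningful if the SMMU can override incoming attributes */
        if (smmu->features & ARM_SMMU_FEAT_ATTR_TYPES_OVR)
                target[1] |= cpu_to_le64(FIELD_PREP(STRTAB_STE_1_SHCFG,
                                                    STRTAB_STE_1_SHCFG_INCOMING));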

> +
> +	used_bits[0] |= cpu_to_le64(STRTAB_STE_0_CFG);
> +	switch (cfg) {
> +	case STRTAB_STE_0_CFG_ABORT:
> +	case STRTAB_STE_0_CFG_BYPASS:
> +		break;
> +	case STRTAB_STE_0_CFG_S1_TRANS:
> +		used_bits[0] |= cpu_to_le64(STRTAB_STE_0_S1FMT |
> +					    STRTAB_STE_0_S1CTXPTR_MASK |
> +					    STRTAB_STE_0_S1CDMAX);
> +		used_bits[1] |=
> +			cpu_to_le64(STRTAB_STE_1_S1DSS | STRTAB_STE_1_S1CIR |
> +				    STRTAB_STE_1_S1COR | STRTAB_STE_1_S1CSH |
> +				    STRTAB_STE_1_S1STALLD | STRTAB_STE_1_STRW);
> +		used_bits[1] |= cpu_to_le64(STRTAB_STE_1_EATS);
> +		used_bits[2] |= cpu_to_le64(STRTAB_STE_2_S2VMID);
> +		break;
> +	case STRTAB_STE_0_CFG_S2_TRANS:
> +		used_bits[1] |=
> +			cpu_to_le64(STRTAB_STE_1_EATS);
> +		used_bits[2] |=
> +			cpu_to_le64(STRTAB_STE_2_S2VMID | STRTAB_STE_2_VTCR |
> +				    STRTAB_STE_2_S2AA64 | STRTAB_STE_2_S2ENDI |
> +				    STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2R);
> +		used_bits[3] |= cpu_to_le64(STRTAB_STE_3_S2TTB_MASK);
> +		break;

With SHCFG fixed, can we go a step further with this and simply identify
the live qwords directly, rather than on a field-by-field basis? I think
we should be able to do the same "hitless" transitions you want with the
coarser granularity.
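
For example (illustrative only -- the name is made up and the per-config
qword sets are just transcribed from your field lists above, assuming SHCFG
is no longer tracked):

/* Return a bitmap of the qwords that this STE configuration actually uses */
static unsigned int arm_smmu_get_ste_used_qwords(const __le64 *ent)
{
        unsigned int cfg = FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent[0]));

        if (!(ent[0] & cpu_to_le64(STRTAB_STE_0_V)))
                return BIT(0);

        switch (cfg) {
        case STRTAB_STE_0_CFG_ABORT:
        case STRTAB_STE_0_CFG_BYPASS:
                return BIT(0);
        case STRTAB_STE_0_CFG_S1_TRANS:
                return BIT(0) | BIT(1) | BIT(2);
        case STRTAB_STE_0_CFG_S2_TRANS:
                return BIT(0) | BIT(1) | BIT(2) | BIT(3);
        default:
                /* Unknown configuration: treat everything as live */
                return GENMASK(STRTAB_STE_DWORDS - 1, 0);
        }
}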

Will


