[PATCH 1/3] drivers: misc: add omap_hwspinlock driver

Kevin Hilman khilman at deeprootsystems.com
Tue Oct 19 12:58:42 EDT 2010


Ohad Ben-Cohen <ohad at wizery.com> writes:

> From: Simon Que <sque at ti.com>
>
> Add driver for OMAP's Hardware Spinlock module.
>
> The OMAP Hardware Spinlock module, initially introduced in OMAP4,
> provides hardware assistance for synchronization between the
> multiple processors in the device (Cortex-A9, Cortex-M3 and
> C64x+ DSP).

[...]

> +/**
> + * omap_hwspin_trylock() - attempt to lock a specific hwspinlock
> + * @hwlock: a hwspinlock which we want to trylock
> + * @flags: a pointer to where the caller's interrupt state will be saved at
> + *
> + * This function attempts to lock the underlying hwspinlock. Unlike
> + * hwspinlock_lock, this function will immediately fail if the hwspinlock
> + * is already taken.
> + *
> + * Upon a successful return from this function, preemption and interrupts
> + * are disabled, so the caller must not sleep, and is advised to release
> + * the hwspinlock as soon as possible. This is required in order to minimize
> + * remote cores polling on the hardware interconnect.
> + *
> + * This function can be called from any context.
> + *
> + * Returns 0 if we successfully locked the hwspinlock, -EBUSY if
> + * the hwspinlock was already taken, and -EINVAL if @hwlock is invalid.
> + */
> +int omap_hwspin_trylock(struct omap_hwspinlock *hwlock, unsigned long *flags)
> +{
> +	u32 ret;
> +
> +	if (IS_ERR_OR_NULL(hwlock)) {
> +		pr_err("invalid hwlock\n");
> +		return -EINVAL;
> +	}
> +
> +	/*
> +	 * This spin_trylock_irqsave serves two purposes:
> +	 *
> +	 * 1. Disable local interrupts and preemption, in order to
> +	 *    minimize the period of time in which the hwspinlock
> +	 *    is taken (so the caller will not be preempted). This is
> +	 *    important in order to minimize the possible polling on
> +	 *    the hardware interconnect by a remote user of this lock.
> +	 *
> +	 * 2. Make this hwspinlock primitive SMP-safe (so we can try to
> +	 *    take it from additional contexts on the local cpu)
> +	 */

3. Ensures that in_atomic/might_sleep checks catch potential problems
   with hwspinlock usage (e.g. scheduler checks like 'scheduling while
   atomic' etc.)

> +	if (!spin_trylock_irqsave(&hwlock->lock, *flags))
> +		return -EBUSY;
> +
> +	/* attempt to acquire the lock by reading its value */
> +	ret = readl(hwlock->addr);
> +
> +	/* lock is already taken */
> +	if (ret == SPINLOCK_TAKEN) {
> +		spin_unlock_irqrestore(&hwlock->lock, *flags);
> +		return -EBUSY;
> +	}
> +
> +	/*
> +	 * We can be sure the other core's memory operations
> +	 * are observable to us only _after_ we successfully take
> +	 * the hwspinlock, so we must make sure that subsequent memory
> +	 * operations will not be reordered before we actually took the
> +	 * hwspinlock.
> +	 * Note: the implicit memory barrier of the spinlock above is too
> +	 * early, so we need this additional explicit memory barrier.
> +	 */
> +	mb();
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(omap_hwspin_trylock);

[...]

> +/**
> + * omap_hwspinlock_unlock() - unlock a specific hwspinlock

minor nit: s/lock_unlock/_unlock/  to match name below

> + * @hwlock: a previously-acquired hwspinlock which we want to unlock
> + * @flags: a pointer to the caller's saved interrupts state
> + *
> + * This function will unlock a specific hwspinlock, enable preemption and
> + * restore the interrupts state. @hwlock must have been taken (by us!)
> + * before calling this function, using one of omap_hwspin_{lock, trylock,
> + * lock_timeout}; it is a bug to call unlock on a @hwlock that we do not hold.
> + *
> + * This function can be called from any context.
> + *
> + * Returns 0 on success, or -EINVAL if @hwlock is invalid.
> + */
> +int omap_hwspin_unlock(struct omap_hwspinlock *hwlock, unsigned long *flags)
> +{
> +	if (IS_ERR_OR_NULL(hwlock)) {
> +		pr_err("invalid hwlock\n");
> +		return -EINVAL;
> +	}
> +
> +	/*
> +	 * We must make sure that memory operations, done before unlocking
> +	 * the hwspinlock, will not be reordered after the lock is released.
> +	 * The memory barrier induced by the spin_unlock below is too late:
> +	 * the other core is going to access memory soon after it takes
> +	 * the hwspinlock, and by then we want to be sure our memory operations
> +	 * were already observable.
> +	 */
> +	mb();
> +
> +	/* release the lock by writing 0 to it (NOTTAKEN) */
> +	writel(SPINLOCK_NOTTAKEN, hwlock->addr);
> +
> +	/* undo the spin_trylock_irqsave called in the locking function */
> +	spin_unlock_irqrestore(&hwlock->lock, *flags);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(omap_hwspin_unlock);

[...]

Kevin
