[RFC, PATCH, RESEND] fs: push rcu_barrier() from deactivate_locked_super() to filesystems
Kirill A. Shutemov
kirill.shutemov at linux.intel.com
Fri Jun 8 18:14:46 EDT 2012
On Fri, Jun 08, 2012 at 03:02:53PM -0700, Andrew Morton wrote:
> On Sat, 9 Jun 2012 00:41:03 +0300
> "Kirill A. Shutemov" <kirill.shutemov at linux.intel.com> wrote:
>
> > There's no reason to call rcu_barrier() on every deactivate_locked_super().
> > We only need to make sure that all delayed rcu free inodes are flushed
> > before we destroy related cache.
> >
> > Removing rcu_barrier() from deactivate_locked_super() affects some
> > fast paths. E.g. on my machine exit_group() of the last process in an IPC
> > namespace takes 0.07538s. rcu_barrier() takes 0.05188s of that time.
>
> What an unpleasant patch. Is final-process-exiting-ipc-namespace a
> sufficiently high-frequency operation to justify the change?
> I don't really understand what's going on here. Are you saying that
> there is some filesystem against which we run deactivate_locked_super()
> during exit_group(), and that this filesystem doesn't use rcu-freeing
> of inodes? The description needs this level of detail, please.
I think the rcu_barrier() is in the wrong place. We need it to safely destroy
the inode cache. deactivate_locked_super() is part of the umount() path, but
all filesystems I've checked have an inode cache for the whole filesystem,
not per-mount.
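To illustrate (an editorial sketch, not part of the original mail): pushing
the barrier into the filesystem means each filesystem runs rcu_barrier()
once, in its cache-destruction path, right before kmem_cache_destroy().
Using an ext2-style destroy_inodecache() as a representative example (the
cache variable name is illustrative):

```c
/*
 * Sketch: instead of an rcu_barrier() on every deactivate_locked_super(),
 * do it once when the filesystem tears down its inode cache, typically
 * from the module exit path.
 */
static void destroy_inodecache(void)
{
	/*
	 * Make sure all delayed rcu free inodes are flushed before we
	 * destroy the cache.
	 */
	rcu_barrier();
	kmem_cache_destroy(ext2_inode_cachep);
}
```

Since the cache is per filesystem type rather than per mount, the cost is
paid once at module unload instead of on every umount.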
> The implementation would be less unpleasant if we could do the
> rcu_barrier() in kmem_cache_destroy(). I can't see a way of doing that
> without adding a dedicated slab flag, which would require editing all
> the filesystems anyway.
I think an rcu_barrier() for every kmem_cache_destroy() would be too expensive.
> (kmem_cache_destroy() already has an rcu_barrier(). Can we do away
> with the private rcu games in the vfs and switch to
> SLAB_DESTROY_BY_RCU?)
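[Editorial note, not part of the original mail: SLAB_DESTROY_BY_RCU has
weaker semantics than per-object call_rcu() freeing. It only defers freeing
of the underlying slab *page*; an individual object may be freed and
immediately reused for a new object of the same type. Lock-free readers must
therefore revalidate the object after looking it up. A sketch, where
struct foo, foo_lookup() and the key field are hypothetical:]

```c
/*
 * With SLAB_DESTROY_BY_RCU the memory stays valid for the RCU read-side
 * critical section, but the object it holds may have been recycled.
 * Readers must recheck identity after the lookup.
 */
rcu_read_lock();
obj = foo_lookup(key);
if (obj && obj->key != key)
	obj = NULL;	/* object was freed and reused under us */
rcu_read_unlock();
```

[This revalidation requirement is why switching the VFS inode lifetime rules
to SLAB_DESTROY_BY_RCU is not a drop-in replacement for the current
per-inode RCU freeing.]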
--
Kirill A. Shutemov