Maximum transaction size/splitting up large transactions?
Valerie Aurora
val at versity.com
Wed Apr 2 08:37:55 PDT 2025
On Fri, Mar 28, 2025 at 7:05 PM Zach Brown <zab at zabbo.net> wrote:
>
> On Thu, Mar 27, 2025 at 09:33:30PM +0100, Valerie Aurora wrote:
> > Hey ngneers,
> >
> > I've implemented file data reads/writes, and I have this comment
> > outside the main loop of per block reads/writes:
> >
> > /* XXX break up into smaller transactions */
> >
> > Is this something to leave for later, or is it worth discussing now?
> > For file data, my ideal would be some way to check how much space I
> > have left in the transaction before I add another block and its
> > potential additional indirect blocks.
>
> I imagine we should leave it for later, specifically for when the block
> cache is able to track and send writes as transactions. But it's always
> nice to discuss to frame the problem :).
>
> I think my initial bias is to try to not have code paths that are
> managing transaction sizes that way. Simply 'cause not doing it is less
> code (and control flows that need to be (not) tested).
>
> These client side transaction boundaries are just what are being
> atomically dirtied. They might well be being merged into larger
> transactions that will eventually be sent over the wire.
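The space check Valerie asks about above might look roughly like this sketch. To be clear, `struct txn`, `txn_space_left()`, and the worst-case indirect-block accounting are all illustrative assumptions for this discussion, not an existing ngnfs interface:

```c
#include <stdbool.h>
#include <stddef.h>

#define BLOCK_SIZE 4096u

/* Hypothetical transaction handle with a byte budget. */
struct txn {
	size_t capacity;	/* max bytes this transaction may dirty */
	size_t used;		/* bytes dirtied so far */
};

/* Hypothetical query: how much room is left in the transaction. */
static size_t txn_space_left(const struct txn *txn)
{
	return txn->capacity - txn->used;
}

/*
 * Before dirtying another data block, reserve room for it plus a
 * worst-case count of indirect blocks the write might also dirty.
 * Returns false when the caller should commit the current
 * transaction and start a new one before adding the block.
 */
static bool txn_reserve_block(struct txn *txn, unsigned int max_indirect)
{
	size_t need = (size_t)(1 + max_indirect) * BLOCK_SIZE;

	if (txn_space_left(txn) < need)
		return false;
	txn->used += need;
	return true;
}
```

The reserve-or-commit shape is what lets the per-block loop stay simple: it never has to undo a partially added block, it just stops early and commits.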

That makes sense. I think my question is more "what do I do when a
user requests a single gigantic IO?" Right now, if you ask ngnfs to
write 1GB at once, it will try to put it all in one transaction. I
assume the transaction system would like something a little bit
smaller to work with!

The easy answer is to truncate the length of all IO requests to
something "reasonable" for the transaction system and have the user do
the usual loop until all bytes are written. Or ngnfs can do this
internally; it's just another loop. Or, if it's being called from the
kernel, there are other limits that may split it up for us.

But what is reasonable? 5MB? I have no idea.
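The internal-loop option is just the usual short-write loop with a cap on each pass. In this sketch, `TXN_MAX_BYTES` and `txn_write()` are made-up placeholders for whatever limit and per-transaction write path the transaction system ends up providing:

```c
#include <stddef.h>

/* Hypothetical cap on file data per transaction; the real number
 * would come from the transaction system, not a constant. */
#define TXN_MAX_BYTES (1u << 20)	/* 1 MiB, purely illustrative */

static size_t txn_calls;		/* how many transactions we built */

/* Stand-in for writing one bounded chunk as a single transaction. */
static size_t txn_write(const char *buf, size_t len)
{
	(void)buf;		/* a real implementation would dirty blocks */
	txn_calls++;
	return len;		/* pretend the whole chunk was written */
}

/* Split an arbitrarily large user write into bounded transactions. */
static size_t write_in_chunks(const char *buf, size_t len)
{
	size_t done = 0;

	while (done < len) {
		size_t n = len - done;

		if (n > TXN_MAX_BYTES)
			n = TXN_MAX_BYTES;
		done += txn_write(buf + done, n);
	}
	return done;
}
```

So a 1GB write would become a series of capped transactions without the user seeing anything but a full-length return. The open question of what the cap should be is untouched by the loop's shape.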
Valerie