[PATCH net-next 4/9] rxrpc: Randomise epoch and starting client conn ID values

Jeffrey Altman jaltman at auristor.com
Mon Sep 5 19:12:49 PDT 2016


Reply inline ....

On 9/5/2016 12:24 PM, David Howells wrote:
> [cc'ing Jeff Altman for comment]
> 
> David Laight <David.Laight at ACULAB.COM> wrote:
> 
>>> Create a random epoch value rather than a time-based one on startup and set
>>> the top bit to indicate that this is the case.
>>
>> Why set the top bit?
>> There is nothing to stop the time (in seconds) from having the top bit set.
>> Nothing else can care - otherwise this wouldn't work.
> 
> This is what I'm told I should do by purveyors of other RxRPC solutions.

The protocol specification requires that the top bit be 1 for a random
epoch and 0 for a time-derived epoch.
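
For illustration only, that convention amounts to something like the
sketch below; the helper name is made up and this is not the actual
patch code:

#include <linux/random.h>
#include <linux/types.h>

/* Hypothetical helper: pick a random epoch and mark it as randomly
 * generated (rather than time-derived) by forcing the top bit on.
 */
static u32 make_random_epoch(void)
{
	u32 epoch;

	get_random_bytes(&epoch, sizeof(epoch));
	epoch |= 0x80000000U;	/* top bit 1 => random epoch */
	return epoch;
}

A time-derived epoch keeps the top bit clear, which seconds-since-1970
values do naturally until 2038.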
> 
>>> Also create a random starting client connection ID value.  This will be
>>> incremented from here as new client connections are created.
>>
>> I'm guessing this is to make duplicates less likely after a restart?

It's to reduce the possibility of duplicates on multiple machines that
might at some point exchange an endpoint address, either due to mobility
or NAT/PAT.
> 
> Again, it's been suggested that I do this, but I would guess so.
> 
>> You may want to worry about duplicate allocations (after 2^32 connects).
> 
> It's actually a quarter of that, but connection != call, so a connection may
> be used for up to ~16 billion RPC operations before it *has* to be flushed.
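
(The quarter presumably comes from the low two bits of an Rx connection
ID selecting one of the connection's four call channels, leaving about
2^30 distinct connection IDs; ~16 billion is then 4 channels times the
32-bit per-channel call number space.)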
> 
>> There are id allocation algorithms that guarantee not to generate duplicates
>> and not to reuse values quickly while still being fixed cost.
>> Look at the code NetBSD uses to allocate process ids for an example.
> 
> I'm using idr_alloc_cyclic()[*] with a fixed size "window" on the active conn
> ID values.  Client connections with IDs outside of that window are discarded
> as soon as possible to keep the memory consumption of the tree down (and to
> force security renegotiation occasionally).  However, given that there are a
> billion IDs to cycle through, it will take quite a while for reuse to become
> an issue.
> 
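
To make the window scheme described above concrete, here is a rough
sketch of the idr_alloc_cyclic() usage pattern; the struct and function
names and the locking are made up for illustration and this is not the
actual patch:

#include <linux/gfp.h>
#include <linux/idr.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Illustrative only: a cyclically-allocated client connection ID space
 * of about a billion IDs (2^30).
 */
struct my_conn {
	u32 cid;
};

static DEFINE_IDR(my_conn_ids);
static DEFINE_SPINLOCK(my_conn_id_lock);

static int my_conn_get_id(struct my_conn *conn, gfp_t gfp)
{
	int id;

	/* Preload so the allocation under the spinlock can use GFP_NOWAIT. */
	idr_preload(gfp);
	spin_lock(&my_conn_id_lock);

	/* Hand out IDs cyclically in [1, 2^30); the cursor advances on
	 * each allocation, so an ID is not handed out again until the
	 * whole space has been cycled through.
	 */
	id = idr_alloc_cyclic(&my_conn_ids, conn, 1, 0x40000000, GFP_NOWAIT);

	spin_unlock(&my_conn_id_lock);
	idr_preload_end();

	if (id < 0)
		return id;
	conn->cid = id;
	return 0;
}

static void my_conn_put_id(struct my_conn *conn)
{
	spin_lock(&my_conn_id_lock);
	idr_remove(&my_conn_ids, conn->cid);
	spin_unlock(&my_conn_id_lock);
}

Discarding connections whose IDs fall outside the active window then
just means removing them from the tree, which is what bounds the tree's
memory use.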
> I like the idea of incrementing the epoch every time we cycle through the ID
> space, but I'm told that a change in the epoch value is an indication that the
> client rebooted - with what consequences I cannot say.

State information might be recorded about an Rx peer on the assumption
that the state will be reset when the epoch changes.  The most frequent
use of this technique is Rx RPC statistics monitoring.


> 
> [*] which is what Linux uses to allocate process IDs.
> 
> David
> 
