[Xen-devel] "tcp: refine TSO autosizing" causes performance regression on Xen
George Dunlap
george.dunlap at eu.citrix.com
Thu Apr 16 03:01:30 PDT 2015
On 04/16/2015 10:20 AM, Daniel Borkmann wrote:
> So mid term, it would be much more beneficial if you attempt to fix the
> underlying driver issues that actually cause high tx completion delays,
> instead of reintroducing bufferbloat. So that we all can move forward
> and not backwards in time.
Yes, I think we definitely see the need for this. I think we certainly
agree that bufferbloat needs to be reduced, and minimizing the data we
need "in the pipe" for full performance on xennet is an important part
of that.
It should be said, however, that any virtual device is always going to
have higher latency than a physical device. Hopefully we'll be able to
get the latency of xennet down to something that's more "reasonable",
but it may just not be possible. And in any case, if we're going to be
cranking down these limits to just barely within the tolerance of
physical NICs, virtual devices (either xennet or virtio_net) are never
going to be able to catch up. (Without cheating that is.)
> What Eric described to you was that you introduce a new netdev member
> like netdev->needs_bufferbloat, set that indication from driver site,
> and cache that in the socket that binds to it, so you can adjust the
> test in tcp_xmit_size_goal(). It should merely be seen as a hint/indication
> for such devices. Hmm?
He suggested that after he'd been prodded by 4 more e-mails in which two
of us guessed what he was trying to get at. That's what I was
complaining about.
Having a per-device "long transmit latency" hint sounds like a sensible
short-term solution to me.
-George