[PATCH] psi: reduce min window size to 50ms

Suren Baghdasaryan surenb at google.com
Tue Feb 14 11:34:30 PST 2023


On Mon, Feb 13, 2023 at 6:12 PM Sudarshan Rajagopalan
<quic_sudaraja at quicinc.com> wrote:
>
>
> On 2/10/2023 6:13 PM, Suren Baghdasaryan wrote:
> > On Fri, Feb 10, 2023 at 5:46 PM Sudarshan Rajagopalan
> > <quic_sudaraja at quicinc.com> wrote:
> >>
> >> On 2/10/2023 5:09 PM, Suren Baghdasaryan wrote:
> >>> On Fri, Feb 10, 2023 at 4:45 PM Sudarshan Rajagopalan
> >>> <quic_sudaraja at quicinc.com> wrote:
> >>>> On 2/10/2023 3:03 PM, Suren Baghdasaryan wrote:
> >>>>> On Fri, Feb 10, 2023 at 2:31 PM Sudarshan Rajagopalan
> >>>>> <quic_sudaraja at quicinc.com> wrote:
> >>>>>> The PSI mechanism is a useful tool to monitor pressure stall
> >>>>>> information in the system. Currently, the minimum window size
> >>>>>> is set to 500ms. May we know the rationale for this?
> >>>>> The limit was set to avoid regressions in performance and power
> >>>>> consumption if the window is set too small and the system ends up
> >>>>> polling too frequently. That said, the limit was chosen based on
> >>>>> results of specific experiments which might not represent all
> >>>> As you rightly said, the effect on power and performance depends on
> >>>> the type of system - embedded systems, Android mobile, commercial VMs,
> >>>> or servers. With a higher PSI sampling rate, there may not be much of
> >>>> a power impact on embedded systems with low-tier chipsets, or a
> >>>> performance impact on powerful servers.
> >>>>
> >>>>> usecases. If you want to change this limit, you would need to describe
> >>>>> why the new limit is inherently better than the current one (why not
> >>>>> higher, why not lower).
> >>>> This is in regard to the userspace daemon [1] that we are working on,
> >>>> which dynamically resizes VM memory based on PSI memory pressure
> >>>> events. With the current min window size of 500ms, the PSI monitor
> >>>> sampling period is 50ms. So to detect an increase in memory demand and
> >>>> plug memory into the VM when pressure goes up, the minimum time the
> >>>> process needs to stall is 50ms before an event can be generated and
> >>>> sent to userspace for the daemon to act on.
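> >>>>
> >>>> For reference, a minimal sketch of how such a daemon registers a
> >>>> trigger, adapted from the example in Documentation/accounting/psi.rst
> >>>> (the 50ms-in-500ms numbers are the tightest setting the current
> >>>> limit allows):
> >>>>
> >>>> #include <fcntl.h>
> >>>> #include <poll.h>
> >>>> #include <stdio.h>
> >>>> #include <string.h>
> >>>> #include <unistd.h>
> >>>>
> >>>> int main(void)
> >>>> {
> >>>> 	/* 50000us of stall within a 500000us (500ms) window */
> >>>> 	const char trig[] = "some 50000 500000";
> >>>> 	struct pollfd fds;
> >>>>
> >>>> 	fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
> >>>> 	if (fds.fd < 0)
> >>>> 		return 1;
> >>>> 	fds.events = POLLPRI;
> >>>> 	if (write(fds.fd, trig, strlen(trig) + 1) < 0)
> >>>> 		return 1;
> >>>>
> >>>> 	while (poll(&fds, 1, -1) > 0) {
> >>>> 		if (fds.revents & POLLERR)
> >>>> 			break;		/* monitor was torn down */
> >>>> 		if (fds.revents & POLLPRI)
> >>>> 			printf("memory pressure event\n");
> >>>> 	}
> >>>> 	return 0;
> >>>> }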
> >>>>
> >>>> Again, I'm talking w.r.t. lightweight embedded systems, where even
> >>>> background kswapd/kcompd (which I'm calling natural memory
> >>>> pressure) in the system would cause less than a 5-10ms stall. So any
> >>>> stall of more than 5-10ms would hint to us that a memory-consuming
> >>>> usecase has run and memory may need to be plugged in.
> >>>>
> >>>> So in these cases, having a psimon sampling period as low as 5ms would
> >>>> give us a faster reaction time and the daemon can respond more quickly.
> >>>> In general, this will reduce malloc latencies significantly.
> >>>>
> >>>> Pasting here the same excerpt I mentioned in [1].
> >>> My question is: why do you think 5ms is the optimal limit here? I want
> >>> to avoid a race to the bottom where next time someone can argue that
> >>> they would like to detect a stall within a lower period than 5ms.
> >>> Technically the limit can be as small as one wants but at some point I
> >>> think we should consider the possibility of this being used for a DoS
> >>> attack.
> >> Well, the optimal limit should be whatever is least destructive? I
> >> do understand the possibility of DoS attacks, but wouldn't that still
> >> be possible with the 500ms window today? That would just be at least
> >> ten times less severe than with a 50ms window. The way I see it, the
> >> min pressure sampling period should be such that even the smallest
> >> pressure stall we consider significant can be captured (this could be
> >> 5ms or 50ms at present), while balancing the power and performance
> >> impact across all usecases.
> >>
> >> At present, Android's LMKD sets a 1000ms window, for which it considers
> >> the 100ms sampling period to be significant. And here, with the
> >> psi_daemon usecase, we are saying a 5ms sampling period would be
> >> significant. So there's no actual optimal limit, but we should lower it
> >> as much as possible without affecting power or performance as a whole.
> >> Also, this is just the "minimum allowable" window, and system admins
> >> can configure it as per the system type/requirement.
> > Ok, let me ask you another way which might be more productive. What
> > caused you to choose 5ms as the time you care to react to a stall
> > buildup?
>
> We basically want to capture any stalls caused by direct reclaim, and
> ignore any stalls caused by indirect reclaim and alloc retries. Stalls
> due to direct reclaim are what indicate that memory pressure is building
> up in the system and memory needs to be freed (by the oom-killer or LMKD
> killing apps) or made available (by plugging in any available memory or
> requesting memory from the Primary host). We see that any stall above
> 5ms is significant enough that the alloc request would have invoked
> direct reclaim, hinting that memory pressure is starting to build up.
>
> Keeping the 5ms and other numbers aside, let's consider what the
> smallest pressure stall significant enough to capture would be.
>
> A PSI memory stall is wholly comprised of: compaction (kcompactd),
> thrashing, kswapd, direct compaction and direct reclaim. Of these,
> compaction, thrashing and kswapd stalls do not necessarily signal that
> memory demand is building up (i.e. that the system is in need of more
> memory). A direct compaction stall would indicate memory is fragmented,
> but a significant direct reclaim stall would indicate that the system is
> under memory pressure. Usually, direct compaction and direct reclaim are
> the smallest contributors to an aggregated PSI memory stall.
>
> So now the question - what is the smallest direct reclaim stall that we
> should capture, one that would be significant to us? This depends on the
> system type and configuration and the nature of the workloads. For
> Android mobile maybe 100ms (lmkd), and for servers maybe 1s (because the
> max window is 10s?). For Linux Embedded Systems, this would be even
> smaller. From our experiments, we observed that a 5ms stall is
> significant enough to capture the direct reclaim stalls that indicate
> pressure building up.
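>
> As an illustration of what capturing such a stall means, this is roughly
> what psimon does on our behalf: track how much the total= stall counter
> (microseconds, from /proc/pressure/memory) grows per sampling period and
> compare that growth against the threshold. A userspace sketch, with our
> illustrative 5ms-in-50ms numbers:
>
> #include <stdio.h>
> #include <unistd.h>
>
> static unsigned long long some_total(void)
> {
> 	unsigned long long total = 0;
> 	FILE *f = fopen("/proc/pressure/memory", "r");
>
> 	if (f) {
> 		/* line 1: some avg10=... avg60=... avg300=... total=... */
> 		fscanf(f, "some avg10=%*f avg60=%*f avg300=%*f total=%llu",
> 		       &total);
> 		fclose(f);
> 	}
> 	return total;
> }
>
> int main(void)
> {
> 	unsigned long long prev = some_total(), cur;
>
> 	for (;;) {
> 		usleep(50 * 1000);	/* 50ms sampling period */
> 		cur = some_total();
> 		if (cur - prev >= 5000)	/* >= 5ms stall per period */
> 			printf("significant stall: %llu us\n", cur - prev);
> 		prev = cur;
> 	}
> }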
>
> I think the min window size should be set such that even the smallest
> pressure stall that we consider significant can be captured. Rather than
> hard-coding the min window to 500ms, let the system admin choose what's
> best? We anyway have a bigger cap set for the max window of 10s (though
> I highly doubt anyone would think 1s is the smallest pressure they care
> about for cpu/io/memory). Also, these window thresholds have never
> changed since the psi monitor was introduced in kernel.org, and are
> based on previous experiments which may not have represented all
> workloads.
>
> Finding the true bottom of the well would be hard. But to keep things in
> the ms range, we can define a 1ms-500ms range in Kconfig:
>
> --- a/init/Kconfig
> +++ b/init/Kconfig
>
> +config PSI_MIN_WINDOW_MS
> +	int "Minimum PSI window (ms)"
> +	range 1 500
> +	default 500
> +
> +
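>
> The hard-coded floor in kernel/sched/psi.c could then be derived from
> the config symbol, something like:
>
> --- a/kernel/sched/psi.c
> +++ b/kernel/sched/psi.c
>
> -#define WINDOW_MIN_US 500000	/* Min window size is 500ms */
> +#define WINDOW_MIN_US (CONFIG_PSI_MIN_WINDOW_MS * USEC_PER_MSEC)
>
> so that the window_us < WINDOW_MIN_US check in psi_trigger_create()
> picks up whatever the system was configured with.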
>
> With the PSI mechanism finding more uses, the same requirement might
> apply to io and cpu as well. Giving more flexibility in setting the
> window size and sampling period would be beneficial.

Hmm. I understand the need, but I still don't see a definite answer as
to why a 5ms minimum is optimal. The above description argues that 5ms
is indicative of direct reclaim, but if some kernel internals change and
the system stalls less during direct reclaim (for example, experimenting
with MGLRU we see fewer stalls), 5ms might end up being too high. You
would need to adjust the limit again.
Your suggestion to make this limit configurable sounds like an obvious
solution. I would like to get some opinions from other maintainers.
Johannes, WDYT? CC'ing Michal to chime in as well since this is mostly
related to memory stalls.


>
> >> Also, about possible DoS attacks - file permissions for
> >> /proc/pressure/... can be set such that not just any random user can
> >> register for psi events, right?
> > True. We have a CAP_SYS_RESOURCE check for the writers of these files.
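> >
> > For reference, the check follows the standard capability pattern in
> > the write handler; roughly (sketch, handler signature simplified):
> >
> > static ssize_t psi_write(struct file *file, const char __user *buf,
> > 			 size_t nbytes, enum psi_res res)
> > {
> > 	if (!capable(CAP_SYS_RESOURCE))
> > 		return -EPERM;
> > 	/* ... parse "<some|full> <threshold_us> <window_us>",
> > 	 * then create the trigger ... */
> > 	return nbytes;
> > }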
> >
> >>>> "
> >>>>
> >>>> 4. Detecting increase in memory demand - when a certain usecase that
> >>>> does memory allocations starts in the VM, it will stall, causing the
> >>>> PSI mechanism to generate a memory pressure event to userspace. To
> >>>> put it simply, when pressure exceeds a certain set threshold, one can
> >>>> make an educated guess that a memory-demanding usecase has run and
> >>>> the VM system needs memory to be added.
> >>>>
> >>>> "
> >>>>
> >>>> [1]
> >>>> https://lore.kernel.org/linux-arm-kernel/1bf30145-22a5-cc46-e583-25053460b105@redhat.com/T/#m95ccf038c568271e759a277a08b8e44e51e8f90b
> >>>>
> >>>>> Thanks,
> >>>>> Suren.
> >>>>>
> >>>>>> For lightweight systems such as Linux Embedded Systems, PSI
> >>>>>> can be used to monitor and track memory pressure building up
> >>>>>> in the system and respond quickly to such memory demands. For
> >>>>>> example, the Linux Embedded System could be a secondary VM
> >>>>>> that requests memory from the Primary host. With a 500ms
> >>>>>> window size, the sampling period is 50ms (one-tenth of the
> >>>>>> window size). So the minimum amount of time a process needs
> >>>>>> to stall before a PSI event can be generated and actions can
> >>>>>> be taken is 50ms. This reaction time can be much reduced by
> >>>>>> shrinking the sampling period (by shrinking the window size),
> >>>>>> so that such memory pressure in the system can be serviced
> >>>>>> much quicker.
> >>>>>>
> >>>>>> Please let us know your thoughts on reducing the window size to 50ms.
> >>>>>>
> >>>>>> Sudarshan Rajagopalan (1):
> >>>>>>      psi: reduce min window size to 50ms
> >>>>>>
> >>>>>>     kernel/sched/psi.c | 2 +-
> >>>>>>     1 file changed, 1 insertion(+), 1 deletion(-)
> >>>>>>
> >>>>>> --
> >>>>>> 2.7.4
> >>>>>>


