[PATCH 00/10] mm: Linux VM Infrastructure to support Memory Power Management
Paul E. McKenney
paulmck at linux.vnet.ibm.com
Fri Jun 10 11:11:21 EDT 2011
On Fri, Jun 10, 2011 at 09:51:53AM +0900, Kyungmin Park wrote:
> On Fri, Jun 10, 2011 at 3:52 AM, Paul E. McKenney
> <paulmck at linux.vnet.ibm.com> wrote:
> > On Sat, May 28, 2011 at 12:56:40AM -0700, Andrew Morton wrote:
> >> On Fri, 27 May 2011 18:01:28 +0530 Ankita Garg <ankita at in.ibm.com> wrote:
> >>
> >> > This patchset proposes a generic memory regions infrastructure that can be
> >> > used to tag boundaries of memory blocks which belong to a specific memory
> >> > power management domain and further enable exploitation of platform memory
> >> > power management capabilities.
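(To make the idea concrete: memory on each node would be carved into regions
that line up with whatever units the platform can power-manage, so the VM can
try to keep whole regions free or idle. A purely illustrative sketch follows;
the structure and field names are invented for this note and are not the ones
in Ankita's patches.)

struct mem_region {
	unsigned long	start_pfn;	/* first page frame in the region */
	unsigned long	spanned_pages;	/* region size, in pages */
	int		node;		/* NUMA node the region lives on */
	/* one region per power-manageable unit, e.g. an LPDDR2 bank
	 * or a DIMM rank, so the allocator can vacate it as a whole */
};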
> >>
> >> A couple of quick thoughts...
> >>
> >> I'm seeing no estimate of how much energy we might save when this work
> >> is completed. But saving energy is the entire point of the entire
> >> patchset! So please spend some time thinking about that and update and
> >> maintain the [patch 0/n] description so others can get some idea of the
> >> benefit we might get from all of this. That estimate should include an
> >> estimate of what proportion of machines are likely to have hardware
> >> which can use this feature and in what timeframe.
> >>
> >> IOW, if it saves one microwatt on 0.001% of machines, not interested ;)
> >
> > FWIW, I have seen estimates on the order of a 5% reduction in power
> > consumption for some common types of embedded devices.
>
> Wow, interesting. I didn't expect it could achieve a 5% power reduction.
> Take a device with 1 GiB of LPDDR2 memory: each of the two memory ports
> is 4 Gib (512 MiB), so one bank is 64 MiB (512 MiB / 8).
> So I don't expect it to be difficult to keep more than 64 MiB of memory
> free or inactive at runtime.
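(For anyone double-checking the unit conversion, here is a throwaway
user-space sketch of that arithmetic; the 2 x 4 Gib ports and 8 banks per
die are simply the figures from the paragraph above, not anything taken
from the patchset.)

#include <stdio.h>

int main(void)
{
	unsigned long long port_bits  = 4ULL << 30;	/* 4 Gib per LPDDR2 port */
	unsigned long long port_bytes = port_bits / 8;	/* 4 Gib = 512 MiB */
	unsigned long long banks      = 8;		/* banks per die (assumed) */
	unsigned long long bank_bytes = port_bytes / banks;

	/* Prints "bank size: 64 MiB"; keeping at least that much memory
	 * free or inactive leaves one whole bank eligible for power-down. */
	printf("bank size: %llu MiB\n", bank_bytes >> 20);
	return 0;
}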
>
> Anyway, can you describe the exact test environment, especially the
> memory type? As you know, there are a great many embedded devices, and
> they use a wide variety of configurations.
Indeed, your mileage may vary. It involved a very low-power CPU,
and the change enabled not just powering off memory, but reducing
the amount of physical memory provided.
Of course, on a server, you could get similar results by having a very
large amount of memory (say 256GB) and a workload that needed all the
memory only occasionally for short periods, but could get by with much
less (say 8GB) the rest of the time. I have no idea whether or not
anyone actually has such a system.
Thanx, Paul
> Thank you,
> Kyungmin Park
> >
> > Thanx, Paul
> >
> >> Also, all this code appears to be enabled on all machines? So machines
> >> which don't have the requisite hardware still carry any additional
> >> overhead which is added here. I can see that ifdeffing a feature like
> >> this would be ghastly but please also have a think about the
> >> implications of this and add that discussion also.
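(For what it is worth, the usual way to keep the cost off machines without
the hardware is a config option plus static-inline stubs, so the hooks
compile away entirely. A minimal sketch, with CONFIG_MEM_REGIONS and the
hook name made up for illustration rather than taken from the patches:)

/* Hypothetical header fragment -- names are illustrative only. */
struct zone;
struct page;

#ifdef CONFIG_MEM_REGIONS
void mem_region_note_alloc(struct zone *zone, struct page *page);
#else
static inline void mem_region_note_alloc(struct zone *zone,
					 struct page *page)
{
	/* feature not configured: this compiles to nothing at the call site */
}
#endif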
> >>
> >> If possible, it would be good to think up some microbenchmarks which
> >> probe the worst-case performance impact and describe those and present
> >> the results. So others can gain an understanding of the runtime costs.