[PATCH 3/3] arm: pm: add deep sleep support for LS1

Chenhui Zhao chenhui.zhao at freescale.com
Mon Sep 29 02:42:55 PDT 2014


On Sun, Sep 28, 2014 at 03:26:33PM +0100, Russell King - ARM Linux wrote:
> On Sun, Sep 28, 2014 at 07:06:40PM +0800, Chenhui Zhao wrote:
> > On Fri, Sep 26, 2014 at 01:14:27PM +0100, Russell King - ARM Linux wrote:
> > > On Fri, Sep 26, 2014 at 07:25:03PM +0800, Chenhui Zhao wrote:
> > > > +/*
> > > > + * r0: the physical entry address of SRAM code
> > > > + *
> > > > + */
> > > > +ENTRY(ls1_do_deepsleep)
> > > > +	mov	r13, r0
> > > > +
> > > > +	/* flush cache */
> > > > +	bl	v7_flush_dcache_all
> > > 
> > > Please explain the purpose of this call via a comment in the code.
> > > 
> > > The generic code will have saved the CPU state, and will have called
> > > flush_cache_louis() to flush the caches to the point of unification.
> > > 
> > > The only data which will have been loaded into the cache between that
> > > point is the stack for the return from __cpu_suspend_save, and
> > > speculative prefetches.
> > > 
> > > So, the only reason I can gather is that you need to flush data from
> > > lower levels of the cache below the point of unification.
> > > 
> > 
> > In the deep sleep process, the contents of all caches are lost, so I flush
> > all caches to prevent data loss.
> 
> You haven't answered my question.
> 

I want to flush both the L1 and L2 caches of the Cortex-A7 in LS1021.
However, flush_cache_louis() only flushes the L1 cache, so I call
v7_flush_dcache_all() to flush both the L1 and L2 caches.
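For the next version, I can put that explanation in a comment at the call
site, something like this (the wording reflects my understanding of the
LS1021 cache hierarchy, not text already in the patch):

	/*
	 * Deep sleep powers down the whole cache hierarchy.
	 * flush_cache_louis() in the generic suspend path only cleans
	 * to the point of unification (L1 on this SoC), so clean and
	 * invalidate all levels (L1 and L2) before power is lost.
	 */
	bl	v7_flush_dcache_all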

> > > > +
> > > > +	/* disable cache, C bit in SCTLR */
> > > > +	mrc	p15, 0, r0, c1, c0, 0
> > > > +	bic	r0, r0, #(1 << 2)
> > > > +	mcr	p15, 0, r0, c1, c0, 0
> > > > +	isb
> > > > +
> > > > +	/* disable coherency, SMP bit in ACTLR */
> > > > +	mrc	p15, 0, r0, c1, c0, 1
> > > > +	bic	r0, r0, #(1 << 6)
> > > > +	mcr	p15, 0, r0, c1, c0, 1
> > > > +	isb
> > > > +	dsb
> > > > +
> > > > +	/* disable MMU, M bit in SCTLR */
> > > > +	mrc	p15, 0, r0, c1, c0, 0
> > > > +	bic	r0, r0, #1
> > > > +	mcr	p15, 0, r0, c1, c0, 0
> > > > +	isb
> > > > +
> > > > +	/* jump to sram code using physical address */
> > > > +	bx	r13
> > > 
> > > This looks extremely fragile.  You are running in virtual space, and you
> > > turn the MMU off.  You are reliant on the MMU still being on for the
> > > following instructions to already be in the pipeline.  That's not a
> > > very good assumption to make (we've made it in the past and it breaks
> > > every so often when things change, eg when the code is no longer laid
> > > out how we expect.)
> > > 
> > > You need to disable the MMU safely, which means using the identity map
> > > page tables and executing code in the identity map region.
> > 
> > Yes, the code switches off the MMU and switches to the physical address
> > space. On LS1021, the DDR memory is located in the physical address space
> > starting at 0x80000000, and the kernel space also starts at 0x80000000
> > (CONFIG_PAGE_OFFSET = 0x80000000), so the virtual addresses of kernel
> > code are equal to their physical addresses.
> 
> You can't rely on that.
> 
> Sorry, NAK.

I'll try to use a safer method to implement it.
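For example (just a sketch of what I have in mind, not yet tested on
LS1021): place the routine in the identity-mapped text section and switch
to the identity map page tables before calling it, so that the
instructions which clear SCTLR.M execute at an address that is mapped 1:1:

	/* caller, before entering the low-level routine */
	#include <asm/idmap.h>

	setup_mm_for_reboot();	/* switch to idmap_pgd, flush TLB/BP */
	ls1_do_deepsleep(sram_code_phys);

and in the assembly file:

	.pushsection	.idmap.text, "ax"
	ENTRY(ls1_do_deepsleep)
		...
	ENDPROC(ls1_do_deepsleep)
	.popsection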

Chenhui


