From jan.adamczyk at ventrixor.pl Fri Apr 4 01:05:41 2025 From: jan.adamczyk at ventrixor.pl (Jan Adamczyk) Date: Fri, 4 Apr 2025 08:05:41 GMT Subject: Rekrutacja handlowca Message-ID: <20250404064500-0.1.6b.1c9ig.0.nfcamr34ew@ventrixor.pl> Hello, Do you currently have any needs around recruiting people for your sales department? We help recruit effectively for Sales Representative and Sales Manager positions, as well as managerial staff in sales. Could we talk? Best regards Jan Adamczyk From sam at gentoo.org Sat Apr 5 10:09:11 2025 From: sam at gentoo.org (Sam James) Date: Sat, 05 Apr 2025 18:09:11 +0100 Subject: [PATCH v2 1/1] mm: pgtable: fix pte_swp_exclusive In-Reply-To: <87cyfejafj.fsf@gentoo.org> References: <87cyfejafj.fsf@gentoo.org> Message-ID: <87v7rik020.fsf@gentoo.org> Sam James writes: > Lovely cleanup and a great suggestion from Al. > > Reviewed-by: Sam James > > I'd suggest adding a: > Suggested-by: Al Viro Al, were you planning on taking this through your tree? 
> > thanks, > sam From glaubitz at physik.fu-berlin.de Sat Apr 5 10:21:56 2025 From: glaubitz at physik.fu-berlin.de (John Paul Adrian Glaubitz) Date: Sat, 05 Apr 2025 19:21:56 +0200 Subject: [PATCH v2 1/1] mm: pgtable: fix pte_swp_exclusive In-Reply-To: <20250218175735.19882-2-linmag7@gmail.com> References: <20250218175735.19882-1-linmag7@gmail.com> <20250218175735.19882-2-linmag7@gmail.com> Message-ID: <4209b9816551367f8e5670cc5a08e139f0f2c215.camel@physik.fu-berlin.de> Hi Magnus, On Tue, 2025-02-18 at 18:55 +0100, Magnus Lindholm wrote: > Make pte_swp_exclusive return bool instead of int. This will better reflect > how pte_swp_exclusive is actually used in the code. This fixes swap/swapoff > problems on Alpha due pte_swp_exclusive not returning correct values when > _PAGE_SWP_EXCLUSIVE bit resides in upper 32-bits of PTE (like on alpha). Minor nitpick: "when _PAGE_SWP_EXCLUSIVE" => "when the _PAGE_SWP_EXCLUSIVE" > > Signed-off-by: Magnus Lindholm > --- > arch/alpha/include/asm/pgtable.h | 2 +- > arch/arc/include/asm/pgtable-bits-arcv2.h | 2 +- > arch/arm/include/asm/pgtable.h | 2 +- > arch/arm64/include/asm/pgtable.h | 2 +- > arch/csky/include/asm/pgtable.h | 2 +- > arch/hexagon/include/asm/pgtable.h | 2 +- > arch/loongarch/include/asm/pgtable.h | 2 +- > arch/m68k/include/asm/mcf_pgtable.h | 2 +- > arch/m68k/include/asm/motorola_pgtable.h | 2 +- > arch/m68k/include/asm/sun3_pgtable.h | 2 +- > arch/microblaze/include/asm/pgtable.h | 2 +- > arch/mips/include/asm/pgtable.h | 4 ++-- > arch/nios2/include/asm/pgtable.h | 2 +- > arch/openrisc/include/asm/pgtable.h | 2 +- > arch/parisc/include/asm/pgtable.h | 2 +- > arch/powerpc/include/asm/book3s/32/pgtable.h | 2 +- > arch/powerpc/include/asm/book3s/64/pgtable.h | 2 +- > arch/powerpc/include/asm/nohash/pgtable.h | 2 +- > arch/riscv/include/asm/pgtable.h | 2 +- > arch/s390/include/asm/pgtable.h | 2 +- > arch/sh/include/asm/pgtable_32.h | 2 +- > arch/sparc/include/asm/pgtable_32.h | 2 +- > 
arch/sparc/include/asm/pgtable_64.h | 2 +- > arch/um/include/asm/pgtable.h | 2 +- > arch/x86/include/asm/pgtable.h | 2 +- > arch/xtensa/include/asm/pgtable.h | 2 +- > 26 files changed, 27 insertions(+), 27 deletions(-) > > diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h > index 02e8817a8921..b0870de4b5b8 100644 > --- a/arch/alpha/include/asm/pgtable.h > +++ b/arch/alpha/include/asm/pgtable.h > @@ -334,7 +334,7 @@ extern inline pte_t mk_swap_pte(unsigned long type, unsigned long offset) > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h > index 8ebec1b21d24..3084c53f402d 100644 > --- a/arch/arc/include/asm/pgtable-bits-arcv2.h > +++ b/arch/arc/include/asm/pgtable-bits-arcv2.h > @@ -130,7 +130,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma, > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h > index be91e376df79..aa4f3f71789c 100644 > --- a/arch/arm/include/asm/pgtable.h > +++ b/arch/arm/include/asm/pgtable.h > @@ -303,7 +303,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > #define __swp_entry_to_pte(swp) __pte((swp).val) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_isset(pte, L_PTE_SWP_EXCLUSIVE); > } > diff --git 
a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h > index 0b2a2ad1b9e8..b48b70d8d12d 100644 > --- a/arch/arm64/include/asm/pgtable.h > +++ b/arch/arm64/include/asm/pgtable.h > @@ -496,7 +496,7 @@ static inline pte_t pte_swp_mkexclusive(pte_t pte) > return set_pte_bit(pte, __pgprot(PTE_SWP_EXCLUSIVE)); > } > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & PTE_SWP_EXCLUSIVE; > } > diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h > index a397e1718ab6..e68722eb33d9 100644 > --- a/arch/csky/include/asm/pgtable.h > +++ b/arch/csky/include/asm/pgtable.h > @@ -200,7 +200,7 @@ static inline pte_t pte_mkyoung(pte_t pte) > return pte; > } > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/hexagon/include/asm/pgtable.h b/arch/hexagon/include/asm/pgtable.h > index 8c5b7a1c3d90..fa007eb9aad3 100644 > --- a/arch/hexagon/include/asm/pgtable.h > +++ b/arch/hexagon/include/asm/pgtable.h > @@ -390,7 +390,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd) > (((type & 0x1f) << 1) | \ > ((offset & 0x3ffff8) << 10) | ((offset & 0x7) << 7)) }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h > index da346733a1da..bac946693d87 100644 > --- a/arch/loongarch/include/asm/pgtable.h > +++ b/arch/loongarch/include/asm/pgtable.h > @@ -302,7 +302,7 @@ static inline pte_t mk_swap_pte(unsigned long type, unsigned long offset) > #define __pmd_to_swp_entry(pmd) ((swp_entry_t) { pmd_val(pmd) }) > #define __swp_entry_to_pmd(x) ((pmd_t) { (x).val | _PAGE_HUGE }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool 
pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/m68k/include/asm/mcf_pgtable.h b/arch/m68k/include/asm/mcf_pgtable.h > index 48f87a8a8832..7e9748b29c44 100644 > --- a/arch/m68k/include/asm/mcf_pgtable.h > +++ b/arch/m68k/include/asm/mcf_pgtable.h > @@ -274,7 +274,7 @@ extern pgd_t kernel_pg_dir[PTRS_PER_PGD]; > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > #define __swp_entry_to_pte(x) (__pte((x).val)) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/m68k/include/asm/motorola_pgtable.h b/arch/m68k/include/asm/motorola_pgtable.h > index 9866c7acdabe..26da9b985c5f 100644 > --- a/arch/m68k/include/asm/motorola_pgtable.h > +++ b/arch/m68k/include/asm/motorola_pgtable.h > @@ -191,7 +191,7 @@ extern pgd_t kernel_pg_dir[128]; > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/m68k/include/asm/sun3_pgtable.h b/arch/m68k/include/asm/sun3_pgtable.h > index 30081aee8164..ac0793f57f31 100644 > --- a/arch/m68k/include/asm/sun3_pgtable.h > +++ b/arch/m68k/include/asm/sun3_pgtable.h > @@ -175,7 +175,7 @@ extern pgd_t kernel_pg_dir[PTRS_PER_PGD]; > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h > index e4ea2ec3642f..b281c2bbd6c0 100644 > --- a/arch/microblaze/include/asm/pgtable.h > +++ b/arch/microblaze/include/asm/pgtable.h 
> @@ -406,7 +406,7 @@ extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) >> 2 }) > #define __swp_entry_to_pte(x) ((pte_t) { (x).val << 2 }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h > index c29a551eb0ca..c19da4ab7552 100644 > --- a/arch/mips/include/asm/pgtable.h > +++ b/arch/mips/include/asm/pgtable.h > @@ -540,7 +540,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) > #endif > > #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32) > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte.pte_low & _PAGE_SWP_EXCLUSIVE; > } > @@ -557,7 +557,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) > return pte; > } > #else > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h > index eab87c6beacb..64ce06bae8ac 100644 > --- a/arch/nios2/include/asm/pgtable.h > +++ b/arch/nios2/include/asm/pgtable.h > @@ -265,7 +265,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd) > #define __swp_entry_to_pte(swp) ((pte_t) { (swp).val }) > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h > index 60c6ce7ff2dc..34cad9177a48 100644 > --- a/arch/openrisc/include/asm/pgtable.h > +++ b/arch/openrisc/include/asm/pgtable.h > @@ -413,7 +413,7 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf, > 
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h > index babf65751e81..dfeba45b6d6f 100644 > --- a/arch/parisc/include/asm/pgtable.h > +++ b/arch/parisc/include/asm/pgtable.h > @@ -431,7 +431,7 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr, > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h > index 42c3af90d1f0..92d21c6faf1e 100644 > --- a/arch/powerpc/include/asm/book3s/32/pgtable.h > +++ b/arch/powerpc/include/asm/book3s/32/pgtable.h > @@ -365,7 +365,7 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma, > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) >> 3 }) > #define __swp_entry_to_pte(x) ((pte_t) { (x).val << 3 }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h > index 6d98e6f08d4d..dbf772bef20d 100644 > --- a/arch/powerpc/include/asm/book3s/64/pgtable.h > +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h > @@ -693,7 +693,7 @@ static inline pte_t pte_swp_mkexclusive(pte_t pte) > return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_SWP_EXCLUSIVE)); > } > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > 
return !!(pte_raw(pte) & cpu_to_be64(_PAGE_SWP_EXCLUSIVE)); > } > diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h > index 8d1f0b7062eb..7d6b9e5b286e 100644 > --- a/arch/powerpc/include/asm/nohash/pgtable.h > +++ b/arch/powerpc/include/asm/nohash/pgtable.h > @@ -286,7 +286,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) > return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot)); > } > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h > index 050fdc49b5ad..433c78c44e02 100644 > --- a/arch/riscv/include/asm/pgtable.h > +++ b/arch/riscv/include/asm/pgtable.h > @@ -880,7 +880,7 @@ extern pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h > index 3ca5af4cfe43..cb86dbf7126a 100644 > --- a/arch/s390/include/asm/pgtable.h > +++ b/arch/s390/include/asm/pgtable.h > @@ -913,7 +913,7 @@ static inline int pmd_protnone(pmd_t pmd) > } > #endif > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/sh/include/asm/pgtable_32.h b/arch/sh/include/asm/pgtable_32.h > index f939f1215232..5f221f3269e3 100644 > --- a/arch/sh/include/asm/pgtable_32.h > +++ b/arch/sh/include/asm/pgtable_32.h > @@ -478,7 +478,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd) > /* In both cases, we borrow bit 6 to store the exclusive marker in swap PTEs. 
*/ > #define _PAGE_SWP_EXCLUSIVE _PAGE_USER > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte.pte_low & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h > index 62bcafe38b1f..0362f8357371 100644 > --- a/arch/sparc/include/asm/pgtable_32.h > +++ b/arch/sparc/include/asm/pgtable_32.h > @@ -353,7 +353,7 @@ static inline swp_entry_t __swp_entry(unsigned long type, unsigned long offset) > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & SRMMU_SWP_EXCLUSIVE; > } > diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h > index 2b7f358762c1..65e53491fe07 100644 > --- a/arch/sparc/include/asm/pgtable_64.h > +++ b/arch/sparc/include/asm/pgtable_64.h > @@ -1027,7 +1027,7 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h > index 5601ca98e8a6..c32309614a15 100644 > --- a/arch/um/include/asm/pgtable.h > +++ b/arch/um/include/asm/pgtable.h > @@ -316,7 +316,7 @@ extern pte_t *virt_to_pte(struct mm_struct *mm, unsigned long addr); > ((swp_entry_t) { pte_val(pte_mkuptodate(pte)) }) > #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_get_bits(pte, _PAGE_SWP_EXCLUSIVE); > } > diff --git a/arch/x86/include/asm/pgtable.h 
b/arch/x86/include/asm/pgtable.h > index 593f10aabd45..4c7ce40023d3 100644 > --- a/arch/x86/include/asm/pgtable.h > +++ b/arch/x86/include/asm/pgtable.h > @@ -1586,7 +1586,7 @@ static inline pte_t pte_swp_mkexclusive(pte_t pte) > return pte_set_flags(pte, _PAGE_SWP_EXCLUSIVE); > } > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_flags(pte) & _PAGE_SWP_EXCLUSIVE; > } > diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h > index 1647a7cc3fbf..6da0aa0604f1 100644 > --- a/arch/xtensa/include/asm/pgtable.h > +++ b/arch/xtensa/include/asm/pgtable.h > @@ -355,7 +355,7 @@ ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep) > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) > > -static inline int pte_swp_exclusive(pte_t pte) > +static inline bool pte_swp_exclusive(pte_t pte) > { > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > } I'm not so sure about this implicit cast from unsigned long to bool though. Is this verified to work correctly on all architectures? I wonder why this bug was not caught earlier on alpha on the other hand. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer `. 
`' Physicist `- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913 From geert at linux-m68k.org Mon Apr 7 01:06:03 2025 From: geert at linux-m68k.org (Geert Uytterhoeven) Date: Mon, 7 Apr 2025 10:06:03 +0200 Subject: [PATCH v2 1/1] mm: pgtable: fix pte_swp_exclusive In-Reply-To: <4209b9816551367f8e5670cc5a08e139f0f2c215.camel@physik.fu-berlin.de> References: <20250218175735.19882-1-linmag7@gmail.com> <20250218175735.19882-2-linmag7@gmail.com> <4209b9816551367f8e5670cc5a08e139f0f2c215.camel@physik.fu-berlin.de> Message-ID: Hi Adrian, On Sat, 5 Apr 2025 at 19:22, John Paul Adrian Glaubitz wrote: > On Tue, 2025-02-18 at 18:55 +0100, Magnus Lindholm wrote: > > Make pte_swp_exclusive return bool instead of int. This will better reflect > > how pte_swp_exclusive is actually used in the code. This fixes swap/swapoff > > problems on Alpha due pte_swp_exclusive not returning correct values when > > _PAGE_SWP_EXCLUSIVE bit resides in upper 32-bits of PTE (like on alpha). > > Minor nitpick: > > "when _PAGE_SWP_EXCLUSIVE" => "when the _PAGE_SWP_EXCLUSIVE" > > > > > Signed-off-by: Magnus Lindholm > > --- a/arch/alpha/include/asm/pgtable.h > > +++ b/arch/alpha/include/asm/pgtable.h > > @@ -334,7 +334,7 @@ extern inline pte_t mk_swap_pte(unsigned long type, unsigned long offset) > > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > > #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) > > > > -static inline int pte_swp_exclusive(pte_t pte) > > +static inline bool pte_swp_exclusive(pte_t pte) > > { > > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > > } > > --- a/arch/xtensa/include/asm/pgtable.h > > +++ b/arch/xtensa/include/asm/pgtable.h > > @@ -355,7 +355,7 @@ ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep) > > #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) > > #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) > > > > -static inline int pte_swp_exclusive(pte_t pte) > > +static inline bool pte_swp_exclusive(pte_t 
pte) > > { > > return pte_val(pte) & _PAGE_SWP_EXCLUSIVE; > > } > > I'm not so sure about this implicit cast from unsigned long to bool though. > > Is this verified to work correctly on all architectures? I wonder why this Should work fine: any non-zero value is mapped to one. > bug was not caught earlier on alpha on the other hand. On Alpha, "pte_val(pte) & _PAGE_SWP_EXCLUSIVE" is either _PAGE_SWP_EXCLUSIVE == 0x8000000000UL or zero. Due to the return type being int, the return value was truncated, and the function always returned zero. Gr{oetje,eeting}s, Geert -- Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert at linux-m68k.org In personal conversations with technical people, I call myself a hacker. But when I'm talking to journalists I just say "programmer" or something like that. -- Linus Torvalds From christophe.leroy at csgroup.eu Mon Apr 7 22:48:22 2025 From: christophe.leroy at csgroup.eu (Christophe Leroy) Date: Tue, 8 Apr 2025 07:48:22 +0200 Subject: [PATCH v2 09/13] arch, mm: set max_mapnr when allocating memory map for FLATMEM In-Reply-To: <4b9627f2-65ff-4baf-931f-4e23b5732e6b@csgroup.eu> References: <20250313135003.836600-1-rppt@kernel.org> <20250313135003.836600-10-rppt@kernel.org> <4b9627f2-65ff-4baf-931f-4e23b5732e6b@csgroup.eu> Message-ID: Hi Mike, On 14/03/2025 at 10:25, Christophe Leroy wrote: > > > On 13/03/2025 at 14:49, Mike Rapoport wrote: >> From: "Mike Rapoport (Microsoft)" >> >> max_mapnr is essentially the size of the memory map for systems that use >> FLATMEM. There is no reason to calculate it in each and every >> architecture >> when it's anyway calculated in alloc_node_mem_map(). >> >> Drop setting of max_mapnr from architecture code and set it once in >> alloc_node_mem_map(). > > As far as I can see alloc_node_mem_map() is called quite late. > > I fear that it will regress commit daa9ada2093e ("powerpc/mm: Fix boot > crash with FLATMEM") > > Can you check ? 
I see this patch is now merged into mainline (v6.15-rc1). Have you been able to check and/or analyse whether it regresses the fix in commit daa9ada2093e ("powerpc/mm: Fix boot crash with FLATMEM")? Thanks Christophe From jgg at nvidia.com Tue Apr 8 10:22:56 2025 From: jgg at nvidia.com (Jason Gunthorpe) Date: Tue, 8 Apr 2025 14:22:56 -0300 Subject: [PATCH] ARC: atomics: Implement arch_atomic64_cmpxchg using _relaxed Message-ID: <0-v1-2a485c0aa33a+505-arc_atomic_jgg@nvidia.com> The core atomic code has a number of macros where it elaborates architecture primitives into more functions. ARC uses arch_atomic64_cmpxchg() as its architecture primitive, which disables a lot of the additional functions. Instead provide arch_cmpxchg64_relaxed() as the primitive and rely on the core macros to create arch_cmpxchg64(). The macros will also provide other functions, for instance, try_cmpxchg64_release(), giving a more complete implementation. Suggested-by: Mark Rutland Link: https://lore.kernel.org/r/Z0747n5bSep4_1VX at J2N7QTR9R3 Signed-off-by: Jason Gunthorpe --- arch/arc/include/asm/atomic64-arcv2.h | 15 +++++---------- 1 file changed, 5 insertions(+), 10 deletions(-) diff --git a/arch/arc/include/asm/atomic64-arcv2.h b/arch/arc/include/asm/atomic64-arcv2.h index 9b5791b8547133..73080a664369b4 100644 --- a/arch/arc/include/asm/atomic64-arcv2.h +++ b/arch/arc/include/asm/atomic64-arcv2.h @@ -137,12 +137,9 @@ ATOMIC64_OPS(xor, xor, xor) #undef ATOMIC64_OP_RETURN #undef ATOMIC64_OP -static inline s64 -arch_atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new) +static inline u64 __arch_cmpxchg64_relaxed(volatile void *ptr, u64 old, u64 new) { - s64 prev; - - smp_mb(); + u64 prev; __asm__ __volatile__( "1: llockd %0, [%1] \n" @@ -152,14 +149,12 @@ arch_atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new) " bnz 1b \n" "2: \n" : "=&r"(prev) - : "r"(ptr), "ir"(expected), "r"(new) - : "cc"); /* memory clobber comes from smp_mb() */ - - smp_mb(); + : "r"(ptr), "ir"(old), 
"r"(new) + : "memory", "cc"); return prev; } -#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg +#define arch_cmpxchg64_relaxed __arch_cmpxchg64_relaxed static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new) { base-commit: ea8f6ee2111cd78b32d0363ea630ba9b08ada22d -- 2.43.0 From eleanor15x at gmail.com Wed Apr 9 10:11:16 2025 From: eleanor15x at gmail.com (Yu-Chun Lin) Date: Thu, 10 Apr 2025 01:11:16 +0800 Subject: [PATCH] ARC: unwind: Use built-in sort swap to reduce code size and improve performance Message-ID: <20250409171116.550665-1-eleanor15x@gmail.com> The custom swap function used in sort() was identical to the default built-in sort swap. Remove the custom swap function and pass NULL to sort(), allowing it to use the default swap function. This change reduces code size and improves performance, particularly when CONFIG_MITIGATION_RETPOLINE is enabled. With RETPOLINE mitigation, indirect function calls incur significant overhead, and using the default swap function avoids this cost. 
$ ./scripts/bloat-o-meter ./unwind.o.old ./unwind.o.new add/remove: 0/1 grow/shrink: 0/1 up/down: 0/-22 (-22) Function old new delta init_unwind_hdr.constprop 544 540 -4 swap_eh_frame_hdr_table_entries 18 - -18 Total: Before=4410, After=4388, chg -0.50% Signed-off-by: Yu-Chun Lin --- Compile test only arch/arc/kernel/unwind.c | 11 +---------- 1 file changed, 1 insertion(+), 10 deletions(-) diff --git a/arch/arc/kernel/unwind.c b/arch/arc/kernel/unwind.c index d8969dab12d4..789cfb9ea14e 100644 --- a/arch/arc/kernel/unwind.c +++ b/arch/arc/kernel/unwind.c @@ -241,15 +241,6 @@ static int cmp_eh_frame_hdr_table_entries(const void *p1, const void *p2) return (e1->start > e2->start) - (e1->start < e2->start); } -static void swap_eh_frame_hdr_table_entries(void *p1, void *p2, int size) -{ - struct eh_frame_hdr_table_entry *e1 = p1; - struct eh_frame_hdr_table_entry *e2 = p2; - - swap(e1->start, e2->start); - swap(e1->fde, e2->fde); -} - static void init_unwind_hdr(struct unwind_table *table, void *(*alloc) (unsigned long)) { @@ -345,7 +336,7 @@ static void init_unwind_hdr(struct unwind_table *table, sort(header->table, n, sizeof(*header->table), - cmp_eh_frame_hdr_table_entries, swap_eh_frame_hdr_table_entries); + cmp_eh_frame_hdr_table_entries, NULL); table->hdrsz = hdrSize; smp_wmb(); -- 2.43.0