x86: fix warning on 32-bit non-PAE
author Jeremy Fitzhardinge <jeremy@goop.org>
Tue, 20 May 2008 07:26:18 +0000 (08:26 +0100)
committer Linus Torvalds <torvalds@linux-foundation.org>
Tue, 20 May 2008 14:51:20 +0000 (07:51 -0700)
Fix the warning:

include2/asm/pgtable.h: In function `pte_modify':
include2/asm/pgtable.h:290: warning: left shift count >= width of type

On 32-bit non-PAE the virtual and physical addresses are both 32 bits,
so the __PHYSICAL_MASK definition ends up evaluating 1<<32 on a 32-bit
type.  Do the shift as a 64-bit shift, then cast to the appropriate
size.  This is all resolved at compile time, and so has no effect on
generated code.
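
A minimal standalone illustration (not part of the patch), assuming the
32-bit non-PAE values of phys_addr_t and __PHYSICAL_MASK_SHIFT; it shows
why the old expression shifts a 32-bit type by its full width, and why a
64-bit shift followed by a cast still collapses to the intended all-ones
mask at compile time:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t phys_addr_t;       /* 32-bit non-PAE: physical addresses are 32 bits */
    #define __PHYSICAL_MASK_SHIFT  32

    /* Old form: ((phys_addr_t)1) << 32 shifts a 32-bit value by its full
     * width, which gcc reports as "left shift count >= width of type".
     *
     * New form: do the shift in 64 bits (1ULL << 32 == 0x100000000),
     * truncate to phys_addr_t (0), then subtract 1 to get 0xffffffff.
     */
    #define __PHYSICAL_MASK  ((phys_addr_t)(1ULL << __PHYSICAL_MASK_SHIFT) - 1)

    int main(void)
    {
            printf("__PHYSICAL_MASK = %#x\n", (unsigned int)__PHYSICAL_MASK); /* 0xffffffff */
            return 0;
    }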

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Tested-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
include/asm-x86/page.h

index 76b35e636d7d018f995f455de6922a6df2a294c9..223146da2faf909fa7a3103a160b1818019e3fb8 100644
@@ -29,7 +29,7 @@
 /* to align the pointer to the (next) page boundary */
 #define PAGE_ALIGN(addr)       (((addr)+PAGE_SIZE-1)&PAGE_MASK)
 
-#define __PHYSICAL_MASK                ((((phys_addr_t)1) << __PHYSICAL_MASK_SHIFT) - 1)
+#define __PHYSICAL_MASK                ((phys_addr_t)(1ULL << __PHYSICAL_MASK_SHIFT) - 1)
 #define __VIRTUAL_MASK         ((1UL << __VIRTUAL_MASK_SHIFT) - 1)
 
 #ifndef __ASSEMBLY__