ARMv8 Trusted Firmware release v0.2
author	Achin Gupta <achin.gupta@arm.com>
Fri, 25 Oct 2013 08:08:21 +0000 (09:08 +0100)
committer	James Morrissey <james.morrissey@arm.com>
Fri, 25 Oct 2013 08:37:16 +0000 (09:37 +0100)
98 files changed:
Makefile [new file with mode: 0644]
arch/aarch64/cpu/cpu_helpers.S [new file with mode: 0644]
arch/system/gic/aarch64/gic_v3_sysregs.S [new file with mode: 0644]
arch/system/gic/gic.h [new file with mode: 0644]
arch/system/gic/gic_v2.c [new file with mode: 0644]
arch/system/gic/gic_v3.c [new file with mode: 0644]
bl1/aarch64/bl1_arch_setup.c [new file with mode: 0644]
bl1/aarch64/bl1_entrypoint.S [new file with mode: 0644]
bl1/aarch64/early_exceptions.S [new file with mode: 0644]
bl1/bl1.ld.S [new file with mode: 0644]
bl1/bl1.mk [new file with mode: 0644]
bl1/bl1_main.c [new file with mode: 0644]
bl2/aarch64/bl2_arch_setup.c [new file with mode: 0644]
bl2/aarch64/bl2_entrypoint.S [new file with mode: 0644]
bl2/bl2.ld.S [new file with mode: 0644]
bl2/bl2.mk [new file with mode: 0644]
bl2/bl2_main.c [new file with mode: 0644]
bl31/aarch64/bl31_arch_setup.c [new file with mode: 0644]
bl31/aarch64/bl31_entrypoint.S [new file with mode: 0644]
bl31/aarch64/exception_handlers.c [new file with mode: 0644]
bl31/aarch64/runtime_exceptions.S [new file with mode: 0644]
bl31/bl31.ld.S [new file with mode: 0644]
bl31/bl31.mk [new file with mode: 0644]
bl31/bl31_main.c [new file with mode: 0644]
common/bl_common.c [new file with mode: 0644]
common/psci/psci_afflvl_off.c [new file with mode: 0644]
common/psci/psci_afflvl_on.c [new file with mode: 0644]
common/psci/psci_afflvl_suspend.c [new file with mode: 0644]
common/psci/psci_common.c [new file with mode: 0644]
common/psci/psci_entry.S [new file with mode: 0644]
common/psci/psci_main.c [new file with mode: 0644]
common/psci/psci_private.h [new file with mode: 0644]
common/psci/psci_setup.c [new file with mode: 0644]
common/runtime_svc.c [new file with mode: 0644]
docs/change-log.md [new file with mode: 0644]
docs/porting-guide.md [new file with mode: 0644]
docs/user-guide.md [new file with mode: 0644]
drivers/arm/interconnect/cci-400/cci400.c [new file with mode: 0644]
drivers/arm/interconnect/cci-400/cci400.h [new file with mode: 0644]
drivers/arm/peripherals/pl011/console.h [new file with mode: 0644]
drivers/arm/peripherals/pl011/pl011.c [new file with mode: 0644]
drivers/arm/peripherals/pl011/pl011.h [new file with mode: 0644]
drivers/power/fvp_pwrc.c [new file with mode: 0644]
drivers/power/fvp_pwrc.h [new file with mode: 0644]
fdts/fvp-base-gicv2-psci.dtb [new file with mode: 0644]
fdts/fvp-base-gicv2-psci.dts [new file with mode: 0644]
fdts/fvp-base-gicv2legacy-psci.dtb [new file with mode: 0644]
fdts/fvp-base-gicv2legacy-psci.dts [new file with mode: 0644]
fdts/fvp-base-gicv3-psci.dtb [new file with mode: 0644]
fdts/fvp-base-gicv3-psci.dts [new file with mode: 0644]
fdts/rtsm_ve-motherboard.dtsi [new file with mode: 0644]
include/aarch64/arch.h [new file with mode: 0644]
include/aarch64/arch_helpers.h [new file with mode: 0644]
include/asm_macros.S [new file with mode: 0644]
include/bakery_lock.h [new file with mode: 0644]
include/bl1.h [new file with mode: 0644]
include/bl2.h [new file with mode: 0644]
include/bl31.h [new file with mode: 0644]
include/bl_common.h [new file with mode: 0644]
include/mmio.h [new file with mode: 0644]
include/pm.h [new file with mode: 0644]
include/psci.h [new file with mode: 0644]
include/runtime_svc.h [new file with mode: 0644]
include/semihosting.h [new file with mode: 0644]
include/spinlock.h [new file with mode: 0644]
lib/arch/aarch64/cache_helpers.S [new file with mode: 0644]
lib/arch/aarch64/misc_helpers.S [new file with mode: 0644]
lib/arch/aarch64/sysreg_helpers.S [new file with mode: 0644]
lib/arch/aarch64/tlb_helpers.S [new file with mode: 0644]
lib/mmio.c [new file with mode: 0644]
lib/non-semihosting/ctype.h [new file with mode: 0644]
lib/non-semihosting/mem.c [new file with mode: 0644]
lib/non-semihosting/std.c [new file with mode: 0644]
lib/non-semihosting/strcmp.c [new file with mode: 0644]
lib/non-semihosting/string.c [new file with mode: 0644]
lib/non-semihosting/strlen.c [new file with mode: 0644]
lib/non-semihosting/strncmp.c [new file with mode: 0644]
lib/non-semihosting/strncpy.c [new file with mode: 0644]
lib/non-semihosting/strsep.c [new file with mode: 0644]
lib/non-semihosting/strtol.c [new file with mode: 0644]
lib/non-semihosting/strtoull.c [new file with mode: 0644]
lib/non-semihosting/subr_prf.c [new file with mode: 0644]
lib/semihosting/aarch64/semihosting_call.S [new file with mode: 0644]
lib/semihosting/semihosting.c [new file with mode: 0644]
lib/sync/locks/bakery/bakery_lock.c [new file with mode: 0644]
lib/sync/locks/exclusive/spinlock.S [new file with mode: 0644]
license.md [new file with mode: 0644]
plat/common/aarch64/platform_helpers.S [new file with mode: 0644]
plat/fvp/aarch64/bl1_plat_helpers.S [new file with mode: 0644]
plat/fvp/aarch64/fvp_common.c [new file with mode: 0644]
plat/fvp/aarch64/fvp_helpers.S [new file with mode: 0644]
plat/fvp/bl1_plat_setup.c [new file with mode: 0644]
plat/fvp/bl2_plat_setup.c [new file with mode: 0644]
plat/fvp/bl31_plat_setup.c [new file with mode: 0644]
plat/fvp/fvp_pm.c [new file with mode: 0644]
plat/fvp/fvp_topology.c [new file with mode: 0644]
plat/fvp/platform.h [new file with mode: 0644]
readme.md [new file with mode: 0644]

diff --git a/Makefile b/Makefile
new file mode 100644 (file)
index 0000000..5aa9ee8
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,169 @@
+#
+# Copyright (c) 2013, ARM Limited. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# Neither the name of ARM nor the names of its contributors may be used
+# to endorse or promote products derived from this software without specific
+# prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+#
+
+# Reduce the verbosity of the build output by default; the build
+# can be made verbose by passing V=1 on the make command line
+ifdef V
+  KBUILD_VERBOSE = $(V)
+else
+  KBUILD_VERBOSE = 0
+endif
+
+ifeq "$(KBUILD_VERBOSE)" "0"
+       Q=@
+else
+       Q=
+endif
+
+DEBUG  ?= 0
+BL_COMMON_OBJS         =       misc_helpers.o cache_helpers.o tlb_helpers.o            \
+                               semihosting_call.o mmio.o pl011.o semihosting.o         \
+                               std.o bl_common.o platform_helpers.o sysreg_helpers.o
+
+ARCH                   :=      aarch64
+
+all: $(patsubst %,%.bin,bl1 bl2 bl31) ;
+
+
+#$(info $(filter bl2.%, $(MAKECMDGOALS)))
+#$(info $(filter bl1.%, $(MAKECMDGOALS)))
+#$(info $(MAKECMDGOALS))
+
+$(info Including bl1.mk)
+include bl1/bl1.mk
+
+$(info Including bl2.mk)
+include bl2/bl2.mk
+
+$(info Including bl31.mk)
+include bl31/bl31.mk
+
+OBJS                   +=      $(BL_COMMON_OBJS)
+
+INCLUDES               +=      -Ilib/include/ -Iinclude/aarch64/ -Iinclude/    \
+                               -Idrivers/arm/interconnect/cci-400/             \
+                               -Idrivers/arm/peripherals/pl011/                \
+                               -Iplat/fvp -Idrivers/power                      \
+                               -Iarch/system/gic -Icommon/psci
+
+ASFLAGS                        +=       -D__ASSEMBLY__ $(INCLUDES)
+CFLAGS                 :=      -Wall -std=c99 -c -Os -DDEBUG=$(DEBUG) $(INCLUDES) ${CFLAGS}
+
+LDFLAGS                        +=      -O1
+BL1_LDFLAGS            :=      -Map=$(BL1_MAPFILE) --script $(BL1_LINKERFILE) --entry=$(BL1_ENTRY_POINT)
+BL2_LDFLAGS            :=      -Map=$(BL2_MAPFILE) --script $(BL2_LINKERFILE) --entry=$(BL2_ENTRY_POINT)
+BL31_LDFLAGS           :=      -Map=$(BL31_MAPFILE) --script $(BL31_LINKERFILE) --entry=$(BL31_ENTRY_POINT)
+
+
+vpath %.ld.S bl1:bl2:bl31
+vpath %.c bl1:bl2:bl31
+vpath %.c bl1/${ARCH}:bl2/${ARCH}:bl31/${ARCH}
+vpath %.S bl1/${ARCH}:bl2/${ARCH}:bl31/${ARCH}
+
+
+ifneq ($(DEBUG), 0)
+#CFLAGS                        +=      -g -O0
+CFLAGS                 +=      -g
+# -save-temps -fverbose-asm
+ASFLAGS                        +=      -g -Wa,--gdwarf-2
+endif
+
+
+CC                     =       $(CROSS_COMPILE)gcc
+CPP                    =       $(CROSS_COMPILE)cpp
+AS                     =       $(CROSS_COMPILE)gcc
+AR                     =       $(CROSS_COMPILE)ar
+LD                     =       $(CROSS_COMPILE)ld
+OC                     =       $(CROSS_COMPILE)objcopy
+OD                     =       $(CROSS_COMPILE)objdump
+NM                     =       $(CROSS_COMPILE)nm
+PP                     =       $(CROSS_COMPILE)gcc -E $(CFLAGS)
+
+
+distclean: clean
+                       @echo "  DISTCLEAN"
+                       $(Q)rm -rf *.zi
+                       $(Q)rm -rf *.dump
+                       $(Q)rm -rf *.bin
+                       $(Q)rm -f *.axf
+                       $(Q)rm -f *.i *.s
+                       $(Q)rm -f *.ar
+                       $(Q)rm -f *.map
+                       $(Q)rm -f *.scf
+                       $(Q)rm -f *.txt
+                       $(Q)rm -f *.elf
+                       $(Q)rm -rf *.bin
+                       $(Q)rm -f $(LISTFILE)
+
+clean:
+                       @echo "  CLEAN"
+                       $(Q)rm -f *.o *.ld
+
+.PHONY:                        dump
+
+dump:
+                       @echo "  OBJDUMP"
+                       $(OD) -d bl1.elf > bl1.dump
+                       $(OD) -d bl2.elf > bl2.dump
+                       $(OD) -d bl31.elf > bl31.dump
+
+%.o:                   %.S
+                       @echo "  AS      $<"
+                       $(Q)$(AS) $(ASFLAGS) -c $< -o $@
+
+%.o:                   %.c
+                       @echo "  CC      $<"
+                       $(Q)$(CC) $(CFLAGS) -c $< -o $@
+
+%.ld:                  %.ld.S
+                       @echo "  LDS      $<"
+                       $(Q)$(AS) $(ASFLAGS) -P -E $< -o $@
+
+
+bl1.elf:               $(OBJS) $(BL1_OBJS) bl1.ld
+                       @echo "  LD      $@"
+                       $(Q)$(LD) -o $@ $(LDFLAGS) $(BL1_LDFLAGS) $(OBJS) $(BL1_OBJS)
+                       @echo "Built $@ successfully"
+                       @echo
+
+bl2.elf:               $(OBJS) $(BL2_OBJS) bl2.ld
+                       @echo "  LD      $@"
+                       $(Q)$(LD) -o $@ $(LDFLAGS) $(BL2_LDFLAGS) $(OBJS) $(BL2_OBJS)
+                       @echo "Built $@ successfully"
+                       @echo
+
+bl31.elf:              $(OBJS) $(BL31_OBJS) bl31.ld
+                       @echo "  LD      $@"
+                       $(Q)$(LD) -o $@ $(LDFLAGS) $(BL31_LDFLAGS) $(OBJS) $(BL31_OBJS)
+                       @echo "Built $@ successfully"
+                       @echo
+
+%.bin:                 %.elf
+                       $(OC) -O binary $< $@
diff --git a/arch/aarch64/cpu/cpu_helpers.S b/arch/aarch64/cpu/cpu_helpers.S
new file mode 100644 (file)
index 0000000..600b72f
--- /dev/null
@@ -0,0 +1,66 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch.h>
+
+       .weak   cpu_reset_handler
+
+
+       .section        aarch64_code, "ax"; .align 3
+
+cpu_reset_handler:; .type cpu_reset_handler, %function
+       mov     x19, x30 // lr
+
+       /* ---------------------------------------------
+        * As a bare minimum, enable the SMP bit and the
+        * I-cache for all AArch64 processors. Also set
+        * the exception vector to something sane.
+        * ---------------------------------------------
+        */
+       adr     x0, early_exceptions
+       bl      write_vbar
+
+       bl      read_midr
+       lsr     x0, x0, #MIDR_PN_SHIFT
+       and     x0, x0, #MIDR_PN_MASK
+       cmp     x0, #MIDR_PN_A57
+       b.eq    smp_setup_begin
+       cmp     x0, #MIDR_PN_A53
+       b.ne    smp_setup_end
+smp_setup_begin:
+       bl      read_cpuectlr
+       orr     x0, x0, #CPUECTLR_SMP_BIT
+       bl      write_cpuectlr
+smp_setup_end:
+       bl      read_sctlr
+       orr     x0, x0, #SCTLR_I_BIT
+       bl      write_sctlr
+
+       ret     x19
diff --git a/arch/system/gic/aarch64/gic_v3_sysregs.S b/arch/system/gic/aarch64/gic_v3_sysregs.S
new file mode 100644 (file)
index 0000000..3a2fb6e
--- /dev/null
@@ -0,0 +1,89 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+       .globl  read_icc_sre_el1
+       .globl  read_icc_sre_el2
+       .globl  read_icc_sre_el3
+       .globl  write_icc_sre_el1
+       .globl  write_icc_sre_el2
+       .globl  write_icc_sre_el3
+       .globl  write_icc_pmr_el1
+
+
+/*
+ * System register encodings for GICv3 CPU interface accesses from GCC-built
+ * code. ARMCC predefines these names, but GCC does not yet, so keep them here.
+ */
+#define ICC_SRE_EL1     S3_0_C12_C12_5
+#define ICC_SRE_EL2     S3_4_C12_C9_5
+#define ICC_SRE_EL3     S3_6_C12_C12_5
+#define ICC_CTLR_EL1    S3_0_C12_C12_4
+#define ICC_CTLR_EL3    S3_6_C12_C12_4
+#define ICC_PMR_EL1     S3_0_C4_C6_0
+
+       .section        platform_code, "ax"; .align 3
+
+read_icc_sre_el1:; .type read_icc_sre_el1, %function
+       mrs     x0, ICC_SRE_EL1
+       ret
+
+
+read_icc_sre_el2:; .type read_icc_sre_el2, %function
+       mrs     x0, ICC_SRE_EL2
+       ret
+
+
+read_icc_sre_el3:; .type read_icc_sre_el3, %function
+       mrs     x0, ICC_SRE_EL3
+       ret
+
+
+write_icc_sre_el1:; .type write_icc_sre_el1, %function
+       msr     ICC_SRE_EL1, x0
+       isb
+       ret
+
+
+write_icc_sre_el2:; .type write_icc_sre_el2, %function
+       msr     ICC_SRE_EL2, x0
+       isb
+       ret
+
+
+write_icc_sre_el3:; .type write_icc_sre_el3, %function
+       msr     ICC_SRE_EL3, x0
+       isb
+       ret
+
+
+write_icc_pmr_el1:; .type write_icc_pmr_el1, %function
+       msr     ICC_PMR_EL1, x0
+       isb
+       ret
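
The accessors above give C code raw access to the GICv3 ICC_SRE_ELx registers; how they are meant to be used is not shown in this file. As a minimal, non-authoritative sketch (the helper name is an assumption, not part of this commit), EL3 code would typically combine them with the ICC_SRE_* bit definitions from gic.h below to switch the CPU interface over to system register access:

#include <gic.h>

/* Hypothetical sketch: enable system register access to the GICv3 CPU
 * interface at EL3 and allow lower ELs to enable it for themselves. */
static void enable_gicv3_sysreg_access(void)
{
        unsigned int sre = read_icc_sre_el3();

        /* ICC_SRE_SRE selects system register access; ICC_SRE_EN lets
         * lower exception levels set their own ICC_SRE_ELx.SRE bit. */
        write_icc_sre_el3(sre | ICC_SRE_SRE | ICC_SRE_EN);
}
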
diff --git a/arch/system/gic/gic.h b/arch/system/gic/gic.h
new file mode 100644 (file)
index 0000000..91ada03
--- /dev/null
@@ -0,0 +1,217 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __GIC_H__
+#define __GIC_H__
+
+#define MAX_SPIS               480
+#define MAX_PPIS               14
+#define MAX_SGIS               16
+
+#define GRP0                   0
+#define GRP1                   1
+#define MAX_PRI_VAL            0xff
+
+#define ENABLE_GRP0            (1 << 0)
+#define ENABLE_GRP1            (1 << 1)
+
+/* Distributor interface definitions */
+#define GICD_CTLR              0x0
+#define GICD_TYPER             0x4
+#define GICD_IGROUPR           0x80
+#define GICD_ISENABLER         0x100
+#define GICD_ICENABLER         0x180
+#define GICD_ISPENDR           0x200
+#define GICD_ICPENDR           0x280
+#define GICD_ISACTIVER         0x300
+#define GICD_ICACTIVER         0x380
+#define GICD_IPRIORITYR                0x400
+#define GICD_ITARGETSR         0x800
+#define GICD_ICFGR             0xC00
+#define GICD_SGIR              0xF00
+#define GICD_CPENDSGIR         0xF10
+#define GICD_SPENDSGIR         0xF20
+
+#define IGROUPR_SHIFT          5
+#define ISENABLER_SHIFT                5
+#define ICENABLER_SHIFT                ISENABLER_SHIFT
+#define ISPENDR_SHIFT          5
+#define ICPENDR_SHIFT          ISPENDR_SHIFT
+#define ISACTIVER_SHIFT                5
+#define ICACTIVER_SHIFT                ISACTIVER_SHIFT
+#define IPRIORITYR_SHIFT       2
+#define ITARGETSR_SHIFT                2
+#define ICFGR_SHIFT            4
+#define CPENDSGIR_SHIFT                2
+#define SPENDSGIR_SHIFT                CPENDSGIR_SHIFT
+
+/* GICD_TYPER bit definitions */
+#define IT_LINES_NO_MASK       0x1f
+
+/* Physical CPU Interface registers */
+#define GICC_CTLR              0x0
+#define GICC_PMR               0x4
+#define GICC_BPR               0x8
+#define GICC_IAR               0xC
+#define GICC_EOIR              0x10
+#define GICC_RPR               0x14
+#define GICC_HPPIR             0x18
+#define GICC_IIDR              0xFC
+#define GICC_DIR               0x1000
+#define GICC_PRIODROP           GICC_EOIR
+
+/* GICC_CTLR bit definitions */
+#define EOI_MODE_NS            (1 << 10)
+#define EOI_MODE_S             (1 << 9)
+#define IRQ_BYP_DIS_GRP1       (1 << 8)
+#define FIQ_BYP_DIS_GRP1       (1 << 7)
+#define IRQ_BYP_DIS_GRP0       (1 << 6)
+#define FIQ_BYP_DIS_GRP0       (1 << 5)
+#define CBPR                   (1 << 4)
+#define FIQ_EN                 (1 << 3)
+#define ACK_CTL                        (1 << 2)
+
+/* GICC_IIDR bit masks and shifts */
+#define GICC_IIDR_PID_SHIFT    20
+#define GICC_IIDR_ARCH_SHIFT   16
+#define GICC_IIDR_REV_SHIFT    12
+#define GICC_IIDR_IMP_SHIFT    0
+
+#define GICC_IIDR_PID_MASK     0xfff
+#define GICC_IIDR_ARCH_MASK    0xf
+#define GICC_IIDR_REV_MASK     0xf
+#define GICC_IIDR_IMP_MASK     0xfff
+
+/* HYP view virtual CPU Interface registers */
+#define GICH_CTL               0x0
+#define GICH_VTR               0x4
+#define GICH_ELRSR0            0x30
+#define GICH_ELRSR1            0x34
+#define GICH_APR0              0xF0
+#define GICH_LR_BASE           0x100
+
+/* Virtual CPU Interface registers */
+#define GICV_CTL               0x0
+#define GICV_PRIMASK           0x4
+#define GICV_BP                        0x8
+#define GICV_INTACK            0xC
+#define GICV_EOI               0x10
+#define GICV_RUNNINGPRI                0x14
+#define GICV_HIGHESTPEND       0x18
+#define GICV_DEACTIVATE                0x1000
+
+/* GICv3 Re-distributor interface registers & shifts */
+#define GICR_PCPUBASE_SHIFT    0x11
+#define GICR_WAKER             0x14
+
+/* GICR_WAKER bit definitions */
+#define WAKER_CA               (1UL << 2)
+#define WAKER_PS               (1UL << 1)
+
+/* GICv3 ICC_SRE register bit definitions*/
+#define ICC_SRE_EN             (1UL << 3)
+#define ICC_SRE_SRE            (1UL << 0)
+
+#ifndef __ASSEMBLY__
+
+/*******************************************************************************
+ * Function prototypes
+ ******************************************************************************/
+extern inline unsigned int gicd_read_typer(unsigned int);
+extern inline unsigned int gicd_read_ctlr(unsigned int);
+extern unsigned int gicd_read_igroupr(unsigned int, unsigned int);
+extern unsigned int gicd_read_isenabler(unsigned int, unsigned int);
+extern unsigned int gicd_read_icenabler(unsigned int, unsigned int);
+extern unsigned int gicd_read_ispendr(unsigned int, unsigned int);
+extern unsigned int gicd_read_icpendr(unsigned int, unsigned int);
+extern unsigned int gicd_read_isactiver(unsigned int, unsigned int);
+extern unsigned int gicd_read_icactiver(unsigned int, unsigned int);
+extern unsigned int gicd_read_ipriorityr(unsigned int, unsigned int);
+extern unsigned int gicd_read_itargetsr(unsigned int, unsigned int);
+extern unsigned int gicd_read_icfgr(unsigned int, unsigned int);
+extern unsigned int gicd_read_sgir(unsigned int);
+extern unsigned int gicd_read_cpendsgir(unsigned int, unsigned int);
+extern unsigned int gicd_read_spendsgir(unsigned int, unsigned int);
+extern inline void gicd_write_ctlr(unsigned int, unsigned int);
+extern void gicd_write_igroupr(unsigned int, unsigned int, unsigned int);
+extern void gicd_write_isenabler(unsigned int, unsigned int, unsigned int);
+extern void gicd_write_icenabler(unsigned int, unsigned int, unsigned int);
+extern void gicd_write_ispendr(unsigned int, unsigned int, unsigned int);
+extern void gicd_write_icpendr(unsigned int, unsigned int, unsigned int);
+extern void gicd_write_isactiver(unsigned int, unsigned int, unsigned int);
+extern void gicd_write_icactiver(unsigned int, unsigned int, unsigned int);
+extern void gicd_write_ipriorityr(unsigned int, unsigned int, unsigned int);
+extern void gicd_write_itargetsr(unsigned int, unsigned int, unsigned int);
+extern void gicd_write_icfgr(unsigned int, unsigned int, unsigned int);
+extern void gicd_write_sgir(unsigned int, unsigned int);
+extern void gicd_write_cpendsgir(unsigned int, unsigned int, unsigned int);
+extern void gicd_write_spendsgir(unsigned int, unsigned int, unsigned int);
+extern unsigned int gicd_get_igroupr(unsigned int, unsigned int);
+extern void gicd_set_igroupr(unsigned int, unsigned int);
+extern void gicd_clr_igroupr(unsigned int, unsigned int);
+extern void gicd_set_isenabler(unsigned int, unsigned int);
+extern void gicd_set_icenabler(unsigned int, unsigned int);
+extern void gicd_set_ispendr(unsigned int, unsigned int);
+extern void gicd_set_icpendr(unsigned int, unsigned int);
+extern void gicd_set_isactiver(unsigned int, unsigned int);
+extern void gicd_set_icactiver(unsigned int, unsigned int);
+extern void gicd_set_ipriorityr(unsigned int, unsigned int, unsigned int);
+extern void gicd_set_itargetsr(unsigned int, unsigned int, unsigned int);
+extern inline unsigned int gicc_read_ctlr(unsigned int);
+extern inline unsigned int gicc_read_pmr(unsigned int);
+extern inline unsigned int gicc_read_BPR(unsigned int);
+extern inline unsigned int gicc_read_IAR(unsigned int);
+extern inline unsigned int gicc_read_EOIR(unsigned int);
+extern inline unsigned int gicc_read_hppir(unsigned int);
+extern inline unsigned int gicc_read_iidr(unsigned int);
+extern inline unsigned int gicc_read_dir(unsigned int);
+extern inline void gicc_write_ctlr(unsigned int, unsigned int);
+extern inline void gicc_write_pmr(unsigned int, unsigned int);
+extern inline void gicc_write_BPR(unsigned int, unsigned int);
+extern inline void gicc_write_IAR(unsigned int, unsigned int);
+extern inline void gicc_write_EOIR(unsigned int, unsigned int);
+extern inline void gicc_write_hppir(unsigned int, unsigned int);
+extern inline void gicc_write_dir(unsigned int, unsigned int);
+
+/* GICv3 functions */
+extern inline unsigned int gicr_read_waker(unsigned int);
+extern inline void gicr_write_waker(unsigned int, unsigned int);
+extern unsigned int read_icc_sre_el1(void);
+extern unsigned int read_icc_sre_el2(void);
+extern unsigned int read_icc_sre_el3(void);
+extern void write_icc_sre_el1(unsigned int);
+extern void write_icc_sre_el2(unsigned int);
+extern void write_icc_sre_el3(unsigned int);
+extern void write_icc_pmr_el1(unsigned int);
+
+#endif /*__ASSEMBLY__*/
+
+#endif /* __GIC_H__ */
+
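
The register offsets, bit masks and prototypes in gic.h are intended to be combined by platform code. Purely as an illustrative sketch (the function name and its base-address arguments are assumptions, not part of this commit), a secure-world driver might bring up the distributor and CPU interface roughly as follows:

#include <gic.h>

/* Hypothetical sketch: minimal secure GIC bring-up using the macros and
 * accessors declared above. Base addresses come from the platform. */
static void gic_setup_sketch(unsigned int gicd_base, unsigned int gicc_base)
{
        /* Let all priorities through the CPU interface priority mask */
        gicc_write_pmr(gicc_base, MAX_PRI_VAL);

        /* Enable both groups at the CPU interface, signal Group 0 as FIQ
         * and acknowledge Group 1 interrupts from the secure side too. */
        gicc_write_ctlr(gicc_base,
                        ENABLE_GRP0 | ENABLE_GRP1 | FIQ_EN | ACK_CTL);

        /* Enable forwarding of both groups from the distributor */
        gicd_write_ctlr(gicd_base, ENABLE_GRP0 | ENABLE_GRP1);
}
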
diff --git a/arch/system/gic/gic_v2.c b/arch/system/gic/gic_v2.c
new file mode 100644 (file)
index 0000000..4b3d0c5
--- /dev/null
@@ -0,0 +1,426 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <gic.h>
+#include <mmio.h>
+
+/*******************************************************************************
+ * GIC Distributor interface accessors for reading entire registers
+ ******************************************************************************/
+inline unsigned int gicd_read_ctlr(unsigned int base)
+{
+       return mmio_read_32(base + GICD_CTLR);
+}
+
+inline unsigned int gicd_read_typer(unsigned int base)
+{
+       return mmio_read_32(base + GICD_TYPER);
+}
+
+unsigned int gicd_read_igroupr(unsigned int base, unsigned int id)
+{
+       unsigned n = id >> IGROUPR_SHIFT;
+       return mmio_read_32(base + GICD_IGROUPR + (n << 2));
+}
+
+unsigned int gicd_read_isenabler(unsigned int base, unsigned int id)
+{
+       unsigned n = id >> ISENABLER_SHIFT;
+       return mmio_read_32(base + GICD_ISENABLER + (n << 2));
+}
+
+unsigned int gicd_read_icenabler(unsigned int base, unsigned int id)
+{
+       unsigned n = id >> ICENABLER_SHIFT;
+       return mmio_read_32(base + GICD_ICENABLER + (n << 2));
+}
+
+unsigned int gicd_read_ispendr(unsigned int base, unsigned int id)
+{
+       unsigned n = id >> ISPENDR_SHIFT;
+       return mmio_read_32(base + GICD_ISPENDR + (n << 2));
+}
+
+unsigned int gicd_read_icpendr(unsigned int base, unsigned int id)
+{
+       unsigned n = id >> ICPENDR_SHIFT;
+       return mmio_read_32(base + GICD_ICPENDR + (n << 2));
+}
+
+unsigned int gicd_read_isactiver(unsigned int base, unsigned int id)
+{
+       unsigned n = id >> ISACTIVER_SHIFT;
+       return mmio_read_32(base + GICD_ISACTIVER + (n << 2));
+}
+
+unsigned int gicd_read_icactiver(unsigned int base, unsigned int id)
+{
+       unsigned n = id >> ICACTIVER_SHIFT;
+       return mmio_read_32(base + GICD_ICACTIVER + (n << 2));
+}
+
+unsigned int gicd_read_ipriorityr(unsigned int base, unsigned int id)
+{
+       unsigned n = id >> IPRIORITYR_SHIFT;
+       return mmio_read_32(base + GICD_IPRIORITYR + (n << 2));
+}
+
+unsigned int gicd_read_itargetsr(unsigned int base, unsigned int id)
+{
+       unsigned n = id >> ITARGETSR_SHIFT;
+       return mmio_read_32(base + GICD_ITARGETSR + (n << 2));
+}
+
+unsigned int gicd_read_icfgr(unsigned int base, unsigned int id)
+{
+       unsigned n = id >> ICFGR_SHIFT;
+       return mmio_read_32(base + GICD_ICFGR + (n << 2));
+}
+
+unsigned int gicd_read_sgir(unsigned int base)
+{
+       return mmio_read_32(base + GICD_SGIR);
+}
+
+unsigned int gicd_read_cpendsgir(unsigned int base, unsigned int id)
+{
+       unsigned n = id >> CPENDSGIR_SHIFT;
+       return mmio_read_32(base + GICD_CPENDSGIR + (n << 2));
+}
+
+unsigned int gicd_read_spendsgir(unsigned int base, unsigned int id)
+{
+       unsigned n = id >> SPENDSGIR_SHIFT;
+       return mmio_read_32(base + GICD_SPENDSGIR + (n << 2));
+}
+
+/*******************************************************************************
+ * GIC Distributor interface accessors for writing entire registers
+ ******************************************************************************/
+inline void gicd_write_ctlr(unsigned int base, unsigned int val)
+{
+       mmio_write_32(base + GICD_CTLR, val);
+       return;
+}
+
+void gicd_write_igroupr(unsigned int base, unsigned int id, unsigned int val)
+{
+       unsigned n = id >> IGROUPR_SHIFT;
+       mmio_write_32(base + GICD_IGROUPR + (n << 2), val);
+       return;
+}
+
+void gicd_write_isenabler(unsigned int base, unsigned int id, unsigned int val)
+{
+       unsigned n = id >> ISENABLER_SHIFT;
+       mmio_write_32(base + GICD_ISENABLER + (n << 2), val);
+       return;
+}
+
+void gicd_write_icenabler(unsigned int base, unsigned int id, unsigned int val)
+{
+       unsigned n = id >> ICENABLER_SHIFT;
+       mmio_write_32(base + GICD_ICENABLER + (n << 2), val);
+       return;
+}
+
+void gicd_write_ispendr(unsigned int base, unsigned int id, unsigned int val)
+{
+       unsigned n = id >> ISPENDR_SHIFT;
+       mmio_write_32(base + GICD_ISPENDR + (n << 2), val);
+       return;
+}
+
+void gicd_write_icpendr(unsigned int base, unsigned int id, unsigned int val)
+{
+       unsigned n = id >> ICPENDR_SHIFT;
+       mmio_write_32(base + GICD_ICPENDR + (n << 2), val);
+       return;
+}
+
+void gicd_write_isactiver(unsigned int base, unsigned int id, unsigned int val)
+{
+       unsigned n = id >> ISACTIVER_SHIFT;
+       mmio_write_32(base + GICD_ISACTIVER + (n << 2), val);
+       return;
+}
+
+void gicd_write_icactiver(unsigned int base, unsigned int id, unsigned int val)
+{
+       unsigned n = id >> ICACTIVER_SHIFT;
+       mmio_write_32(base + GICD_ICACTIVER + (n << 2), val);
+       return;
+}
+
+void gicd_write_ipriorityr(unsigned int base, unsigned int id, unsigned int val)
+{
+       unsigned n = id >> IPRIORITYR_SHIFT;
+       mmio_write_32(base + GICD_IPRIORITYR + (n << 2), val);
+       return;
+}
+
+void gicd_write_itargetsr(unsigned int base, unsigned int id, unsigned int val)
+{
+       unsigned n = id >> ITARGETSR_SHIFT;
+       mmio_write_32(base + GICD_ITARGETSR + (n << 2), val);
+       return;
+}
+
+void gicd_write_icfgr(unsigned int base, unsigned int id, unsigned int val)
+{
+       unsigned n = id >> ICFGR_SHIFT;
+       mmio_write_32(base + GICD_ICFGR + (n << 2), val);
+       return;
+}
+
+void gicd_write_sgir(unsigned int base, unsigned int val)
+{
+       mmio_write_32(base + GICD_SGIR, val);
+       return;
+}
+
+void gicd_write_cpendsgir(unsigned int base, unsigned int id, unsigned int val)
+{
+       unsigned n = id >> CPENDSGIR_SHIFT;
+       mmio_write_32(base + GICD_CPENDSGIR + (n << 2), val);
+       return;
+}
+
+void gicd_write_spendsgir(unsigned int base, unsigned int id, unsigned int val)
+{
+       unsigned n = id >> SPENDSGIR_SHIFT;
+       mmio_write_32(base + GICD_SPENDSGIR + (n << 2), val);
+       return;
+}
+
+/*******************************************************************************
+ * GIC Distributor interface accessors for individual interrupt manipulation
+ ******************************************************************************/
+unsigned int gicd_get_igroupr(unsigned int base, unsigned int id)
+{
+       unsigned bit_num = id & ((1 << IGROUPR_SHIFT) - 1);
+       unsigned int reg_val = gicd_read_igroupr(base, id);
+
+       return (reg_val >> bit_num) & 0x1;
+}
+
+void gicd_set_igroupr(unsigned int base, unsigned int id)
+{
+       unsigned bit_num = id & ((1 << IGROUPR_SHIFT) - 1);
+       unsigned int reg_val = gicd_read_igroupr(base, id);
+
+       gicd_write_igroupr(base, id, reg_val | (1 << bit_num));
+       return;
+}
+
+void gicd_clr_igroupr(unsigned int base, unsigned int id)
+{
+       unsigned bit_num = id & ((1 << IGROUPR_SHIFT) - 1);
+       unsigned int reg_val = gicd_read_igroupr(base, id);
+
+       gicd_write_igroupr(base, id, reg_val & ~(1 << bit_num));
+       return;
+}
+
+void gicd_set_isenabler(unsigned int base, unsigned int id)
+{
+       unsigned bit_num = id & ((1 << ISENABLER_SHIFT) - 1);
+       unsigned int reg_val = gicd_read_isenabler(base, id);
+
+       gicd_write_isenabler(base, id, reg_val | (1 << bit_num));
+       return;
+}
+
+void gicd_set_icenabler(unsigned int base, unsigned int id)
+{
+       unsigned bit_num = id & ((1 << ICENABLER_SHIFT) - 1);
+       unsigned int reg_val = gicd_read_icenabler(base, id);
+
+       gicd_write_icenabler(base, id, reg_val & ~(1 << bit_num));
+       return;
+}
+
+void gicd_set_ispendr(unsigned int base, unsigned int id)
+{
+       unsigned bit_num = id & ((1 << ISPENDR_SHIFT) - 1);
+       unsigned int reg_val = gicd_read_ispendr(base, id);
+
+       gicd_write_ispendr(base, id, reg_val | (1 << bit_num));
+       return;
+}
+
+void gicd_set_icpendr(unsigned int base, unsigned int id)
+{
+       unsigned bit_num = id & ((1 << ICPENDR_SHIFT) - 1);
+       unsigned int reg_val = gicd_read_icpendr(base, id);
+
+       gicd_write_icpendr(base, id, reg_val & ~(1 << bit_num));
+       return;
+}
+
+void gicd_set_isactiver(unsigned int base, unsigned int id)
+{
+       unsigned bit_num = id & ((1 << ISACTIVER_SHIFT) - 1);
+       unsigned int reg_val = gicd_read_isactiver(base, id);
+
+       gicd_write_isactiver(base, id, reg_val | (1 << bit_num));
+       return;
+}
+
+void gicd_set_icactiver(unsigned int base, unsigned int id)
+{
+       unsigned bit_num = id & ((1 << ICACTIVER_SHIFT) - 1);
+       unsigned int reg_val = gicd_read_icactiver(base, id);
+
+       gicd_write_icactiver(base, id, reg_val & ~(1 << bit_num));
+       return;
+}
+
+/*
+ * Make sure that the interrupt's group is set before expecting
+ * this function to do its job correctly.
+ */
+void gicd_set_ipriorityr(unsigned int base, unsigned int id, unsigned int pri)
+{
+       unsigned byte_off = id & ((1 << IPRIORITYR_SHIFT) - 1);
+       unsigned int reg_val = gicd_read_ipriorityr(base, id);
+
+       /*
+        * Enforce ARM recommendation to manage priority values such
+        * that group1 interrupts always have a lower priority than
+        * group0 interrupts
+        */
+       if (gicd_get_igroupr(base, id) == GRP1)
+               pri |= 1 << 7;
+       else
+               pri &= ~(1 << 7);
+
+       gicd_write_ipriorityr(base, id, (reg_val & ~(MAX_PRI_VAL << (byte_off << 3))) | (pri << (byte_off << 3)));
+       return;
+}
+
+void gicd_set_itargetsr(unsigned int base, unsigned int id, unsigned int iface)
+{
+       unsigned byte_off = id & ((1 << ITARGETSR_SHIFT) - 1);
+       unsigned int reg_val = gicd_read_itargetsr(base, id);
+
+       gicd_write_itargetsr(base, id, reg_val |
+                            (1 << iface) << (byte_off << 3));
+       return;
+}
+
+/*******************************************************************************
+ * GIC CPU interface accessors for reading entire registers
+ ******************************************************************************/
+inline unsigned int gicc_read_ctlr(unsigned int base)
+{
+       return mmio_read_32(base + GICC_CTLR);
+}
+
+inline unsigned int gicc_read_pmr(unsigned int base)
+{
+       return mmio_read_32(base + GICC_PMR);
+}
+
+inline unsigned int gicc_read_BPR(unsigned int base)
+{
+       return mmio_read_32(base + GICC_BPR);
+}
+
+inline unsigned int gicc_read_IAR(unsigned int base)
+{
+       return mmio_read_32(base + GICC_IAR);
+}
+
+inline unsigned int gicc_read_EOIR(unsigned int base)
+{
+       return mmio_read_32(base + GICC_EOIR);
+}
+
+inline unsigned int gicc_read_hppir(unsigned int base)
+{
+       return mmio_read_32(base + GICC_HPPIR);
+}
+
+inline unsigned int gicc_read_dir(unsigned int base)
+{
+       return mmio_read_32(base + GICC_DIR);
+}
+
+inline unsigned int gicc_read_iidr(unsigned int base)
+{
+       return mmio_read_32(base + GICC_IIDR);
+}
+
+/*******************************************************************************
+ * GIC CPU interface accessors for writing entire registers
+ ******************************************************************************/
+inline void gicc_write_ctlr(unsigned int base, unsigned int val)
+{
+       mmio_write_32(base + GICC_CTLR, val);
+       return;
+}
+
+inline void gicc_write_pmr(unsigned int base, unsigned int val)
+{
+       mmio_write_32(base + GICC_PMR, val);
+       return;
+}
+
+inline void gicc_write_BPR(unsigned int base, unsigned int val)
+{
+       mmio_write_32(base + GICC_BPR, val);
+       return;
+}
+
+inline void gicc_write_IAR(unsigned int base, unsigned int val)
+{
+       mmio_write_32(base + GICC_IAR, val);
+       return;
+}
+
+inline void gicc_write_EOIR(unsigned int base, unsigned int val)
+{
+       mmio_write_32(base + GICC_EOIR, val);
+       return;
+}
+
+inline void gicc_write_hppir(unsigned int base, unsigned int val)
+{
+       mmio_write_32(base + GICC_HPPIR, val);
+       return;
+}
+
+inline void gicc_write_dir(unsigned int base, unsigned int val)
+{
+       mmio_write_32(base + GICC_DIR, val);
+       return;
+}
+
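
The per-interrupt helpers above (gicd_clr_igroupr, gicd_set_ipriorityr, gicd_set_itargetsr, gicd_set_isenabler) are meant to be chained when routing a single interrupt. A hedged sketch, with the distributor base and interrupt ID assumed to be supplied by platform code:

#include <gic.h>

/* Hypothetical sketch: mark one shared peripheral interrupt as secure
 * (Group 0), give it the highest priority, target CPU interface 0 and
 * enable it. The values are illustrative only. */
static void configure_secure_spi_sketch(unsigned int gicd_base,
                                        unsigned int irq_id)
{
        gicd_clr_igroupr(gicd_base, irq_id);        /* Group 0 (secure)  */
        gicd_set_ipriorityr(gicd_base, irq_id, 0);  /* 0 = highest prio  */
        gicd_set_itargetsr(gicd_base, irq_id, 0);   /* CPU interface 0   */
        gicd_set_isenabler(gicd_base, irq_id);      /* enable the SPI    */
}
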
diff --git a/arch/system/gic/gic_v3.c b/arch/system/gic/gic_v3.c
new file mode 100644 (file)
index 0000000..7806a0d
--- /dev/null
@@ -0,0 +1,46 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <gic.h>
+#include <mmio.h>
+
+/*******************************************************************************
+ * GIC Redistributor interface accessors
+ ******************************************************************************/
+inline unsigned int gicr_read_waker(unsigned int base)
+{
+       return mmio_read_32(base + GICR_WAKER);
+}
+
+inline void gicr_write_waker(unsigned int base, unsigned int val)
+{
+       mmio_write_32(base + GICR_WAKER, val);
+       return;
+}
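
gic_v3.c only provides raw access to GICR_WAKER; waking a redistributor is left to the caller. A minimal sketch of the usual sequence (clear ProcessorSleep, then poll ChildrenAsleep), assuming the per-CPU redistributor frame address comes from platform code:

#include <gic.h>

/* Hypothetical sketch: mark this CPU's redistributor as awake using the
 * accessors above. 'gicr_base' is the per-CPU redistributor frame. */
static void gicr_wakeup_sketch(unsigned int gicr_base)
{
        /* Clear ProcessorSleep so the redistributor forwards interrupts */
        gicr_write_waker(gicr_base, gicr_read_waker(gicr_base) & ~WAKER_PS);

        /* Wait for ChildrenAsleep to read as zero */
        while (gicr_read_waker(gicr_base) & WAKER_CA)
                ;
}
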
diff --git a/bl1/aarch64/bl1_arch_setup.c b/bl1/aarch64/bl1_arch_setup.c
new file mode 100644 (file)
index 0000000..d4be9d6
--- /dev/null
@@ -0,0 +1,83 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch_helpers.h>
+#include <platform.h>
+#include <assert.h>
+
+/*******************************************************************************
+ * Function that does the first bit of architectural setup that affects
+ * execution in the non-secure address space.
+ ******************************************************************************/
+void bl1_arch_setup(void)
+{
+       unsigned long tmp_reg = 0;
+       unsigned int counter_base_frequency;
+
+       /* Enable alignment checks and set the exception endianness to LE */
+       tmp_reg = read_sctlr();
+       tmp_reg |= (SCTLR_A_BIT | SCTLR_SA_BIT);
+       tmp_reg &= ~SCTLR_EE_BIT;
+       write_sctlr(tmp_reg);
+
+       /*
+        * Enable HVCs, route FIQs to EL3, set the next EL to be aarch64
+        */
+       tmp_reg = SCR_RES1_BITS | SCR_RW_BIT | SCR_HCE_BIT | SCR_FIQ_BIT;
+       write_scr(tmp_reg);
+
+       /* Do not trap coprocessor accesses from lower ELs to EL3 */
+       write_cptr_el3(0);
+
+       /* Read the frequency from Frequency modes table */
+       counter_base_frequency = mmio_read_32(SYS_CNTCTL_BASE + CNTFID_OFF);
+       /* The first entry of the frequency modes table must not be 0 */
+       assert(counter_base_frequency != 0);
+
+       /* Program the counter frequency */
+       write_cntfrq_el0(counter_base_frequency);
+       return;
+}
+
+/*******************************************************************************
+ * Set the Secure EL1 required architectural state
+ ******************************************************************************/
+void bl1_arch_next_el_setup(void) {
+       unsigned long current_sctlr, next_sctlr;
+
+       /* Use the same endianness as the current BL */
+       current_sctlr = read_sctlr();
+       next_sctlr = (current_sctlr & SCTLR_EE_BIT);
+
+       /* Set SCTLR Secure EL1 */
+       next_sctlr |= SCTLR_EL1_RES1;
+
+       write_sctlr_el1(next_sctlr);
+}
diff --git a/bl1/aarch64/bl1_entrypoint.S b/bl1/aarch64/bl1_entrypoint.S
new file mode 100644 (file)
index 0000000..f5ccc65
--- /dev/null
@@ -0,0 +1,93 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+
+       .globl  reset_handler
+
+
+       .section        reset_code, "ax"; .align 3
+
+       /* -----------------------------------------------------
+        * reset_handler() is the entry point into the trusted
+        * firmware code when a cpu is released from warm or
+        * cold reset.
+        * -----------------------------------------------------
+        */
+
+reset_handler:; .type reset_handler, %function
+       /* ---------------------------------------------
+        * Perform any processor specific actions upon
+        * reset e.g. cache, tlb invalidations etc.
+        * ---------------------------------------------
+        */
+       bl      cpu_reset_handler
+
+_wait_for_entrypoint:
+       /* ---------------------------------------------
+        * Find the type of reset and jump to handler
+        * if present. If the handler is null then it is
+        * a cold boot. The primary cpu will set up the
+        * platform while the secondaries wait for
+        * their turn to be woken up
+        * ---------------------------------------------
+        */
+       bl      read_mpidr
+       bl      platform_get_entrypoint
+       cbnz    x0, _do_warm_boot
+       bl      read_mpidr
+       bl      platform_is_primary_cpu
+       cbnz    x0, _do_cold_boot
+
+       /* ---------------------------------------------
+        * Perform any platform specific secondary cpu
+        * actions
+        * ---------------------------------------------
+        */
+       bl      plat_secondary_cold_boot_setup
+       b       _wait_for_entrypoint
+
+_do_cold_boot:
+       /* ---------------------------------------------
+        * Initialize platform and jump to our c-entry
+        * point for this type of reset
+        * ---------------------------------------------
+        */
+       adr     x0, bl1_main
+       bl      platform_cold_boot_init
+       b       _panic
+
+_do_warm_boot:
+       /* ---------------------------------------------
+        * Jump to BL31 for all warm boot init.
+        * ---------------------------------------------
+        */
+       blr     x0
+_panic:
+       b       _panic
diff --git a/bl1/aarch64/early_exceptions.S b/bl1/aarch64/early_exceptions.S
new file mode 100644 (file)
index 0000000..08a1122
--- /dev/null
@@ -0,0 +1,216 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch.h>
+#include <bl_common.h>
+#include <bl1.h>
+#include <platform.h>
+#include <runtime_svc.h>
+
+       .globl  early_exceptions
+
+
+       .section        .text, "ax"; .align 11
+
+       /* -----------------------------------------------------
+        * Very simple exception handlers used by BL1 and BL2.
+        * Apart from one SMC exception all other traps loop
+        * endlessly.
+        * -----------------------------------------------------
+        */
+       .align  7
+early_exceptions:
+       /* -----------------------------------------------------
+        * Current EL with SP0 : 0x0 - 0x180
+        * -----------------------------------------------------
+        */
+SynchronousExceptionSP0:
+       mov     x0, #SYNC_EXCEPTION_SP_EL0
+       bl      plat_report_exception
+       b       SynchronousExceptionSP0
+
+       .align  7
+IrqSP0:
+       mov     x0, #IRQ_SP_EL0
+       bl      plat_report_exception
+       b       IrqSP0
+
+       .align  7
+FiqSP0:
+       mov     x0, #FIQ_SP_EL0
+       bl      plat_report_exception
+       b       FiqSP0
+
+       .align  7
+SErrorSP0:
+       mov     x0, #SERROR_SP_EL0
+       bl      plat_report_exception
+       b       SErrorSP0
+
+       /* -----------------------------------------------------
+        * Current EL with SPx: 0x200 - 0x380
+        * -----------------------------------------------------
+        */
+       .align  7
+SynchronousExceptionSPx:
+       mov     x0, #SYNC_EXCEPTION_SP_ELX
+       bl      plat_report_exception
+       b       SynchronousExceptionSPx
+
+       .align  7
+IrqSPx:
+       mov     x0, #IRQ_SP_ELX
+       bl      plat_report_exception
+       b       IrqSPx
+
+       .align  7
+FiqSPx:
+       mov     x0, #FIQ_SP_ELX
+       bl      plat_report_exception
+       b       FiqSPx
+
+       .align  7
+SErrorSPx:
+       mov     x0, #SERROR_SP_ELX
+       bl      plat_report_exception
+       b       SErrorSPx
+
+       /* -----------------------------------------------------
+        * Lower EL using AArch64 : 0x400 - 0x580
+        * -----------------------------------------------------
+        */
+       .align  7
+SynchronousExceptionA64:
+       /* ---------------------------------------------
+        * Only a single SMC exception from BL2 to ask
+        * BL1 to pass EL3 control to BL31 is expected
+        * here.
+        * ---------------------------------------------
+        */
+       sub     sp, sp, #0x40
+       stp     x0, x1, [sp, #0x0]
+       stp     x2, x3, [sp, #0x10]
+       stp     x4, x5, [sp, #0x20]
+       stp     x6, x7, [sp, #0x30]
+       mov     x19, x0
+       mov     x20, x1
+       mov     x21, x2
+
+       mov     x0, #SYNC_EXCEPTION_AARCH64
+       bl      plat_report_exception
+
+       bl      read_esr
+       ubfx    x1, x0, #ESR_EC_SHIFT, #ESR_EC_LENGTH
+       cmp     x1, #EC_AARCH64_SMC
+       b.ne    panic
+       mov     x1, #RUN_IMAGE
+       cmp     x19, x1
+       b.ne    panic
+       mov     x0, x20
+       mov     x1, x21
+       mov     x2, x3
+       mov     x3, x4
+       bl      display_boot_progress
+       mov     x0, x20
+       bl      write_elr
+       mov     x0, x21
+       bl      write_spsr
+       ubfx    x0, x21, #MODE_EL_SHIFT, #2
+       cmp     x0, #MODE_EL3
+       b.ne    skip_mmu_teardown
+       /* ---------------------------------------------
+        * If BL31 is to be executed in EL3 as well
+        * then turn off the MMU so that it can perform
+        * its own setup. TODO: this assumes flat-mapped
+        * translations here; all of this should also go
+        * into a separate MMU teardown function.
+        * ---------------------------------------------
+        */
+       mov     x1, #(SCTLR_M_BIT | SCTLR_C_BIT | SCTLR_I_BIT)
+       bl      read_sctlr
+       bic     x0, x0, x1
+       bl      write_sctlr
+       mov     x0, #DCCISW
+       bl      dcsw_op_all
+       bl      tlbialle3
+skip_mmu_teardown:
+       ldp     x6, x7, [sp, #0x30]
+       ldp     x4, x5, [sp, #0x20]
+       ldp     x2, x3, [sp, #0x10]
+       ldp     x0, x1, [sp, #0x0]
+       add     sp, sp, #0x40
+       eret
+panic:
+       b       panic
+       .align  7
+IrqA64:
+       mov     x0, #IRQ_AARCH64
+       bl      plat_report_exception
+       b       IrqA64
+
+       .align  7
+FiqA64:
+       mov     x0, #FIQ_AARCH64
+       bl      plat_report_exception
+       b       FiqA64
+
+       .align  7
+SErrorA64:
+       mov     x0, #SERROR_AARCH64
+       bl      plat_report_exception
+       b       SErrorA64
+
+       /* -----------------------------------------------------
+        * Lower EL using AArch32 : 0x600 - 0x780
+        * -----------------------------------------------------
+        */
+       .align  7
+SynchronousExceptionA32:
+       mov     x0, #SYNC_EXCEPTION_AARCH32
+       bl      plat_report_exception
+       b       SynchronousExceptionA32
+
+       .align  7
+IrqA32:
+       mov     x0, #IRQ_AARCH32
+       bl      plat_report_exception
+       b       IrqA32
+
+       .align  7
+FiqA32:
+       mov     x0, #FIQ_AARCH32
+       bl      plat_report_exception
+       b       FiqA32
+
+       .align  7
+SErrorA32:
+       mov     x0, #SERROR_AARCH32
+       bl      plat_report_exception
+       b       SErrorA32
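
The SMC filter above keys off the exception class (EC) field of ESR_EL3. A rough
C equivalent of the ubfx/cmp pair, shown only as a sketch (it reuses the
ESR_EC_SHIFT, ESR_EC_LENGTH, EC_AARCH64_SMC and read_esr() names from the tree
and is not part of the patch):

    #include <arch_helpers.h>       /* read_esr() */

    /* Non-zero when the trapped synchronous exception was an SMC from AArch64 */
    static int trapped_by_aarch64_smc(void)
    {
            unsigned long esr = read_esr();
            /* EC lives in ESR[31:26]; same extraction as the ubfx above */
            unsigned int ec = (esr >> ESR_EC_SHIFT) & ((1U << ESR_EC_LENGTH) - 1);

            return ec == EC_AARCH64_SMC;
    }
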
diff --git a/bl1/bl1.ld.S b/bl1/bl1.ld.S
new file mode 100644 (file)
index 0000000..5327715
--- /dev/null
@@ -0,0 +1,90 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <platform.h>
+
+OUTPUT_FORMAT(PLATFORM_LINKER_FORMAT)
+OUTPUT_ARCH(PLATFORM_LINKER_ARCH)
+
+MEMORY {
+    /* ROM is read-only and executable */
+    ROM (rx): ORIGIN = TZROM_BASE, LENGTH = TZROM_SIZE
+    /* RAM is read/write and Initialised */
+    RAM (rwx): ORIGIN = TZRAM_BASE, LENGTH = TZRAM_SIZE
+}
+
+SECTIONS
+{
+    FIRMWARE_ROM : {
+        *(reset_code)
+        *(.text)
+        *(.rodata)
+    } >ROM
+
+    .bss : {
+        __BSS_RAM_START__ = .;
+        *(.bss)
+        *(COMMON)
+        __BSS_RAM_STOP__ = .;
+    } >RAM AT>ROM
+
+    .data : {
+        __DATA_RAM_START__ = .;
+        *(.data)
+        __DATA_RAM_STOP__ = .;
+     } >RAM AT>ROM
+
+    FIRMWARE_RAM_STACKS ALIGN (PLATFORM_CACHE_LINE_SIZE) : {
+        . += 0x1000;
+        *(tzfw_normal_stacks)
+        . = ALIGN(4096);
+    } >RAM AT>ROM
+
+    FIRMWARE_RAM_COHERENT ALIGN (4096): {
+        *(tzfw_coherent_mem)
+/*      . += 0x1000;*/
+/* Do we need to make sure this is at least 4k? */
+         . = ALIGN(4096);
+    } >RAM
+
+    __FIRMWARE_ROM_START__ = LOADADDR(FIRMWARE_ROM);
+    __FIRMWARE_ROM_SIZE__  = SIZEOF(FIRMWARE_ROM);
+
+    __FIRMWARE_DATA_START__ = LOADADDR(.data);
+    __FIRMWARE_DATA_SIZE__  = SIZEOF(.data);
+
+    __FIRMWARE_BSS_START__ = LOADADDR(.bss);
+    __FIRMWARE_BSS_SIZE__  = SIZEOF(.bss);
+
+    __FIRMWARE_RAM_STACKS_START__ = LOADADDR(FIRMWARE_RAM_STACKS);
+    __FIRMWARE_RAM_STACKS_SIZE__  = SIZEOF(FIRMWARE_RAM_STACKS);
+    __FIRMWARE_RAM_COHERENT_START__ = LOADADDR(FIRMWARE_RAM_COHERENT);
+    __FIRMWARE_RAM_COHERENT_SIZE__  = SIZEOF(FIRMWARE_RAM_COHERENT);
+}
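
The __*_START__/__*_SIZE__ symbols exported above carry no storage of their own;
C code consumes them by taking the address of an extern declaration. A minimal
sketch of the usual idiom (the zero_bss() helper is illustrative, the real
consumers live in the platform setup code):

    /* Linker-defined symbols: only their addresses are meaningful */
    extern unsigned long __BSS_RAM_START__;
    extern unsigned long __BSS_RAM_STOP__;

    static void zero_bss(void)
    {
            unsigned char *cur = (unsigned char *) &__BSS_RAM_START__;
            unsigned char *end = (unsigned char *) &__BSS_RAM_STOP__;

            while (cur < end)
                    *cur++ = 0;
    }
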
diff --git a/bl1/bl1.mk b/bl1/bl1.mk
new file mode 100644 (file)
index 0000000..b159fd9
--- /dev/null
@@ -0,0 +1,46 @@
+#
+# Copyright (c) 2013, ARM Limited. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# Neither the name of ARM nor the names of its contributors may be used
+# to endorse or promote products derived from this software without specific
+# prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+#
+
+vpath                  %.c     drivers/arm/interconnect/cci-400/ plat/fvp                      \
+                               plat/fvp/${ARCH} drivers/arm/peripherals/pl011 common/ lib/ \
+                               lib/semihosting arch/aarch64/ lib/non-semihosting
+
+vpath                  %.S     arch/${ARCH}/cpu plat/common/aarch64                            \
+                               plat/fvp/${ARCH} lib/semihosting/aarch64                        \
+                               include/ lib/arch/aarch64
+
+BL1_ASM_OBJS           :=      bl1_entrypoint.o bl1_plat_helpers.o cpu_helpers.o
+BL1_C_OBJS             :=      bl1_main.o cci400.o bl1_plat_setup.o bl1_arch_setup.o   \
+                               fvp_common.o fvp_helpers.o early_exceptions.o
+BL1_ENTRY_POINT                :=      reset_handler
+BL1_MAPFILE            :=      bl1.map
+BL1_LINKERFILE         :=      bl1.ld
+
+BL1_OBJS               :=      $(BL1_C_OBJS) $(BL1_ASM_OBJS)
diff --git a/bl1/bl1_main.c b/bl1/bl1_main.c
new file mode 100644 (file)
index 0000000..badda64
--- /dev/null
@@ -0,0 +1,132 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <platform.h>
+#include <semihosting.h>
+#include <bl1.h>
+
+void bl1_arch_next_el_setup(void);
+
+/*******************************************************************************
+ * Function to perform late architectural and platform specific initialization.
+ * It also locates and loads the BL2 raw binary image in the trusted SRAM. Only
+ * called by the primary cpu after a cold boot.
+ * TODO: Add support for alternative image load mechanisms e.g. using virtio/elf
+ * loaders etc.
+ ******************************************************************************/
+void bl1_main(void)
+{
+       unsigned long sctlr_el3 = read_sctlr();
+       unsigned long bl2_base;
+       unsigned int load_type = TOP_LOAD, spsr;
+       meminfo bl1_tzram_layout, *bl2_tzram_layout = 0x0;
+
+       /*
+        * Ensure that MMU/Caches and coherency are turned on
+        */
+       assert(sctlr_el3 & SCTLR_M_BIT);
+       assert(sctlr_el3 & SCTLR_C_BIT);
+       assert(sctlr_el3 & SCTLR_I_BIT);
+
+       /* Perform remaining generic architectural setup from EL3 */
+       bl1_arch_setup();
+
+       /* Perform platform setup in BL1. */
+       bl1_platform_setup();
+
+       /* Announce our arrival */
+       printf(FIRMWARE_WELCOME_STR);
+       printf("Built : %s, %s\n\r", __TIME__, __DATE__);
+
+       /*
+        * Find out how much free trusted ram remains after BL1 load
+        * & load the BL2 image at its top
+        */
+       bl1_tzram_layout = bl1_get_sec_mem_layout();
+       bl2_base = load_image(&bl1_tzram_layout,
+                             (const char *) BL2_IMAGE_NAME,
+                             load_type, BL2_BASE);
+
+       /*
+        * Create a new layout of memory for BL2 as seen by BL1 i.e.
+        * tell it the amount of total and free memory available.
+        * This layout is created at the first free address visible
+        * to BL2. BL2 will read the memory layout before using its
+        * memory for other purposes.
+        */
+       bl2_tzram_layout = (meminfo *) bl1_tzram_layout.free_base;
+       init_bl2_mem_layout(&bl1_tzram_layout,
+                           bl2_tzram_layout,
+                           load_type,
+                           bl2_base);
+
+       if (bl2_base) {
+               bl1_arch_next_el_setup();
+               spsr = make_spsr(MODE_EL1, MODE_SP_ELX, MODE_RW_64);
+               printf("Booting trusted firmware boot loader stage 2\n\r");
+#if DEBUG
+               printf("BL2 address = 0x%llx \n\r", (unsigned long long) bl2_base);
+               printf("BL2 cpsr = 0x%x \n\r", spsr);
+               printf("BL2 memory layout address = 0x%llx \n\r",
+                      (unsigned long long) bl2_tzram_layout);
+#endif
+               run_image(bl2_base, spsr, SECURE, bl2_tzram_layout, 0);
+       }
+
+       /*
+        * TODO: print failure to load BL2 but also add a tzwdog timer
+        * which will reset the system eventually.
+        */
+       printf("Failed to load boot loader stage 2 (BL2) firmware.\n\r");
+       return;
+}
+
+/*******************************************************************************
+ * Temporary function to print the fact that BL2 has done its job and BL31 is
+ * about to be loaded. This is needed as long as printfs cannot be used
+ ******************************************************************************/
+void display_boot_progress(unsigned long entrypoint,
+                          unsigned long spsr,
+                          unsigned long mem_layout,
+                          unsigned long ns_image_info)
+{
+       printf("Booting trusted firmware boot loader stage 3\n\r");
+#if DEBUG
+       printf("BL31 address = 0x%llx \n\r", (unsigned long long) entrypoint);
+       printf("BL31 cpsr = 0x%llx \n\r", (unsigned long long)spsr);
+       printf("BL31 memory layout address = 0x%llx \n\r", (unsigned long long)mem_layout);
+       printf("BL31 non-trusted image info address = 0x%llx\n\r", (unsigned long long)ns_image_info);
+#endif
+       return;
+}
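
bl1_main() hands off to BL2 with an SPSR built by make_spsr(MODE_EL1,
MODE_SP_ELX, MODE_RW_64); the helper itself is outside this hunk. As an
illustration of how such a value is composed for an AArch64, SP_ELx target
(field positions follow the architectural SPSR layout; everything except
MODE_EL_SHIFT is named here for the sketch only):

    /* Sketch only - not the make_spsr() used by the firmware */
    static unsigned int make_spsr_sketch(unsigned int target_el)
    {
            unsigned int spsr = 0;

            spsr |= target_el << MODE_EL_SHIFT;     /* M[3:2] = target EL      */
            spsr |= 1U;                             /* M[0] = 1 -> use SP_ELx  */
            /* M[4] stays 0 for an AArch64 target; DAIF masking omitted here  */

            return spsr;
    }
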
diff --git a/bl2/aarch64/bl2_arch_setup.c b/bl2/aarch64/bl2_arch_setup.c
new file mode 100644 (file)
index 0000000..ed457ee
--- /dev/null
@@ -0,0 +1,42 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch_helpers.h>
+#include <platform.h>
+
+/*******************************************************************************
+ * Place holder function to perform any S-EL1 specific architectural setup. At
+ * the moment there is nothing to do.
+ ******************************************************************************/
+void bl2_arch_setup(void)
+{
+       /* Give access to FP/SIMD registers */
+       write_cpacr(CPACR_EL1_FPEN(CPACR_EL1_FP_TRAP_NONE));
+}
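
CPACR_EL1_FPEN() and CPACR_EL1_FP_TRAP_NONE are defined elsewhere in the tree;
the intent is simply that EL1/EL0 accesses to the FP/SIMD registers do not trap.
Architecturally FPEN sits in CPACR_EL1[21:20] and 0b11 means "no trapping", so a
raw equivalent would look roughly like the sketch below (the two macros here are
illustrative, not the ones used by the patch):

    #define CPACR_FPEN_SHIFT        20      /* CPACR_EL1.FPEN, bits [21:20] */
    #define CPACR_FPEN_NO_TRAP      0x3U    /* 0b11: never trap FP/SIMD     */

    static void enable_fp_simd_sketch(void)
    {
            /* Intended to match the write_cpacr() call in bl2_arch_setup() */
            write_cpacr(CPACR_FPEN_NO_TRAP << CPACR_FPEN_SHIFT);
    }
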
diff --git a/bl2/aarch64/bl2_entrypoint.S b/bl2/aarch64/bl2_entrypoint.S
new file mode 100644 (file)
index 0000000..bade099
--- /dev/null
@@ -0,0 +1,94 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <bl_common.h>
+
+
+       .globl  bl2_entrypoint
+
+
+       .section        entry_code, "ax"; .align 3
+
+
+bl2_entrypoint:; .type bl2_entrypoint, %function
+       /*---------------------------------------------
+        * Store the extents of the tzram available to
+        * BL2 for future use. Use the opcode param to
+        * allow other functions to be implemented if needed.
+        * ---------------------------------------------
+        */
+       mov     x20, x0
+       mov     x21, x1
+       mov     x22, x2
+
+       /* ---------------------------------------------
+        * This is BL2 which is expected to be executed
+        * only by the primary cpu (at least for now).
+        * So, make sure no secondary has lost its way.
+        * ---------------------------------------------
+        */
+       bl      read_mpidr
+       mov     x19, x0
+       bl      platform_is_primary_cpu
+       cbz     x0, _panic
+
+       /* --------------------------------------------
+        * Give ourselves a small coherent stack to
+        * ease the pain of initializing the MMU
+        * --------------------------------------------
+        */
+       mov     x0, x19
+       bl      platform_set_coherent_stack
+
+       /* ---------------------------------------------
+        * Perform early platform setup & platform
+        * specific early arch. setup e.g. mmu setup
+        * ---------------------------------------------
+        */
+       mov     x0, x21
+       mov     x1, x22
+       bl      bl2_early_platform_setup
+       bl      bl2_plat_arch_setup
+
+       /* ---------------------------------------------
+        * Give ourselves a stack allocated in Normal
+        * -IS-WBWA memory
+        * ---------------------------------------------
+        */
+       mov     x0, x19
+       bl      platform_set_stack
+
+       /* ---------------------------------------------
+        * Jump to main function.
+        * ---------------------------------------------
+        */
+       bl      bl2_main
+_panic:
+       b       _panic
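
platform_is_primary_cpu() is supplied by the platform port and is not part of
this hunk. A minimal sketch of such a check, assuming a hypothetical
PRIMARY_CPU_MPIDR value and masking only the MPIDR affinity fields:

    #define MPIDR_AFF_MASK          0xff00ffffffUL  /* Aff3..Aff0 only             */
    #define PRIMARY_CPU_MPIDR       0x0UL           /* hypothetical: cpu0/cluster0 */

    /* Sketch of a platform_is_primary_cpu() style predicate */
    static unsigned int is_primary_cpu_sketch(unsigned long mpidr)
    {
            return (mpidr & MPIDR_AFF_MASK) == PRIMARY_CPU_MPIDR;
    }
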
diff --git a/bl2/bl2.ld.S b/bl2/bl2.ld.S
new file mode 100644 (file)
index 0000000..8a8ed35
--- /dev/null
@@ -0,0 +1,85 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <platform.h>
+
+OUTPUT_FORMAT(PLATFORM_LINKER_FORMAT)
+OUTPUT_ARCH(PLATFORM_LINKER_ARCH)
+
+MEMORY {
+    /* RAM is read/write and Initialised */
+    RAM (rwx): ORIGIN = TZRAM_BASE, LENGTH = TZRAM_SIZE
+}
+
+
+SECTIONS
+{
+    . = BL2_BASE;
+
+    BL2_RO NEXT (4096): {
+        *(entry_code)
+        *(.text .rodata)
+    } >RAM
+
+    BL2_STACKS NEXT (4096): {
+        *(tzfw_normal_stacks)
+    } >RAM
+
+    BL2_COHERENT_RAM NEXT (4096): {
+        *(tzfw_coherent_mem)
+        /*       . += 0x1000;*/
+        /* Do we need to ensure at least 4k here? */
+         . = NEXT(4096);
+    } >RAM
+
+    __BL2_DATA_START__ = .;
+    .bss NEXT (4096): {
+        *(SORT_BY_ALIGNMENT(.bss))
+        *(COMMON)
+    } >RAM
+
+    .data : {
+        *(.data)
+    } >RAM
+    __BL2_DATA_STOP__ = .;
+
+
+    __BL2_RO_BASE__ = LOADADDR(BL2_RO);
+    __BL2_RO_SIZE__ = SIZEOF(BL2_RO);
+
+    __BL2_STACKS_BASE__ = LOADADDR(BL2_STACKS);
+    __BL2_STACKS_SIZE__ = SIZEOF(BL2_STACKS);
+
+    __BL2_COHERENT_RAM_BASE__ = LOADADDR(BL2_COHERENT_RAM);
+    __BL2_COHERENT_RAM_SIZE__ = SIZEOF(BL2_COHERENT_RAM);
+
+    __BL2_RW_BASE__ = __BL2_DATA_START__;
+    __BL2_RW_SIZE__ = __BL2_DATA_STOP__ - __BL2_DATA_START__;
+}
diff --git a/bl2/bl2.mk b/bl2/bl2.mk
new file mode 100644 (file)
index 0000000..212aa92
--- /dev/null
@@ -0,0 +1,48 @@
+#
+# Copyright (c) 2013, ARM Limited. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# Neither the name of ARM nor the names of its contributors may be used
+# to endorse or promote products derived from this software without specific
+# prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+#
+
+vpath                  %.c     common/ drivers/arm/interconnect/cci-400/               \
+                               drivers/arm/peripherals/pl011 common/ lib/              \
+                               plat/fvp plat/fvp/${ARCH} lib/semihosting arch/aarch64/ \
+                               lib/non-semihosting
+
+vpath                  %.S     lib/arch/aarch64                                        \
+                               lib/semihosting/aarch64                                 \
+                               include lib/sync/locks/exclusive
+
+BL2_ASM_OBJS           :=      bl2_entrypoint.o spinlock.o
+BL2_C_OBJS             :=      bl2_main.o bl2_plat_setup.o bl2_arch_setup.o fvp_common.o       \
+                               early_exceptions.o
+BL2_ENTRY_POINT                :=      bl2_entrypoint
+BL2_MAPFILE            :=      bl2.map
+BL2_LINKERFILE         :=      bl2.ld
+
+BL2_OBJS               :=      $(BL2_C_OBJS) $(BL2_ASM_OBJS)
+CFLAGS                 +=      $(DEFINES)
diff --git a/bl2/bl2_main.c b/bl2/bl2_main.c
new file mode 100644 (file)
index 0000000..aae67b4
--- /dev/null
@@ -0,0 +1,143 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <console.h>
+#include <platform.h>
+#include <semihosting.h>
+#include <bl_common.h>
+#include <bl2.h>
+
+/*******************************************************************************
+ * The only thing to do in BL2 is to load further images and pass control to
+ * BL31. The memory occupied by BL2 will be reclaimed by BL3_x stages. BL2 runs
+ * entirely in S-EL1. Since the ARM standard C libraries are not PIC, printf et
+ * al are not available. We rely on assertions to signal error conditions.
+ ******************************************************************************/
+void bl2_main(void)
+{
+       meminfo bl2_tzram_layout, *bl31_tzram_layout;
+       el_change_info *ns_image_info;
+       unsigned long bl31_base, el_status;
+       unsigned int bl2_load, bl31_load, mode;
+
+       /* Perform remaining generic architectural setup in S-EL1 */
+       bl2_arch_setup();
+
+       /* Perform platform setup in BL2 */
+       bl2_platform_setup();
+
+#if defined (__GNUC__)
+       printf("BL2 Built : %s, %s\n\r", __TIME__, __DATE__);
+#endif
+
+       /* Find out how much free trusted ram remains after BL2 load */
+       bl2_tzram_layout = bl2_get_sec_mem_layout();
+
+       /*
+        * Load BL31. BL1 tells BL2 whether it has been TOP or BOTTOM loaded.
+        * To avoid fragmentation of trusted SRAM memory, BL31 is always
+        * loaded opposite to BL2. This allows BL31 to reclaim BL2 memory
+        * while maintaining its free space in one contiguous chunk.
+        */
+       bl2_load = bl2_tzram_layout.attr & LOAD_MASK;
+       assert((bl2_load == TOP_LOAD) || (bl2_load == BOT_LOAD));
+       bl31_load = (bl2_load == TOP_LOAD) ? BOT_LOAD : TOP_LOAD;
+       bl31_base = load_image(&bl2_tzram_layout, BL31_IMAGE_NAME,
+                              bl31_load, BL31_BASE);
+
+       /* Assert if it has not been possible to load BL31 */
+       assert(bl31_base != 0);
+
+       /*
+        * Create a new layout of memory for BL31 as seen by BL2. This
+        * will gobble up all the BL2 memory.
+        */
+       bl31_tzram_layout = (meminfo *) get_el_change_mem_ptr();
+       init_bl31_mem_layout(&bl2_tzram_layout, bl31_tzram_layout, bl31_load);
+
+       /*
+        * BL2 also needs to tell BL31 where the non-trusted software image
+        * has been loaded. Place this info right after the BL31 memory layout
+        */
+       ns_image_info = (el_change_info *) ((unsigned char *) bl31_tzram_layout
+                                             + sizeof(meminfo));
+
+       /*
+        * Assume that the non-secure bootloader has already been
+        * loaded to its platform-specific location.
+        */
+       ns_image_info->entrypoint = plat_get_ns_image_entrypoint();
+
+       /* Figure out what mode we enter the non-secure world in */
+       el_status = read_id_aa64pfr0_el1() >> ID_AA64PFR0_EL2_SHIFT;
+       el_status &= ID_AA64PFR0_ELX_MASK;
+
+       if (el_status)
+               mode = MODE_EL2;
+       else
+               mode = MODE_EL1;
+
+       ns_image_info->spsr = make_spsr(mode, MODE_SP_ELX, MODE_RW_64);
+       ns_image_info->security_state = NON_SECURE;
+       flush_dcache_range((unsigned long) ns_image_info,
+                          sizeof(el_change_info));
+
+       /*
+        * Run BL31 via an SMC to BL1. Information on how to pass control to
+        * the non-trusted software image will be passed to BL31 in x2.
+        */
+       if (bl31_base)
+               run_image(bl31_base,
+                         make_spsr(MODE_EL3, MODE_SP_ELX, MODE_RW_64),
+                         SECURE,
+                         bl31_tzram_layout,
+                         (void *) ns_image_info);
+
+       /* There is no valid reason for run_image() to return */
+       assert(0);
+}
+
+/*******************************************************************************
+ * BL1 has this function to print the fact that BL2 has done its job and BL31 is
+ * about to be loaded. Since BL2 re-uses BL1's exception table, it needs to
+ * define this function as well.
+ * TODO: Remove this function from BL2.
+ ******************************************************************************/
+void display_boot_progress(unsigned long entrypoint,
+                          unsigned long spsr,
+                          unsigned long mem_layout,
+                          unsigned long ns_image_info)
+{
+       return;
+}
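
The BL2-to-BL31 handoff is two structures laid out back to back at the address
returned by get_el_change_mem_ptr(): first the meminfo describing trusted SRAM
for BL31, then the el_change_info for the non-secure image. A condensed sketch
of the placement done in bl2_main() (types come from include/bl_common.h; the
place_ns_image_info_sketch() wrapper is illustrative):

    #include <bl_common.h>          /* meminfo, el_change_info       */
    #include <platform.h>           /* get_el_change_mem_ptr()       */

    /*  +----------------------+  <- get_el_change_mem_ptr()
     *  | meminfo (for BL31)   |
     *  +----------------------+  <- base + sizeof(meminfo)
     *  | el_change_info (NS)  |
     *  +----------------------+
     */
    static el_change_info *place_ns_image_info_sketch(void)
    {
            meminfo *layout = (meminfo *) get_el_change_mem_ptr();

            /* The caller fills entrypoint/spsr/security_state and then cleans
             * the structure to memory with flush_dcache_range(), as above. */
            return (el_change_info *)
                    ((unsigned char *) layout + sizeof(meminfo));
    }
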
diff --git a/bl31/aarch64/bl31_arch_setup.c b/bl31/aarch64/bl31_arch_setup.c
new file mode 100644 (file)
index 0000000..f6fa088
--- /dev/null
@@ -0,0 +1,100 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch_helpers.h>
+#include <platform.h>
+#include <assert.h>
+
+/*******************************************************************************
+ * This duplicates what the primary cpu did after a cold boot in BL1. The same
+ * needs to be done when a cpu is hotplugged in. This function could also
+ * override any EL3 setup done by BL1 as this code resides in rw memory.
+ ******************************************************************************/
+void bl31_arch_setup(void)
+{
+       unsigned long tmp_reg = 0;
+       unsigned int counter_base_frequency;
+
+       /* Enable alignment checks and set the exception endianness to LE */
+       tmp_reg = read_sctlr();
+       tmp_reg |= (SCTLR_A_BIT | SCTLR_SA_BIT);
+       tmp_reg &= ~SCTLR_EE_BIT;
+       write_sctlr(tmp_reg);
+
+       /*
+        * Enable HVCs, allow NS to mask CPSR.A, route FIQs to EL3, set the
+        * next EL to be aarch64
+        */
+       tmp_reg = SCR_RES1_BITS | SCR_RW_BIT | SCR_HCE_BIT | SCR_FIQ_BIT;
+       write_scr(tmp_reg);
+
+       /* Do not trap coprocessor accesses from lower ELs to EL3 */
+       write_cptr_el3(0);
+
+       /* Read the frequency from Frequency modes table */
+       counter_base_frequency = mmio_read_32(SYS_CNTCTL_BASE + CNTFID_OFF);
+       /* The first entry of the frequency modes table must not be 0 */
+       assert(counter_base_frequency != 0);
+
+       /* Program the counter frequency */
+       write_cntfrq_el0(counter_base_frequency);
+       return;
+}
+
+/*******************************************************************************
+ * Detect what is the next Non-Secure EL and setup the required architectural
+ * state
+ ******************************************************************************/
+void bl31_arch_next_el_setup(void) {
+       unsigned long id_aa64pfr0 = read_id_aa64pfr0_el1();
+       unsigned long current_sctlr, next_sctlr;
+       unsigned long el_status;
+       unsigned long scr = read_scr();
+
+       /* Use the same endianness as the current BL */
+       current_sctlr = read_sctlr();
+       next_sctlr = (current_sctlr & SCTLR_EE_BIT);
+
+       /* Find out which EL we are going to */
+       el_status = (id_aa64pfr0 >> ID_AA64PFR0_EL2_SHIFT) & ID_AA64PFR0_ELX_MASK;
+
+       /* Check if EL2 is supported */
+       if (el_status && (scr & SCR_HCE_BIT)) {
+               /* Set SCTLR EL2 */
+               next_sctlr |= SCTLR_EL2_RES1;
+
+               write_sctlr_el2(next_sctlr);
+       } else {
+               /* Set SCTLR Non-Secure EL1 */
+               next_sctlr |= SCTLR_EL1_RES1;
+
+               write_sctlr_el1(next_sctlr);
+       }
+}
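
Both bl2_main() and bl31_arch_next_el_setup() repeat the same ID_AA64PFR0_EL1
probe to decide whether the next lower EL is EL2 or EL1. Factored out, the check
is just a field extraction (names reused from the code above; the helper itself
is only a sketch):

    /* Non-zero when the core implements EL2 */
    static unsigned long el2_implemented_sketch(void)
    {
            return (read_id_aa64pfr0_el1() >> ID_AA64PFR0_EL2_SHIFT) &
                    ID_AA64PFR0_ELX_MASK;
    }
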
diff --git a/bl31/aarch64/bl31_entrypoint.S b/bl31/aarch64/bl31_entrypoint.S
new file mode 100644 (file)
index 0000000..3a850e6
--- /dev/null
@@ -0,0 +1,121 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <bl1.h>
+#include <bl_common.h>
+#include <platform.h>
+
+
+       .globl  bl31_entrypoint
+
+
+       .section        entry_code, "ax"; .align 3
+
+       /* -----------------------------------------------------
+        * bl31_entrypoint() is the cold boot entrypoint,
+        * executed only by the primary cpu.
+        * -----------------------------------------------------
+        */
+
+bl31_entrypoint:; .type bl31_entrypoint, %function
+       /* ---------------------------------------------
+        * BL2 has populated x0,x3,x4 with the opcode
+        * indicating BL31 should be run, memory layout
+        * of the trusted SRAM available to BL31 and
+        * information about running the non-trusted
+        * software already loaded by BL2. Check the
+        * opcode out of paranoia.
+        * ---------------------------------------------
+        */
+       mov     x19, #RUN_IMAGE
+       cmp     x0, x19
+       b.ne    _panic
+       mov     x20, x3
+       mov     x21, x4
+
+       /* ---------------------------------------------
+        * This is BL31 which is expected to be executed
+        * only by the primary cpu (at least for now).
+        * So, make sure no secondary has lost its way.
+        * ---------------------------------------------
+        */
+       bl      read_mpidr
+       mov     x19, x0
+       bl      platform_is_primary_cpu
+       cbz     x0, _panic
+
+       /* --------------------------------------------
+        * Give ourselves a small coherent stack to
+        * ease the pain of initializing the MMU
+        * --------------------------------------------
+        */
+       mov     x0, x19
+       bl      platform_set_coherent_stack
+
+       /* ---------------------------------------------
+        * Perform platform specific early arch. setup
+        * ---------------------------------------------
+        */
+       mov     x0, x20
+       mov     x1, x21
+       mov     x2, x19
+       bl      bl31_early_platform_setup
+       bl      bl31_plat_arch_setup
+
+       /* ---------------------------------------------
+        * Give ourselves a stack allocated in Normal
+        * -IS-WBWA memory
+        * ---------------------------------------------
+        */
+       mov     x0, x19
+       bl      platform_set_stack
+
+       /* ---------------------------------------------
+        * Use SP_EL0 to initialize BL31. It allows us
+        * to jump to the next image without having to
+        * come back here to ensure all of the stack's
+        * been popped out. run_image() is not nice
+        * enough to reset the stack pointer before
+        * handing control to the next stage.
+        * ---------------------------------------------
+        */
+       mov     x0, sp
+       msr     sp_el0, x0
+       msr     spsel, #0
+       isb
+
+       /* ---------------------------------------------
+        * Jump to main function.
+        * ---------------------------------------------
+        */
+       bl      bl31_main
+
+_panic:
+       b       _panic
diff --git a/bl31/aarch64/exception_handlers.c b/bl31/aarch64/exception_handlers.c
new file mode 100644 (file)
index 0000000..860d8eb
--- /dev/null
@@ -0,0 +1,184 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch_helpers.h>
+#include <platform.h>
+#include <bl_common.h>
+#include <bl31.h>
+#include <psci.h>
+#include <assert.h>
+#include <runtime_svc.h>
+
+/*******************************************************************************
+ * This function checks whether this is a valid smc e.g. whether the function
+ * id is correct and that the top words of the arguments are zeroed when
+ * aarch64 makes an aarch32 call etc.
+ ******************************************************************************/
+int validate_smc(gp_regs *regs)
+{
+       unsigned int rw = GET_RW(regs->spsr);
+       unsigned int cc = GET_SMC_CC(regs->x0);
+
+       /* Check if there is a difference in the caller RW and SMC CC */
+       if (rw == cc) {
+
+               /* Check whether the caller has chosen the right func. id */
+               if (cc == SMC_64) {
+                       regs->x0 = SMC_UNK;
+                       return SMC_UNK;
+               }
+
+               /*
+                * Paranoid check to zero the top word of passed args
+                * irrespective of caller's register width.
+                *
+                * TODO: Check if this needed if the caller is aarch32
+                */
+               regs->x0 &= (unsigned int) 0xFFFFFFFF;
+               regs->x1 &= (unsigned int) 0xFFFFFFFF;
+               regs->x2 &= (unsigned int) 0xFFFFFFFF;
+               regs->x3 &= (unsigned int) 0xFFFFFFFF;
+               regs->x4 &= (unsigned int) 0xFFFFFFFF;
+               regs->x5 &= (unsigned int) 0xFFFFFFFF;
+               regs->x6 &= (unsigned int) 0xFFFFFFFF;
+       }
+
+       return 0;
+}
+
+/* TODO: Break down the SMC handler into fast and standard SMC handlers. */
+void smc_handler(unsigned type, unsigned long esr, gp_regs *regs)
+{
+       /* Check if the SMC has been correctly called */
+       if (validate_smc(regs) != 0)
+               return;
+
+       switch (regs->x0) {
+       case PSCI_VERSION:
+               regs->x0 = psci_version();
+               break;
+
+       case PSCI_CPU_OFF:
+               regs->x0 = __psci_cpu_off();
+               break;
+
+       case PSCI_CPU_SUSPEND_AARCH64:
+       case PSCI_CPU_SUSPEND_AARCH32:
+               regs->x0 = __psci_cpu_suspend(regs->x1, regs->x2, regs->x3);
+               break;
+
+       case PSCI_CPU_ON_AARCH64:
+       case PSCI_CPU_ON_AARCH32:
+               regs->x0 = psci_cpu_on(regs->x1, regs->x2, regs->x3);
+               break;
+
+       case PSCI_AFFINITY_INFO_AARCH32:
+       case PSCI_AFFINITY_INFO_AARCH64:
+               regs->x0 = psci_affinity_info(regs->x1, regs->x2);
+               break;
+
+       default:
+               regs->x0 = SMC_UNK;
+       }
+
+       return;
+}
+
+void irq_handler(unsigned type, unsigned long esr, gp_regs *regs)
+{
+       plat_report_exception(type);
+       assert(0);
+}
+
+void fiq_handler(unsigned type, unsigned long esr, gp_regs *regs)
+{
+       plat_report_exception(type);
+       assert(0);
+}
+
+void serror_handler(unsigned type, unsigned long esr, gp_regs *regs)
+{
+       plat_report_exception(type);
+       assert(0);
+}
+
+void sync_exception_handler(unsigned type, gp_regs *regs)
+{
+       unsigned long esr = read_esr();
+       unsigned int ec = EC_BITS(esr);
+
+       switch (ec) {
+
+       case EC_AARCH32_SMC:
+       case EC_AARCH64_SMC:
+               smc_handler(type, esr, regs);
+               break;
+
+       default:
+               plat_report_exception(type);
+               assert(0);
+       }
+       return;
+}
+
+void async_exception_handler(unsigned type, gp_regs *regs)
+{
+       unsigned long esr = read_esr();
+
+       switch (type) {
+
+       case IRQ_SP_EL0:
+       case IRQ_SP_ELX:
+       case IRQ_AARCH64:
+       case IRQ_AARCH32:
+               irq_handler(type, esr, regs);
+               break;
+
+       case FIQ_SP_EL0:
+       case FIQ_SP_ELX:
+       case FIQ_AARCH64:
+       case FIQ_AARCH32:
+               fiq_handler(type, esr, regs);
+               break;
+
+       case SERROR_SP_EL0:
+       case SERROR_SP_ELX:
+       case SERROR_AARCH64:
+       case SERROR_AARCH32:
+               serror_handler(type, esr, regs);
+               break;
+
+       default:
+               plat_report_exception(type);
+               assert(0);
+       }
+
+       return;
+}
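
GET_RW() and GET_SMC_CC() are macros from include/runtime_svc.h and are not part
of this hunk. Per the SMC calling convention, bit 30 of the function id selects
the SMC64 convention, and SPSR.M[4] distinguishes an AArch32 caller; a sketch of
the two extractions (whether the 0/1 encodings match the real macros depends on
their definitions, so treat this as illustrative):

    #define FUNCID_CC_SHIFT         30      /* function id bit 30: 1 -> SMC64  */
    #define SPSR_M4_SHIFT           4       /* SPSR.M[4]: 1 -> AArch32 caller  */

    static unsigned int smc_cc_sketch(unsigned long function_id)
    {
            return (function_id >> FUNCID_CC_SHIFT) & 1U;
    }

    static unsigned int caller_rw_sketch(unsigned long spsr)
    {
            return (spsr >> SPSR_M4_SHIFT) & 1U;
    }
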
diff --git a/bl31/aarch64/runtime_exceptions.S b/bl31/aarch64/runtime_exceptions.S
new file mode 100644 (file)
index 0000000..21976ad
--- /dev/null
@@ -0,0 +1,248 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch.h>
+#include <runtime_svc.h>
+
+       .globl  runtime_exceptions
+
+
+#include <asm_macros.S>
+
+
+       .section        aarch64_code, "ax"; .align 11
+       
+       .align  7
+runtime_exceptions:
+       /* -----------------------------------------------------
+        * Current EL with SP_EL0 : 0x0 - 0x180
+        * -----------------------------------------------------
+        */
+sync_exception_sp_el0:
+       exception_entry save_regs
+       mov     x0, #SYNC_EXCEPTION_SP_EL0
+       mov     x1, sp
+       bl      sync_exception_handler
+       exception_exit restore_regs
+       eret
+
+       .align  7
+irq_sp_el0:
+       exception_entry save_regs
+       mov     x0, #IRQ_SP_EL0
+       mov     x1, sp
+       bl      async_exception_handler
+       exception_exit restore_regs
+       eret
+
+       .align  7
+fiq_sp_el0:
+       exception_entry save_regs
+       mov     x0, #FIQ_SP_EL0
+       mov     x1, sp
+       bl      async_exception_handler
+       exception_exit restore_regs
+       eret
+
+       .align  7
+serror_sp_el0:
+       exception_entry save_regs
+       mov     x0, #SERROR_SP_EL0
+       mov     x1, sp
+       bl      async_exception_handler
+       exception_exit restore_regs
+       eret
+
+       /* -----------------------------------------------------
+        * Current EL with SPx: 0x200 - 0x380
+        * -----------------------------------------------------
+        */
+       .align  7
+sync_exception_sp_elx:
+       exception_entry save_regs
+       mov     x0, #SYNC_EXCEPTION_SP_ELX
+       mov     x1, sp
+       bl      sync_exception_handler
+       exception_exit restore_regs
+       eret
+
+       .align  7
+irq_sp_elx:
+       exception_entry save_regs
+       mov     x0, #IRQ_SP_ELX
+       mov     x1, sp
+       bl      async_exception_handler
+       exception_exit restore_regs
+       eret
+
+       .align  7
+fiq_sp_elx:
+       exception_entry save_regs
+       mov     x0, #FIQ_SP_ELX
+       mov     x1, sp
+       bl      async_exception_handler
+       exception_exit restore_regs
+       eret
+
+       .align  7
+serror_sp_elx:
+       exception_entry save_regs
+       mov     x0, #SERROR_SP_ELX
+       mov     x1, sp
+       bl      async_exception_handler
+       exception_exit restore_regs
+       eret
+
+       /* -----------------------------------------------------
+        * Lower EL using AArch64 : 0x400 - 0x580
+        * -----------------------------------------------------
+        */
+       .align  7
+sync_exception_aarch64:
+       exception_entry save_regs
+       mov     x0, #SYNC_EXCEPTION_AARCH64
+       mov     x1, sp
+       bl      sync_exception_handler
+       exception_exit restore_regs
+       eret
+
+       .align  7
+irq_aarch64:
+       exception_entry save_regs
+       mov     x0, #IRQ_AARCH64
+       mov     x1, sp
+       bl      async_exception_handler
+       exception_exit restore_regs
+       eret
+
+       .align  7
+fiq_aarch64:
+       exception_entry save_regs
+       mov     x0, #FIQ_AARCH64
+       mov     x1, sp
+       bl      async_exception_handler
+       exception_exit restore_regs
+       eret
+
+       .align  7
+serror_aarch64:
+       exception_entry save_regs
+       mov     x0, #SERROR_AARCH64
+       mov     x1, sp
+       bl      async_exception_handler
+       exception_exit restore_regs
+       eret
+
+       /* -----------------------------------------------------
+        * Lower EL using AArch32 : 0x600 - 0x780
+        * -----------------------------------------------------
+        */
+       .align  7
+sync_exception_aarch32:
+       exception_entry save_regs
+       mov     x0, #SYNC_EXCEPTION_AARCH32
+       mov     x1, sp
+       bl      sync_exception_handler
+       exception_exit restore_regs
+       eret
+
+       .align  7
+irq_aarch32:
+       exception_entry save_regs
+       mov     x0, #IRQ_AARCH32
+       mov     x1, sp
+       bl      async_exception_handler
+       exception_exit restore_regs
+       eret
+
+       .align  7
+fiq_aarch32:
+       exception_entry save_regs
+       mov     x0, #FIQ_AARCH32
+       mov     x1, sp
+       bl      async_exception_handler
+       exception_exit restore_regs
+       eret
+
+       .align  7
+serror_aarch32:
+       exception_entry save_regs
+       mov     x0, #SERROR_AARCH32
+       mov     x1, sp
+       bl      async_exception_handler
+       exception_exit restore_regs
+       eret
+
+       .align  7
+
+save_regs:; .type save_regs, %function
+       sub     sp, sp, #0x100
+       stp     x0, x1, [sp, #0x0]
+       stp     x2, x3, [sp, #0x10]
+       stp     x4, x5, [sp, #0x20]
+       stp     x6, x7, [sp, #0x30]
+       stp     x8, x9, [sp, #0x40]
+       stp     x10, x11, [sp, #0x50]
+       stp     x12, x13, [sp, #0x60]
+       stp     x14, x15, [sp, #0x70]
+       stp     x16, x17, [sp, #0x80]
+       stp     x18, x19, [sp, #0x90]
+       stp     x20, x21, [sp, #0xa0]
+       stp     x22, x23, [sp, #0xb0]
+       stp     x24, x25, [sp, #0xc0]
+       stp     x26, x27, [sp, #0xd0]
+       mrs     x0, sp_el0
+       stp     x28, x0, [sp, #0xe0]
+       mrs     x0, spsr_el3
+       str     x0, [sp, #0xf0]
+       ret
+
+
+restore_regs:; .type restore_regs, %function
+       ldr     x9, [sp, #0xf0]
+       msr     spsr_el3, x9
+       ldp     x28, x9, [sp, #0xe0]
+       msr     sp_el0, x9
+       ldp     x26, x27, [sp, #0xd0]
+       ldp     x24, x25, [sp, #0xc0]
+       ldp     x22, x23, [sp, #0xb0]
+       ldp     x20, x21, [sp, #0xa0]
+       ldp     x18, x19, [sp, #0x90]
+       ldp     x16, x17, [sp, #0x80]
+       ldp     x14, x15, [sp, #0x70]
+       ldp     x12, x13, [sp, #0x60]
+       ldp     x10, x11, [sp, #0x50]
+       ldp     x8, x9, [sp, #0x40]
+       ldp     x6, x7, [sp, #0x30]
+       ldp     x4, x5, [sp, #0x20]
+       ldp     x2, x3, [sp, #0x10]
+       ldp     x0, x1, [sp, #0x0]
+       add     sp, sp, #0x100
+       ret
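
save_regs/restore_regs build and consume the gp_regs frame that the C handlers
receive through x1/sp. The authoritative definition of gp_regs lives in the
headers; inferred purely from the stp offsets above, the frame looks like this
(illustrative struct, 0x100 bytes in total):

    typedef struct {
            unsigned long x[29];    /* x0 - x28 at offsets 0x00 - 0xe0 */
            unsigned long sp_el0;   /* offset 0xe8                     */
            unsigned long spsr;     /* offset 0xf0 (SPSR_EL3)          */
            unsigned long unused;   /* pads the frame to 0x100 bytes   */
    } gp_regs_sketch;
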
diff --git a/bl31/bl31.ld.S b/bl31/bl31.ld.S
new file mode 100644 (file)
index 0000000..5ad8648
--- /dev/null
@@ -0,0 +1,88 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <platform.h>
+
+OUTPUT_FORMAT(PLATFORM_LINKER_FORMAT)
+OUTPUT_ARCH(PLATFORM_LINKER_ARCH)
+
+
+MEMORY {
+    /* RAM is read/write and Initialised */
+    RAM (rwx): ORIGIN = TZRAM_BASE, LENGTH = TZRAM_SIZE
+}
+
+
+SECTIONS
+{
+   . = BL31_BASE;
+
+    BL31_RO ALIGN (4096): {
+        *(entry_code)
+        *(.text)
+        *(.rodata)
+    } >RAM
+
+    BL31_STACKS ALIGN (4096): {
+        . += 0x1000;
+        *(tzfw_normal_stacks)
+    } >RAM
+
+    BL31_COHERENT_RAM ALIGN (4096): {
+        *(tzfw_coherent_mem)
+        /*       . += 0x1000;*/
+        /* Do we need to ensure at least 4k here? */
+         . = ALIGN(4096);
+    } >RAM
+
+    __BL31_DATA_START__ = .;
+    .bss  ALIGN (4096): {
+        *(.bss)
+        *(COMMON)
+    } >RAM
+
+    .data : {
+        *(.data)
+    } >RAM
+    __BL31_DATA_STOP__ = .;
+
+
+    __BL31_RO_BASE__ = LOADADDR(BL31_RO);
+    __BL31_RO_SIZE__ = SIZEOF(BL31_RO);
+
+    __BL31_STACKS_BASE__ = LOADADDR(BL31_STACKS);
+    __BL31_STACKS_SIZE__ = SIZEOF(BL31_STACKS);
+
+    __BL31_COHERENT_RAM_BASE__ = LOADADDR(BL31_COHERENT_RAM);
+    __BL31_COHERENT_RAM_SIZE__ = SIZEOF(BL31_COHERENT_RAM);
+
+    __BL31_RW_BASE__ = __BL31_DATA_START__;
+    __BL31_RW_SIZE__ = __BL31_DATA_STOP__ - __BL31_DATA_START__;
+}
diff --git a/bl31/bl31.mk b/bl31/bl31.mk
new file mode 100644 (file)
index 0000000..dcf78bc
--- /dev/null
@@ -0,0 +1,55 @@
+#
+# Copyright (c) 2013, ARM Limited. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+#
+# Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# Neither the name of ARM nor the names of its contributors may be used
+# to endorse or promote products derived from this software without specific
+# prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+#
+
+vpath                  %.c     drivers/arm/interconnect/cci-400/ common/ lib/                  \
+                               drivers/arm/peripherals/pl011 plat/fvp common/psci              \
+                               lib/semihosting arch/aarch64/ lib/non-semihosting               \
+                               lib/sync/locks/bakery/ drivers/power/ arch/system/gic/          \
+                               plat/fvp/aarch64/
+
+vpath                  %.S     lib/arch/aarch64 common/psci                                    \
+                               lib/semihosting/aarch64 include/ plat/fvp/${ARCH}               \
+                               lib/sync/locks/exclusive plat/common/aarch64/                   \
+                               arch/system/gic/${ARCH}
+
+BL31_ASM_OBJS          :=      bl31_entrypoint.o runtime_exceptions.o psci_entry.o             \
+                               spinlock.o gic_v3_sysregs.o fvp_helpers.o
+BL31_C_OBJS            :=      bl31_main.o bl31_plat_setup.o bl31_arch_setup.o \
+                               exception_handlers.o bakery_lock.o cci400.o     \
+                               fvp_common.o fvp_pm.o fvp_pwrc.o fvp_topology.o \
+                               runtime_svc.o gic_v3.o gic_v2.o psci_setup.o    \
+                               psci_common.o psci_afflvl_on.o psci_main.o      \
+                               psci_afflvl_off.o psci_afflvl_suspend.o
+
+BL31_ENTRY_POINT       :=      bl31_entrypoint
+BL31_MAPFILE           :=      bl31.map
+BL31_LINKERFILE                :=      bl31.ld
+
+BL31_OBJS              :=      $(BL31_C_OBJS) $(BL31_ASM_OBJS)
diff --git a/bl31/bl31_main.c b/bl31/bl31_main.c
new file mode 100644 (file)
index 0000000..e8fa2f8
--- /dev/null
@@ -0,0 +1,76 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <console.h>
+#include <platform.h>
+#include <semihosting.h>
+#include <bl_common.h>
+#include <bl31.h>
+#include <runtime_svc.h>
+
+void bl31_arch_next_el_setup(void);
+
+/*******************************************************************************
+ * BL31 is responsible for setting up the runtime services for the primary cpu
+ * before passing control to the bootloader (UEFI) or Linux.
+ ******************************************************************************/
+void bl31_main(void)
+{
+       el_change_info *image_info;
+       unsigned long mpidr = read_mpidr();
+
+       /* Perform remaining generic architectural setup from EL3 */
+       bl31_arch_setup();
+
+       /* Perform platform setup in BL31 */
+       bl31_platform_setup();
+
+#if defined (__GNUC__)
+       printf("BL31 Built : %s, %s\r\n", __TIME__, __DATE__);
+#endif
+
+       /* Initialize the runtime services e.g. psci */
+       runtime_svc_init(mpidr);
+
+       /* Clean caches before re-entering normal world */
+       dcsw_op_all(DCCSW);
+
+       image_info = bl31_get_next_image_info(mpidr);
+       bl31_arch_next_el_setup();
+       change_el(image_info);
+
+       /* There is no valid reason for change_el() to return */
+       assert(0);
+}
diff --git a/common/bl_common.c b/common/bl_common.c
new file mode 100644 (file)
index 0000000..d125786
--- /dev/null
@@ -0,0 +1,516 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <errno.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <console.h>
+#include <platform.h>
+#include <semihosting.h>
+#include <bl_common.h>
+#include <bl1.h>
+
+/***********************************************************
+ * Memory for sharing data while changing exception levels.
+ * Only used by the primary core.
+ **********************************************************/
+unsigned char bl2_el_change_mem_ptr[EL_CHANGE_MEM_SIZE];
+
+unsigned long *get_el_change_mem_ptr(void)
+{
+       return (unsigned long *) bl2_el_change_mem_ptr;
+}
+
+unsigned long page_align(unsigned long value, unsigned dir)
+{
+       unsigned long page_size = 1 << FOUR_KB_SHIFT;
+
+       /* Round the value to a page boundary, up or down as requested */
+       if (value & (page_size - 1)) {
+               value &= ~(page_size - 1);
+               if (dir == UP)
+                       value += page_size;
+       }
+
+       return value;
+}
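+
+/*
+ * Illustration of page_align() with example values: for 4KB pages
+ * (page_size == 1 << FOUR_KB_SHIFT == 0x1000),
+ *     page_align(0x04021234, UP)   == 0x04022000
+ *     page_align(0x04021234, DOWN) == 0x04021000
+ * and an already aligned value is returned unchanged in either direction.
+ */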
+
+static inline unsigned int is_page_aligned(unsigned long addr)
+{
+       const unsigned long page_size = 1 << FOUR_KB_SHIFT;
+
+       return (addr & (page_size - 1)) == 0;
+}
+
+void change_security_state(unsigned int target_security_state)
+{
+       unsigned long scr = read_scr();
+
+       if (target_security_state == SECURE)
+               scr &= ~SCR_NS_BIT;
+       else if (target_security_state == NON_SECURE)
+               scr |= SCR_NS_BIT;
+       else
+               assert(0);
+
+       write_scr(scr);
+}
+
+int drop_el(aapcs64_params *args,
+           unsigned long spsr,
+           unsigned long entrypoint)
+{
+       write_spsr(spsr);
+       write_elr(entrypoint);
+       eret(args->arg0,
+            args->arg1,
+            args->arg2,
+            args->arg3,
+            args->arg4,
+            args->arg5,
+            args->arg6,
+            args->arg7);
+       return -EINVAL;
+}
+
+long raise_el(aapcs64_params *args)
+{
+       return smc(args->arg0,
+                  args->arg1,
+                  args->arg2,
+                  args->arg3,
+                  args->arg4,
+                  args->arg5,
+                  args->arg6,
+                  args->arg7);
+}
+
+/*
+ * TODO: If we are not EL3 then currently we only issue an SMC.
+ * Add support for dropping into EL0 etc. Consider adding support
+ * for switching from S-EL1 to S-EL0/1 etc.
+ */
+long change_el(el_change_info *info)
+{
+       unsigned long current_el = read_current_el();
+
+       if (GET_EL(current_el) == MODE_EL3) {
+               /*
+                * We can go anywhere from EL3. So find where.
+                * TODO: Lots to do if we are going non-secure.
+                * Flip the NS bit. Restore NS registers etc.
+                * Just doing the bare minimum for now.
+                */
+
+               if (info->security_state == NON_SECURE)
+                       change_security_state(info->security_state);
+
+               return drop_el(&info->args, info->spsr, info->entrypoint);
+       } else
+               return raise_el(&info->args);
+}
+
+/* TODO: add a parameter for DAIF. not needed right now */
+unsigned long make_spsr(unsigned long target_el,
+                       unsigned long target_sp,
+                       unsigned long target_rw)
+{
+       unsigned long spsr;
+
+       /* Disable all exceptions & set up the EL */
+       spsr = (DAIF_FIQ_BIT | DAIF_IRQ_BIT | DAIF_ABT_BIT | DAIF_DBG_BIT)
+               << PSR_DAIF_SHIFT;
+       spsr |= PSR_MODE(target_rw, target_el, target_sp);
+
+       return spsr;
+}
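+
+/*
+ * Illustrative use of make_spsr(), assuming that MODE_EL1, MODE_SP_ELX and
+ * MODE_RW_64 constants are defined alongside MODE_EL3: an SPSR for an AArch64
+ * EL1 entry on SP_EL1 with all exceptions masked could be built with
+ *     make_spsr(MODE_EL1, MODE_SP_ELX, MODE_RW_64);
+ */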
+
+/*******************************************************************************
+ * The next two functions are weak definitions. Platform-specific
+ * code can override them if it wishes to.
+ ******************************************************************************/
+
+/*******************************************************************************
+ * Function that takes a memory layout into which BL31 has been either top or
+ * bottom loaded. Using this information, it populates bl31_mem_layout to tell
+ * BL31 how much memory it has access to and how much is available for use. It
+ * does not need the address where BL31 has been loaded as BL31 will reclaim
+ * all the memory used by BL2.
+ * TODO: Revisit if this and init_bl2_mem_layout can be replaced by a single
+ * routine.
+ ******************************************************************************/
+void init_bl31_mem_layout(const meminfo *bl2_mem_layout,
+                         meminfo *bl31_mem_layout,
+                         unsigned int load_type)
+{
+       if (load_type == BOT_LOAD) {
+               /*
+                * ------------                             ^
+                * |   BL2    |                             |
+                * |----------|                 ^           |  BL2
+                * |          |                 | BL2 free  |  total
+                * |          |                 |   size    |  size
+                * |----------| BL2 free base   v           |
+                * |   BL31   |                             |
+                * ------------ BL2 total base              v
+                */
+               unsigned long bl31_size;
+
+               bl31_mem_layout->free_base = bl2_mem_layout->free_base;
+
+               bl31_size = bl2_mem_layout->free_base - bl2_mem_layout->total_base;
+               bl31_mem_layout->free_size = bl2_mem_layout->total_size - bl31_size;
+       } else {
+               /*
+                * ------------                             ^
+                * |   BL31   |                             |
+                * |----------|                 ^           |  BL2
+                * |          |                 | BL2 free  |  total
+                * |          |                 |   size    |  size
+                * |----------| BL2 free base   v           |
+                * |   BL2    |                             |
+                * ------------ BL2 total base              v
+                */
+               unsigned long bl2_size;
+
+               bl31_mem_layout->free_base = bl2_mem_layout->total_base;
+
+               bl2_size = bl2_mem_layout->free_base - bl2_mem_layout->total_base;
+               bl31_mem_layout->free_size = bl2_mem_layout->free_size + bl2_size;
+       }
+
+       bl31_mem_layout->total_base = bl2_mem_layout->total_base;
+       bl31_mem_layout->total_size = bl2_mem_layout->total_size;
+       bl31_mem_layout->attr = load_type;
+
+       flush_dcache_range((unsigned long) bl31_mem_layout, sizeof(meminfo));
+       return;
+}
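+
+/*
+ * Worked example for the BOT_LOAD case above (hypothetical values): if BL2
+ * reports total_base = 0x04000000, total_size = 0x40000 and
+ * free_base = 0x04020000, then bl31_size works out to 0x20000 and BL31 is
+ * left with free_base = 0x04020000 and free_size = 0x20000.
+ */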
+
+/*******************************************************************************
+ * Function that takes a memory layout into which BL2 has been either top or
+ * bottom loaded along with the address where BL2 has been loaded in it. Using
+ * this information, it populates bl2_mem_layout to tell BL2 how much memory
+ * it has access to and how much is available for use.
+ ******************************************************************************/
+void init_bl2_mem_layout(meminfo *bl1_mem_layout,
+                        meminfo *bl2_mem_layout,
+                        unsigned int load_type,
+                        unsigned long bl2_base)
+{
+       unsigned tmp;
+
+       if (load_type == BOT_LOAD) {
+               bl2_mem_layout->total_base = bl2_base;
+               tmp = bl1_mem_layout->free_base - bl2_base;
+               bl2_mem_layout->total_size = bl1_mem_layout->free_size + tmp;
+
+       } else {
+               bl2_mem_layout->total_base = bl1_mem_layout->free_base;
+               tmp = bl1_mem_layout->total_base + bl1_mem_layout->total_size;
+               bl2_mem_layout->total_size = tmp - bl1_mem_layout->free_base;
+       }
+
+       bl2_mem_layout->free_base = bl1_mem_layout->free_base;
+       bl2_mem_layout->free_size = bl1_mem_layout->free_size;
+       bl2_mem_layout->attr = load_type;
+
+       flush_dcache_range((unsigned long) bl2_mem_layout, sizeof(meminfo));
+       return;
+}
+
+static void dump_load_info(unsigned long image_load_addr,
+                          unsigned long image_size,
+                          const meminfo *mem_layout)
+{
+#if DEBUG
+       printf("Trying to load image at address 0x%lx, size = 0x%lx\r\n",
+               image_load_addr, image_size);
+       printf("Current memory layout:\r\n");
+       printf("  total region = [0x%lx, 0x%lx]\r\n", mem_layout->total_base,
+                       mem_layout->total_base + mem_layout->total_size);
+       printf("  free region = [0x%lx, 0x%lx]\r\n", mem_layout->free_base,
+                       mem_layout->free_base + mem_layout->free_size);
+#endif
+}
+
+/*******************************************************************************
+ * Generic function to load an image into the trusted RAM using semihosting
+ * given a name, extents of free memory & whether the image should be loaded at
+ * the bottom or top of the free memory. It updates the memory layout if the
+ * load is successful.
+ ******************************************************************************/
+unsigned long load_image(meminfo *mem_layout,
+                        const char *image_name,
+                        unsigned int load_type,
+                        unsigned long fixed_addr)
+{
+       unsigned long temp_image_base, image_base;
+       long offset;
+       int image_flen;
+
+       /* Find the size of the image */
+       image_flen = semihosting_get_flen(image_name);
+       if (image_flen < 0) {
+               printf("ERROR: Cannot access '%s' file (%i).\r\n",
+                       image_name, image_flen);
+               return 0;
+       }
+
+       /* See if we have enough space */
+       if (image_flen > mem_layout->free_size) {
+               printf("ERROR: Cannot load '%s' file: Not enough space.\r\n",
+                       image_name);
+               dump_load_info(0, image_flen, mem_layout);
+               return 0;
+       }
+
+       switch (load_type) {
+
+       case TOP_LOAD:
+
+         /* Load the image at the top of free memory */
+         temp_image_base = mem_layout->free_base + mem_layout->free_size;
+         temp_image_base -= image_flen;
+
+         /* Page align base address and check whether the image still fits */
+         image_base = page_align(temp_image_base, DOWN);
+         assert(image_base <= temp_image_base);
+
+         if (image_base < mem_layout->free_base) {
+                 printf("ERROR: Cannot load '%s' file: Not enough space.\r\n",
+                         image_name);
+                 dump_load_info(image_base, image_flen, mem_layout);
+                 return 0;
+         }
+
+         /* Calculate the amount of extra memory used due to alignment */
+         offset = temp_image_base - image_base;
+
+         break;
+
+       case BOT_LOAD:
+
+         /* Load the image at the bottom of free memory */
+         temp_image_base = mem_layout->free_base;
+         image_base = page_align(temp_image_base, UP);
+         assert(image_base >= temp_image_base);
+
+         /* Check whether the page-aligned image still fits in free memory */
+         if (image_base + image_flen >
+             mem_layout->free_base + mem_layout->free_size) {
+                 printf("ERROR: Cannot load '%s' file: Not enough space.\r\n",
+                         image_name);
+                 dump_load_info(image_base, image_flen, mem_layout);
+                 return 0;
+         }
+
+         /* Calculate the amount of extra memory used due to alignment */
+         offset = image_base - temp_image_base;
+
+         break;
+
+       default:
+         assert(0);
+
+       }
+
+       /*
+        * Some images must be loaded at a fixed address, not a dynamic one.
+        *
+        * This has been implemented as a hack on top of the existing dynamic
+        * loading mechanism, for the time being.  If the 'fixed_addr' function
+        * argument is different from zero, then it will force the load address.
+        * So we still have this principle of top/bottom loading but the code
+        * determining the load address is bypassed and the load address is
+        * forced to the fixed one.
+        *
+        * This can result in quite a lot of wasted space because we still use
+        * a single meminfo structure to represent the extents of free memory,
+        * whereas we should use some sort of linked list.
+        *
+        * E.g. we want to load BL2 at address 0x04020000, the resulting memory
+        *      layout should look as follows:
+        * ------------ 0x04040000
+        * |          |  <- Free space (1)
+        * |----------|
+        * |   BL2    |
+        * |----------| 0x04020000
+        * |          |  <- Free space (2)
+        * |----------|
+        * |   BL1    |
+        * ------------ 0x04000000
+        *
+        * But in the current hacky implementation, we'll need to specify
+        * whether BL2 is loaded at the top or bottom of the free memory.
+        * E.g. if BL2 is considered as top-loaded, the meminfo structure
+        * will give the following view of the memory, hiding the chunk of
+        * free memory above BL2:
+        * ------------ 0x04040000
+        * |          |
+        * |          |
+        * |   BL2    |
+        * |----------| 0x04020000
+        * |          |  <- Free space (2)
+        * |----------|
+        * |   BL1    |
+        * ------------ 0x04000000
+        */
+       if (fixed_addr != 0) {
+               /* Load the image at the given address. */
+               image_base = fixed_addr;
+
+               /* Check whether the image fits. */
+               if ((image_base < mem_layout->free_base) ||
+                   (image_base + image_flen >
+                      mem_layout->free_base + mem_layout->free_size)) {
+                       printf("ERROR: Cannot load '%s' file: Not enough space.\r\n",
+                               image_name);
+                       dump_load_info(image_base, image_flen, mem_layout);
+                       return 0;
+               }
+
+               /* Check whether the fixed load address is page-aligned. */
+               if (!is_page_aligned(image_base)) {
+                       printf("ERROR: Cannot load '%s' file at unaligned address 0x%lx.\r\n",
+                               image_name, fixed_addr);
+                       return 0;
+               }
+
+               /*
+                * Calculate the amount of extra memory used due to fixed
+                * loading.
+                */
+               if (load_type == TOP_LOAD) {
+                       unsigned long max_addr, space_used;
+                       /*
+                        * ------------ max_addr
+                        * | /wasted/ |                 | offset
+                        * |..........|..............................
+                        * |  image   |                 | image_flen
+                        * |----------| fixed_addr
+                        * |          |
+                        * |          |
+                        * ------------ total_base
+                        */
+                       max_addr = mem_layout->total_base + mem_layout->total_size;
+                       /*
+                        * Compute the amount of memory used by the image.
+                        * Corresponds to all space above the image load
+                        * address.
+                        */
+                       space_used = max_addr - fixed_addr;
+                       /*
+                        * Calculate the amount of wasted memory within the
+                        * amount of memory used by the image.
+                        */
+                       offset = space_used - image_flen;
+               } else /* BOT_LOAD */
+                       /*
+                        * ------------
+                        * |          |
+                        * |          |
+                        * |----------|
+                        * |  image   |
+                        * |..........| fixed_addr
+                        * | /wasted/ |                 | offset
+                        * ------------ total_base
+                        */
+                       offset = fixed_addr - mem_layout->total_base;
+       }
+
+       /* We have enough space so load the image now */
+       image_flen = semihosting_download_file(image_name,
+                                              image_flen,
+                                              (void *) image_base);
+       if (image_flen <= 0) {
+               printf("ERROR: Failed to load '%s' file from semihosting (%i).\r\n",
+                       image_name, image_flen);
+               return 0;
+       }
+
+       /*
+        * The file has been successfully loaded. Flush it out of the data
+        * cache so that the next EL can see it, then update the free memory
+        * data structure.
+        */
+       flush_dcache_range(image_base, image_flen);
+
+       mem_layout->free_size -= image_flen + offset;
+
+       /* Update the base of free memory since it has moved up */
+       if (load_type == BOT_LOAD)
+               mem_layout->free_base += offset + image_flen;
+
+       return image_base;
+}
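+
+/*
+ * Example call (the layout pointer and image name are illustrative only):
+ *     image_base = load_image(&bl2_mem_layout, "bl31.bin", TOP_LOAD, 0);
+ * A return value of zero indicates failure. A fixed_addr of 0 requests
+ * dynamic placement at the top or bottom of free memory as per load_type.
+ */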
+
+/*******************************************************************************
+ * Run a loaded image from the given entry point. This could result in either
+ * dropping into a lower exception level or jumping to a higher exception level.
+ * The only way of doing the latter is through an SMC. In either case, set up the
+ * parameters for the EL change request correctly.
+ ******************************************************************************/
+int run_image(unsigned long entrypoint,
+             unsigned long spsr,
+             unsigned long target_security_state,
+             meminfo *mem_layout,
+             void *data)
+{
+       el_change_info run_image_info;
+       unsigned long current_el = read_current_el();
+
+       /* Tell next EL what we want done */
+       run_image_info.args.arg0 = RUN_IMAGE;
+       run_image_info.entrypoint = entrypoint;
+       run_image_info.spsr = spsr;
+       run_image_info.security_state = target_security_state;
+       run_image_info.next = 0;
+
+       /*
+        * If we are EL3 then only an eret can take us to the desired
+        * exception level. Else for the time being assume that we have
+        * to jump to a higher EL and issue an SMC. Contents of argY
+        * will go into the general purpose register xY e.g. arg0->x0
+        */
+       if (GET_EL(current_el) == MODE_EL3) {
+               run_image_info.args.arg1 = (unsigned long) mem_layout;
+               run_image_info.args.arg2 = (unsigned long) data;
+       } else {
+               run_image_info.args.arg1 = entrypoint;
+               run_image_info.args.arg2 = spsr;
+               run_image_info.args.arg3 = (unsigned long) mem_layout;
+               run_image_info.args.arg4 = (unsigned long) data;
+       }
+
+       return change_el(&run_image_info);
+}
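+
+/*
+ * Example (illustrative values): a stage running at EL3 could hand over to a
+ * non-secure image with
+ *     run_image(ns_entrypoint, spsr, NON_SECURE, &mem_layout, NULL);
+ * in which case RUN_IMAGE ends up in x0 and the mem_layout and data pointers
+ * are passed to the next image in x1 and x2 respectively.
+ */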
diff --git a/common/psci/psci_afflvl_off.c b/common/psci/psci_afflvl_off.c
new file mode 100644 (file)
index 0000000..937ba9d
--- /dev/null
@@ -0,0 +1,265 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <console.h>
+#include <platform.h>
+#include <psci.h>
+#include <psci_private.h>
+
+typedef int (*afflvl_off_handler)(unsigned long, aff_map_node *);
+
+/*******************************************************************************
+ * The next three functions implement a handler for each supported affinity
+ * level which is called when that affinity level is turned off.
+ ******************************************************************************/
+static int psci_afflvl0_off(unsigned long mpidr, aff_map_node *cpu_node)
+{
+       unsigned int index, plat_state;
+       int rc = PSCI_E_SUCCESS;
+       unsigned long sctlr = read_sctlr();
+
+       assert(cpu_node->level == MPIDR_AFFLVL0);
+
+       /*
+        * Generic management: Get the index for clearing any
+        * lingering re-entry information
+        */
+       index = cpu_node->data;
+       memset(&psci_ns_entry_info[index], 0, sizeof(psci_ns_entry_info[index]));
+
+       /*
+        * Arch. management. Perform the necessary steps to flush all
+        * cpu caches.
+        *
+        * TODO: This power down sequence varies across cpus so it needs to be
+        * abstracted out on the basis of the MIDR like in cpu_reset_handler().
+        * Do the bare minimum for the time being. Fix this before porting to
+        * Cortex models.
+        */
+       sctlr &= ~SCTLR_C_BIT;
+       write_sctlr(sctlr);
+
+       /*
+        * CAUTION: This flush to the level of unification makes an assumption
+        * about the cache hierarchy at affinity level 0 (cpu) in the platform.
+        * Ideally the platform should tell psci which levels to flush to exit
+        * coherency.
+        */
+       dcsw_op_louis(DCCISW);
+
+       /*
+        * Plat. management: Perform platform specific actions to turn this
+        * cpu off e.g. exit cpu coherency, program the power controller etc.
+        */
+       if (psci_plat_pm_ops->affinst_off) {
+
+               /* Get the current physical state of this cpu */
+               plat_state = psci_get_aff_phys_state(cpu_node);
+               rc = psci_plat_pm_ops->affinst_off(mpidr,
+                                                  cpu_node->level,
+                                                  plat_state);
+       }
+
+       /*
+        * The only error cpu_off can return is E_DENIED. So check if that's
+        * indeed the case. The caller will simply 'eret' in case of an error.
+        */
+       if (rc != PSCI_E_SUCCESS)
+               assert(rc == PSCI_E_DENIED);
+
+       return rc;
+}
+
+static int psci_afflvl1_off(unsigned long mpidr, aff_map_node *cluster_node)
+{
+       int rc = PSCI_E_SUCCESS;
+       unsigned int plat_state;
+
+       /* Sanity check the cluster level */
+       assert(cluster_node->level == MPIDR_AFFLVL1);
+
+       /*
+        * Keep the physical state of this cluster handy to decide
+        * what action needs to be taken
+        */
+       plat_state = psci_get_aff_phys_state(cluster_node);
+
+       /*
+        * Arch. Management. Flush all levels of caches to PoC if
+        * the cluster is to be shutdown
+        */
+       if (plat_state == PSCI_STATE_OFF)
+               dcsw_op_all(DCCISW);
+
+       /*
+        * Plat. Management. Allow the platform to do its cluster
+        * specific bookkeeping e.g. turn off interconnect coherency,
+        * program the power controller etc.
+        */
+       if (psci_plat_pm_ops->affinst_off)
+               rc = psci_plat_pm_ops->affinst_off(mpidr,
+                                                  cluster_node->level,
+                                                  plat_state);
+
+       return rc;
+}
+
+static int psci_afflvl2_off(unsigned long mpidr, aff_map_node *system_node)
+{
+       int rc = PSCI_E_SUCCESS;
+       unsigned int plat_state;
+
+       /* Cannot go beyond this level */
+       assert(system_node->level == MPIDR_AFFLVL2);
+
+       /*
+        * Keep the physical state of the system handy to decide what
+        * action needs to be taken
+        */
+       plat_state = psci_get_aff_phys_state(system_node);
+
+       /* No arch. or generic bookkeeping to do here currently */
+
+       /*
+        * Plat. Management: Allow the platform to do its bookkeeping
+        * at this affinity level
+        */
+       if (psci_plat_pm_ops->affinst_off)
+               rc = psci_plat_pm_ops->affinst_off(mpidr,
+                                                  system_node->level,
+                                                  plat_state);
+       return rc;
+}
+
+static const afflvl_off_handler psci_afflvl_off_handlers[] = {
+       psci_afflvl0_off,
+       psci_afflvl1_off,
+       psci_afflvl2_off,
+};
+
+/*******************************************************************************
+ * This function implements the core of the processing required to turn a cpu
+ * off. It's assumed that along with turning the cpu off, higher affinity levels
+ * will be turned off as far as possible. We first need to determine the new
+ * state of all the affinity instances in the mpidr corresponding to the target
+ * cpu. Action will be taken on the basis of this new state. To do the state
+ * change we first need to acquire the locks for all the implemented affinity
+ * levels to be able to snapshot the system state. Then we need to start turning
+ * affinity levels off from the lowest to the highest (e.g. a cpu needs to be
+ * off before a cluster can be turned off). To achieve this flow, we start
+ * acquiring the locks from the highest to the lowest affinity level. Once we
+ * reach affinity level 0, we do the state change followed by the actions
+ * corresponding to the new state for affinity level 0. Actions as per the
+ * updated state for higher affinity levels are performed as we unwind back to
+ * the highest affinity level.
+ ******************************************************************************/
+int psci_afflvl_off(unsigned long mpidr,
+                   int cur_afflvl,
+                   int tgt_afflvl)
+{
+       int rc = PSCI_E_SUCCESS, level;
+       unsigned int next_state, prev_state;
+       aff_map_node *aff_node;
+
+       mpidr &= MPIDR_AFFINITY_MASK;
+
+       /*
+        * Some affinity instances at levels between the current and
+        * target levels could be absent in the mpidr. Skip them and
+        * start from the first present instance.
+        */
+       level = psci_get_first_present_afflvl(mpidr,
+                                             cur_afflvl,
+                                             tgt_afflvl,
+                                             &aff_node);
+       /*
+        * Return if there are no more affinity instances beyond this
+        * level to process. Else ensure that the returned affinity
+        * node makes sense.
+        */
+       if (aff_node == NULL)
+               return rc;
+
+       assert(level == aff_node->level);
+
+       /*
+        * This function acquires the lock corresponding to each
+        * affinity level so that state management can be done safely.
+        */
+       bakery_lock_get(mpidr, &aff_node->lock);
+
+       /* Keep the old state and the next one handy */
+       prev_state = psci_get_state(aff_node->state);
+       next_state = PSCI_STATE_OFF;
+
+       /*
+        * We start from the highest affinity level and work our way
+        * downwards to the lowest i.e. MPIDR_AFFLVL0.
+        */
+       if (aff_node->level == tgt_afflvl) {
+               psci_change_state(mpidr,
+                                 tgt_afflvl,
+                                 get_max_afflvl(),
+                                 next_state);
+       } else {
+               rc = psci_afflvl_off(mpidr, level - 1, tgt_afflvl);
+               if (rc != PSCI_E_SUCCESS) {
+                       psci_set_state(aff_node->state, prev_state);
+                       goto exit;
+               }
+       }
+
+       /*
+        * Perform generic, architecture and platform specific
+        * handling
+        */
+       rc = psci_afflvl_off_handlers[level](mpidr, aff_node);
+       if (rc != PSCI_E_SUCCESS) {
+               psci_set_state(aff_node->state, prev_state);
+               goto exit;
+       }
+
+       /*
+        * If all has gone as per plan then this cpu should be
+        * marked as OFF
+        */
+       if (level == MPIDR_AFFLVL0) {
+               next_state = psci_get_state(aff_node->state);
+               assert(next_state == PSCI_STATE_OFF);
+       }
+
+exit:
+       bakery_lock_release(mpidr, &aff_node->lock);
+       return rc;
+}
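+
+/*
+ * Example flow (illustrative): a call such as
+ *     psci_afflvl_off(mpidr, MPIDR_AFFLVL2, MPIDR_AFFLVL0)
+ * acquires the level 2, 1 and 0 locks on the way down the recursion, runs the
+ * level 0 off handler first and then the level 1 and 2 handlers as the
+ * recursion unwinds, with each frame releasing its own lock before returning.
+ */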
diff --git a/common/psci/psci_afflvl_on.c b/common/psci/psci_afflvl_on.c
new file mode 100644 (file)
index 0000000..b0de063
--- /dev/null
@@ -0,0 +1,416 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <console.h>
+#include <platform.h>
+#include <psci.h>
+#include <psci_private.h>
+
+typedef int (*afflvl_on_handler)(unsigned long,
+                                aff_map_node *,
+                                unsigned long,
+                                unsigned long);
+
+/*******************************************************************************
+ * This function checks whether a cpu which has been requested to be turned on
+ * is OFF to begin with.
+ ******************************************************************************/
+static int cpu_on_validate_state(unsigned int state)
+{
+       unsigned int psci_state;
+
+       /* Get the raw psci state */
+       psci_state = psci_get_state(state);
+
+       if (psci_state == PSCI_STATE_ON || psci_state == PSCI_STATE_SUSPEND)
+               return PSCI_E_ALREADY_ON;
+
+       if (psci_state == PSCI_STATE_ON_PENDING)
+               return PSCI_E_ON_PENDING;
+
+       assert(psci_state == PSCI_STATE_OFF);
+       return PSCI_E_SUCCESS;
+}
+
+/*******************************************************************************
+ * Handler routine to turn a cpu on. It takes care of any generic, architectural
+ * or platform specific setup required.
+ * TODO: Split this code across separate handlers for each type of setup?
+ ******************************************************************************/
+static int psci_afflvl0_on(unsigned long target_cpu,
+                          aff_map_node *cpu_node,
+                          unsigned long ns_entrypoint,
+                          unsigned long context_id)
+{
+       unsigned int index, plat_state;
+       unsigned long psci_entrypoint;
+       int rc;
+
+       /* Sanity check to safeguard against data corruption */
+       assert(cpu_node->level == MPIDR_AFFLVL0);
+
+       /*
+        * Generic management: Ensure that the cpu is off to be
+        * turned on
+        */
+       rc = cpu_on_validate_state(cpu_node->state);
+       if (rc != PSCI_E_SUCCESS)
+               return rc;
+
+       /*
+        * Arch. management: Derive the re-entry information for
+        * the non-secure world from the non-secure state from
+        * where this call originated.
+        */
+       index = cpu_node->data;
+       rc = psci_set_ns_entry_info(index, ns_entrypoint, context_id);
+       if (rc != PSCI_E_SUCCESS)
+               return rc;
+
+       /* Set the secure world (EL3) re-entry point after BL1 */
+       psci_entrypoint = (unsigned long) psci_aff_on_finish_entry;
+
+       /*
+        * Plat. management: Give the platform the current state
+        * of the target cpu to allow it to perform the necessary
+        * steps to power on.
+        */
+       if (psci_plat_pm_ops->affinst_on) {
+
+               /* Get the current physical state of this cpu */
+               plat_state = psci_get_aff_phys_state(cpu_node);
+               rc = psci_plat_pm_ops->affinst_on(target_cpu,
+                                                 psci_entrypoint,
+                                                 ns_entrypoint,
+                                                 cpu_node->level,
+                                                 plat_state);
+       }
+
+       return rc;
+}
+
+/*******************************************************************************
+ * Handler routine to turn a cluster on. It takes care of any generic, arch.
+ * or platform specific setup required.
+ * TODO: Split this code across separate handlers for each type of setup?
+ ******************************************************************************/
+static int psci_afflvl1_on(unsigned long target_cpu,
+                          aff_map_node *cluster_node,
+                          unsigned long ns_entrypoint,
+                          unsigned long context_id)
+{
+       int rc = PSCI_E_SUCCESS;
+       unsigned int plat_state;
+       unsigned long psci_entrypoint;
+
+       assert(cluster_node->level == MPIDR_AFFLVL1);
+
+       /*
+        * There is no generic and arch. specific cluster
+        * management required
+        */
+
+       /*
+        * Plat. management: Give the platform the current state
+        * of the target cpu to allow it to perform the necessary
+        * steps to power on.
+        */
+       if (psci_plat_pm_ops->affinst_on) {
+               plat_state = psci_get_aff_phys_state(cluster_node);
+               psci_entrypoint = (unsigned long) psci_aff_on_finish_entry;
+               rc = psci_plat_pm_ops->affinst_on(target_cpu,
+                                                 psci_entrypoint,
+                                                 ns_entrypoint,
+                                                 cluster_node->level,
+                                                 plat_state);
+       }
+
+       return rc;
+}
+
+/*******************************************************************************
+ * Handler routine to turn a cluster of clusters on. It takes care of any
+ * generic, arch. or platform specific setup required.
+ * TODO: Split this code across separate handlers for each type of setup?
+ ******************************************************************************/
+static int psci_afflvl2_on(unsigned long target_cpu,
+                          aff_map_node *system_node,
+                          unsigned long ns_entrypoint,
+                          unsigned long context_id)
+{
+       int rc = PSCI_E_SUCCESS;
+       unsigned int plat_state;
+       unsigned long psci_entrypoint;
+
+       /* Cannot go beyond affinity level 2 in this psci imp. */
+       assert(system_node->level == MPIDR_AFFLVL2);
+
+       /*
+        * There is no generic and arch. specific system management
+        * required
+        */
+
+       /*
+        * Plat. management: Give the platform the current state
+        * of the target cpu to allow it to perform the necessary
+        * steps to power on.
+        */
+       if (psci_plat_pm_ops->affinst_on) {
+               plat_state = psci_get_aff_phys_state(system_node);
+               psci_entrypoint = (unsigned long) psci_aff_on_finish_entry;
+               rc = psci_plat_pm_ops->affinst_on(target_cpu,
+                                                 psci_entrypoint,
+                                                 ns_entrypoint,
+                                                 system_node->level,
+                                                 plat_state);
+       }
+
+       return rc;
+}
+
+/* Private data structure to make these handlers accessible through indexing */
+static const afflvl_on_handler psci_afflvl_on_handlers[] = {
+       psci_afflvl0_on,
+       psci_afflvl1_on,
+       psci_afflvl2_on,
+};
+
+/*******************************************************************************
+ * This function implements the core of the processing required to turn a cpu
+ * on. It avoids recursion to traverse from the lowest to the highest affinity
+ * level, unlike the off/suspend/pon_finisher functions. It does ensure that the
+ * locks are picked in the same order as those routines to avoid deadlocks.
+ * The flow is: take all the locks up to the highest affinity level, call the
+ * handlers for turning each affinity level on and finally change the state of
+ * the affinity levels.
+ ******************************************************************************/
+int psci_afflvl_on(unsigned long target_cpu,
+                  unsigned long entrypoint,
+                  unsigned long context_id,
+                  int current_afflvl,
+                  int target_afflvl)
+{
+       unsigned int prev_state, next_state;
+       int rc = PSCI_E_SUCCESS, level;
+       aff_map_node *aff_node;
+       unsigned long mpidr = read_mpidr() & MPIDR_AFFINITY_MASK;
+
+       /*
+        * This loop acquires the lock corresponding to each
+        * affinity level so that by the time we hit the lowest
+        * affinity level, the system topology is snapshot and
+        * state management can be done safely.
+        */
+       for (level = current_afflvl; level >= target_afflvl; level--) {
+               aff_node = psci_get_aff_map_node(target_cpu, level);
+               if (aff_node)
+                       bakery_lock_get(mpidr, &aff_node->lock);
+       }
+
+       /*
+        * Perform generic, architecture and platform specific
+        * handling
+        */
+       for (level = current_afflvl; level >= target_afflvl; level--) {
+
+               /* Grab the node for each affinity level once again */
+               aff_node = psci_get_aff_map_node(target_cpu, level);
+               if (aff_node) {
+
+                       /* Keep the old state and the next one handy */
+                       prev_state = psci_get_state(aff_node->state);
+                       rc = psci_afflvl_on_handlers[level](target_cpu,
+                                                           aff_node,
+                                                           entrypoint,
+                                                           context_id);
+                       if (rc != PSCI_E_SUCCESS) {
+                               psci_set_state(aff_node->state, prev_state);
+                               goto exit;
+                       }
+               }
+       }
+
+       /*
+        * State management: Update the states since this is the
+        * target affinity level requested.
+        */
+       psci_change_state(target_cpu,
+                         target_afflvl,
+                         get_max_afflvl(),
+                         PSCI_STATE_ON_PENDING);
+
+exit:
+       /*
+        * This loop releases the lock corresponding to each affinity level
+        * in the reverse order. It also checks the final state of the cpu.
+        */
+       for (level = target_afflvl; level <= current_afflvl; level++) {
+               aff_node = psci_get_aff_map_node(target_cpu, level);
+               if (aff_node) {
+                       if (level == MPIDR_AFFLVL0) {
+                               next_state = psci_get_state(aff_node->state);
+                               assert(next_state == PSCI_STATE_ON_PENDING);
+                       }
+                       bakery_lock_release(mpidr, &aff_node->lock);
+               }
+       }
+
+       return rc;
+}
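+
+/*
+ * Example flow (illustrative): a call such as
+ *     psci_afflvl_on(target_cpu, entrypoint, context_id,
+ *                    MPIDR_AFFLVL2, MPIDR_AFFLVL0)
+ * takes the level 2, 1 and 0 locks for the target cpu, invokes the level 2, 1
+ * and 0 on handlers in that order, updates the affinity state to
+ * PSCI_STATE_ON_PENDING and finally releases the locks starting from the
+ * target affinity level.
+ */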
+
+/*******************************************************************************
+ * The following functions finish an earlier affinity power on request. They
+ * are called by the common finisher routine in psci_common.c.
+ ******************************************************************************/
+static unsigned int psci_afflvl0_on_finish(unsigned long mpidr,
+                                          aff_map_node *cpu_node,
+                                          unsigned int prev_state)
+{
+       unsigned int index, plat_state, rc = PSCI_E_SUCCESS;
+
+       assert(cpu_node->level == MPIDR_AFFLVL0);
+
+       /*
+        * Plat. management: Perform the platform specific actions
+        * for this cpu e.g. enabling the gic or zeroing the mailbox
+        * register. The actual state of this cpu has already been
+        * changed.
+        */
+       if (psci_plat_pm_ops->affinst_on_finish) {
+
+               /* Get the previous physical state of this cpu */
+               plat_state = psci_get_phys_state(prev_state);
+               rc = psci_plat_pm_ops->affinst_on_finish(mpidr,
+                                                        cpu_node->level,
+                                                        plat_state);
+               assert(rc == PSCI_E_SUCCESS);
+       }
+
+       /*
+        * Arch. management: Turn on mmu & restore architectural state
+        */
+       write_vbar((unsigned long) runtime_exceptions);
+       enable_mmu();
+
+       /*
+        * All the platform specific actions for turning this cpu
+        * on have completed. Perform enough arch. initialization
+        * to run in the non-secure address space.
+        */
+       bl31_arch_setup();
+
+       /*
+        * Generic management: Now we just need to retrieve the
+        * information that we had stashed away during the cpu_on
+        * call to set this cpu on its way. First get the index
+        * for restoring the re-entry info
+        */
+       index = cpu_node->data;
+       rc = psci_get_ns_entry_info(index);
+
+       /* Clean caches before re-entering normal world */
+       dcsw_op_louis(DCCSW);
+
+       return rc;
+}
+
+static unsigned int psci_afflvl1_on_finish(unsigned long mpidr,
+                                          aff_map_node *cluster_node,
+                                          unsigned int prev_state)
+{
+       unsigned int rc = PSCI_E_SUCCESS;
+       unsigned int plat_state;
+
+       assert(cluster_node->level == MPIDR_AFFLVL1);
+
+       /*
+        * Plat. management: Perform the platform specific actions
+        * as per the old state of the cluster e.g. enabling
+        * coherency at the interconnect depends upon the state with
+        * which this cluster was powered up. If anything goes wrong
+        * then assert as there is no way to recover from this
+        * situation.
+        */
+       if (psci_plat_pm_ops->affinst_on_finish) {
+               plat_state = psci_get_phys_state(prev_state);
+               rc = psci_plat_pm_ops->affinst_on_finish(mpidr,
+                                                        cluster_node->level,
+                                                        plat_state);
+               assert(rc == PSCI_E_SUCCESS);
+       }
+
+       return rc;
+}
+
+
+static unsigned int psci_afflvl2_on_finish(unsigned long mpidr,
+                                          aff_map_node *system_node,
+                                          unsigned int prev_state)
+{
+       int rc = PSCI_E_SUCCESS;
+       unsigned int plat_state;
+
+       /* Cannot go beyond this affinity level */
+       assert(system_node->level == MPIDR_AFFLVL2);
+
+       /*
+        * Currently, there are no architectural actions to perform
+        * at the system level.
+        */
+
+       /*
+        * Plat. management: Perform the platform specific actions
+        * as per the old state of the system e.g. enabling
+        * coherency at the interconnect depends upon the state with
+        * which it was powered up. If anything goes wrong
+        * then assert as there is no way to recover from this
+        * situation.
+        */
+       if (psci_plat_pm_ops->affinst_on_finish) {
+               plat_state = psci_get_phys_state(system_node->state);
+               rc = psci_plat_pm_ops->affinst_on_finish(mpidr,
+                                                        system_node->level,
+                                                        plat_state);
+               assert(rc == PSCI_E_SUCCESS);
+       }
+
+       return rc;
+}
+
+const afflvl_power_on_finisher psci_afflvl_on_finishers[] = {
+       psci_afflvl0_on_finish,
+       psci_afflvl1_on_finish,
+       psci_afflvl2_on_finish,
+};
+
diff --git a/common/psci/psci_afflvl_suspend.c b/common/psci/psci_afflvl_suspend.c
new file mode 100644 (file)
index 0000000..030f15d
--- /dev/null
@@ -0,0 +1,465 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <console.h>
+#include <platform.h>
+#include <psci.h>
+#include <psci_private.h>
+
+typedef int (*afflvl_suspend_handler)(unsigned long,
+                                     aff_map_node *,
+                                     unsigned long,
+                                     unsigned long,
+                                     unsigned int);
+
+/*******************************************************************************
+ * The next three functions implement a handler for each supported affinity
+ * level which is called when that affinity level is about to be suspended.
+ ******************************************************************************/
+static int psci_afflvl0_suspend(unsigned long mpidr,
+                               aff_map_node *cpu_node,
+                               unsigned long ns_entrypoint,
+                               unsigned long context_id,
+                               unsigned int power_state)
+{
+       unsigned int index, plat_state;
+       unsigned long psci_entrypoint, sctlr = read_sctlr();
+       int rc = PSCI_E_SUCCESS;
+
+       /* Sanity check to safeguard against data corruption */
+       assert(cpu_node->level == MPIDR_AFFLVL0);
+
+       /*
+        * Generic management: Store the re-entry information for the
+        * non-secure world
+        */
+       index = cpu_node->data;
+       rc = psci_set_ns_entry_info(index, ns_entrypoint, context_id);
+       if (rc != PSCI_E_SUCCESS)
+               return rc;
+
+       /*
+        * Arch. management: Save the secure context, flush the
+        * L1 caches and exit intra-cluster coherency et al
+        */
+       psci_secure_context[index].sctlr = read_sctlr();
+       psci_secure_context[index].scr = read_scr();
+       psci_secure_context[index].cptr = read_cptr();
+       psci_secure_context[index].cpacr = read_cpacr();
+       psci_secure_context[index].cntfrq = read_cntfrq_el0();
+       psci_secure_context[index].mair = read_mair();
+       psci_secure_context[index].tcr = read_tcr();
+       psci_secure_context[index].ttbr = read_ttbr0();
+       psci_secure_context[index].vbar = read_vbar();
+
+       /* Set the secure world (EL3) re-entry point after BL1 */
+       psci_entrypoint = (unsigned long) psci_aff_suspend_finish_entry;
+
+       /*
+        * Arch. management. Perform the necessary steps to flush all
+        * cpu caches.
+        *
+        * TODO: This power down sequence varies across cpus so it needs to be
+        * abstracted out on the basis of the MIDR like in cpu_reset_handler().
+        * Do the bare minimum for the time being. Fix this before porting to
+        * Cortex models.
+        */
+       sctlr &= ~SCTLR_C_BIT;
+       write_sctlr(sctlr);
+
+       /*
+        * CAUTION: This flush to the level of unification makes an assumption
+        * about the cache hierarchy at affinity level 0 (cpu) in the platform.
+        * Ideally the platform should tell psci which levels to flush to exit
+        * coherency.
+        */
+       dcsw_op_louis(DCCISW);
+
+       /*
+        * Plat. management: Allow the platform to perform the
+        * necessary actions to turn off this cpu e.g. set the
+        * platform defined mailbox with the psci entrypoint,
+        * program the power controller etc.
+        */
+       if (psci_plat_pm_ops->affinst_suspend) {
+               plat_state = psci_get_aff_phys_state(cpu_node);
+               rc = psci_plat_pm_ops->affinst_suspend(mpidr,
+                                                      psci_entrypoint,
+                                                      ns_entrypoint,
+                                                      cpu_node->level,
+                                                      plat_state);
+       }
+
+       return rc;
+}
+
+static int psci_afflvl1_suspend(unsigned long mpidr,
+                               aff_map_node *cluster_node,
+                               unsigned long ns_entrypoint,
+                               unsigned long context_id,
+                               unsigned int power_state)
+{
+       int rc = PSCI_E_SUCCESS;
+       unsigned int plat_state;
+       unsigned long psci_entrypoint;
+
+       /* Sanity check the cluster level */
+       assert(cluster_node->level == MPIDR_AFFLVL1);
+
+       /*
+        * Keep the physical state of this cluster handy to decide
+        * what action needs to be taken
+        */
+       plat_state = psci_get_aff_phys_state(cluster_node);
+
+       /*
+        * Arch. management: Flush all levels of caches to PoC if the
+        * cluster is to be shutdown
+        */
+       if (plat_state == PSCI_STATE_OFF)
+               dcsw_op_all(DCCISW);
+
+       /*
+        * Plat. management: Allow the platform to do its cluster
+        * specific bookkeeping e.g. turn off interconnect coherency,
+        * program the power controller etc.
+        */
+       if (psci_plat_pm_ops->affinst_suspend) {
+
+               /*
+                * Sending the psci entrypoint is currently redundant
+                * beyond affinity level 0 but one never knows what a
+                * platform might do. Also it allows us to keep the
+                * platform handler prototype the same.
+                */
+               psci_entrypoint = (unsigned long) psci_aff_suspend_finish_entry;
+
+               rc = psci_plat_pm_ops->affinst_suspend(mpidr,
+                                                      psci_entrypoint,
+                                                      ns_entrypoint,
+                                                      cluster_node->level,
+                                                      plat_state);
+       }
+
+       return rc;
+}
+
+
+static int psci_afflvl2_suspend(unsigned long mpidr,
+                               aff_map_node *system_node,
+                               unsigned long ns_entrypoint,
+                               unsigned long context_id,
+                               unsigned int power_state)
+{
+       int rc = PSCI_E_SUCCESS;
+       unsigned int plat_state;
+       unsigned long psci_entrypoint;
+
+       /* Cannot go beyond this */
+       assert(system_node->level == MPIDR_AFFLVL2);
+
+       /*
+        * Keep the physical state of the system handy to decide what
+        * action needs to be taken
+        */
+       plat_state = psci_get_aff_phys_state(system_node);
+
+       /*
+        * Plat. management: Allow the platform to do its bookkeeping
+        * at this affinity level
+        */
+       if (psci_plat_pm_ops->affinst_suspend) {
+
+               /*
+                * Sending the psci entrypoint is currently redundant
+                * beyond affinity level 0 but one never knows what a
+                * platform might do. Also it allows us to keep the
+                * platform handler prototype the same.
+                */
+               psci_entrypoint = (unsigned long) psci_aff_suspend_finish_entry;
+
+               rc = psci_plat_pm_ops->affinst_suspend(mpidr,
+                                                      psci_entrypoint,
+                                                      ns_entrypoint,
+                                                      system_node->level,
+                                                      plat_state);
+       }
+
+       return rc;
+}
+
+static const afflvl_suspend_handler psci_afflvl_suspend_handlers[] = {
+       psci_afflvl0_suspend,
+       psci_afflvl1_suspend,
+       psci_afflvl2_suspend,
+};
+
+/*******************************************************************************
+ * This function implements the core of the processing required to suspend a
+ * cpu. It is assumed that along with suspending the cpu, higher affinity levels
+ * will be suspended as far as possible. Suspending a cpu is equivalent to
+ * physically powering it down, but the cpu is still available to the OS for
+ * scheduling. We first need to determine the new state of all the affinity
+ * instances in the mpidr corresponding to the target cpu. Action will be taken
+ * on the basis of this new state. To do the state change we first need to
+ * acquire the locks for all the implemented affinity levels to be able to
+ * snapshot the system state. Then we need to start suspending affinity levels
+ * from the lowest to the highest (e.g. a cpu needs to be suspended before a
+ * cluster can be). To achieve this flow, we start acquiring the locks from the
+ * highest to the lowest affinity level. Once we reach affinity level 0, we do
+ * the state change followed by the actions corresponding to the new state for
+ * affinity level 0. Actions as per the updated state for higher affinity levels
+ * are performed as we unwind back to the highest affinity level.
+ ******************************************************************************/
+int psci_afflvl_suspend(unsigned long mpidr,
+                       unsigned long entrypoint,
+                       unsigned long context_id,
+                       unsigned int power_state,
+                       int cur_afflvl,
+                       int tgt_afflvl)
+{
+       int rc = PSCI_E_SUCCESS, level;
+       unsigned int prev_state, next_state;
+       aff_map_node *aff_node;
+
+       mpidr &= MPIDR_AFFINITY_MASK;
+
+       /*
+        * Some affinity instances at levels between the current and
+        * target levels could be absent in the mpidr. Skip them and
+        * start from the first present instance.
+        */
+       level = psci_get_first_present_afflvl(mpidr,
+                                             cur_afflvl,
+                                             tgt_afflvl,
+                                             &aff_node);
+
+       /*
+        * Return if there are no more affinity instances beyond this
+        * level to process. Else ensure that the returned affinity
+        * node makes sense.
+        */
+       if (aff_node == NULL)
+               return rc;
+
+       assert(level == aff_node->level);
+
+       /*
+        * This function acquires the lock corresponding to each
+        * affinity level so that state management can be done safely.
+        */
+       bakery_lock_get(mpidr, &aff_node->lock);
+
+       /* Keep the old state and the next one handy */
+       prev_state = psci_get_state(aff_node->state);
+       next_state = PSCI_STATE_SUSPEND;
+
+       /*
+        * We start from the highest affinity level and work our way
+        * downwards to the lowest i.e. MPIDR_AFFLVL0.
+        */
+       if (aff_node->level == tgt_afflvl) {
+               psci_change_state(mpidr,
+                                 tgt_afflvl,
+                                 get_max_afflvl(),
+                                 next_state);
+       } else {
+               rc = psci_afflvl_suspend(mpidr,
+                                        entrypoint,
+                                        context_id,
+                                        power_state,
+                                        level - 1,
+                                        tgt_afflvl);
+               if (rc != PSCI_E_SUCCESS) {
+                       psci_set_state(aff_node->state, prev_state);
+                       goto exit;
+               }
+       }
+
+       /*
+        * Perform generic, architecture and platform specific
+        * handling
+        */
+       rc = psci_afflvl_suspend_handlers[level](mpidr,
+                                                aff_node,
+                                                entrypoint,
+                                                context_id,
+                                                power_state);
+       if (rc != PSCI_E_SUCCESS) {
+               psci_set_state(aff_node->state, prev_state);
+               goto exit;
+       }
+
+       /*
+        * If all has gone as per plan then this cpu should be
+        * marked as SUSPENDED
+        */
+       if (level == MPIDR_AFFLVL0) {
+               next_state = psci_get_state(aff_node->state);
+               assert(next_state == PSCI_STATE_SUSPEND);
+       }
+
+exit:
+       bakery_lock_release(mpidr, &aff_node->lock);
+       return rc;
+}
+
+/*******************************************************************************
+ * The following functions finish an earlier affinity suspend request. They
+ * are called by the common finisher routine in psci_common.c.
+ ******************************************************************************/
+static unsigned int psci_afflvl0_suspend_finish(unsigned long mpidr,
+                                               aff_map_node *cpu_node,
+                                               unsigned int prev_state)
+{
+       unsigned int index, plat_state, rc = 0;
+
+       assert(cpu_node->level == MPIDR_AFFLVL0);
+
+       /*
+        * Plat. management: Perform the platform specific actions
+        * before we change the state of the cpu e.g. enabling the
+        * gic or zeroing the mailbox register. If anything goes
+        * wrong then assert as there is no way to recover from this
+        * situation.
+        */
+       if (psci_plat_pm_ops->affinst_suspend_finish) {
+               plat_state = psci_get_phys_state(prev_state);
+               rc = psci_plat_pm_ops->affinst_suspend_finish(mpidr,
+                                                             cpu_node->level,
+                                                             plat_state);
+               assert(rc == PSCI_E_SUCCESS);
+       }
+
+       /* Get the index for restoring the re-entry information */
+       index = cpu_node->data;
+
+       /*
+        * Arch. management: Restore the stashed secure architectural
+        * context in the right order.
+        */
+       write_vbar(psci_secure_context[index].vbar);
+       write_mair(psci_secure_context[index].mair);
+       write_tcr(psci_secure_context[index].tcr);
+       write_ttbr0(psci_secure_context[index].ttbr);
+       write_sctlr(psci_secure_context[index].sctlr);
+
+       /* MMU and coherency should be enabled by now */
+       write_scr(psci_secure_context[index].scr);
+       write_cptr(psci_secure_context[index].cptr);
+       write_cpacr(psci_secure_context[index].cpacr);
+       write_cntfrq_el0(psci_secure_context[index].cntfrq);
+
+       /*
+        * Generic management: Now we just need to retrieve the
+        * information that we had stashed away during the suspend
+        * call to set this cpu on its way.
+        */
+       rc = psci_get_ns_entry_info(index);
+
+       /* Clean caches before re-entering normal world */
+       dcsw_op_louis(DCCSW);
+
+       return rc;
+}
+
+static unsigned int psci_afflvl1_suspend_finish(unsigned long mpidr,
+                                               aff_map_node *cluster_node,
+                                               unsigned int prev_state)
+{
+       unsigned int rc = 0;
+       unsigned int plat_state;
+
+       assert(cluster_node->level == MPIDR_AFFLVL1);
+
+       /*
+        * Plat. management: Perform the platform specific actions
+        * as per the old state of the cluster e.g. enabling
+        * coherency at the interconnect depends upon the state with
+        * which this cluster was powered up. If anything goes wrong
+        * then assert as there is no way to recover from this
+        * situation.
+        */
+       if (psci_plat_pm_ops->affinst_suspend_finish) {
+               plat_state = psci_get_phys_state(prev_state);
+               rc = psci_plat_pm_ops->affinst_suspend_finish(mpidr,
+                                                             cluster_node->level,
+                                                             plat_state);
+               assert(rc == PSCI_E_SUCCESS);
+       }
+
+       return rc;
+}
+
+
+static unsigned int psci_afflvl2_suspend_finish(unsigned long mpidr,
+                                               aff_map_node *system_node,
+                                               unsigned int target_afflvl)
+{
+       int rc = PSCI_E_SUCCESS;
+       unsigned int plat_state;
+
+       /* Cannot go beyond this affinity level */
+       assert(system_node->level == MPIDR_AFFLVL2);
+
+       /*
+        * Currently, there are no architectural actions to perform
+        * at the system level.
+        */
+
+       /*
+        * Plat. management: Perform the platform specific actions
+        * as per the old state of this affinity instance e.g. enabling
+        * coherency at the interconnect depends upon the state with
+        * which it was powered up. If anything goes wrong then assert
+        * as there is no way to recover from this situation.
+        */
+       if (psci_plat_pm_ops->affinst_suspend_finish) {
+               plat_state = psci_get_phys_state(system_node->state);
+               rc = psci_plat_pm_ops->affinst_suspend_finish(mpidr,
+                                                             system_node->level,
+                                                             plat_state);
+               assert(rc == PSCI_E_SUCCESS);
+       }
+
+       return rc;
+}
+
+const afflvl_power_on_finisher psci_afflvl_suspend_finishers[] = {
+       psci_afflvl0_suspend_finish,
+       psci_afflvl1_suspend_finish,
+       psci_afflvl2_suspend_finish,
+};
+
diff --git a/common/psci/psci_common.c b/common/psci/psci_common.c
new file mode 100644 (file)
index 0000000..6b07c53
--- /dev/null
@@ -0,0 +1,520 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <console.h>
+#include <platform.h>
+#include <psci.h>
+#include <psci_private.h>
+
+/*******************************************************************************
+ * Arrays that contain the information needed to resume a cpu's execution when
+ * it is woken out of the suspend or off states. 'psci_ns_einfo_idx' keeps
+ * track of the next free index in the 'psci_ns_entry_info' &
+ * 'psci_secure_context' arrays. Each cpu is allocated a single entry in each
+ * array during startup.
+ ******************************************************************************/
+secure_context psci_secure_context[PSCI_NUM_AFFS];
+ns_entry_info psci_ns_entry_info[PSCI_NUM_AFFS];
+unsigned int psci_ns_einfo_idx;
+
+/*******************************************************************************
+ * Grand array that holds the platform's topology information for state
+ * management of affinity instances. Each node (aff_map_node) in the array
+ * corresponds to an affinity instance e.g. cluster, cpu within an mpidr
+ ******************************************************************************/
+aff_map_node psci_aff_map[PSCI_NUM_AFFS]
+__attribute__ ((section("tzfw_coherent_mem")));
+
+/*******************************************************************************
+ * In a system, a certain number of affinity instances are present at an
+ * affinity level. The cumulative number of instances across all levels is
+ * stored in 'psci_aff_map'. The topology tree has been flattened into this
+ * array. To retrieve nodes, information about the extents of each affinity
+ * level i.e. the start and end indices, needs to be available. 'psci_aff_limits'
+ * stores this information.
+ ******************************************************************************/
+aff_limits_node psci_aff_limits[MPIDR_MAX_AFFLVL + 1];
+
+/*******************************************************************************
+ * Pointer to functions exported by the platform to complete power mgmt. ops
+ ******************************************************************************/
+plat_pm_ops *psci_plat_pm_ops;
+
+/*******************************************************************************
+ * Simple routine to retrieve the maximum affinity level supported by the
+ * platform and check that it makes sense.
+ ******************************************************************************/
+int get_max_afflvl()
+{
+       int aff_lvl;
+
+       aff_lvl = plat_get_max_afflvl();
+       assert(aff_lvl <= MPIDR_MAX_AFFLVL && aff_lvl >= MPIDR_AFFLVL0);
+
+       return aff_lvl;
+}
+
+/*******************************************************************************
+ * Simple routine to set the id of an affinity instance at a given level in the
+ * mpidr.
+ ******************************************************************************/
+unsigned long mpidr_set_aff_inst(unsigned long mpidr,
+                                unsigned char aff_inst,
+                                int aff_lvl)
+{
+       unsigned long aff_shift;
+
+       assert(aff_lvl <= MPIDR_AFFLVL3);
+
+       /*
+        * Decide the number of bits to shift by depending upon
+        * the affinity level
+        */
+       aff_shift = get_afflvl_shift(aff_lvl);
+
+       /* Clear the existing affinity instance & set the new one */
+       mpidr &= ~(MPIDR_AFFLVL_MASK << aff_shift);
+       mpidr |= aff_inst << aff_shift;
+
+       return mpidr;
+}
+
+/*******************************************************************************
+ * Simple routine to determine whether an affinity instance at a given level
+ * in an mpidr exists or not.
+ ******************************************************************************/
+int psci_validate_mpidr(unsigned long mpidr, int level)
+{
+       aff_map_node *node;
+
+       node = psci_get_aff_map_node(mpidr, level);
+       if (node && (node->state & PSCI_AFF_PRESENT))
+               return PSCI_E_SUCCESS;
+       else
+               return PSCI_E_INVALID_PARAMS;
+}
+
+/*******************************************************************************
+ * Simple routine to determine the first affinity level instance that is present
+ * between the start and end affinity levels. This helps to skip handling of
+ * absent affinity levels while performing psci operations.
+ * The start level can be higher or lower than the end level, depending upon
+ * whether this routine is expected to search top down or bottom up.
+ ******************************************************************************/
+int psci_get_first_present_afflvl(unsigned long mpidr,
+                                 int start_afflvl,
+                                 int end_afflvl,
+                                 aff_map_node **node)
+{
+       int level;
+
+       /* Check whether we have to search up or down */
+       if (start_afflvl <= end_afflvl) {
+               for (level = start_afflvl; level <= end_afflvl; level++) {
+                       *node = psci_get_aff_map_node(mpidr, level);
+                       if (*node && ((*node)->state & PSCI_AFF_PRESENT))
+                               break;
+               }
+       } else {
+               for (level = start_afflvl; level >= end_afflvl; level--) {
+                       *node = psci_get_aff_map_node(mpidr, level);
+                       if (*node && ((*node)->state & PSCI_AFF_PRESENT))
+                               break;
+               }
+       }
+
+       return level;
+}
+
+/*******************************************************************************
+ * Recursively change the affinity state between the current and target affinity
+ * levels. The target state matters only if we are starting from affinity level
+ * 0 i.e. a cpu otherwise the state depends upon the state of the lower affinity
+ * levels.
+ ******************************************************************************/
+int psci_change_state(unsigned long mpidr,
+                     int cur_afflvl,
+                     int tgt_afflvl,
+                     unsigned int tgt_state)
+{
+       int rc = PSCI_E_SUCCESS;
+       unsigned int state;
+       aff_map_node *aff_node;
+
+       /* Sanity check the affinity levels */
+       assert(tgt_afflvl >= cur_afflvl);
+
+       aff_node = psci_get_aff_map_node(mpidr, cur_afflvl);
+       assert(aff_node);
+
+       /* TODO: Check whether the affinity level is present or absent*/
+
+       if (cur_afflvl == MPIDR_AFFLVL0) {
+               psci_set_state(aff_node->state, tgt_state);
+       } else {
+               state = psci_calculate_affinity_state(aff_node);
+               psci_set_state(aff_node->state, state);
+       }
+
+       if (cur_afflvl != tgt_afflvl)
+               psci_change_state(mpidr, cur_afflvl + 1, tgt_afflvl, tgt_state);
+
+       return rc;
+}
+
+/*******************************************************************************
+ * This routine does the heavy lifting for psci_change_state(). It examines the
+ * state of each affinity instance at the next lower affinity level and decides
+ * its final state accordingly. If any lower affinity instance is ON then the
+ * higher affinity instance is ON. If all the lower affinity instances are OFF
+ * then the higher affinity instance is OFF. If at least one lower affinity
+ * instance is SUSPENDED then the higher affinity instance is SUSPENDED. If at
+ * least one lower affinity instance is ON_PENDING (and none is SUSPENDED) then
+ * the higher affinity instance is ON_PENDING as well.
+ ******************************************************************************/
+unsigned int psci_calculate_affinity_state(aff_map_node *aff_node)
+{
+       int ctr;
+       unsigned int aff_count, hi_aff_state;
+       unsigned long tempidr;
+       aff_map_node *lo_aff_node;
+
+       /* Cannot calculate lowest affinity state. It's simply assigned */
+       assert(aff_node->level > MPIDR_AFFLVL0);
+
+       /*
+        * Find the number of affinity instances at level X-1 e.g. number of
+        * cpus in a cluster. The level X state depends upon the state of each
+        * instance at level X-1
+        */
+       hi_aff_state = PSCI_STATE_OFF;
+       aff_count = plat_get_aff_count(aff_node->level - 1, aff_node->mpidr);
+       for (ctr = 0; ctr < aff_count; ctr++) {
+
+               /*
+                * Create a mpidr for each lower affinity level (X-1). Use their
+                * states to influence the higher affinity state (X).
+                */
+               tempidr = mpidr_set_aff_inst(aff_node->mpidr,
+                                            ctr,
+                                            aff_node->level - 1);
+               lo_aff_node = psci_get_aff_map_node(tempidr,
+                                                   aff_node->level - 1);
+               assert(lo_aff_node);
+
+               /* Continue only if the cpu exists within the cluster */
+               if (!(lo_aff_node->state & PSCI_AFF_PRESENT))
+                       continue;
+
+               switch (psci_get_state(lo_aff_node->state)) {
+
+               /*
+                * If any lower affinity is on within the cluster, then
+                * the higher affinity is on.
+                */
+               case PSCI_STATE_ON:
+                       return PSCI_STATE_ON;
+
+               /*
+                * At least one X-1 needs to be suspended for X to be suspended
+                * but it's effectively on for the affinity_info call.
+                * SUSPEND > ON_PENDING > OFF.
+                */
+               case PSCI_STATE_SUSPEND:
+                       hi_aff_state = PSCI_STATE_SUSPEND;
+                       continue;
+
+               /*
+                * At least one X-1 needs to be on_pending & the rest off for X
+                * to be on_pending. ON_PENDING > OFF.
+                */
+               case PSCI_STATE_ON_PENDING:
+                       if (hi_aff_state != PSCI_STATE_SUSPEND)
+                               hi_aff_state = PSCI_STATE_ON_PENDING;
+                       continue;
+
+               /* Higher affinity is off if all lower affinities are off. */
+               case PSCI_STATE_OFF:
+                       continue;
+
+               default:
+                       assert(0);
+               }
+       }
+
+       return hi_aff_state;
+}
+
+/*******************************************************************************
+ * This function retrieves all the stashed information needed to correctly
+ * resume a cpu's execution in the non-secure state after it has been physically
+ * powered on i.e. turned ON or resumed from SUSPEND
+ ******************************************************************************/
+unsigned int psci_get_ns_entry_info(unsigned int index)
+{
+       unsigned long sctlr = 0, scr, el_status, id_aa64pfr0;
+
+       scr = read_scr();
+
+       /* Switch to the non-secure view of the registers */
+       write_scr(scr | SCR_NS_BIT);
+
+       /* Find out which EL we are going to */
+       id_aa64pfr0 = read_id_aa64pfr0_el1();
+       el_status = (id_aa64pfr0 >> ID_AA64PFR0_EL2_SHIFT) &
+               ID_AA64PFR0_ELX_MASK;
+
+       /* Restore endianness */
+       if (psci_ns_entry_info[index].sctlr & SCTLR_EE_BIT)
+               sctlr |= SCTLR_EE_BIT;
+       else
+               sctlr &= ~SCTLR_EE_BIT;
+
+       /* Turn off MMU and Caching */
+       sctlr &= ~(SCTLR_M_BIT | SCTLR_C_BIT);
+
+       /* Set the register width */
+       if (psci_ns_entry_info[index].scr & SCR_RW_BIT)
+               scr |= SCR_RW_BIT;
+       else
+               scr &= ~SCR_RW_BIT;
+
+       scr |= SCR_NS_BIT;
+
+       if (el_status)
+               write_sctlr_el2(sctlr);
+       else
+               write_sctlr_el1(sctlr);
+
+       /* Fulfill the cpu_on entry reqs. as per the psci spec */
+       write_scr(scr);
+       write_spsr(psci_ns_entry_info[index].eret_info.spsr);
+       write_elr(psci_ns_entry_info[index].eret_info.entrypoint);
+
+       return psci_ns_entry_info[index].context_id;
+}
+
+/*******************************************************************************
+ * This function retrieves and stashes all the information needed to correctly
+ * resume a cpu's execution in the non-secure state after it has been physically
+ * powered on i.e. turned ON or resumed from SUSPEND. This is done prior to
+ * turning it on or before suspending it.
+ ******************************************************************************/
+int psci_set_ns_entry_info(unsigned int index,
+                          unsigned long entrypoint,
+                          unsigned long context_id)
+{
+       int rc = PSCI_E_SUCCESS;
+       unsigned int rw, mode, ee, spsr = 0;
+       unsigned long id_aa64pfr0 = read_id_aa64pfr0_el1(), scr = read_scr();
+       unsigned long el_status;
+
+       /* Figure out which mode we enter the non-secure world in */
+       el_status = (id_aa64pfr0 >> ID_AA64PFR0_EL2_SHIFT) &
+               ID_AA64PFR0_ELX_MASK;
+
+       /*
+        * Figure out whether the cpu enters the non-secure address space
+        * in aarch32 or aarch64
+        */
+       rw = scr & SCR_RW_BIT;
+       if (rw) {
+
+               /*
+                * Check whether a Thumb entry point has been provided for an
+                * aarch64 EL
+                */
+               if (entrypoint & 0x1)
+                       return PSCI_E_INVALID_PARAMS;
+
+               if (el_status && (scr & SCR_HCE_BIT)) {
+                       mode = MODE_EL2;
+                       ee = read_sctlr_el2() & SCTLR_EE_BIT;
+               } else {
+                       mode = MODE_EL1;
+                       ee = read_sctlr_el1() & SCTLR_EE_BIT;
+               }
+
+               spsr = DAIF_DBG_BIT | DAIF_ABT_BIT;
+               spsr |= DAIF_IRQ_BIT | DAIF_FIQ_BIT;
+               spsr <<= PSR_DAIF_SHIFT;
+               spsr |= make_spsr(mode, MODE_SP_ELX, !rw);
+
+               psci_ns_entry_info[index].sctlr |= ee;
+               psci_ns_entry_info[index].scr |= SCR_RW_BIT;
+       } else {
+
+               /* Check whether aarch32 has to be entered in Thumb mode */
+               if (entrypoint & 0x1)
+                       spsr = SPSR32_T_BIT;
+
+               if (el_status && (scr & SCR_HCE_BIT)) {
+                       mode = AARCH32_MODE_HYP;
+                       ee = read_sctlr_el2() & SCTLR_EE_BIT;
+               } else {
+                       mode = AARCH32_MODE_SVC;
+                       ee = read_sctlr_el1() & SCTLR_EE_BIT;
+               }
+
+               /*
+                * TODO: Choose async. exception bits if HYP mode is not
+                * implemented according to the values of SCR.{AW, FW} bits
+                */
+               spsr |= DAIF_ABT_BIT | DAIF_IRQ_BIT | DAIF_FIQ_BIT;
+               spsr <<= PSR_DAIF_SHIFT;
+               if (ee)
+                       spsr |= SPSR32_EE_BIT;
+               spsr |= mode;
+
+               /* Ensure that the CPSR.E and SCTLR.EE bits match */
+               psci_ns_entry_info[index].sctlr |= ee;
+               psci_ns_entry_info[index].scr &= ~SCR_RW_BIT;
+       }
+
+       psci_ns_entry_info[index].eret_info.entrypoint = entrypoint;
+       psci_ns_entry_info[index].eret_info.spsr = spsr;
+       psci_ns_entry_info[index].context_id = context_id;
+
+       return rc;
+}
+
+/*******************************************************************************
+ * An affinity level could be on, on_pending, suspended or off. These are the
+ * logical states it can be in. Physically it is either off or on. When it is in
+ * the on_pending state it is about to be turned on, but it is not possible to
+ * tell whether that has actually happened yet. So we err on the side of
+ * caution & treat the affinity level as being turned off.
+ ******************************************************************************/
+inline unsigned int psci_get_phys_state(unsigned int aff_state)
+{
+       return (aff_state != PSCI_STATE_ON ? PSCI_STATE_OFF : PSCI_STATE_ON);
+}
+
+unsigned int psci_get_aff_phys_state(aff_map_node *aff_node)
+{
+       unsigned int aff_state;
+
+       aff_state = psci_get_state(aff_node->state);
+       return psci_get_phys_state(aff_state);
+}
+
+/*******************************************************************************
+ * Generic handler which is called when a cpu is physically powered on. It
+ * recurses through all the affinity levels performing generic, architectural,
+ * platform setup and state management e.g. for a cluster that's been powered
+ * on, it will call the platform specific code which will enable coherency at
+ * the interconnect level. For a cpu it could mean turning on the MMU etc.
+ *
+ * This function traverses from the lowest to the highest affinity level
+ * implemented by the platform. Since it's recursive, for each call the
+ * 'cur_afflvl' & 'tgt_afflvl' parameters keep track of which level we are at
+ * and which level we need to get to respectively. Locks are picked up along the
+ * way so that when the lowest affinity level is hit, state management can be
+ * safely done. Prior to this, each affinity level does its bookkeeping as per
+ * the state out of reset.
+ *
+ * CAUTION: This function is called with coherent stacks so that coherency and
+ * the mmu can be turned on safely.
+ ******************************************************************************/
+unsigned int psci_afflvl_power_on_finish(unsigned long mpidr,
+                                        int cur_afflvl,
+                                        int tgt_afflvl,
+                                        afflvl_power_on_finisher *pon_handlers)
+{
+       unsigned int prev_state, next_state, rc = PSCI_E_SUCCESS;
+       aff_map_node *aff_node;
+       int level;
+
+       mpidr &= MPIDR_AFFINITY_MASK;
+
+       /*
+        * Some affinity instances at levels between the current and
+        * target levels could be absent in the mpidr. Skip them and
+        * start from the first present instance.
+        */
+       level = psci_get_first_present_afflvl(mpidr,
+                                             cur_afflvl,
+                                             tgt_afflvl,
+                                             &aff_node);
+       /*
+        * Return if there are no more affinity instances beyond this
+        * level to process. Else ensure that the returned affinity
+        * node makes sense.
+        */
+       if (aff_node == NULL)
+               return rc;
+
+       assert(level == aff_node->level);
+
+       /*
+        * This function acquires the lock corresponding to each
+        * affinity level so that by the time we hit the highest
+        * affinity level, the system topology is snapshot and state
+        * management can be done safely.
+        */
+       bakery_lock_get(mpidr, &aff_node->lock);
+
+       /* Keep the old and new state handy */
+       prev_state = psci_get_state(aff_node->state);
+       next_state = PSCI_STATE_ON;
+
+       /* Perform generic, architecture and platform specific handling */
+       rc = pon_handlers[level](mpidr, aff_node, prev_state);
+       if (rc != PSCI_E_SUCCESS) {
+               psci_set_state(aff_node->state, prev_state);
+               goto exit;
+       }
+
+       /*
+        * State management: Update the states if this is the highest
+        * affinity level requested else pass the job to the next level.
+        */
+       if (aff_node->level != tgt_afflvl) {
+               rc = psci_afflvl_power_on_finish(mpidr,
+                                                level + 1,
+                                                tgt_afflvl,
+                                                pon_handlers);
+       } else {
+               psci_change_state(mpidr, MPIDR_AFFLVL0, tgt_afflvl, next_state);
+       }
+
+       /* If all has gone as per plan then this cpu should be marked as ON */
+       if (level == MPIDR_AFFLVL0) {
+               next_state = psci_get_state(aff_node->state);
+               assert(next_state == PSCI_STATE_ON);
+       }
+
+exit:
+       bakery_lock_release(mpidr, &aff_node->lock);
+       return rc;
+}
diff --git a/common/psci/psci_entry.S b/common/psci/psci_entry.S
new file mode 100644 (file)
index 0000000..4ea74c5
--- /dev/null
@@ -0,0 +1,159 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch.h>
+#include <platform.h>
+#include <psci.h>
+#include <psci_private.h>
+#include <asm_macros.S>
+
+       .globl  psci_aff_on_finish_entry
+       .globl  psci_aff_suspend_finish_entry
+       .globl  __psci_cpu_off
+       .globl  __psci_cpu_suspend
+
+       .section        platform_code, "ax"; .align 3
+
+       /* -----------------------------------------------------
+        * This cpu has been physically powered up. Depending
+        * upon whether it was resumed from suspend or simply
+        * turned on, call the common power on finisher with
+        * the handlers (chosen depending upon original state).
+        * For ease, the finisher is called with coherent
+        * stacks. This allows the cluster/cpu finishers to
+        * enter coherency and enable the mmu without running
+        * into issues. We switch back to normal stacks once
+        * all this is done.
+        * -----------------------------------------------------
+        */
+psci_aff_on_finish_entry:
+       adr     x23, psci_afflvl_on_finishers
+       b       psci_aff_common_finish_entry
+
+psci_aff_suspend_finish_entry:
+       adr     x23, psci_afflvl_suspend_finishers
+
+psci_aff_common_finish_entry:
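+       /* ---------------------------------------------
+        * Stash the generic finisher in x22 (x23 already
+        * holds the level specific finisher table), then
+        * retrieve this cpu's mpidr and switch to its
+        * coherent stack.
+        * ---------------------------------------------
+        */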
+       adr     x22, psci_afflvl_power_on_finish
+       bl      read_mpidr
+       mov     x19, x0
+       bl      platform_set_coherent_stack
+
+       /* ---------------------------------------------
+        * Call the finishers starting from affinity
+        * level 0.
+        * ---------------------------------------------
+        */
+       bl      get_max_afflvl
+       mov     x3, x23
+       mov     x2, x0
+       mov     x0, x19
+       mov     x1, #MPIDR_AFFLVL0
+       blr     x22
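+
+       /* --------------------------------------------
+        * Stash the context id returned by the power
+        * on finisher; it is moved back into x0 below
+        * before returning to the non-secure world.
+        * --------------------------------------------
+        */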
+       mov     x21, x0
+
+       /* --------------------------------------------
+        * Give ourselves a stack allocated in Normal
+        * -IS-WBWA memory
+        * --------------------------------------------
+        */
+       mov     x0, x19
+       bl      platform_set_stack
+
+       /* --------------------------------------------
+        * Restore the context id value
+        * --------------------------------------------
+        */
+       mov     x0, x21
+
+       /* --------------------------------------------
+        * Jump back to the non-secure world assuming
+        * that the elr and spsr setup has been done
+        * by the finishers
+        * --------------------------------------------
+        */
+       eret
+_panic:
+       b       _panic
+
+       /* -----------------------------------------------------
+        * The following two stubs give the calling cpu a
+        * coherent stack to allow flushing of caches without
+        * suffering from stack coherency issues
+        * -----------------------------------------------------
+        */
+__psci_cpu_off:
+       func_prologue
+       sub     sp, sp, #0x10
+       stp     x19, x20, [sp, #0]
+       mov     x19, sp
+       bl      read_mpidr
+       bl      platform_set_coherent_stack
+       bl      psci_cpu_off
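+       /* Power down here on success, else restore the stack and return */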
+       mov     x1, #PSCI_E_SUCCESS
+       cmp     x0, x1
+       b.eq    final_wfi
+       mov     sp, x19
+       ldp     x19, x20, [sp,#0]
+       add     sp, sp, #0x10
+       func_epilogue
+       ret
+
+__psci_cpu_suspend:
+       func_prologue
+       sub     sp, sp, #0x20
+       stp     x19, x20, [sp, #0]
+       stp     x21, x22, [sp, #0x10]
+       mov     x19, sp
+       mov     x20, x0
+       mov     x21, x1
+       mov     x22, x2
+       bl      read_mpidr
+       bl      platform_set_coherent_stack
+       mov     x0, x20
+       mov     x1, x21
+       mov     x2, x22
+       bl      psci_cpu_suspend
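+       /* Power down here on success, else restore the stack and return */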
+       mov     x1, #PSCI_E_SUCCESS
+       cmp     x0, x1
+       b.eq    final_wfi
+       mov     sp, x19
+       ldp     x21, x22, [sp,#0x10]
+       ldp     x19, x20, [sp,#0]
+       add     sp, sp, #0x20
+       func_epilogue
+       ret
+
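+       /* -----------------------------------------------------
+        * Wait here for the power controller to physically
+        * power this cpu down. Execution is not expected to
+        * proceed past the wfi; spin if it does.
+        * -----------------------------------------------------
+        */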
+final_wfi:
+       dsb     sy
+       wfi
+wfi_spill:
+       b       wfi_spill
+
diff --git a/common/psci/psci_main.c b/common/psci/psci_main.c
new file mode 100644 (file)
index 0000000..eca2dec
--- /dev/null
@@ -0,0 +1,190 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <console.h>
+#include <platform.h>
+#include <psci_private.h>
+
+/*******************************************************************************
+ * PSCI frontend API for servicing SMCs. Described in the PSCI spec.
+ ******************************************************************************/
+int psci_cpu_on(unsigned long target_cpu,
+               unsigned long entrypoint,
+               unsigned long context_id)
+{
+       int rc;
+       unsigned int start_afflvl, target_afflvl;
+
+       /* Determine if the cpu exists or not */
+       rc = psci_validate_mpidr(target_cpu, MPIDR_AFFLVL0);
+       if (rc != PSCI_E_SUCCESS) {
+               goto exit;
+       }
+
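+       /*
+        * Request the cpu be turned on, starting the traversal
+        * at the highest implemented affinity level and ending
+        * at the cpu itself (affinity level 0).
+        */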
+       start_afflvl = get_max_afflvl();
+       target_afflvl = MPIDR_AFFLVL0;
+       rc = psci_afflvl_on(target_cpu,
+                           entrypoint,
+                           context_id,
+                           start_afflvl,
+                           target_afflvl);
+
+exit:
+       return rc;
+}
+
+unsigned int psci_version(void)
+{
+       return PSCI_MAJOR_VER | PSCI_MINOR_VER;
+}
+
+int psci_cpu_suspend(unsigned int power_state,
+                    unsigned long entrypoint,
+                    unsigned long context_id)
+{
+       int rc;
+       unsigned long mpidr;
+       unsigned int tgt_afflvl, pstate_type;
+
+       /* TODO: Standby states are not supported at the moment */
+       pstate_type = psci_get_pstate_type(power_state);
+       if (pstate_type == 0) {
+               rc = PSCI_E_INVALID_PARAMS;
+               goto exit;
+       }
+
+       /* Sanity check the requested state */
+       tgt_afflvl = psci_get_pstate_afflvl(power_state);
+       if (tgt_afflvl > MPIDR_MAX_AFFLVL) {
+               rc = PSCI_E_INVALID_PARAMS;
+               goto exit;
+       }
+
+       mpidr = read_mpidr();
+       rc = psci_afflvl_suspend(mpidr,
+                                entrypoint,
+                                context_id,
+                                power_state,
+                                tgt_afflvl,
+                                MPIDR_AFFLVL0);
+
+exit:
+       if (rc != PSCI_E_SUCCESS)
+               assert(rc == PSCI_E_INVALID_PARAMS);
+       return rc;
+}
+
+int psci_cpu_off(void)
+{
+       int rc;
+       unsigned long mpidr;
+       int target_afflvl = get_max_afflvl();
+
+       mpidr = read_mpidr();
+
+       /*
+        * Traverse from the highest to the lowest affinity level. When the
+        * lowest affinity level is hit, all the locks are acquired. State
+        * management is done immediately followed by cpu, cluster ...
+        * target_afflvl specific actions as this function unwinds back.
+        */
+       rc = psci_afflvl_off(mpidr, target_afflvl, MPIDR_AFFLVL0);
+
+       if (rc != PSCI_E_SUCCESS) {
+               assert(rc == PSCI_E_DENIED);
+       }
+
+       return rc;
+}
+
+int psci_affinity_info(unsigned long target_affinity,
+                      unsigned int lowest_affinity_level)
+{
+       int rc = PSCI_E_INVALID_PARAMS;
+       unsigned int aff_state;
+       aff_map_node *node;
+
+       if (lowest_affinity_level > get_max_afflvl()) {
+               goto exit;
+       }
+
+       node = psci_get_aff_map_node(target_affinity, lowest_affinity_level);
+       if (node && (node->state & PSCI_AFF_PRESENT)) {
+               aff_state = psci_get_state(node->state);
+
+               /* A suspended cpu is available & on for the OS */
+               if (aff_state == PSCI_STATE_SUSPEND) {
+                       aff_state = PSCI_STATE_ON;
+               }
+
+               rc = aff_state;
+       }
+exit:
+       return rc;
+}
+
+/* Unimplemented */
+int psci_migrate(unsigned int target_cpu)
+{
+       return PSCI_E_NOT_SUPPORTED;
+}
+
+/* Unimplemented */
+unsigned int psci_migrate_info_type(void)
+{
+       return PSCI_TOS_NOT_PRESENT;
+}
+
+unsigned long psci_migrate_info_up_cpu(void)
+{
+       /*
+        * Return value of this currently unsupported call depends upon
+        * what psci_migrate_info_type() returns.
+        */
+       return PSCI_E_SUCCESS;
+}
+
+/* Unimplemented */
+void psci_system_off(void)
+{
+       assert(0);
+}
+
+/* Unimplemented */
+void psci_system_reset(void)
+{
+       assert(0);
+}
+
diff --git a/common/psci/psci_private.h b/common/psci/psci_private.h
new file mode 100644 (file)
index 0000000..48d40d0
--- /dev/null
@@ -0,0 +1,147 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __PSCI_PRIVATE_H__
+#define __PSCI_PRIVATE_H__
+
+#include <bakery_lock.h>
+
+#ifndef __ASSEMBLY__
+/*******************************************************************************
+ * The following two data structures hold the generic information needed to
+ * bring up a suspended or hotplugged-out cpu
+ ******************************************************************************/
+typedef struct {
+       unsigned long entrypoint;
+       unsigned long spsr;
+} eret_params;
+
+typedef struct {
+       eret_params eret_info;
+       unsigned long context_id;
+       unsigned int scr;
+       unsigned int sctlr;
+} ns_entry_info;
+
+/*******************************************************************************
+ * This data structure stashes the secure architectural context of a cpu
+ * before it is suspended. It is restored when the cpu resumes.
+ ******************************************************************************/
+typedef struct {
+       unsigned long sctlr;
+       unsigned long scr;
+       unsigned long cptr;
+       unsigned long cpacr;
+       unsigned long cntfrq;
+       unsigned long mair;
+       unsigned long tcr;
+       unsigned long ttbr;
+       unsigned long vbar;
+} secure_context;
+
+/*******************************************************************************
+ * The following two data structures hold the topology tree which in turn tracks
+ * the state of all the affinity instances supported by the platform.
+ ******************************************************************************/
+typedef struct {
+       unsigned long mpidr;
+       unsigned char state;
+       char level;
+       unsigned int data;
+       bakery_lock lock;
+} aff_map_node;
+
+typedef struct {
+       int min;
+       int max;
+} aff_limits_node;
+
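+/*******************************************************************************
+ * Signature of the per affinity level handlers invoked after an affinity
+ * instance has been physically powered on or resumed from suspend.
+ ******************************************************************************/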
+typedef unsigned int (*afflvl_power_on_finisher)(unsigned long,
+                                                aff_map_node *,
+                                                unsigned int);
+
+/*******************************************************************************
+ * Data prototypes
+ ******************************************************************************/
+extern secure_context psci_secure_context[PSCI_NUM_AFFS];
+extern ns_entry_info psci_ns_entry_info[PSCI_NUM_AFFS];
+extern unsigned int psci_ns_einfo_idx;
+extern aff_limits_node psci_aff_limits[MPIDR_MAX_AFFLVL + 1];
+extern plat_pm_ops *psci_plat_pm_ops;
+extern aff_map_node psci_aff_map[PSCI_NUM_AFFS];
+extern afflvl_power_on_finisher psci_afflvl_off_finish_handlers[];
+extern afflvl_power_on_finisher psci_afflvl_sus_finish_handlers[];
+
+/*******************************************************************************
+ * Function prototypes
+ ******************************************************************************/
+/* Private exported functions from psci_common.c */
+extern int get_max_afflvl(void);
+extern unsigned int psci_get_phys_state(unsigned int);
+extern unsigned int psci_get_aff_phys_state(aff_map_node *);
+extern unsigned int psci_calculate_affinity_state(aff_map_node *);
+extern unsigned int psci_get_ns_entry_info(unsigned int index);
+extern unsigned long mpidr_set_aff_inst(unsigned long, unsigned char, int);
+extern int psci_change_state(unsigned long, int, int, unsigned int);
+extern int psci_validate_mpidr(unsigned long, int);
+extern unsigned int psci_afflvl_power_on_finish(unsigned long,
+                                               int,
+                                               int,
+                                               afflvl_power_on_finisher *);
+extern int psci_set_ns_entry_info(unsigned int index,
+                                 unsigned long entrypoint,
+                                 unsigned long context_id);
+extern int psci_get_first_present_afflvl(unsigned long,
+                                        int, int,
+                                        aff_map_node **);
+/* Private exported functions from psci_setup.c */
+extern aff_map_node *psci_get_aff_map_node(unsigned long, int);
+
+/* Private exported functions from psci_affinity_on.c */
+extern int psci_afflvl_on(unsigned long,
+                         unsigned long,
+                         unsigned long,
+                         int,
+                         int);
+
+/* Private exported functions from psci_affinity_off.c */
+extern int psci_afflvl_off(unsigned long, int, int);
+
+/* Private exported functions from psci_affinity_suspend.c */
+extern int psci_afflvl_suspend(unsigned long,
+                              unsigned long,
+                              unsigned long,
+                              unsigned int,
+                              int,
+                              int);
+extern unsigned int psci_afflvl_suspend_finish(unsigned long, int, int);
+#endif /*__ASSEMBLY__*/
+
+#endif /* __PSCI_PRIVATE_H__ */
diff --git a/common/psci/psci_setup.c b/common/psci/psci_setup.c
new file mode 100644 (file)
index 0000000..9095e75
--- /dev/null
@@ -0,0 +1,268 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <console.h>
+#include <platform.h>
+#include <psci_private.h>
+
+/*******************************************************************************
+ * Routines for retrieving the node corresponding to an affinity level instance
+ * in the mpidr. The first one uses binary search to find the node corresponding
+ * to the mpidr (key) at a particular affinity level. The second routine decides
+ * extents of the binary search at each affinity level.
+ ******************************************************************************/
+static int psci_aff_map_get_idx(unsigned long key,
+                               int min_idx,
+                               int max_idx)
+{
+       int mid;
+
+       /*
+        * Terminating condition: If the max and min indices have crossed paths
+        * during the binary search then the key has not been found.
+        */
+       if (max_idx < min_idx)
+               return PSCI_E_INVALID_PARAMS;
+
+       /*
+        * Bisect the array around 'mid' and then recurse into the array chunk
+        * where the key is likely to be found. The mpidrs in each node in the
+        * 'psci_aff_map' for a given affinity level are stored in an ascending
+        * order which makes the binary search possible.
+        */
+       mid = min_idx + ((max_idx - min_idx) >> 1);     /* Divide by 2 */
+       if (psci_aff_map[mid].mpidr > key)
+               return psci_aff_map_get_idx(key, min_idx, mid - 1);
+       else if (psci_aff_map[mid].mpidr < key)
+               return psci_aff_map_get_idx(key, mid + 1, max_idx);
+       else
+               return mid;
+}
+
+aff_map_node *psci_get_aff_map_node(unsigned long mpidr, int aff_lvl)
+{
+       int rc;
+
+       /* Right shift the mpidr to the required affinity level */
+       mpidr = mpidr_mask_lower_afflvls(mpidr, aff_lvl);
+
+       rc = psci_aff_map_get_idx(mpidr,
+                                 psci_aff_limits[aff_lvl].min,
+                                 psci_aff_limits[aff_lvl].max);
+       if (rc >= 0)
+               return &psci_aff_map[rc];
+       else
+               return NULL;
+}
+
+/*******************************************************************************
+ * Function which initializes the 'aff_map_node' corresponding to an affinity
+ * level instance. Each node has a unique mpidr, level and bakery lock. The data
+ * field is opaque and holds affinity level specific data e.g. for affinity
+ * level 0 it contains the index into arrays that hold the secure/non-secure
+ * state for a cpu that's been turned on/off
+ ******************************************************************************/
+static void psci_init_aff_map_node(unsigned long mpidr,
+                                  int level,
+                                  unsigned int idx)
+{
+       unsigned char state;
+       psci_aff_map[idx].mpidr = mpidr;
+       psci_aff_map[idx].level = level;
+       bakery_lock_init(&psci_aff_map[idx].lock);
+
+       /*
+        * If an affinity instance is present then mark it as OFF to begin with.
+        */
+       state = plat_get_aff_state(level, mpidr);
+       psci_aff_map[idx].state = state;
+       if (state & PSCI_AFF_PRESENT) {
+               psci_set_state(psci_aff_map[idx].state, PSCI_STATE_OFF);
+       }
+
+       if (level == MPIDR_AFFLVL0) {
+               /* Ensure that we have not overflowed the psci_ns_einfo array */
+               assert(psci_ns_einfo_idx < PSCI_NUM_AFFS);
+
+               psci_aff_map[idx].data = psci_ns_einfo_idx;
+               psci_ns_einfo_idx++;
+       }
+
+       return;
+}
+
+/*******************************************************************************
+ * Core routine used by the Breadth-First-Search algorithm to populate the
+ * affinity tree. Each level in the tree corresponds to an affinity level. This
+ * routine's aim is to traverse to the target affinity level and populate nodes
+ * in the 'psci_aff_map' for all the siblings at that level. It uses the current
+ * affinity level to keep track of how many levels from the root of the tree
+ * have been traversed. If the current affinity level != target affinity level,
+ * then the platform is asked to return the number of children that each
+ * affinity instance has at the current affinity level. Traversal is then done
+ * for each child at the next lower level i.e. current affinity level - 1.
+ *
+ * CAUTION: This routine assumes that affinity instance ids are allocated in a
+ * monotonically increasing manner at each affinity level in a mpidr starting
+ * from 0. If the platform breaks this assumption then this code will have to
+ * be reworked accordingly.
+ ******************************************************************************/
+static unsigned int psci_init_aff_map(unsigned long mpidr,
+                                     unsigned int affmap_idx,
+                                     int cur_afflvl,
+                                     int tgt_afflvl)
+{
+       unsigned int ctr, aff_count;
+
+       assert(cur_afflvl >= tgt_afflvl);
+
+       /*
+        * Find the number of siblings at the current affinity level and
+        * assert if there are none, since that would mean we have been
+        * invoked with an invalid mpidr.
+        */
+       aff_count = plat_get_aff_count(cur_afflvl, mpidr);
+       assert(aff_count);
+
+       if (tgt_afflvl < cur_afflvl) {
+               for (ctr = 0; ctr < aff_count; ctr++) {
+                       mpidr = mpidr_set_aff_inst(mpidr, ctr, cur_afflvl);
+                       affmap_idx = psci_init_aff_map(mpidr,
+                                                      affmap_idx,
+                                                      cur_afflvl - 1,
+                                                      tgt_afflvl);
+               }
+       } else {
+               for (ctr = 0; ctr < aff_count; ctr++, affmap_idx++) {
+                       mpidr = mpidr_set_aff_inst(mpidr, ctr, cur_afflvl);
+                       psci_init_aff_map_node(mpidr, cur_afflvl, affmap_idx);
+               }
+
+               /* affmap_idx is 1 greater than the max index of cur_afflvl */
+               psci_aff_limits[cur_afflvl].max = affmap_idx - 1;
+       }
+
+       return affmap_idx;
+}
+
+/*******************************************************************************
+ * This function initializes the topology tree by querying the platform. To do
+ * so, it's helper routines implement a Breadth-First-Search. At each affinity
+ * level the platform conveys the number of affinity instances that exist i.e.
+ * the affinity count. The algorithm populates the psci_aff_map recursively
+ * using this information. On a platform that implements two clusters of 4 cpus
+ * each, the populated aff_map_array would look like this:
+ *
+ *            <- cpus cluster0 -><- cpus cluster1 ->
+ * ---------------------------------------------------
+ * | 0  | 1  | 0  | 1  | 2  | 3  | 0  | 1  | 2  | 3  |
+ * ---------------------------------------------------
+ *           ^                                       ^
+ * cluster __|                                 cpu __|
+ * limit                                      limit
+ *
+ * The first 2 entries are of the cluster nodes. The next 4 entries are of cpus
+ * within cluster 0. The last 4 entries are of cpus within cluster 1.
+ * The 'psci_aff_limits' array contains the max & min index of each affinity
+ * level within the 'psci_aff_map' array. This allows restricting search of a
+ * node at an affinity level between the indices in the limits array.
+ ******************************************************************************/
+void psci_setup(unsigned long mpidr)
+{
+       int afflvl, affmap_idx, rc, max_afflvl;
+       aff_map_node *node;
+
+       /* Initialize psci's internal state */
+       memset(psci_aff_map, 0, sizeof(psci_aff_map));
+       memset(psci_aff_limits, 0, sizeof(psci_aff_limits));
+       memset(psci_ns_entry_info, 0, sizeof(psci_ns_entry_info));
+       psci_ns_einfo_idx = 0;
+       psci_plat_pm_ops = NULL;
+
+       /* Find out the maximum affinity level that the platform implements */
+       max_afflvl = get_max_afflvl();
+       assert(max_afflvl <= MPIDR_MAX_AFFLVL);
+
+       /*
+        * This call traverses the topology tree with help from the platform and
+        * populates the affinity map using a breadth-first-search recursively.
+        * We assume that the platform allocates affinity instance ids from 0
+        * onwards at each affinity level in the mpidr. FIRST_MPIDR = 0.0.0.0
+        */
+       affmap_idx = 0;
+       for (afflvl = max_afflvl; afflvl >= MPIDR_AFFLVL0; afflvl--) {
+               affmap_idx = psci_init_aff_map(FIRST_MPIDR,
+                                              affmap_idx,
+                                              max_afflvl,
+                                              afflvl);
+       }
+
+       /*
+        * Set the bounds for the affinity counts of each level in the map. Also
+        * flush out the entire array so that it's visible to subsequent power
+        * management operations. The 'psci_aff_map' array is allocated in
+        * coherent memory so does not need flushing. The 'psci_aff_limits'
+        * array is allocated in normal memory. It will be accessed when the mmu
+        * is off e.g. after reset. Hence it needs to be flushed.
+        */
+       for (afflvl = MPIDR_AFFLVL0; afflvl < max_afflvl; afflvl++) {
+               psci_aff_limits[afflvl].min =
+                       psci_aff_limits[afflvl + 1].max + 1;
+       }
+
+       flush_dcache_range((unsigned long) psci_aff_limits,
+                          sizeof(psci_aff_limits));
+
+       /*
+        * Mark the affinity instances in our mpidr as ON. No need to lock as
+        * this is the primary cpu.
+        */
+       mpidr &= MPIDR_AFFINITY_MASK;
+       for (afflvl = max_afflvl; afflvl >= MPIDR_AFFLVL0; afflvl--) {
+
+               node = psci_get_aff_map_node(mpidr, afflvl);
+               assert(node);
+
+               /* Mark each present node as ON. */
+               if (node->state & PSCI_AFF_PRESENT) {
+                       psci_set_state(node->state, PSCI_STATE_ON);
+               }
+       }
+
+       rc = platform_setup_pm(&psci_plat_pm_ops);
+       assert(rc == 0);
+       assert(psci_plat_pm_ops);
+
+       return;
+}
diff --git a/common/runtime_svc.c b/common/runtime_svc.c
new file mode 100644 (file)
index 0000000..ed1225f
--- /dev/null
@@ -0,0 +1,50 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <errno.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <console.h>
+#include <platform.h>
+#include <semihosting.h>
+#include <bl_common.h>
+#include <psci.h>
+
+/*******************************************************************************
+ * Perform initialization of runtime services possibly across exception levels
+ * in the secure address space e.g. psci & interrupt handling.
+ ******************************************************************************/
+void runtime_svc_init(unsigned long mpidr)
+{
+       psci_setup(mpidr);
+       return;
+}
diff --git a/docs/change-log.md b/docs/change-log.md
new file mode 100644 (file)
index 0000000..3a9e5cd
--- /dev/null
@@ -0,0 +1,76 @@
+ARM Trusted Firmware - version 0.2
+==================================
+
+New features
+------------
+
+*   First source release.
+
+*   Code for the PSCI suspend feature is supplied, although this is not enabled
+    by default since there are known issues (see below).
+
+
+Issues resolved since last release
+----------------------------------
+
+*   The "psci" nodes in the FDTs provided in this release now fully comply
+    with the recommendations made in the PSCI specification.
+
+
+Known issues
+------------
+
+The following is a list of issues which are expected to be fixed in the future
+releases of the ARM Trusted Firmware.
+
+*   The TrustZone Address Space Controller (TZC-400) is not being programmed
+    yet. Use of model parameter `-C bp.secure_memory=1` is not supported.
+
+*   No support yet for secure world interrupt handling or for switching context
+    between secure and normal worlds in EL3.
+
+*   GICv3 support is experimental. The Linux kernel patches to support this are
+    not widely available. There are known issues with GICv3 initialization in
+    the ARM Trusted Firmware.
+
+*   Dynamic image loading is not available yet. The current image loader
+    implementation (used to load BL2 and all subsequent images) has some
+    limitations. Changing BL2 or BL3-1 load addresses in certain ways can lead
+    to loading errors, even if the images should theoretically fit in memory.
+
+*   Although support for PSCI `CPU_SUSPEND` is present, it is not yet stable
+    and ready for use.
+
+*   PSCI API calls `AFFINITY_INFO` & `PSCI_VERSION` are implemented but have not
+    been tested.
+
+*   The ARM Trusted Firmware make files result in all build artifacts being
+    placed in the root of the project. These should be placed in appropriate
+    sub-directories.
+
+*   The compilation of the ARM Trusted Firmware is not free from compiler
+    warnings. Some of these warnings have not been investigated yet, so they
+    could mask real bugs.
+
+*   The ARM Trusted Firmware currently uses toolchain/system include files like
+    stdio.h. It should provide versions of these within the project to maintain
+    compatibility between toolchains/systems.
+
+*   The PSCI code takes some locks in an incorrect sequence. This may cause
+    problems with suspend and hotplug in certain conditions.
+
+*   The Linux kernel used in this release is based on version 3.12-rc4. Using
+    this kernel with the ARM Trusted Firmware fails to start the file-system as
+    a RAM-disk. It fails to execute user-space `init` from the RAM-disk. As an
+    alternative, the VirtioBlock mechanism can be used to provide a file-system
+    to the kernel.
+
+
+Detailed changes since last release
+-----------------------------------
+
+First source release – not applicable.
+
+- - - - - - - - - - - - - - - - - - - - - - - - - -
+
+_Copyright (c) 2013 ARM Ltd. All rights reserved._
diff --git a/docs/porting-guide.md b/docs/porting-guide.md
new file mode 100644 (file)
index 0000000..ae77c55
--- /dev/null
@@ -0,0 +1,939 @@
+ARM Trusted Firmware Porting Guide
+==================================
+
+Contents
+--------
+
+1.  Introduction
+2.  Common Modifications
+    *   Common mandatory modifications
+    *   Common optional modifications
+3.  Boot Loader stage specific modifications
+    *   Boot Loader stage 1 (BL1)
+    *   Boot Loader stage 2 (BL2)
+    *   Boot Loader stage 3-1 (BL3-1)
+    *   PSCI implementation (in BL3-1)
+
+- - - - - - - - - - - - - - - - - -
+
+1.  Introduction
+----------------
+
+Porting the ARM Trusted Firmware to a new platform involves making some
+mandatory and optional modifications for both the cold and warm boot paths.
+Modifications consist of:
+
+*   Implementing a platform-specific function or variable,
+*   Setting up the execution context in a certain way, or
+*   Defining certain constants (for example #defines).
+
+The firmware provides a default implementation of variables and functions to
+fulfill the optional requirements. These implementations are all weakly defined;
+they are provided to ease the porting effort. Each platform port can override
+them with its own implementation if the default implementation is inadequate.
+
+Some modifications are common to all Boot Loader (BL) stages. Section 2
+discusses these in detail. The subsequent sections discuss the remaining
+modifications for each BL stage in detail.
+
+This document should be read in conjunction with the ARM Trusted Firmware
+[User Guide].
+
+
+2.  Common modifications
+------------------------
+
+This section covers the modifications that should be made by the platform for
+each BL stage to correctly port the firmware stack. They are categorized as
+either mandatory or optional.
+
+
+2.1 Common mandatory modifications
+----------------------------------
+
+A platform port must enable the Memory Management Unit (MMU) with identity
+mapped page tables, and enable both the instruction and data caches for each BL
+stage. In the ARM FVP port, each BL stage configures the MMU in its platform-
+specific architecture setup function, for example `blX_plat_arch_setup()`.
+
+Each platform must allocate a block of identity mapped secure memory with
+Device-nGnRE attributes, aligned to a page boundary (4KB), for each BL stage.
+This memory is identified by the section name `tzfw_coherent_mem` so that it's
+possible for the firmware to place variables in it using the following C code
+directive:
+
+    __attribute__ ((section("tzfw_coherent_mem")))
+
+Or alternatively the following assembler code directive:
+
+    .section tzfw_coherent_mem
+
+The `tzfw_coherent_mem` section is used to allocate any data structures that are
+accessed both when a CPU is executing with its MMU and caches enabled, and when
+it's running with its MMU and caches disabled. Examples are given below.
+
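+For example, a platform port might place a table of per-CPU bakery locks in
+this section. The `pcpu_locks` name below is purely illustrative:
+
+    /* Illustrative only: data shared between MMU-on and MMU-off code paths. */
+    static bakery_lock pcpu_locks[PLATFORM_CORE_COUNT]
+    __attribute__ ((section("tzfw_coherent_mem")));
+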
+The following variables, functions and constants must be defined by the platform
+for the firmware to work correctly.
+
+
+### File : platform.h [mandatory]
+
+Each platform must export a header file of this name with the following
+constants defined. In the ARM FVP port, this file is found in
+[../plat/fvp/platform.h].
+
+*   **#define : PLATFORM_LINKER_FORMAT**
+
+    Defines the linker format used by the platform, for example
+    `elf64-littleaarch64` used by the FVP.
+
+*   **#define : PLATFORM_LINKER_ARCH**
+
+    Defines the processor architecture for the linker by the platform, for
+    example `aarch64` used by the FVP.
+
+*   **#define : PLATFORM_STACK_SIZE**
+
+    Defines the normal stack memory available to each CPU. This constant is used
+    by `platform_set_stack()`.
+
+*   **#define : FIRMWARE_WELCOME_STR**
+
+    Defines the character string printed by BL1 upon entry into the `bl1_main()`
+    function.
+
+*   **#define : BL2_IMAGE_NAME**
+
+    Name of the BL2 binary image on the host file-system. This name is used by
+    BL1 to load BL2 into secure memory using semi-hosting.
+
+*   **#define : PLATFORM_CACHE_LINE_SIZE**
+
+    Defines the size (in bytes) of the largest cache line across all the cache
+    levels in the platform.
+
+*   **#define : PLATFORM_CLUSTER_COUNT**
+
+    Defines the total number of clusters implemented by the platform in the
+    system.
+
+*   **#define : PLATFORM_CORE_COUNT**
+
+    Defines the total number of CPUs implemented by the platform across all
+    clusters in the system.
+
+*   **#define : PLATFORM_MAX_CPUS_PER_CLUSTER**
+
+    Defines the maximum number of CPUs that can be implemented within a cluster
+    on the platform.
+
+*   **#define : PRIMARY_CPU**
+
+    Defines the `MPIDR` of the primary CPU on the platform. This value is used
+    after a cold boot to distinguish between primary and secondary CPUs.
+
+*   **#define : TZROM_BASE**
+
+    Defines the base address of secure ROM on the platform, where the BL1 binary
+    is loaded. This constant is used by the linker scripts to ensure that the
+    BL1 image fits into the available memory.
+
+*   **#define : TZROM_SIZE**
+
+    Defines the size of secure ROM on the platform. This constant is used by the
+    linker scripts to ensure that the BL1 image fits into the available memory.
+
+*   **#define : TZRAM_BASE**
+
+    Defines the base address of the secure RAM on platform, where the data
+    section of the BL1 binary is loaded. The BL2 and BL3-1 images are also
+    loaded in this secure RAM region. This constant is used by the linker
+    scripts to ensure that the BL1 data section and BL2/BL3-1 binary images fit
+    into the available memory.
+
+*   **#define : TZRAM_SIZE**
+
+    Defines the size of the secure RAM on the platform. This constant is used by
+    the linker scripts to ensure that the BL1 data section and BL2/BL3-1 binary
+    images fit into the available memory.
+
+*   **#define : SYS_CNTCTL_BASE**
+
+    Defines the base address of the `CNTCTLBase` frame of the memory mapped
+    counter and timer in the system level implementation of the generic timer.
+
+*   **#define : BL2_BASE**
+
+    Defines the base address in secure RAM where BL1 loads the BL2 binary image.
+
+*   **#define : BL31_BASE**
+
+    Defines the base address in secure RAM where BL2 loads the BL3-1 binary
+    image.
+
+
+### Other mandatory modifications
+
+The following mandatory modifications may be implemented in any file
+the implementer chooses. In the ARM FVP port, they are implemented in
+[../plat/fvp/aarch64/fvp_common.c].
+
+*   **Variable : unsigned char platform_normal_stacks[X][Y]**
+
+        where  X = PLATFORM_STACK_SIZE
+          and  Y = PLATFORM_CORE_COUNT
+
+    Each platform must allocate a block of memory with Normal Cacheable, Write
+    back, Write allocate and Inner Shareable attributes aligned to the size (in
+    bytes) of the largest cache line amongst all caches implemented in the
+    system. A pointer to this memory should be exported with the name
+    `platform_normal_stacks`. This pointer is used by the common platform helper
+    function `platform_set_stack()` to allocate a stack to each CPU in the
+    platform (see [../plat/common/aarch64/platform_helpers.S]).
+
+
+2.2 Common optional modifications
+---------------------------------
+
+The following are helper functions implemented by the firmware that perform
+common platform-specific tasks. A platform may choose to override these
+definitions.
+
+
+### Function : platform_get_core_pos()
+
+    Argument : unsigned long
+    Return   : int
+
+A platform may need to convert the `MPIDR` of a CPU to an absolute number, which
+can be used as a CPU-specific linear index into blocks of memory (for example
+while allocating per-CPU stacks). This routine contains a simple mechanism
+to perform this conversion, using the assumption that each cluster contains a
+maximum of 4 CPUs:
+
+    linear index = cpu_id + (cluster_id * 4)
+
+    cpu_id = 8-bit value in MPIDR at affinity level 0
+    cluster_id = 8-bit value in MPIDR at affinity level 1
+
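+A minimal sketch of this conversion, assuming the standard `MPIDR` affinity
+field layout (affinity level 0 in bits [7:0], affinity level 1 in bits [15:8]):
+
+    int platform_get_core_pos(unsigned long mpidr)
+    {
+        /* Illustrative only: extract the affinity level 0 and 1 fields. */
+        unsigned int cpu_id = mpidr & 0xff;
+        unsigned int cluster_id = (mpidr >> 8) & 0xff;
+
+        return cpu_id + (cluster_id * 4);
+    }
+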
+
+### Function : platform_set_coherent_stack()
+
+    Argument : unsigned long
+    Return   : void
+
+A platform may need stack memory that is coherent with main memory to perform
+certain operations like:
+
+*   Turning the MMU on, or
+*   Flushing caches prior to powering down a CPU or cluster.
+
+Each BL stage allocates this coherent stack memory for each CPU in the
+`tzfw_coherent_mem` section. A pointer to this memory (`pcpu_dv_mem_stack`) is
+used by this function to allocate a coherent stack for each CPU. A CPU is
+identified by its `MPIDR`, which is passed as an argument to this function.
+
+The size of the stack allocated to each CPU is specified by the constant
+`PCPU_DV_MEM_STACK_SIZE`.
+
+
+### Function : platform_is_primary_cpu()
+
+    Argument : unsigned long
+    Return   : unsigned int
+
+This function identifies a CPU by its `MPIDR`, which is passed as the argument,
+to determine whether this CPU is the primary CPU or a secondary CPU. A return
+value of zero indicates that the CPU is not the primary CPU, while a non-zero
+return value indicates that the CPU is the primary CPU.
+
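+A minimal sketch, assuming the platform defines `PRIMARY_CPU` as described in
+Section 2.1 and that `MPIDR_AFFINITY_MASK` clears the non-affinity bits of the
+register:
+
+    unsigned int platform_is_primary_cpu(unsigned long mpidr)
+    {
+        /* Illustrative only: non-zero when this is the primary CPU. */
+        return (mpidr & MPIDR_AFFINITY_MASK) == PRIMARY_CPU;
+    }
+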
+
+### Function : platform_set_stack()
+
+    Argument : unsigned long
+    Return   : void
+
+This function uses the `platform_normal_stacks` pointer variable to allocate
+stacks to each CPU. Further details are given in the description of the
+`platform_normal_stacks` variable above. A CPU is identified by its `MPIDR`,
+which is passed as the argument.
+
+The size of the stack allocated to each CPU is specified by the platform defined
+constant `PLATFORM_STACK_SIZE`.
+
+
+### Function : plat_report_exception()
+
+    Argument : unsigned int
+    Return   : void
+
+A platform may need to report various information about its status when an
+exception is taken, for example the current exception level, the CPU security
+state (secure/non-secure), the exception type, and so on. This function is
+called in the following circumstances:
+
+*   In BL1, whenever an exception is taken.
+*   In BL2, whenever an exception is taken.
+*   In BL3-1, whenever an asynchronous exception or a synchronous exception
+    other than an SMC32/SMC64 exception is taken.
+
+The default implementation doesn't do anything, to avoid making assumptions
+about the way the platform displays its status information.
+
+This function receives the exception type as its argument. Possible values for
+exceptions types are listed in the [../include/runtime_svc.h] header file. Note
+that these constants are not related to any architectural exception code; they
+are just an ARM Trusted Firmware convention.
+
+
+3.  Modifications specific to a Boot Loader stage
+-------------------------------------------------
+
+3.1 Boot Loader Stage 1 (BL1)
+-----------------------------
+
+BL1 implements the reset vector where execution starts from after a cold or
+warm boot. For each CPU, BL1 is responsible for the following tasks:
+
+1.  Distinguishing between a cold boot and a warm boot.
+
+2.  In the case of a cold boot and the CPU being the primary CPU, ensuring that
+    only this CPU executes the remaining BL1 code, including loading and passing
+    control to the BL2 stage.
+
+3.  In the case of a cold boot and the CPU being a secondary CPU, ensuring that
+    the CPU is placed in a platform-specific state until the primary CPU
+    performs the necessary steps to remove it from this state.
+
+4.  In the case of a warm boot, ensuring that the CPU jumps to a platform-
+    specific address in the BL3-1 image in the same processor mode as it was
+    when released from reset.
+
+5.  Loading the BL2 image in secure memory using semi-hosting at the
+    address specified by the platform defined constant `BL2_BASE`.
+
+6.  Populating a `meminfo` structure with the following information in memory,
+    accessible by BL2 immediately upon entry.
+
+        meminfo.total_base = Base address of secure RAM visible to BL2
+        meminfo.total_size = Size of secure RAM visible to BL2
+        meminfo.free_base  = Base address of secure RAM available for
+                             allocation to BL2
+        meminfo.free_size  = Size of secure RAM available for allocation to BL2
+
+    BL1 places this `meminfo` structure at the beginning of the free memory
+    available for its use. Since BL1 cannot allocate memory dynamically at the
+    moment, its free memory will be available for BL2's use as-is. However, this
+    means that BL2 must read the `meminfo` structure before it starts using its
+    free memory (this is discussed in Section 3.2).
+
+    In future releases of the ARM Trusted Firmware it will be possible for
+    the platform to decide where it wants to place the `meminfo` structure for
+    BL2.
+
+    BL1 implements the `init_bl2_mem_layout()` function to populate the
+    BL2 `meminfo` structure. The platform may override this implementation, for
+    example if the platform wants to restrict the amount of memory visible to
+    BL2. Details of how to do this are given below.
+
+The following functions need to be implemented by the platform port to enable
+BL1 to perform the above tasks.
+
+
+### Function : platform_get_entrypoint() [mandatory]
+
+    Argument : unsigned long
+    Return   : unsigned int
+
+This function is called with the `SCTLR.M` and `SCTLR.C` bits cleared. The CPU
+is identified by its `MPIDR`, which is passed as the argument. The function is
+responsible for distinguishing between a warm and cold reset using platform-
+specific means. If it's a warm reset then it returns the entrypoint into the
+BL3-1 image that the CPU must jump to. If it's a cold reset then this function
+must return zero.
+
+This function is also responsible for implementing a platform-specific mechanism
+to handle the condition where the CPU has been warm reset but there is no
+entrypoint to jump to.
+
+This function does not follow the Procedure Call Standard used by the
+Application Binary Interface for the ARM 64-bit architecture. The caller should
+not assume that callee saved registers are preserved across a call to this
+function.
+
+This function fulfills requirement 1 listed above.
+
+
+### Function : plat_secondary_cold_boot_setup() [mandatory]
+
+    Argument : void
+    Return   : void
+
+This function is called with the MMU and data caches disabled. It is responsible
+for placing the executing secondary CPU in a platform-specific state until the
+primary CPU performs the necessary actions to bring it out of that state and
+allow entry into the OS.
+
+In the ARM FVP port, each secondary CPU powers itself off. The primary CPU is
+responsible for powering up the secondary CPUs when normal world software
+requires them.
+
+This function fulfills requirement 3 above.
+
+
+### Function : platform_cold_boot_init() [mandatory]
+
+    Argument : unsigned long
+    Return   : unsigned int
+
+This function executes with the MMU and data caches disabled. It is only called
+by the primary CPU. The argument to this function is the address of the
+`bl1_main()` routine where the generic BL1-specific actions are performed.
+This function performs any platform-specific and architectural setup that the
+platform requires to make execution of `bl1_main()` possible.
+
+The platform must enable the MMU with identity mapped page tables and enable
+caches by setting the `SCTLR.I` and `SCTLR.C` bits.
+
+Platform-specific setup might include configuration of memory controllers,
+configuration of the interconnect to allow the cluster to service cache snoop
+requests from another cluster, zeroing of the ZI section, and so on.
+
+In the ARM FVP port, this function enables CCI snoops into the cluster that the
+primary CPU is part of. It also enables the MMU and initializes the ZI section
+in the BL1 image through the use of linker defined symbols.
+
+This function helps fulfill requirement 2 above.
+
+
+### Function : bl1_platform_setup() [mandatory]
+
+    Argument : void
+    Return   : void
+
+This function executes with the MMU and data caches enabled. It is responsible
+for performing any remaining platform-specific setup that can occur after the
+MMU and data cache have been enabled.
+
+In the ARM FVP port, it zeros out the ZI section, enables the system level
+implementation of the generic timer counter and initializes the console.
+
+This function helps fulfill requirement 5 above.
+
+
+### Function : bl1_get_sec_mem_layout() [mandatory]
+
+    Argument : void
+    Return   : meminfo
+
+This function executes with the MMU and data caches enabled. The `meminfo`
+structure returned by this function must contain the extents and availability of
+secure RAM for the BL1 stage.
+
+    meminfo.total_base = Base address of secure RAM visible to BL1
+    meminfo.total_size = Size of secure RAM visible to BL1
+    meminfo.free_base  = Base address of secure RAM available for allocation
+                         to BL1
+    meminfo.free_size  = Size of secure RAM available for allocation to BL1
+
+This information is used by BL1 to load the BL2 image in secure RAM. BL1 also
+populates a similar structure to tell BL2 the extents of memory available for
+its own use.
+
+This function helps fulfill requirement 5 above.
+
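+A minimal sketch, assuming the whole of secure RAM (`TZRAM_BASE`/`TZRAM_SIZE`)
+is reported as both visible and free (a real port would also account for BL1's
+own data resident in secure RAM):
+
+    meminfo bl1_get_sec_mem_layout(void)
+    {
+        meminfo mem = { 0 };
+
+        /* Illustrative values only. */
+        mem.total_base = TZRAM_BASE;
+        mem.total_size = TZRAM_SIZE;
+        mem.free_base  = TZRAM_BASE;
+        mem.free_size  = TZRAM_SIZE;
+
+        return mem;
+    }
+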
+
+### Function : init_bl2_mem_layout() [optional]
+
+    Argument : meminfo *, meminfo *, unsigned int, unsigned long
+    Return   : void
+
+Each BL stage needs to tell the next stage the amount of secure RAM available
+for it to use. For example, as part of handing control to BL2, BL1 informs BL2
+of the extents of secure RAM available for BL2 to use. BL2 must do the same when
+passing control to BL3-1. This information is populated in a `meminfo`
+structure.
+
+Depending upon where BL2 has been loaded in secure RAM (determined by
+`BL2_BASE`), BL1 calculates the amount of free memory available for BL2 to use.
+BL1 also ensures that its data sections resident in secure RAM are not visible
+to BL2. An illustration of how this is done in the ARM FVP port is given in the
+[User Guide], in the Section "Memory layout on Base FVP".
+
+
+3.2 Boot Loader Stage 2 (BL2)
+-----------------------------
+
+The BL2 stage is executed only by the primary CPU, which is determined in BL1
+using the `platform_is_primary_cpu()` function. BL1 passes control to BL2 at
+`BL2_BASE`. BL2 executes in Secure EL1 and is responsible for:
+
+1.  Loading the BL3-1 binary image in secure RAM using semi-hosting. To load the
+    BL3-1 image, BL2 makes use of the `meminfo` structure passed to it by BL1.
+    This structure allows BL2 to calculate how much secure RAM is available for
+    its use. The platform also defines the address in secure RAM where BL3-1 is
+    loaded through the constant `BL31_BASE`. BL2 uses this information to
+    determine if there is enough memory to load the BL3-1 image.
+
+2.  Arranging to pass control to a normal world BL image that has been
+    pre-loaded at a platform-specific address. This address is determined using
+    the `plat_get_ns_image_entrypoint()` function described below.
+
+    BL2 populates an `el_change_info` structure in memory provided by the
+    platform with information about how BL3-1 should pass control to the normal
+    world BL image.
+
+3.  Populating a `meminfo` structure with the following information in
+    memory that is accessible by BL3-1 immediately upon entry.
+
+        meminfo.total_base = Base address of secure RAM visible to BL3-1
+        meminfo.total_size = Size of secure RAM visible to BL3-1
+        meminfo.free_base  = Base address of secure RAM available for allocation
+                             to BL3-1
+        meminfo.free_size  = Size of secure RAM available for allocation to
+                             BL3-1
+
+    BL2 places this `meminfo` structure in memory provided by the
+    platform (`bl2_el_change_mem_ptr`). BL2 implements the
+    `init_bl31_mem_layout()` function to populate the BL3-1 meminfo structure
+    described above. The platform may override this implementation, for example
+    if the platform wants to restrict the amount of memory visible to BL3-1.
+    Details of this function are given below.
+
+The following functions must be implemented by the platform port to enable BL2
+to perform the above tasks.
+
+
+### Function : bl2_early_platform_setup() [mandatory]
+
+    Argument : meminfo *, void *
+    Return   : void
+
+This function executes with the MMU and data caches disabled. It is only called
+by the primary CPU. The arguments to this function are:
+
+*   The address of the `meminfo` structure populated by BL1
+*   An opaque pointer that the platform may use as needed.
+
+The platform must copy the contents of the `meminfo` structure into a private
+variable as the original memory may be subsequently overwritten by BL2. The
+copied structure is made available to all BL2 code through the
+`bl2_get_sec_mem_layout()` function.
+
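+A minimal sketch of this copy, together with the `bl2_get_sec_mem_layout()`
+accessor described later in this section (the `bl2_tzram_layout` name is
+illustrative):
+
+    static meminfo bl2_tzram_layout;
+
+    void bl2_early_platform_setup(meminfo *mem_layout, void *data)
+    {
+        /* Keep a private copy; BL2 may later reuse the original memory. */
+        bl2_tzram_layout = *mem_layout;
+    }
+
+    meminfo bl2_get_sec_mem_layout(void)
+    {
+        return bl2_tzram_layout;
+    }
+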
+
+### Function : bl2_plat_arch_setup() [mandatory]
+
+    Argument : void
+    Return   : void
+
+This function executes with the MMU and data caches disabled. It is only called
+by the primary CPU.
+
+The purpose of this function is to perform any architectural initialization
+that varies across platforms, for example enabling the MMU (since the memory
+map differs across platforms).
+
+
+### Function : bl2_platform_setup() [mandatory]
+
+    Argument : void
+    Return   : void
+
+This function may execute with the MMU and data caches enabled if the platform
+port does the necessary initialization in `bl2_plat_arch_setup()`. It is only
+called by the primary CPU.
+
+The purpose of this function is to perform any platform initialization specific
+to BL2. This function must initialize a pointer to memory
+(`bl2_el_change_mem_ptr`), which can then be used to populate an
+`el_change_info` structure. The underlying requirement is that the platform must
+initialize this pointer before the `get_el_change_mem_ptr()` function
+accesses it in `bl2_main()`.
+
+The ARM FVP port initializes this pointer to the base address of Secure DRAM
+(`0x06000000`).
+
+
+### Variable : unsigned char bl2_el_change_mem_ptr[EL_CHANGE_MEM_SIZE] [mandatory]
+
+As mentioned in the description of `bl2_platform_setup()`, this pointer is
+initialized by the platform to point to memory where an `el_change_info`
+structure can be populated.
+
+
+### Function : bl2_get_sec_mem_layout() [mandatory]
+
+    Argument : void
+    Return   : meminfo
+
+This function may execute with the MMU and data caches enabled if the platform
+port does the necessary initialization in `bl2_plat_arch_setup()`. It is only
+called by the primary CPU.
+
+The purpose of this function is to return a `meminfo` structure populated with
+the extents of secure RAM available for BL2 to use. See
+`bl2_early_platform_setup()` above.
+
+
+### Function : init_bl31_mem_layout() [optional]
+
+    Argument : meminfo *, meminfo *, unsigned int
+    Return   : void
+
+Each BL stage needs to tell the next stage the amount of secure RAM that is
+available for it to use. For example, as part of handing control to BL2, BL1
+must inform BL2 about the extents of secure RAM that is available for BL2 to
+use. BL2 must do the same when passing control to BL3-1. This information is
+populated in a `meminfo` structure.
+
+Depending upon where BL3-1 has been loaded in secure RAM (determined by
+`BL31_BASE`), BL2 calculates the amount of free memory available for BL3-1 to
+use. BL2 also ensures that BL3-1 is able to reclaim memory occupied by BL2. This
+is done because BL2 never executes again after passing control to BL3-1.
+An illustration of how this is done in the ARM FVP port is given in the
+[User Guide], in the section "Memory layout on Base FVP".
+
+
+### Function : plat_get_ns_image_entrypoint() [mandatory]
+
+    Argument : void
+    Return   : unsigned long
+
+As previously described, BL2 is responsible for arranging for control to be
+passed to a normal world BL image through BL3-1. This function returns the
+entrypoint of that image, which BL3-1 uses to jump to it.
+
+The ARM FVP port assumes that flash memory has been pre-loaded with the UEFI
+image, and so returns the base address of flash memory.
+
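+A minimal sketch, assuming the normal world image has been pre-loaded at a
+known address (`PLAT_NS_IMAGE_BASE` is a hypothetical platform constant, not
+part of the firmware):
+
+    unsigned long plat_get_ns_image_entrypoint(void)
+    {
+        /* Illustrative only: return wherever the normal world image lives. */
+        return PLAT_NS_IMAGE_BASE;
+    }
+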
+
+3.3 Boot Loader Stage 3-1 (BL3-1)
+---------------------------------
+
+During cold boot, the BL3-1 stage is executed only by the primary CPU. This is
+determined in BL1 using the `platform_is_primary_cpu()` function. BL1 passes
+control to BL3-1 at `BL31_BASE`. During warm boot, BL3-1 is executed by all
+CPUs. BL3-1 executes at EL3 and is responsible for:
+
+1.  Re-initializing all architectural and platform state. Although BL1 performs
+    some of this initialization, BL3-1 remains resident in EL3 and must ensure
+    that EL3 architectural and platform state is completely initialized. It
+    should make no assumptions about the system state when it receives control.
+
+2.  Passing control to a normal world BL image, pre-loaded at a platform-
+    specific address by BL2. BL3-1 uses the `el_change_info` structure that BL2
+    populated in memory to do this.
+
+3.  Providing runtime firmware services. Currently, BL3-1 only implements a
+    subset of the Power State Coordination Interface (PSCI) API as a runtime
+    service. See Section 3.3 below for details of porting the PSCI
+    implementation.
+
+The following functions must be implemented by the platform port to enable BL3-1
+to perform the above tasks.
+
+
+### Function : bl31_early_platform_setup() [mandatory]
+
+    Argument : meminfo *, void *, unsigned long
+    Return   : void
+
+This function executes with the MMU and data caches disabled. It is only called
+by the primary CPU. The arguments to this function are:
+
+*   The address of the `meminfo` structure populated by BL2.
+*   An opaque pointer that the platform may use as needed.
+*   The `MPIDR` of the primary CPU.
+
+The platform must copy the contents of the `meminfo` structure into a private
+variable as the original memory may be subsequently overwritten by BL3-1. The
+copied structure is made available to all BL3-1 code through the
+`bl31_get_sec_mem_layout()` function.
+
+
+### Function : bl31_plat_arch_setup() [mandatory]
+
+    Argument : void
+    Return   : void
+
+This function executes with the MMU and data caches disabled. It is only called
+by the primary CPU.
+
+The purpose of this function is to perform any architectural initialization
+that varies across platforms, for example enabling the MMU (since the memory
+map differs across platforms).
+
+
+### Function : bl31_platform_setup() [mandatory]
+
+    Argument : void
+    Return   : void
+
+This function may execute with the MMU and data caches enabled if the platform
+port does the necessary initialization in `bl31_plat_arch_setup()`. It is only
+called by the primary CPU.
+
+The purpose of this function is to complete platform initialization so that both
+BL3-1 runtime services and normal world software can function correctly.
+
+The ARM FVP port does the following:
+
+*   Initializes the generic interrupt controller.
+*   Configures the CLCD controller.
+*   Grants access to the system counter timer module.
+*   Initializes the FVP power controller device.
+*   Detects the system topology.
+
+
+### Function : bl31_get_next_image_info() [mandatory]
+
+    Argument : unsigned long
+    Return   : el_change_info *
+
+This function may execute with the MMU and data caches enabled if the platform
+port does the necessary initializations in `bl31_plat_arch_setup()`.
+
+This function is called by `bl31_main()` to retrieve information provided by
+BL2, so that BL3-1 can pass control to the normal world software image. This
+function must return a pointer to the `el_change_info` structure (that was
+copied during `bl31_early_platform_setup()`).
+
+
+### Function : bl31_get_sec_mem_layout() [mandatory]
+
+    Argument : void
+    Return   : meminfo
+
+This function may execute with the MMU and data caches enabled if the platform
+port does the necessary initializations in `bl31_plat_arch_setup()`. It is only
+called by the primary CPU.
+
+The purpose of this function is to return a `meminfo` structure populated with
+the extents of secure RAM available for BL3-1 to use. See
+`bl31_early_platform_setup()` above.
+
+
+3.4 Power State Coordination Interface (in BL3-1)
+------------------------------------------------
+
+The ARM Trusted Firmware's implementation of the PSCI API is based around the
+concept of an _affinity instance_. Each _affinity instance_ can be uniquely
+identified in a system by a CPU ID (the processor `MPIDR` is used in the PSCI
+interface) and an _affinity level_. A processing element (for example, a
+CPU) is at level 0. If the CPUs in the system are described in a tree where the
+node above a CPU is a logical grouping of CPUs that share some state, then
+affinity level 1 is that group of CPUs (for example, a cluster), and affinity
+level 2 is a group of clusters (for example, the system). The implementation
+assumes that the affinity level 1 ID can be computed from the affinity level 0
+ID (for example, a unique cluster ID can be computed from the CPU ID). The
+current implementation computes this on the basis of the recommended use of
+`MPIDR` affinity fields in the ARM Architecture Reference Manual.
+
+BL3-1's platform initialization code exports a pointer to the platform-specific
+power management operations required for the PSCI implementation to function
+correctly. This information is populated in the `plat_pm_ops` structure. The
+PSCI implementation calls members of the `plat_pm_ops` structure for performing
+power management operations for each affinity instance. For example, the target
+CPU is specified by its `MPIDR` in a PSCI `CPU_ON` call. The `affinst_on()`
+handler (if present) is called for each affinity instance as the PSCI
+implementation powers up each affinity level implemented in the `MPIDR` (for
+example, CPU, cluster and system).
+
+The following functions must be implemented to initialize PSCI functionality in
+the ARM Trusted Firmware.
+
+
+### Function : plat_get_aff_count() [mandatory]
+
+    Argument : unsigned int, unsigned long
+    Return   : unsigned int
+
+This function may execute with the MMU and data caches enabled if the platform
+port does the necessary initializations in `bl31_plat_arch_setup()`. It is only
+called by the primary CPU.
+
+This function is called by the PSCI initialization code to detect the system
+topology. Its purpose is to return the number of affinity instances implemented
+at a given `affinity level` (specified by the first argument) and a given
+`MPIDR` (specified by the second argument). For example, on a dual-cluster
+system where first cluster implements 2 CPUs and the second cluster implements 4
+CPUs, a call to this function with an `MPIDR` corresponding to the first cluster
+(`0x0`) and affinity level 0, would return 2. A call to this function with an
+`MPIDR` corresponding to the second cluster (`0x100`) and affinity level 0,
+would return 4.
+
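+A minimal sketch for a symmetric platform where every cluster implements the
+same number of CPUs, using the constants from `platform.h`:
+
+    unsigned int plat_get_aff_count(unsigned int aff_lvl, unsigned long mpidr)
+    {
+        /* Illustrative only: a fixed, symmetric topology. */
+        if (aff_lvl > 1)                /* above the cluster level */
+            return 1;
+
+        if (aff_lvl == 1)               /* cluster level */
+            return PLATFORM_CLUSTER_COUNT;
+
+        return PLATFORM_MAX_CPUS_PER_CLUSTER;   /* CPU level */
+    }
+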
+
+### Function : plat_get_aff_state() [mandatory]
+
+    Argument : unsigned int, unsigned long
+    Return   : unsigned int
+
+This function may execute with the MMU and data caches enabled if the platform
+port does the necessary initializations in `bl31_plat_arch_setup()`. It is only
+called by the primary CPU.
+
+This function is called by the PSCI initialization code. Its purpose is to
+return the state of an affinity instance. The affinity instance is determined by
+the affinity ID at a given `affinity level` (specified by the first argument)
+and an `MPIDR` (specified by the second argument). The state can be one of
+`PSCI_AFF_PRESENT` or `PSCI_AFF_ABSENT`. The latter state is used to cater for
+system topologies where certain affinity instances are unimplemented. For
+example, consider a platform that implements a single cluster with 4 CPUs and
+another CPU implemented directly on the interconnect with the cluster. The
+`MPIDR`s of the cluster would range from `0x0-0x3`. The `MPIDR` of the single
+CPU would be `0x100` to indicate that it does not belong to cluster 0. Cluster 1
+is missing but needs to be accounted for to reach this single CPU in the
+topology tree. Hence it is marked as `PSCI_AFF_ABSENT`.
+
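+A minimal sketch for a platform whose topology has no such holes, i.e. every
+affinity instance it describes is implemented:
+
+    unsigned int plat_get_aff_state(unsigned int aff_lvl, unsigned long mpidr)
+    {
+        /* Illustrative only: every described instance is present. */
+        return PSCI_AFF_PRESENT;
+    }
+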
+
+### Function : plat_get_max_afflvl() [mandatory]
+
+    Argument : void
+    Return   : int
+
+This function may execute with the MMU and data caches enabled if the platform
+port does the necessary initializations in `bl31_plat_arch_setup()`. It is only
+called by the primary CPU.
+
+This function is called by the PSCI implementation both during cold and warm
+boot, to determine the maximum affinity level that the power management
+operations should apply to. ARMv8 has support for 4 affinity levels. It is
+likely that hardware will implement fewer affinity levels. This function allows
+the PSCI implementation to consider only those affinity levels in the system
+that the platform implements. For example, the Base AEM FVP implements two
+clusters with a configurable number of CPUs. It reports the maximum affinity
+level as 1, resulting in PSCI power control up to the cluster level.
+
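+A minimal sketch for a platform that, like the Base AEM FVP, wants power
+management to stop at the cluster level:
+
+    int plat_get_max_afflvl(void)
+    {
+        /* Illustrative only: affinity level 1 (the cluster). */
+        return 1;
+    }
+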
+
+### Function : platform_setup_pm() [mandatory]
+
+    Argument : plat_pm_ops **
+    Return   : int
+
+This function may execute with the MMU and data caches enabled if the platform
+port does the necessary initializations in `bl31_plat_arch_setup()`. It is only
+called by the primary CPU.
+
+This function is called by PSCI initialization code. Its purpose is to export
+handler routines for platform-specific power management actions by populating
+the passed pointer with a pointer to BL3-1's private `plat_pm_ops` structure.
+
+A description of each member of this structure is given below. Please refer to
+the ARM FVP specific implementation of these handlers in [../plat/fvp/fvp_pm.c]
+as an example. A platform port may choose not to implement some of the power
+management operations. For example, the ARM FVP port does not implement the
+`affinst_standby()` function.
+
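+A minimal sketch of this export (the `platform_ops` object is illustrative;
+its members would point at the platform's handlers described below):
+
+    static plat_pm_ops platform_ops;    /* handlers filled in elsewhere */
+
+    int platform_setup_pm(plat_pm_ops **plat_ops)
+    {
+        /* Illustrative only: hand the PSCI code our handler table. */
+        *plat_ops = &platform_ops;
+        return 0;
+    }
+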
+#### plat_pm_ops.affinst_standby()
+
+Perform the platform-specific setup to enter the standby state indicated by the
+passed argument.
+
+#### plat_pm_ops.affinst_on()
+
+Perform the platform specific setup to power on an affinity instance, specified
+by the `MPIDR` (first argument) and `affinity level` (fourth argument). The
+`state` (fifth argument) contains the current state of that affinity instance
+(ON or OFF). This is useful to determine whether any action must be taken. For
+example, while powering on a CPU, the cluster that contains this CPU might
+already be in the ON state. The platform decides what actions must be taken to
+transition from the current state to the target state (indicated by the power
+management operation).
+
+#### plat_pm_ops.affinst_off()
+
+Perform the platform specific setup to power off an affinity instance in the
+`MPIDR` of the calling CPU. It is called by the PSCI `CPU_OFF` API
+implementation.
+
+The `MPIDR` (first argument), `affinity level` (second argument) and `state`
+(third argument) have a similar meaning as described in the `affinst_on()`
+operation. They are used to identify the affinity instance on which the call
+is made and its current state. This gives the platform port an indication of the
+state transition it must make to perform the requested action. For example, if
+the calling CPU is the last powered on CPU in the cluster, after powering down
+affinity level 0 (CPU), the platform port should power down affinity level 1
+(the cluster) as well.
+
+This function is called with coherent stacks. This allows the PSCI
+implementation to flush caches at a given affinity level without running into
+stale stack state after turning off the caches. On ARMv8 cache hits do not occur
+after the cache has been turned off.
+
+#### plat_pm_ops.affinst_suspend()
+
+Perform the platform specific setup to power off an affinity instance in the
+`MPIDR` of the calling CPU. It is called by the PSCI `CPU_SUSPEND` API
+implementation.
+
+The `MPIDR` (first argument), `affinity level` (third argument) and `state`
+(fifth argument) have a similar meaning as described in the `affinst_on()`
+operation. They are used to identify the affinity instance on which the call
+is made and its current state. This gives the platform port an indication of the
+state transition it must make to perform the requested action. For example, if
+the calling CPU is the last powered on CPU in the cluster, after powering down
+affinity level 0 (CPU), the platform port should power down affinity level 1
+(the cluster) as well.
+
+The difference between turning an affinity instance off versus suspending it
+is that in the former case, the affinity instance is expected to re-initialize
+its state when it's next powered on (see `affinst_on_finish()`). In the latter
+case, the affinity instance is expected to save enough state so that it can
+resume execution by restoring this state when it's powered on (see
+`affinst_suspend_finish()`).
+
+This function is called with coherent stacks. This allows the PSCI
+implementation to flush caches at a given affinity level without running into
+stale stack state after turning off the caches. On ARMv8 cache hits do not occur
+after the cache has been turned off.
+
+#### plat_pm_ops.affinst_on_finish()
+
+This function is called by the PSCI implementation after the calling CPU is
+powered on and released from reset in response to an earlier PSCI `CPU_ON` call.
+It performs the platform-specific setup required to initialize enough state for
+this CPU to enter the normal world and also provide secure runtime firmware
+services.
+
+The `MPIDR` (first argument), `affinity level` (second argument) and `state`
+(third argument) have a similar meaning as described in the previous operations.
+
+This function is called with coherent stacks. This allows the PSCI
+implementation to flush caches at a given affinity level without running into
+stale stack state after turning off the caches. On ARMv8 cache hits do not occur
+after the cache has been turned off.
+
+#### plat_pm_ops.affinst_suspend_finish()
+
+This function is called by the PSCI implementation after the calling CPU is
+powered on and released from reset in response to an asynchronous wakeup
+event, for example a timer interrupt that was programmed by the CPU during the
+`CPU_SUSPEND` call. It performs the platform-specific setup required to
+restore the saved state for this CPU to resume execution in the normal world
+and also provide secure runtime firmware services.
+
+The `MPIDR` (first argument), `affinity level` (second argument) and `state`
+(third argument) have a similar meaning as described in the previous operations.
+
+This function is called with coherent stacks. This allows the PSCI
+implementation to flush caches at a given affinity level without running into
+stale stack state after turning off the caches. On ARMv8 cache hits do not occur
+after the cache has been turned off.
+
+BL3-1 platform initialization code must also detect the system topology and
+the state of each affinity instance in the topology. This information is
+critical for the PSCI runtime service to function correctly. More details are
+provided in the description of the `plat_get_aff_count()` and
+`plat_get_aff_state()` functions above.
+
+
+- - - - - - - - - - - - - - - - - - - - - - - - - -
+
+_Copyright (c) 2013 ARM Ltd. All rights reserved._
+
+
+[User Guide]: user-guide.md
+
+[../plat/common/aarch64/platform_helpers.S]: ../plat/common/aarch64/platform_helpers.S
+[../plat/fvp/platform.h]:                    ../plat/fvp/platform.h
+[../plat/fvp/aarch64/fvp_common.c]:          ../plat/fvp/aarch64/fvp_common.c
+[../plat/fvp/fvp_pm.c]:                      ../plat/fvp/fvp_pm.c
+[../include/runtime_svc.h]:                  ../include/runtime_svc.h
diff --git a/docs/user-guide.md b/docs/user-guide.md
new file mode 100644 (file)
index 0000000..20483e4
--- /dev/null
@@ -0,0 +1,961 @@
+ARM Trusted Firmware User Guide
+===============================
+
+Contents :
+
+1.  Introduction
+2.  Using the Software
+3.  Firmware Design
+4.  References
+
+
+1.  Introduction
+----------------
+
+The ARM Trusted Firmware implements a subset of the Trusted Board Boot
+Requirements (TBBR) Platform Design Document (PDD) [1] for ARM reference
+platforms. The TBB sequence starts when the platform is powered on and runs up
+to the stage where it hands-off control to firmware running in the normal
+world in DRAM. This is the cold boot path.
+
+The ARM Trusted Firmware also implements the Power State Coordination Interface
+([PSCI]) PDD [2] as a runtime service. PSCI is the interface from normal world
+software to firmware implementing power management use-cases (for example,
+secondary CPU boot, hotplug and idle). Normal world software can access ARM
+Trusted Firmware runtime services via the ARM SMC (Secure Monitor Call)
+instruction. The SMC instruction must be used as mandated by the [SMC Calling
+Convention PDD][SMCCC] [3].
+
+
+2.  Using the Software
+----------------------
+
+### Host machine requirements
+
+The minimum recommended machine specification is an Intel Core 2 Duo running at
+2.6GHz or above, with 12GB of RAM. For best performance, use a machine with an
+Intel Core i7 (Sandy Bridge) and 16GB of RAM.
+
+
+### Tools
+
+The following tools are required to use the ARM Trusted Firmware:
+
+*   Ubuntu desktop OS. The software has been tested on Ubuntu 12.04.02 (64-bit).
+    The following packages are also needed:
+
+*   `ia32-libs` package.
+
+*   `make` and `uuid-dev` packages for building UEFI.
+
+*   `bc` and `ncurses-dev` packages for building Linux.
+
+*   Baremetal GNU GCC tools. Verified packages can be downloaded from [Linaro]
+    [Linaro Toolchain]. The rest of this document assumes that the
+    `gcc-linaro-aarch64-none-elf-4.8-2013.09-01_linux.tar.xz` tools are used.
+
+        wget http://releases.linaro.org/13.09/components/toolchain/binaries/gcc-linaro-aarch64-none-elf-4.8-2013.09-01_linux.tar.xz
+        tar -xf gcc-linaro-aarch64-none-elf-4.8-2013.09-01_linux.tar.xz
+
+*   The Device Tree Compiler (DTC) included with Linux kernel 3.12-rc4 is used
+    to build the Flattened Device Tree (FDT) source files (`.dts` files)
+    provided with this release.
+
+*   (Optional) For debugging, ARM [Development Studio 5 (DS-5)][DS-5] v5.16.
+
+
+### Building the Trusted Firmware
+
+To build the software for the Base FVPs, follow these steps:
+
+1.  Clone the ARM Trusted Firmware repository from Github:
+
+        git clone https://github.com/ARM-software/arm-trusted-firmware.git
+
+2.  Change to the trusted firmware directory:
+
+        cd arm-trusted-firmware
+
+3.  Set the compiler path and build:
+
+        CROSS_COMPILE=<path/to>/aarch64-none-elf- make
+
+    By default this produces a release version of the build. To produce a debug
+    version instead, refer to the "Debugging options" section below.
+
+    The build creates ELF and raw binary files in the current directory. It
+    generates the following boot loader binary files from the ELF files:
+
+    *   `bl1.bin`
+    *   `bl2.bin`
+    *   `bl31.bin`
+
+4.  Copy the above 3 boot loader binary files to the directory where the FVPs
+    are launched from. Symbolic links of the same names may be created instead.
+
+5.  (Optional) To clean the build directory use
+
+        make distclean
+
+
+#### Debugging options
+
+To compile a debug version and make the build more verbose use
+
+    CROSS_COMPILE=<path/to>/aarch64-none-elf- make DEBUG=1 V=1
+
+AArch64 GCC uses DWARF version 4 debugging symbols by default. Some tools (for
+example DS-5) might not support this and may need an older version of DWARF
+symbols to be emitted by GCC. This can be achieved by using the
+`-gdwarf-<version>` flag, with the version being set to 2 or 3. Setting the
+version to 2 is recommended for DS-5 versions older than 5.16.
+
+When debugging logic problems it might also be useful to disable all compiler
+optimizations by using `-O0`.
+
+NOTE: Using `-O0` could cause output images to be larger and base addresses
+might need to be recalculated (see the later memory layout section).
+
+Extra debug options can be passed to the build system by setting `CFLAGS`:
+
+    CFLAGS='-O0 -gdwarf-2' CROSS_COMPILE=<path/to>/aarch64-none-elf- make DEBUG=1 V=1
+
+
+### Obtaining the normal world software
+
+#### Obtaining UEFI
+
+Download an archive of the [EDK2 (EFI Development Kit 2) source code][EDK2]
+supporting the Base FVPs. EDK2 is an open source implementation of the UEFI
+specification:
+
+    wget http://sourceforge.net/projects/edk2/files/ARM/aarch64-uefi-rev14582.tgz/download -O aarch64-uefi-rev14582.tgz
+    tar -xf aarch64-uefi-rev14582.tgz
+
+To build the software for the Base FVPs, follow these steps:
+
+1.  Change into the unpacked EDK2 source directory
+
+        cd uefi
+
+2.  Copy build config templates to local workspace
+
+        export EDK_TOOLS_PATH=$(pwd)/BaseTools
+        . edksetup.sh $(pwd)/BaseTools/
+
+3.  Rebuild EDK2 host tools
+
+        make -C "$EDK_TOOLS_PATH" clean
+        make -C "$EDK_TOOLS_PATH"
+
+4.  Build the software
+
+        AARCH64GCC_TOOLS_PATH=<full-path-to-aarch64-gcc>/bin/      \
+        build -v -d3 -a AARCH64 -t ARMGCC                          \
+        -p ArmPlatformPkg/ArmVExpressPkg/ArmVExpress-FVP-AArch64.dsc
+
+    The EDK2 binary for use with the ARM Trusted Firmware can then be found
+    here:
+
+        Build/ArmVExpress-FVP-AArch64/DEBUG_ARMGCC/FV/FVP_AARCH64_EFI.fd
+
+This will build EDK2 for the default settings as used by the FVPs.
+
+To boot Linux using a VirtioBlock file-system, the command line passed from EDK2
+to the Linux kernel must be modified as described in the "Obtaining a
+File-system" section below.
+
+If legacy GICv2 locations are used, the EDK2 platform description must be
+updated. This is required as EDK2 does not support probing for the GIC location.
+To do this, open the `ArmPlatformPkg/ArmVExpressPkg/ArmVExpress-FVP-AArch64.dsc`
+file for editing and make the modifications as below. Rebuild EDK2 after doing a
+`clean`.
+
+    gArmTokenSpaceGuid.PcdGicDistributorBase|0x2C001000
+    gArmTokenSpaceGuid.PcdGicInterruptInterfaceBase|0x2C002000
+
+The EDK2 binary `FVP_AARCH64_EFI.fd` should be loaded into FVP FLASH0 via model
+parameters as described in the "Running the Software" section below.
+
+#### Obtaining a Linux kernel
+
+The software has been verified using Linux kernel version 3.12-rc4. Patches
+have been applied to the kernel in order to enable CPU hotplug.
+
+Preparing a Linux kernel for use on the FVPs with hotplug support can
+be done as follows (GICv2 support only):
+
+1.  Clone Linux:
+
+        git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+
+    The CPU hotplug features are not yet included in the mainline kernel. To use
+    these, add the patches from Mark Rutland's kernel, based on Linux 3.12-rc4:
+
+        cd linux
+        git remote add -f --tags markr git://linux-arm.org/linux-mr.git
+        git checkout -b hotplug arm64-cpu-hotplug-20131023
+
+2.  Build with the Linaro GCC tools.
+
+        # in linux/
+        make mrproper
+        make ARCH=arm64 defconfig
+
+        # Enable Hotplug
+        make ARCH=arm64 menuconfig
+        #   Kernel Features ---> [*] Support for hot-pluggable CPUs
+
+        CROSS_COMPILE=/path/to/aarch64-none-elf- make -j6 ARCH=arm64
+
+3.  Copy the Linux image `arch/arm64/boot/Image` to the working directory from
+    where the FVP is launched. A symbolic link may also be created instead.
+
+#### Obtaining the Flattened Device Trees
+
+Depending on the FVP configuration and Linux configuration used, different
+FDT files are required. FDTs for the Base FVP can be found in the Trusted
+Firmware source directory under `fdts`.
+
+*   `fvp-base-gicv2-psci.dtb`
+
+    (Default) For use with both AEMv8 and Cortex-A57-A53 Base FVPs with
+    default memory map configuration.
+
+*   `fvp-base-gicv2legacy-psci.dtb`
+
+    For use with both AEMv8 and Cortex-A57-A53 Base FVPs with legacy GICv2
+    memory map configuration.
+
+*   `fvp-base-gicv3-psci.dtb`
+
+    For use with AEMv8 Base FVP with default memory map configuration and
+    Linux GICv3 support.
+
+Copy the chosen FDT blob as `fdt.dtb` to the directory from which the FVP
+is launched. A symbolic link may also be created instead.
+
+#### Obtaining a File-system
+
+To prepare a Linaro LAMP based Open Embedded file-system, the following
+instructions can be used as a guide. The file-system can be provided to Linux
+via VirtioBlock or as a RAM-disk. Both methods are described below.
+
+##### Prepare VirtioBlock
+
+To prepare a VirtioBlock file-system, do the following:
+
+1.  Download and unpack the disk image.
+
+    NOTE: The unpacked disk image grows to 2 GiB in size.
+
+        wget http://releases.linaro.org/13.09/openembedded/aarch64/vexpress64-openembedded_lamp-armv8_20130927-7.img.gz
+        gunzip vexpress64-openembedded_lamp-armv8_20130927-7.img.gz
+
+2.  Make sure the Linux kernel has Virtio support enabled using
+    `make ARCH=arm64 menuconfig`.
+
+        Device Drivers  ---> Virtio drivers  ---> <*> Platform bus driver for memory mapped virtio devices
+        Device Drivers  ---> [*] Block devices  --->  <*> Virtio block driver
+        File systems    ---> <*> The Extended 4 (ext4) filesystem
+
+    If some of these configurations are missing, enable them, save the kernel
+    configuration, then rebuild the kernel image using the instructions provided
+    in the section "Obtaining a Linux kernel".
+
+3.  Change the Kernel command line to include `root=/dev/vda2`. This can either
+    be done in the EDK2 boot menu or in the platform file. Editing the platform
+    file and rebuilding EDK2 will make the change persist. To do this:
+
+    1.  In EDK, edit the following file:
+
+            ArmPlatformPkg/ArmVExpressPkg/ArmVExpress-FVP-AArch64.dsc
+
+    2.  Add `root=/dev/vda2` to:
+
+            gArmPlatformTokenSpaceGuid.PcdDefaultBootArgument|"<Other default options>"
+
+    3.  Remove the entry:
+
+            gArmPlatformTokenSpaceGuid.PcdDefaultBootInitrdPath|""
+
+    4.  Rebuild EDK2 (see "Obtaining UEFI" section above).
+
+4.  The file-system image file should be provided to the model environment by
+    passing it the correct command line option. In the Base FVP the following
+    option should be provided in addition to the ones described in the
+    "Running the software" section below.
+
+    NOTE: A symbolic link to this file cannot be used with the FVP; the path
+    to the real file must be provided.
+
+        -C bp.virtioblockdevice.image_path="<path/to/>vexpress64-openembedded_lamp-armv8_20130927-7.img"
+
+5.  Ensure that the FVP doesn't output any error messages. If the following
+    error message is displayed:
+
+        ERROR: BlockDevice: Failed to open "vexpress64-openembedded_lamp-armv8_20130927-7.img"!
+
+    then make sure the path to the file-system image in the model parameter is
+    correct and that read permission is correctly set on the file-system image
+    file.
+
+##### Prepare RAM-disk
+
+NOTE: The RAM-disk option does not currently work with the Linux kernel version
+described above; use the VirtioBlock method instead. For further information
+please see the "Known issues" section in the [Change Log].
+
+To prepare a RAM-disk file-system, do the following:
+
+1.  Download the file-system image:
+
+        wget http://releases.linaro.org/13.09/openembedded/aarch64/linaro-image-lamp-genericarmv8-20130912-487.rootfs.tar.gz
+
+2.  Modify the Linaro image:
+
+        # Prepare for use as RAM-disk. Normally use MMC, NFS or VirtioBlock.
+        # Be careful, otherwise you could damage your host file-system.
+        mkdir tmp; cd tmp
+        sudo sh -c "zcat ../linaro-image-lamp-genericarmv8-20130912-487.rootfs.tar.gz | cpio -id"
+        sudo ln -s sbin/init .
+        sudo ln -s S35mountall.sh etc/rcS.d/S03mountall.sh
+        sudo sh -c "echo 'devtmpfs /dev devtmpfs mode=0755,nosuid 0 0' >> etc/fstab"
+        sudo sh -c "find . | cpio --quiet -H newc -o | gzip -3 -n > ../filesystem.cpio.gz"
+        cd ..
+
+3.  Copy the resultant `filesystem.cpio.gz` to the directory where the FVP is
+    launched from. A symbolic link may also be created instead.
+
+
+### Running the software
+
+This release of the ARM Trusted Firmware has been tested on the following ARM
+FVPs (64-bit versions only).
+
+*   `FVP_Base_AEMv8A-AEMv8A` (Version 5.1 build 8)
+*   `FVP_Base_Cortex-A57x4-A53x4` (Version 5.1 build 8)
+
+Please refer to the FVP documentation for a detailed description of the model
+parameter options. A brief description of the important ones that affect the
+ARM Trusted Firmware and normal world software behavior is provided below.
+
+#### Running on the AEMv8 Base FVP
+
+The following `FVP_Base_AEMv8A-AEMv8A` parameters should be used to boot Linux
+with 8 CPUs using the ARM Trusted Firmware.
+
+NOTE: Using `cache_state_modelled=1` makes booting very slow. The software will
+still work (and run much faster) without this option but this will hide any
+cache maintenance defects in the software.
+
+NOTE: Using the `-C bp.virtioblockdevice.image_path` parameter is not necessary
+if a Linux RAM-disk file-system is used (see the "Obtaining a File-system"
+section above).
+
+    FVP_Base_AEMv8A-AEMv8A                              \
+    -C pctl.startup=0.0.0.0                             \
+    -C bp.secure_memory=0                               \
+    -C cluster0.NUM_CORES=4                             \
+    -C cluster1.NUM_CORES=4                             \
+    -C cache_state_modelled=1                           \
+    -C bp.pl011_uart0.untimed_fifos=1                   \
+    -C bp.secureflashloader.fname=<path to bl1.bin>     \
+    -C bp.flashloader0.fname=<path to UEFI binary>      \
+    -C bp.virtioblockdevice.image_path="<path/to/>vexpress64-openembedded_lamp-armv8_20130927-7.img"
+
+#### Running on the Cortex-A57-A53 Base FVP
+
+The following `FVP_Base_Cortex-A57x4-A53x4` model parameters should be used to
+boot Linux with 8 CPUs using the ARM Trusted Firmware.
+
+NOTE: Using `cache_state_modelled=1` makes booting very slow. The software will
+still work (and run much faster) without this option but this will hide any
+cache maintenance defects in the software.
+
+NOTE: Using the `-C bp.virtioblockdevice.image_path` parameter is not necessary
+if a Linux RAM-disk file-system is used (see the "Obtaining a File-system"
+section above).
+
+    FVP_Base_Cortex-A57x4-A53x4                         \
+    -C pctl.startup=0.0.0.0                             \
+    -C bp.secure_memory=0                               \
+    -C cache_state_modelled=1                           \
+    -C bp.pl011_uart0.untimed_fifos=1                   \
+    -C bp.secureflashloader.fname=<path to bl1.bin>     \
+    -C bp.flashloader0.fname=<path to UEFI binary>      \
+    -C bp.virtioblockdevice.image_path="<path/to/>vexpress64-openembedded_lamp-armv8_20130927-7.img"
+
+### Configuring the GICv2 memory map
+
+The Base FVP models support GICv2 with the default model parameters at the
+following addresses.
+
+    GICv2 Distributor Interface     0x2f000000
+    GICv2 CPU Interface             0x2c000000
+    GICv2 Virtual CPU Interface     0x2c010000
+    GICv2 Hypervisor Interface      0x2c02f000
+
+The models can be configured to support GICv2 at addresses corresponding to the
+legacy (Versatile Express) memory map as follows.
+
+    GICv2 Distributor Interface     0x2c001000
+    GICv2 CPU Interface             0x2c002000
+    GICv2 Virtual CPU Interface     0x2c004000
+    GICv2 Hypervisor Interface      0x2c006000
+
+The choice of memory map is reflected in the build field (bits[15:12]) in the
+`SYS_ID` register (Offset `0x0`) in the Versatile Express System registers
+memory map (`0x1c010000`).
+
+*   `SYS_ID.Build[15:12]`
+
+    `0x1` corresponds to the presence of the default GICv2 memory map. This is
+    the default value.
+
+*   `SYS_ID.Build[15:12]`
+
+    `0x0` corresponds to the presence of the Legacy VE GICv2 memory map. This
+    value can be configured as described in the next section.
+
+NOTE: If the legacy VE GICv2 memory map is used, then the corresponding FDT and
+UEFI images should be used.
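+
+As an illustration of how firmware can use this field, the following minimal C
+sketch reads `SYS_ID` and selects the GIC Distributor base address accordingly.
+The helper and macro names are illustrative assumptions; the addresses and the
+field encoding are those listed above.
+
+    #include <stdint.h>
+
+    #define VE_SYSREGS_BASE     0x1c010000u
+    #define SYS_ID_OFFSET       0x0u
+    #define SYS_ID_BUILD_SHIFT  12
+    #define SYS_ID_BUILD_MASK   0xfu
+
+    #define GICD_BASE_DEFAULT   0x2f000000u     /* SYS_ID.Build == 0x1 */
+    #define GICD_BASE_LEGACY_VE 0x2c001000u     /* SYS_ID.Build == 0x0 */
+
+    static inline uint32_t example_mmio_read_32(uintptr_t addr)
+    {
+            return *(volatile uint32_t *)addr;
+    }
+
+    /* Pick the GIC Distributor base address from the SYS_ID build field. */
+    static uintptr_t example_select_gicd_base(void)
+    {
+            uint32_t sys_id = example_mmio_read_32(VE_SYSREGS_BASE + SYS_ID_OFFSET);
+            uint32_t build  = (sys_id >> SYS_ID_BUILD_SHIFT) & SYS_ID_BUILD_MASK;
+
+            return (build == 0x1) ? GICD_BASE_DEFAULT : GICD_BASE_LEGACY_VE;
+    }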
+
+#### Configuring AEMv8 Base FVP for legacy VE memory map
+
+The following parameters configure the GICv2 memory map in legacy VE mode:
+
+NOTE: Using the `-C bp.virtioblockdevice.image_path` parameter is not necessary
+if a Linux RAM-disk file-system is used (see the "Obtaining a File-system"
+section above).
+
+    FVP_Base_AEMv8A-AEMv8A                              \
+    -C cluster0.gic.GICD-offset=0x1000                  \
+    -C cluster0.gic.GICC-offset=0x2000                  \
+    -C cluster0.gic.GICH-offset=0x4000                  \
+    -C cluster0.gic.GICH-other-CPU-offset=0x5000        \
+    -C cluster0.gic.GICV-offset=0x6000                  \
+    -C cluster0.gic.PERIPH-size=0x8000                  \
+    -C cluster1.gic.GICD-offset=0x1000                  \
+    -C cluster1.gic.GICC-offset=0x2000                  \
+    -C cluster1.gic.GICH-offset=0x4000                  \
+    -C cluster1.gic.GICH-other-CPU-offset=0x5000        \
+    -C cluster1.gic.GICV-offset=0x6000                  \
+    -C cluster1.gic.PERIPH-size=0x8000                  \
+    -C gic_distributor.GICD-alias=0x2c001000            \
+    -C bp.variant=0x0                                   \
+    -C bp.virtioblockdevice.image_path="<path/to/>vexpress64-openembedded_lamp-armv8_20130927-7.img"
+
+The `bp.variant=0x0` parameter sets the build variant field of the `SYS_ID`
+register to `0x0`. This allows the ARM Trusted Firmware to detect the legacy VE
+memory map while configuring the GIC.
+
+#### Configuring Cortex-A57-A53 Base FVP for legacy VE memory map
+
+Configuration of the GICv2 as per the legacy VE memory map is controlled by
+the following parameter. In this case, separate configuration of the `SYS_ID`
+register is not required.
+
+NOTE: Using the `-C bp.virtioblockdevice.image_path` parameter is not necessary
+if a Linux RAM-disk file-system is used (see the "Obtaining a File-system"
+section above).
+
+    FVP_Base_Cortex-A57x4-A53x4                         \
+    -C legacy_gicv2_map=1                               \
+    -C bp.virtioblockdevice.image_path="<path/to/>vexpress64-openembedded_lamp-armv8_20130927-7.img"
+
+3.  Firmware Design
+-------------------
+
+The cold boot path starts when the platform is physically turned on. One of
+the CPUs released from reset is chosen as the primary CPU, and the remaining
+CPUs are considered secondary CPUs. The primary CPU is chosen through
+platform-specific means. The cold boot path is mainly executed by the primary
+CPU, other than essential CPU initialization executed by all CPUs. The
+secondary CPUs are kept in a safe platform-specific state until the primary
+CPU has performed enough initialization to boot them.
+
+The cold boot path in this implementation of the ARM Trusted Firmware is divided
+into three stages (in order of execution):
+
+*   Boot Loader stage 1 (BL1)
+*   Boot Loader stage 2 (BL2)
+*   Boot Loader stage 3 (BL3-1). The '1' distinguishes this from other 3rd level
+    boot loader stages.
+
+The ARM Fixed Virtual Platforms (FVPs) provide trusted ROM, trusted SRAM and
+trusted DRAM regions. Each boot loader stage uses one or more of these
+memories for its code and data.
+
+
+### BL1
+
+This stage begins execution from the platform's reset vector in trusted ROM at
+EL3. BL1 code starts at `0x00000000` (trusted ROM) in the FVP memory map. The
+BL1 data section is placed at the start of trusted SRAM, `0x04000000`. The
+functionality implemented by this stage is as follows.
+
+#### Determination of boot path
+
+Whenever a CPU is released from reset, BL1 needs to distinguish between a warm
+boot and a cold boot. This is done using a platform-specific mechanism. The
+ARM FVPs implement a simple power controller at `0x1c100000`. Its `PSYSR`
+register (offset `0x10`) is used to distinguish between a cold and a warm boot;
+this information is contained in the `PSYSR.WK[25:24]` field. Additionally, a
+per-CPU mailbox is maintained in trusted DRAM (`0x00600000`), to which BL1
+writes an entrypoint. Each CPU jumps to this entrypoint upon warm boot. During
+cold boot, BL1 places the secondary CPUs in a safe platform-specific state while
+the primary CPU executes the remaining cold boot path as described in the
+following sections.
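+
+The following minimal C sketch illustrates this check, using the power
+controller addresses described above and mirroring the FVP power controller
+driver in this release. The function name is hypothetical, and the assumption
+that a zero `WK` field denotes a cold reset is made for this sketch only.
+
+    #include <stdint.h>
+
+    #define PWRC_BASE       0x1c100000u
+    #define PSYSR_OFF       0x10u
+    #define PSYSR_WK_SHIFT  24
+    #define PSYSR_WK_MASK   0x3u
+
+    static inline void example_mmio_write_32(uintptr_t addr, uint32_t val)
+    {
+            *(volatile uint32_t *)addr = val;
+    }
+
+    static inline uint32_t example_mmio_read_32(uintptr_t addr)
+    {
+            return *(volatile uint32_t *)addr;
+    }
+
+    /* Return non-zero for a warm boot of the CPU identified by 'mpidr'. */
+    static int example_is_warm_boot(uint32_t mpidr)
+    {
+            /* PSYSR is indexed by writing the MPIDR of the CPU of interest. */
+            example_mmio_write_32(PWRC_BASE + PSYSR_OFF, mpidr);
+
+            uint32_t psysr = example_mmio_read_32(PWRC_BASE + PSYSR_OFF);
+
+            return ((psysr >> PSYSR_WK_SHIFT) & PSYSR_WK_MASK) != 0;
+    }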
+
+#### Architectural initialization
+
+BL1 performs minimal architectural initialization as follows.
+
+*   Exception vectors
+
+    BL1 sets up simple exception vectors for both synchronous and asynchronous
+    exceptions. The default behavior upon receiving an exception is to set a
+    status code. In the case of the FVP this code is written to the Versatile
+    Express System LED register in the following format:
+
+        SYS_LED[0]   - Security state (Secure=0/Non-Secure=1)
+        SYS_LED[2:1] - Exception Level (EL3=0x3, EL2=0x2, EL1=0x1, EL0=0x0)
+        SYS_LED[7:3] - Exception Class (Sync/Async & origin). The values for
+                       each exception class are:
+
+        0x0 : Synchronous exception from Current EL with SP_EL0
+        0x1 : IRQ exception from Current EL with SP_EL0
+        0x2 : FIQ exception from Current EL with SP_EL0
+        0x3 : System Error exception from Current EL with SP_EL0
+        0x4 : Synchronous exception from Current EL with SP_ELx
+        0x5 : IRQ exception from Current EL with SP_ELx
+        0x6 : FIQ exception from Current EL with SP_ELx
+        0x7 : System Error exception from Current EL with SP_ELx
+        0x8 : Synchronous exception from Lower EL using aarch64
+        0x9 : IRQ exception from Lower EL using aarch64
+        0xa : FIQ exception from Lower EL using aarch64
+        0xb : System Error exception from Lower EL using aarch64
+        0xc : Synchronous exception from Lower EL using aarch32
+        0xd : IRQ exception from Lower EL using aarch32
+        0xe : FIQ exception from Lower EL using aarch32
+        0xf : System Error exception from Lower EL using aarch32
+
+    A write to the LED register is reflected in the System LEDs (S6LED0..7) in
+    the CLCD window of the FVP. This simple default behavior is sufficient
+    because this boot loader stage does not expect to receive any exceptions
+    other than the SMC exception.
+    For the latter, BL1 installs a simple stub. The stub expects to receive
+    only a single type of SMC (determined by its function ID in the general
+    purpose register `X0`). This SMC is raised by BL2 to make BL1 pass control
+    to BL3-1 (loaded by BL2) at EL3. Any other SMC leads to an assertion
+    failure.
+
+*   MMU setup
+
+    BL1 sets up EL3 memory translation by creating page tables to cover the
+    first 4GB of physical address space. This covers all the memories and
+    peripherals needed by BL1.
+
+*   Control register setup
+    -   `SCTLR_EL3`. Instruction cache is enabled by setting the `SCTLR_EL3.I`
+        bit. Alignment and stack alignment checking is enabled by setting the
+        `SCTLR_EL3.A` and `SCTLR_EL3.SA` bits. Exception endianness is set to
+        little-endian by clearing the `SCTLR_EL3.EE` bit.
+
+    -   `CPUECTLR`. When the FVP includes a model of a specific ARM processor
+        implementation (for example A57 or A53), then intra-cluster coherency is
+        enabled by setting the `CPUECTLR.SMPEN` bit. The AEMv8 Base FVP is
+        inherently coherent so does not implement `CPUECTLR`.
+
+    -   `SCR`. Use of the HVC instruction from EL1 is enabled by setting the
+        `SCR.HCE` bit. FIQ exceptions are configured to be taken in EL3 by
+        setting the `SCR.FIQ` bit. The register width of the next lower
+        exception level is set to AArch64 by setting the `SCR.RW` bit.
+
+    -   `CPTR_EL3`. Accesses to the `CPACR` from EL1 or EL2, or the `CPTR_EL2`
+        from EL2 are configured to not trap to EL3 by clearing the
+        `CPTR_EL3.TCPAC` bit. Instructions that access the registers associated
+        with Floating Point and Advanced SIMD execution are configured to not
+        trap to EL3 by clearing the `CPTR_EL3.TFP` bit.
+
+    -   `CNTFRQ_EL0`. The `CNTFRQ_EL0` register is programmed with the base
+        frequency of the system counter, which is retrieved from the first entry
+        in the frequency modes table.
+
+    -   Generic Timer. The system level implementation of the generic timer is
+        enabled through the memory mapped interface.
+
+#### Platform initialization
+
+BL1 enables issuing of snoop and DVM (Distributed Virtual Memory) requests from
+the CCI-400 slave interface corresponding to the cluster that includes the
+primary CPU. BL1 also initializes UART0 (PL011 console), which enables access to
+the `printf` family of functions.
+
+#### BL2 image load and execution
+
+BL1 execution continues as follows:
+
+1.  BL1 determines the amount of free trusted SRAM memory available by
+    calculating the extent of its own data section, which also resides in
+    trusted SRAM. BL1 loads a BL2 raw binary image through semi-hosting, at a
+    platform-specific base address. The filename of the BL2 raw binary image on
+    the host file system must be `bl2.bin`. If the BL2 image file is not present
+    or if there is not enough free trusted SRAM, the following error message
+    is printed:
+
+        "Failed to load boot loader stage 2 (BL2) firmware."
+
+    If the load is successful, BL1 updates the limits of the remaining free
+    trusted SRAM. It also populates information about the amount of trusted
+    SRAM used by the BL2 image. The exact load location of the image is
+    provided as a base address in the platform header. Further description of
+    the memory layout can be found later in this document.
+
+2.  BL1 prints the following string from the primary CPU to indicate successful
+    execution of the BL1 stage:
+
+        "Booting trusted firmware boot loader stage 1"
+
+3.  BL1 passes control to the BL2 image at Secure EL1, starting from its load
+    address.
+
+4.  BL1 also passes information about the amount of trusted SRAM used and
+    available for use. This information is populated at a platform-specific
+    memory address.
+
+
+### BL2
+
+BL1 loads and passes control to BL2 at Secure EL1. BL2 is linked against and
+loaded at a platform-specific base address (more information can be found later
+in this document). The functionality implemented by BL2 is as follows.
+
+#### Architectural initialization
+
+BL2 performs minimal architectural initialization required for subsequent
+stages of the ARM Trusted Firmware and normal world software. It sets up
+Secure EL1 memory translation by creating page tables to address the first 4GB
+of the physical address space in a similar way to BL1. EL1 and EL0 are given
+access to Floating Point & Advanced SIMD registers by clearing the `CPACR.FPEN`
+bits.
+
+#### Platform initialization
+
+BL2 does not perform any platform initialization that affects subsequent
+stages of the ARM Trusted Firmware or normal world software. It copies the
+information regarding the trusted SRAM populated by BL1 using a
+platform-specific mechanism. It also calculates the limits of DRAM (main memory)
+to determine whether there is enough space to load the normal world software
+images. A platform defined base address is used to specify the load address for
+the BL3-1 image.
+
+#### Normal world image load
+
+BL2 loads a rich boot firmware image (UEFI). The image executes in the normal
+world. BL2 relies on BL3-1 to pass control to the normal world software image it
+loads. Hence, BL2 populates a platform-specific area of memory with the
+entrypoint and Current Program Status Register (`CPSR`) of the normal world
+software image. The entrypoint is the load address of the normal world software
+image. The `CPSR` is determined as specified in Section 5.13 of the [PSCI PDD]
+[PSCI]. This information is passed to BL3-1.
+
+##### UEFI firmware load
+
+By default, BL2 assumes the UEFI image is present at the base of NOR flash0
+(`0x08000000`), and arranges for BL3-1 to pass control to that location. As
+mentioned earlier, BL2 populates platform-specific memory with the entrypoint
+and `CPSR` of the UEFI image.
+
+#### BL3-1 image load and execution
+
+BL2 execution continues as follows:
+
+1.  BL2 loads the BL3-1 image into a platform-specific address in trusted SRAM.
+    This is done using semi-hosting. The image is identified by the file
+    `bl31.bin` on the host file-system. If there is not enough memory to load
+    the image, or the image is missing, this leads to an assertion failure. If
+    the BL3-1 image loads successfully, BL2 updates the amount of trusted SRAM
+    used and available for use by BL3-1. This information is populated at a
+    platform-specific memory address.
+
+2.  BL2 passes control back to BL1 by raising an SMC, providing BL1 with the
+    BL3-1 entrypoint. The exception is handled by the SMC exception handler
+    installed by BL1.
+
+3.  BL1 turns off the MMU and flushes the caches. It clears the
+    `SCTLR_EL3.M/I/C` bits, flushes the data cache to the point of coherency
+    and invalidates the TLBs.
+
+4.  BL1 passes control to BL3-1 at the specified entrypoint at EL3.
+
+
+### BL3-1
+
+The image for this stage is loaded by BL2 and BL1 passes control to BL3-1 at
+EL3. BL3-1 executes solely in trusted SRAM. BL3-1 is linked against and
+loaded at a platform-specific base address (more information can be found later
+in this document). The functionality implemented by BL3-1 is as follows.
+
+#### Architectural initialization
+
+Currently, BL3-1 performs a similar architectural initialization to BL1 as
+far as system register settings are concerned. Since BL1 code resides in ROM,
+architectural initialization in BL3-1 allows override of any previous
+initialization done by BL1. BL3-1 creates page tables to address the first
+4GB of physical address space and initializes the MMU accordingly. It replaces
+the exception vectors populated by BL1 with its own. BL3-1 exception vectors
+signal error conditions in the same way as BL1 does if an unexpected
+exception is raised. They implement more elaborate support for handling SMCs
+since this is the only mechanism to access the runtime services implemented by
+BL3-1 (PSCI for example). BL3-1 checks each SMC for validity as specified by
+the [SMC calling convention PDD][SMCCC] before passing control to the required
+SMC handler routine.
+
+#### Platform initialization
+
+BL3-1 performs detailed platform initialization, which enables normal world
+software to function correctly. It also retrieves entrypoint information for
+the normal world software image loaded by BL2 from the platform defined
+memory address populated by BL2.
+
+*   GICv2 initialization:
+
+    -   Enable group0 interrupts in the GIC CPU interface.
+    -   Configure group0 interrupts to be asserted as FIQs.
+    -   Disable the legacy interrupt bypass mechanism.
+    -   Configure the priority mask register to allow interrupts of all
+        priorities to be signaled to the CPU interface.
+    -   Mark SGIs 8-15, the secure physical timer interrupt (#29) and the
+        trusted watchdog interrupt (#56) as group0 (secure).
+    -   Target the trusted watchdog interrupt to CPU0.
+    -   Enable these group0 interrupts in the GIC distributor.
+    -   Configure all other interrupts as group1 (non-secure).
+    -   Enable signaling of group0 interrupts in the GIC distributor.
+
+*   GICv3 initialization:
+
+    If a GICv3 implementation is available in the platform, BL3-1 initializes
+    the GICv3 in GICv2 emulation mode with settings as described for GICv2
+    above.
+
+*   Power management initialization:
+
+    BL3-1 implements a state machine to track CPU and cluster state. The state
+    can be one of `OFF`, `ON_PENDING`, `SUSPEND` or `ON`. All secondary CPUs are
+    initially in the `OFF` state. The cluster that the primary CPU belongs to is
+    `ON`; any other cluster is `OFF`. BL3-1 initializes the data structures that
+    implement the state machine, including the locks that protect them. BL3-1
+    accesses the state of a CPU or cluster immediately after reset and before
+    the MMU is enabled in the warm boot path. It is not currently possible to
+    use 'exclusive'-based spinlocks, so BL3-1 uses locks based on Lamport's
+    Bakery algorithm instead. BL3-1 allocates these locks in device memory;
+    they are accessible irrespective of MMU state. A short usage sketch of
+    these locks is shown after this list.
+
+*   Runtime services initialization:
+
+    The only runtime service implemented by BL3-1 is PSCI. The complete PSCI API
+    is not yet implemented. The following functions are currently implemented:
+
+    -   `PSCI_VERSION`
+    -   `CPU_OFF`
+    -   `CPU_ON`
+    -   `AFFINITY_INFO`
+
+    The `CPU_ON` and `CPU_OFF` functions implement the warm boot path in ARM
+    Trusted Firmware. These are the only functions which have been tested.
+    `AFFINITY_INFO` & `PSCI_VERSION` are present but completely untested in
+    this release.
+
+    Unsupported PSCI functions that can return, return the `NOT_SUPPORTED`
+    (`-1`) error code. Other unsupported PSCI functions that don't return,
+    signal an assertion failure.
+
+    BL3-1 returns the error code `-1` if an SMC is raised for any other runtime
+    service. This behavior is mandated by the [SMC calling convention PDD]
+    [SMCCC].
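+
+The following fragment is a minimal usage sketch of these Bakery locks,
+modelled on the FVP power controller driver in this release; the lock and
+function names here are illustrative only.
+
+    #include <bakery_lock.h>
+
+    /*
+     * Illustrative lock protecting some shared power management state.
+     * Placing it in the 'tzfw_coherent_mem' section keeps it usable while
+     * the MMU and caches are disabled, as described above.
+     */
+    static bakery_lock example_lock
+            __attribute__ ((section("tzfw_coherent_mem")));
+    static unsigned int example_shared_counter;
+
+    void example_update_shared_state(unsigned long mpidr)
+    {
+            bakery_lock_get(mpidr, &example_lock);
+            example_shared_counter++;       /* critical section */
+            bakery_lock_release(mpidr, &example_lock);
+    }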
+
+
+### Normal world software execution
+
+BL3-1 uses the entrypoint information provided by BL2 to jump to the normal
+world software image at the highest available Exception Level (EL2 if
+available, otherwise EL1).
+
+
+### Memory layout on Base FVP ###
+
+The current implementation of the image loader has some limitations. It is
+designed to load images dynamically, at a load address chosen to minimize memory
+fragmentation. The chosen image location can be either at the top or the bottom
+of free memory. However, until this feature is fully functional, the code also
+contains support for loading images at a link-time fixed address. The code that
+dynamically calculates the load address is bypassed and the load address is
+specified statically by the platform.
+
+BL1 is always loaded at address `0x0`. BL2 and BL3-1 are loaded at specified
+locations in Trusted SRAM. The lack of dynamic image loader support means these
+load addresses must currently be adjusted as the code grows. The individual
+images must be linked against their ultimate runtime locations.
+
+BL2 is loaded near the top of the Trusted SRAM. BL3-1 is loaded between BL1
+and BL2. As a general rule, the following constraints must always be enforced:
+
+1.  `BL2_MAX_ADDR <= (<Top of Trusted SRAM>)`
+2.  `BL31_BASE >= BL1_MAX_ADDR`
+3.  `BL2_BASE >= BL31_MAX_ADDR`
+
+Constraint 1 is enforced by BL2's linker script. If it is violated then the
+linker will report an error while building BL2 to indicate that it doesn't
+fit. For example:
+
+    aarch64-none-elf-ld: address 0x40400c8 of bl2.elf section `.bss' is not
+    within region `RAM'
+
+This error means that the BL2 base address needs to be moved down. Be sure that
+the new BL2 load address still obeys constraint 3.
+
+Constraints 2 & 3 must currently be checked by hand. To ensure they are
+enforced, first determine the maximum addresses used by BL1 and BL3-1. This can
+be deduced from the link map files of the different images.
+
+The BL1 link map file (`bl1.map`) gives these 2 values:
+
+*   `FIRMWARE_RAM_COHERENT_START`
+*   `FIRMWARE_RAM_COHERENT_SIZE`
+
+The maximum address used by BL1 can then be easily determined:
+
+    BL1_MAX_ADDR = FIRMWARE_RAM_COHERENT_START + FIRMWARE_RAM_COHERENT_SIZE
+
+The BL3-1 link map file (`bl31.map`) gives the following value:
+
+*   `BL31_DATA_STOP`. This is the maximum address used by BL3-1.
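+
+As a sketch of how constraints 1 to 3 could be turned into build-time checks,
+the fragment below uses the classic negative-array-size trick. The `*_MAX_ADDR`
+values are placeholders that must be taken from the relevant map files; the
+base and top addresses match the example layout shown below.
+
+    /* Placeholders: read the real values from bl1.map, bl31.map and bl2.map. */
+    #define TZRAM_TOP      0x04040000
+    #define BL2_BASE       0x0402D000
+    #define BL31_BASE      0x0400E000
+    #define BL1_MAX_ADDR   0x0400C000   /* placeholder */
+    #define BL31_MAX_ADDR  0x04028000   /* placeholder */
+    #define BL2_MAX_ADDR   0x0403F000   /* placeholder */
+
+    /* A negative array size here makes the build fail if a constraint breaks. */
+    typedef char check_bl2_fits[(BL2_MAX_ADDR <= TZRAM_TOP) ? 1 : -1];
+    typedef char check_bl31_base[(BL31_BASE >= BL1_MAX_ADDR) ? 1 : -1];
+    typedef char check_bl2_base[(BL2_BASE >= BL31_MAX_ADDR) ? 1 : -1];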
+
+The current implementation can result in wasted space because a simplified
+`meminfo` structure represents the extents of free memory. For example, to load
+BL2 at address `0x04020000`, the resulting memory layout should be as follows:
+
+    ------------ 0x04040000
+    |          |  <- Free space (1)
+    |----------|
+    |   BL2    |
+    |----------| BL2_BASE (0x0402D000)
+    |          |  <- Free space (2)
+    |----------|
+    |   BL1    |
+    ------------ 0x04000000
+
+In the current implementation, we need to specify whether BL2 is loaded at the
+top or bottom of the free memory. BL2 is top-loaded so in the example above,
+the free space (1) above BL2 is hidden, resulting in the following view of
+memory:
+
+    ------------ 0x04040000
+    |          |
+    |          |
+    |   BL2    |
+    |----------| BL2_BASE (0x0402D000)
+    |          |  <- Free space (2)
+    |----------|
+    |   BL1    |
+    ------------ 0x04000000
+
+BL3-1 is bottom-loaded above BL1. For example, if BL3-1 is bottom-loaded at
+`0x0400E000`, the memory layout should look like this:
+
+    ------------ 0x04040000
+    |          |
+    |          |
+    |   BL2    |
+    |----------| BL2_BASE (0x0402D000)
+    |          |  <- Free space (2)
+    |          |
+    |----------|
+    |          |
+    |   BL31   |
+    |----------|  BL31_BASE (0x0400E000)
+    |          |  <- Free space (3)
+    |----------|
+    |   BL1    |
+    ------------ 0x04000000
+
+But the free space (3) between BL1 and BL3-1 is wasted, resulting in the
+following view:
+
+    ------------ 0x04040000
+    |          |
+    |          |
+    |   BL2    |
+    |----------| BL2_BASE (0x0402D000)
+    |          |  <- Free space (2)
+    |          |
+    |----------|
+    |          |
+    |          |
+    |   BL31   | BL31_BASE (0x0400E000)
+    |          |
+    |----------|
+    |   BL1    |
+    ------------ 0x04000000
+
+
+### Code Structure ###
+
+Trusted Firmware code is logically divided between the three boot loader
+stages mentioned in the previous sections. The code is also divided into the
+following categories (present as directories in the source code):
+
+*   **Architecture specific.** This could be AArch32 or AArch64.
+*   **Platform specific.** Choice of architecture specific code depends upon
+    the platform.
+*   **Common code.** This is platform and architecture agnostic code.
+*   **Library code.** This code comprises functionality commonly used by all
+    other code.
+*   **Stage specific.** Code specific to a boot stage.
+*   **Drivers.**
+
+Each boot loader stage uses code from one or more of the above mentioned
+categories. Based upon the above, the code layout looks like this:
+
+    Directory    Used by BL1?    Used by BL2?    Used by BL3?
+    bl1          Yes             No              No
+    bl2          No              Yes             No
+    bl31         No              No              Yes
+    arch         Yes             Yes             Yes
+    plat         Yes             Yes             Yes
+    drivers      Yes             No              Yes
+    common       Yes             Yes             Yes
+    lib          Yes             Yes             Yes
+
+All assembler files have the `.S` extension. The linker source files for each
+boot stage have the `.ld.S` extension. These are processed by GCC to create the
+resultant `.ld` files used for linking.
+
+FDTs provide a description of the hardware platform and are used by the Linux
+kernel at boot time. These can be found in the `fdts` directory.
+
+
+4.  References
+--------------
+
+1.  Trusted Board Boot Requirements CLIENT PDD (ARM DEN 0006B-5). Available
+    under NDA through your ARM account representative.
+
+2.  [Power State Coordination Interface PDD (ARM DEN 0022B.b)][PSCI].
+
+3.  [SMC Calling Convention PDD (ARM DEN 0028A)][SMCCC].
+
+
+- - - - - - - - - - - - - - - - - - - - - - - - - -
+
+_Copyright (c) 2013 ARM Ltd. All rights reserved._
+
+
+[Change Log]: change-log.md
+
+[Linaro Toolchain]: http://releases.linaro.org/13.09/components/toolchain/binaries/
+[EDK2]:             http://sourceforge.net/projects/edk2/files/ARM/aarch64-uefi-rev14582.tgz/download
+[DS-5]:             http://www.arm.com/products/tools/software-tools/ds-5/index.php
+[PSCI]:             http://infocenter.arm.com/help/topic/com.arm.doc.den0022b/index.html "Power State Coordination Interface PDD (ARM DEN 0022B.b)"
+[SMCCC]:            http://infocenter.arm.com/help/topic/com.arm.doc.den0028a/index.html "SMC Calling Convention PDD (ARM DEN 0028A)"
diff --git a/drivers/arm/interconnect/cci-400/cci400.c b/drivers/arm/interconnect/cci-400/cci400.c
new file mode 100644 (file)
index 0000000..60586ab
--- /dev/null
@@ -0,0 +1,59 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <platform.h>
+#include <cci400.h>
+
+static inline unsigned long get_slave_iface_base(unsigned long mpidr)
+{
+       return CCI400_BASE + SLAVE_IFACE_OFFSET(CCI400_SL_IFACE_INDEX(mpidr));
+}
+
+void cci_enable_coherency(unsigned long mpidr)
+{
+       /* Enable Snoops and DVM messages */
+       mmio_write_32(get_slave_iface_base(mpidr) + SNOOP_CTRL_REG,
+                     DVM_EN_BIT | SNOOP_EN_BIT);
+
+       /* Wait for the dust to settle down */
+       while (mmio_read_32(CCI400_BASE + STATUS_REG) & CHANGE_PENDING_BIT);
+}
+
+void cci_disable_coherency(unsigned long mpidr)
+{
+       /* Disable Snoops and DVM messages */
+       mmio_write_32(get_slave_iface_base(mpidr) + SNOOP_CTRL_REG,
+                     ~(DVM_EN_BIT | SNOOP_EN_BIT));
+
+       /* Wait for the dust to settle down */
+       while (mmio_read_32(CCI400_BASE + STATUS_REG) & CHANGE_PENDING_BIT);
+}
+
diff --git a/drivers/arm/interconnect/cci-400/cci400.h b/drivers/arm/interconnect/cci-400/cci400.h
new file mode 100644 (file)
index 0000000..62e2fbb
--- /dev/null
@@ -0,0 +1,72 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __CCI_400_H__
+#define __CCI_400_H__
+
+/* Slave interface offsets from PERIPHBASE */
+#define SLAVE_IFACE4_OFFSET            0x5000
+#define SLAVE_IFACE3_OFFSET            0x4000
+#define SLAVE_IFACE2_OFFSET            0x3000
+#define SLAVE_IFACE1_OFFSET            0x2000
+#define SLAVE_IFACE0_OFFSET            0x1000
+#define SLAVE_IFACE_OFFSET(index)      (SLAVE_IFACE0_OFFSET + (0x1000 * (index)))
+
+/* Control and ID register offsets */
+#define CTRL_OVERRIDE_REG              0x0
+#define SPEC_CTRL_REG                  0x4
+#define SECURE_ACCESS_REG              0x8
+#define STATUS_REG                     0xc
+#define IMPRECISE_ERR_REG              0x10
+#define PERFMON_CTRL_REG               0x100
+
+/* Slave interface register offsets */
+#define SNOOP_CTRL_REG                 0x0
+#define SH_OVERRIDE_REG                        0x4
+#define READ_CHNL_QOS_VAL_OVERRIDE_REG 0x100
+#define WRITE_CHNL_QOS_VAL_OVERRIDE_REG        0x104
+#define QOS_CTRL_REG                   0x10c
+#define MAX_OT_REG                     0x110
+#define TARGET_LATENCY_REG             0x130
+#define LATENCY_REGULATION_REG         0x134
+#define QOS_RANGE_REG                  0x138
+
+/* Snoop Control register bit definitions */
+#define DVM_EN_BIT                     (1 << 1)
+#define SNOOP_EN_BIT                   (1 << 0)
+
+/* Status register bit definitions */
+#define CHANGE_PENDING_BIT             (1 << 0)
+
+/* Function declarations */
+extern void cci_enable_coherency(unsigned long mpidr);
+extern void cci_disable_coherency(unsigned long mpidr);
+
+#endif /* __CCI_400_H__ */
diff --git a/drivers/arm/peripherals/pl011/console.h b/drivers/arm/peripherals/pl011/console.h
new file mode 100644 (file)
index 0000000..b98db61
--- /dev/null
@@ -0,0 +1,39 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __CONSOLE_H__
+#define __CONSOLE_H__
+
+void console_init(void);
+int console_putc(int c);
+int console_getc(void);
+
+#endif /* __CONSOLE_H__ */
+
diff --git a/drivers/arm/peripherals/pl011/pl011.c b/drivers/arm/peripherals/pl011/pl011.c
new file mode 100644 (file)
index 0000000..2f6f5ea
--- /dev/null
@@ -0,0 +1,82 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <console.h>
+#include <platform.h>
+#include <pl011.h>
+
+/*
+ * TODO: Console init functions should be in a console.c. This file should
+ * only contain the pl011 accessors.
+ */
+void console_init(void)
+{
+       /* Baud rate: use fixed divisors if provided, else derive from the clock */
+#if defined(PL011_INTEGER) && defined(PL011_FRACTIONAL)
+       mmio_write_32(PL011_BASE + UARTIBRD, PL011_INTEGER);
+       mmio_write_32(PL011_BASE + UARTFBRD, PL011_FRACTIONAL);
+#else
+       unsigned int divisor = (PL011_CLK_IN_HZ * 4) / PL011_BAUDRATE;
+
+       mmio_write_32(PL011_BASE + UARTIBRD, divisor >> 6);
+       mmio_write_32(PL011_BASE + UARTFBRD, divisor & 0x3F);
+#endif
+
+
+       mmio_write_32(PL011_BASE + UARTLCR_H, PL011_LINE_CONTROL);
+
+       /* Clear any pending errors */
+       mmio_write_32(PL011_BASE + UARTECR, 0);
+
+       /* Enable tx, rx, and uart overall */
+       mmio_write_32(PL011_BASE + UARTCR,
+                     PL011_UARTCR_RXE | PL011_UARTCR_TXE |
+                     PL011_UARTCR_UARTEN);
+}
+
+int console_putc(int c)
+{
+       if (c == '\n') {
+               console_putc('\r');
+       }
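+       /* Wait until the transmit FIFO is empty before queueing the character */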
+       while ((mmio_read_32(PL011_BASE + UARTFR) & PL011_UARTFR_TXFE)
+              == 0) ;
+       mmio_write_32(PL011_BASE + UARTDR, c);
+       return c;
+}
+
+int console_getc(void)
+{
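+       /* Wait until the receive FIFO has at least one character to read */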
+       while ((mmio_read_32(PL011_BASE + UARTFR) & PL011_UARTFR_RXFE)
+              != 0) ;
+       return mmio_read_32(PL011_BASE + UARTDR);
+}
diff --git a/drivers/arm/peripherals/pl011/pl011.h b/drivers/arm/peripherals/pl011/pl011.h
new file mode 100644 (file)
index 0000000..53b4dab
--- /dev/null
@@ -0,0 +1,107 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __PL011_H__
+#define __PL011_H__
+
+/* PL011 Registers */
+#define UARTDR                    0x000
+#define UARTRSR                   0x004
+#define UARTECR                   0x004
+#define UARTFR                    0x018
+#define UARTILPR                  0x020
+#define UARTIBRD                  0x024
+#define UARTFBRD                  0x028
+#define UARTLCR_H                 0x02C
+#define UARTCR                    0x030
+#define UARTIFLS                  0x034
+#define UARTIMSC                  0x038
+#define UARTRIS                   0x03C
+#define UARTMIS                   0x040
+#define UARTICR                   0x044
+#define UARTDMACR                 0x048
+
+/* Data status bits */
+#define UART_DATA_ERROR_MASK      0x0F00
+
+/* Status reg bits */
+#define UART_STATUS_ERROR_MASK    0x0F
+
+/* Flag reg bits */
+#define PL011_UARTFR_RI           (1 << 8)     /* Ring indicator */
+#define PL011_UARTFR_TXFE         (1 << 7)     /* Transmit FIFO empty */
+#define PL011_UARTFR_RXFF         (1 << 6)     /* Receive  FIFO full */
+#define PL011_UARTFR_TXFF         (1 << 5)     /* Transmit FIFO full */
+#define PL011_UARTFR_RXFE         (1 << 4)     /* Receive  FIFO empty */
+#define PL011_UARTFR_BUSY         (1 << 3)     /* UART busy */
+#define PL011_UARTFR_DCD          (1 << 2)     /* Data carrier detect */
+#define PL011_UARTFR_DSR          (1 << 1)     /* Data set ready */
+#define PL011_UARTFR_CTS          (1 << 0)     /* Clear to send */
+
+/* Control reg bits */
+#define PL011_UARTCR_CTSEN        (1 << 15)    /* CTS hardware flow control enable */
+#define PL011_UARTCR_RTSEN        (1 << 14)    /* RTS hardware flow control enable */
+#define PL011_UARTCR_RTS          (1 << 11)    /* Request to send */
+#define PL011_UARTCR_DTR          (1 << 10)    /* Data transmit ready. */
+#define PL011_UARTCR_RXE          (1 << 9)     /* Receive enable */
+#define PL011_UARTCR_TXE          (1 << 8)     /* Transmit enable */
+#define PL011_UARTCR_LBE          (1 << 7)     /* Loopback enable */
+#define PL011_UARTCR_UARTEN       (1 << 0)     /* UART Enable */
+
+#if !defined(PL011_BASE)
+#error "The PL011_BASE macro must be defined."
+#endif
+
+#if !defined(PL011_BAUDRATE)
+#define PL011_BAUDRATE  115200
+#endif
+
+#if !defined(PL011_CLK_IN_HZ)
+#define PL011_CLK_IN_HZ 24000000
+#endif
+
+#if !defined(PL011_LINE_CONTROL)
+/* FIFO Enabled / No Parity / 8 Data bit / One Stop Bit */
+#define PL011_LINE_CONTROL  (PL011_UARTLCR_H_FEN | PL011_UARTLCR_H_WLEN_8)
+#endif
+
+/* Line Control Register Bits */
+#define PL011_UARTLCR_H_SPS       (1 << 7)     /* Stick parity select */
+#define PL011_UARTLCR_H_WLEN_8    (3 << 5)
+#define PL011_UARTLCR_H_WLEN_7    (2 << 5)
+#define PL011_UARTLCR_H_WLEN_6    (1 << 5)
+#define PL011_UARTLCR_H_WLEN_5    (0 << 5)
+#define PL011_UARTLCR_H_FEN       (1 << 4)     /* FIFOs Enable */
+#define PL011_UARTLCR_H_STP2      (1 << 3)     /* Two stop bits select */
+#define PL011_UARTLCR_H_EPS       (1 << 2)     /* Even parity select */
+#define PL011_UARTLCR_H_PEN       (1 << 1)     /* Parity Enable */
+#define PL011_UARTLCR_H_BRK       (1 << 0)     /* Send break */
+
+#endif /* __PL011_H__ */
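
The offsets and configuration macros above are everything needed to bring the UART up for console output. As a minimal sketch only (assuming the mmio_read_32/mmio_write_32 accessors from include/mmio.h and a platform-defined PL011_BASE; this is not the driver shipped in drivers/arm/peripherals/pl011/), initialisation amounts to programming the baud-rate divisor, the line control value and the enable bits:

    #include <mmio.h>
    #include <pl011.h>  /* requires PL011_BASE to be defined by the platform */

    /* Sketch only: program the divisor, line control and enable bits. */
    static void console_init_sketch(void)
    {
            /* Baud-rate divisor in 1/64 units: UARTCLK / (16 * baudrate) */
            unsigned int divisor = (PL011_CLK_IN_HZ * 4) / PL011_BAUDRATE;

            mmio_write_32(PL011_BASE + UARTCR, 0);                /* disable while configuring */
            mmio_write_32(PL011_BASE + UARTIBRD, divisor >> 6);   /* integer part */
            mmio_write_32(PL011_BASE + UARTFBRD, divisor & 0x3f); /* 6-bit fractional part */
            mmio_write_32(PL011_BASE + UARTLCR_H, PL011_LINE_CONTROL);
            mmio_write_32(PL011_BASE + UARTCR,
                          PL011_UARTCR_UARTEN | PL011_UARTCR_TXE | PL011_UARTCR_RXE);
    }

    /* Sketch only: blocking transmit of one character. */
    static void console_putc_sketch(char c)
    {
            while (mmio_read_32(PL011_BASE + UARTFR) & PL011_UARTFR_TXFF)
                    ;                                             /* wait for FIFO space */
            mmio_write_32(PL011_BASE + UARTDR, c);
    }

The divisor is held in sixty-fourths: UARTIBRD takes the integer part and UARTFBRD the 6-bit fractional part, which is why the value is computed as (4 * clock) / baudrate.
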
diff --git a/drivers/power/fvp_pwrc.c b/drivers/power/fvp_pwrc.c
new file mode 100644 (file)
index 0000000..c7db33b
--- /dev/null
@@ -0,0 +1,103 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+#include <errno.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <platform.h>
+#include <fvp_pwrc.h>
+#include <bakery_lock.h>
+
+/*
+ * TODO: Someday there will be a generic power controller API. At the moment
+ * each platform has its own power controller, so exporting functions is fine.
+ */
+static bakery_lock pwrc_lock __attribute__ ((section("tzfw_coherent_mem")));
+
+unsigned int fvp_pwrc_get_cpu_wkr(unsigned long mpidr)
+{
+       unsigned int rc = 0;
+       bakery_lock_get(mpidr, &pwrc_lock);
+       mmio_write_32(PWRC_BASE + PSYSR_OFF, (unsigned int) mpidr);
+       rc = PSYSR_WK(mmio_read_32(PWRC_BASE + PSYSR_OFF));
+       bakery_lock_release(mpidr, &pwrc_lock);
+       return rc;
+}
+
+unsigned int fvp_pwrc_read_psysr(unsigned long mpidr)
+{
+       unsigned int rc = 0;
+       bakery_lock_get(mpidr, &pwrc_lock);
+       mmio_write_32(PWRC_BASE + PSYSR_OFF, (unsigned int) mpidr);
+       rc = mmio_read_32(PWRC_BASE + PSYSR_OFF);
+       bakery_lock_release(mpidr, &pwrc_lock);
+       return rc;
+}
+
+void fvp_pwrc_write_pponr(unsigned long mpidr)
+{
+       bakery_lock_get(mpidr, &pwrc_lock);
+       mmio_write_32(PWRC_BASE + PPONR_OFF, (unsigned int) mpidr);
+       bakery_lock_release(mpidr, &pwrc_lock);
+}
+
+void fvp_pwrc_write_ppoffr(unsigned long mpidr)
+{
+       bakery_lock_get(mpidr, &pwrc_lock);
+       mmio_write_32(PWRC_BASE + PPOFFR_OFF, (unsigned int) mpidr);
+       bakery_lock_release(mpidr, &pwrc_lock);
+}
+
+void fvp_pwrc_write_pwkupr(unsigned long mpidr)
+{
+       bakery_lock_get(mpidr, &pwrc_lock);
+       mmio_write_32(PWRC_BASE + PWKUPR_OFF,
+                     (unsigned int) (PWKUPR_WEN | mpidr));
+       bakery_lock_release(mpidr, &pwrc_lock);
+}
+
+void fvp_pwrc_write_pcoffr(unsigned long mpidr)
+{
+       bakery_lock_get(mpidr, &pwrc_lock);
+       mmio_write_32(PWRC_BASE + PCOFFR_OFF, (unsigned int) mpidr);
+       bakery_lock_release(mpidr, &pwrc_lock);
+}
+
+/* Nothing else to do here apart from initializing the lock */
+int fvp_pwrc_setup(void)
+{
+       bakery_lock_init(&pwrc_lock);
+       return 0;
+}
+
+
+
diff --git a/drivers/power/fvp_pwrc.h b/drivers/power/fvp_pwrc.h
new file mode 100644 (file)
index 0000000..a2efcc5
--- /dev/null
@@ -0,0 +1,76 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __FVP_PWRC_H__
+#define __FVP_PWRC_H__
+
+/* FVP power controller register offsets and field definitions */
+#define PPOFFR_OFF             0x0
+#define PPONR_OFF              0x4
+#define PCOFFR_OFF             0x8
+#define PWKUPR_OFF             0xc
+#define PSYSR_OFF              0x10
+
+#define PWKUPR_WEN             (1ull << 31)
+
+#define PSYSR_AFF_L2           (1 << 31)
+#define PSYSR_AFF_L1           (1 << 30)
+#define PSYSR_AFF_L0           (1 << 29)
+#define PSYSR_WEN              (1 << 28)
+#define PSYSR_PC               (1 << 27)
+#define PSYSR_PP               (1 << 26)
+
+#define PSYSR_WK_SHIFT         24
+#define PSYSR_WK_MASK          0x3
+#define PSYSR_WK(x)            (((x) >> PSYSR_WK_SHIFT) & PSYSR_WK_MASK)
+
+#define WKUP_COLD              0x0
+#define WKUP_RESET             0x1
+#define WKUP_PPONR             0x2
+#define WKUP_GICREQ            0x3
+
+#define PSYSR_INVALID          0xffffffff
+
+#ifndef __ASSEMBLY__
+
+/*******************************************************************************
+ * Function & variable prototypes
+ ******************************************************************************/
+extern int fvp_pwrc_setup(void);
+extern void fvp_pwrc_write_pcoffr(unsigned long);
+extern void fvp_pwrc_write_ppoffr(unsigned long);
+extern void fvp_pwrc_write_pponr(unsigned long);
+extern void fvp_pwrc_write_pwkupr(unsigned long);
+extern unsigned int fvp_pwrc_read_psysr(unsigned long);
+extern unsigned int fvp_pwrc_get_cpu_wkr(unsigned long);
+
+#endif /*__ASSEMBLY__*/
+
+#endif /* __FVP_PWRC_H__ */
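
For illustration only, the sketch below shows the kind of sequence a PSCI CPU_ON path might drive through this interface: write the target MPIDR to PPONR, then poll PSYSR until the core reports as powered up. The function names are invented, and the reading of PSYSR_AFF_L0 == 1 as "affinity level 0 is on" is an assumption made for the sketch; the wake-up reason in PSYSR[25:24] can then tell a booting CPU whether it came up from cold or was switched on by firmware.

    #include <fvp_pwrc.h>

    /* Sketch only: power a CPU on and wait for the power controller to report
     * it as up (assumes PSYSR_AFF_L0 == 1 means affinity level 0 is powered). */
    static void cpu_on_sketch(unsigned long target_mpidr)
    {
            fvp_pwrc_write_pponr(target_mpidr);
            while ((fvp_pwrc_read_psysr(target_mpidr) & PSYSR_AFF_L0) == 0)
                    ;
    }

    /* Sketch only: treat WKUP_PPONR/WKUP_GICREQ as "switched on or woken by
     * firmware" rather than a cold reset. */
    static int warm_boot_sketch(unsigned long mpidr)
    {
            unsigned int reason = fvp_pwrc_get_cpu_wkr(mpidr);

            return (reason == WKUP_PPONR) || (reason == WKUP_GICREQ);
    }
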
diff --git a/fdts/fvp-base-gicv2-psci.dtb b/fdts/fvp-base-gicv2-psci.dtb
new file mode 100644 (file)
index 0000000..bfb2710
Binary files /dev/null and b/fdts/fvp-base-gicv2-psci.dtb differ
diff --git a/fdts/fvp-base-gicv2-psci.dts b/fdts/fvp-base-gicv2-psci.dts
new file mode 100644 (file)
index 0000000..7aa18a5
--- /dev/null
@@ -0,0 +1,250 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/dts-v1/;
+
+/memreserve/ 0x80000000 0x00010000;
+
+/ {
+};
+
+/ {
+       model = "FVP Base";
+       compatible = "arm,vfp-base", "arm,vexpress";
+       interrupt-parent = <&gic>;
+       #address-cells = <2>;
+       #size-cells = <2>;
+
+       chosen { };
+
+       aliases {
+               serial0 = &v2m_serial0;
+               serial1 = &v2m_serial1;
+               serial2 = &v2m_serial2;
+               serial3 = &v2m_serial3;
+       };
+
+       psci {
+               compatible = "arm,psci";
+               method = "smc";
+               cpu_suspend = <0xc4000001>;
+               cpu_off = <0x84000002>;
+               cpu_on = <0xc4000003>;
+       };
+
+       cpus {
+               #address-cells = <2>;
+               #size-cells = <0>;
+
+               cpu@0 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x0>;
+                       enable-method = "psci";
+               };
+               cpu@1 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x1>;
+                       enable-method = "psci";
+               };
+               cpu@2 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x2>;
+                       enable-method = "psci";
+               };
+               cpu@3 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x3>;
+                       enable-method = "psci";
+               };
+               cpu@100 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x100>;
+                       enable-method = "psci";
+               };
+               cpu@101 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x101>;
+                       enable-method = "psci";
+               };
+               cpu@102 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x102>;
+                       enable-method = "psci";
+               };
+               cpu@103 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x103>;
+                       enable-method = "psci";
+               };
+       };
+
+       memory@80000000 {
+               device_type = "memory";
+               reg = <0x00000000 0x80000000 0 0x80000000>;
+               /*
+                     <0x00000008 0x80000000 0 0x80000000>;
+               */
+       };
+
+       gic: interrupt-controller@2f000000 {
+               compatible = "arm,cortex-a15-gic", "arm,cortex-a9-gic";
+               #interrupt-cells = <3>;
+               #address-cells = <0>;
+               interrupt-controller;
+               reg = <0x0 0x2f000000 0 0x10000>,
+                     <0x0 0x2c000000 0 0x2000>,
+                     <0x0 0x2c010000 0 0x2000>,
+                     <0x0 0x2c02f000 0 0x2000>;
+               interrupts = <1 9 0xf04>;
+       };
+
+       timer {
+               compatible = "arm,armv8-timer";
+               interrupts = <1 13 0xff01>,
+                            <1 14 0xff01>,
+                            <1 11 0xff01>,
+                            <1 10 0xff01>;
+               clock-frequency = <100000000>;
+       };
+
+       timer@2a810000 {
+                       compatible = "arm,armv7-timer-mem";
+                       reg = <0x0 0x2a810000 0x0 0x10000>;
+                       clock-frequency = <100000000>;
+                       #address-cells = <2>;
+                       #size-cells = <2>;
+                       ranges;
+                       frame@2a820000 {
+                               frame-number = <0>;
+                               interrupts = <0 25 4>;
+                               reg = <0x0 0x2a820000 0x0 0x10000>;
+                       };
+       };
+
+       pmu {
+               compatible = "arm,armv8-pmuv3";
+               interrupts = <0 60 4>,
+                            <0 61 4>,
+                            <0 62 4>,
+                            <0 63 4>;
+       };
+
+       smb {
+               compatible = "simple-bus";
+
+               #address-cells = <2>;
+               #size-cells = <1>;
+               ranges = <0 0 0 0x08000000 0x04000000>,
+                        <1 0 0 0x14000000 0x04000000>,
+                        <2 0 0 0x18000000 0x04000000>,
+                        <3 0 0 0x1c000000 0x04000000>,
+                        <4 0 0 0x0c000000 0x04000000>,
+                        <5 0 0 0x10000000 0x04000000>;
+
+               #interrupt-cells = <1>;
+               interrupt-map-mask = <0 0 63>;
+               interrupt-map = <0 0  0 &gic 0  0 4>,
+                               <0 0  1 &gic 0  1 4>,
+                               <0 0  2 &gic 0  2 4>,
+                               <0 0  3 &gic 0  3 4>,
+                               <0 0  4 &gic 0  4 4>,
+                               <0 0  5 &gic 0  5 4>,
+                               <0 0  6 &gic 0  6 4>,
+                               <0 0  7 &gic 0  7 4>,
+                               <0 0  8 &gic 0  8 4>,
+                               <0 0  9 &gic 0  9 4>,
+                               <0 0 10 &gic 0 10 4>,
+                               <0 0 11 &gic 0 11 4>,
+                               <0 0 12 &gic 0 12 4>,
+                               <0 0 13 &gic 0 13 4>,
+                               <0 0 14 &gic 0 14 4>,
+                               <0 0 15 &gic 0 15 4>,
+                               <0 0 16 &gic 0 16 4>,
+                               <0 0 17 &gic 0 17 4>,
+                               <0 0 18 &gic 0 18 4>,
+                               <0 0 19 &gic 0 19 4>,
+                               <0 0 20 &gic 0 20 4>,
+                               <0 0 21 &gic 0 21 4>,
+                               <0 0 22 &gic 0 22 4>,
+                               <0 0 23 &gic 0 23 4>,
+                               <0 0 24 &gic 0 24 4>,
+                               <0 0 25 &gic 0 25 4>,
+                               <0 0 26 &gic 0 26 4>,
+                               <0 0 27 &gic 0 27 4>,
+                               <0 0 28 &gic 0 28 4>,
+                               <0 0 29 &gic 0 29 4>,
+                               <0 0 30 &gic 0 30 4>,
+                               <0 0 31 &gic 0 31 4>,
+                               <0 0 32 &gic 0 32 4>,
+                               <0 0 33 &gic 0 33 4>,
+                               <0 0 34 &gic 0 34 4>,
+                               <0 0 35 &gic 0 35 4>,
+                               <0 0 36 &gic 0 36 4>,
+                               <0 0 37 &gic 0 37 4>,
+                               <0 0 38 &gic 0 38 4>,
+                               <0 0 39 &gic 0 39 4>,
+                               <0 0 40 &gic 0 40 4>,
+                               <0 0 41 &gic 0 41 4>,
+                               <0 0 42 &gic 0 42 4>;
+
+               /include/ "rtsm_ve-motherboard.dtsi"
+       };
+
+       panels {
+               panel@0 {
+                       compatible      = "panel";
+                       mode            = "XVGA";
+                       refresh         = <60>;
+                       xres            = <1024>;
+                       yres            = <768>;
+                       pixclock        = <15748>;
+                       left_margin     = <152>;
+                       right_margin    = <48>;
+                       upper_margin    = <23>;
+                       lower_margin    = <3>;
+                       hsync_len       = <104>;
+                       vsync_len       = <4>;
+                       sync            = <0>;
+                       vmode           = "FB_VMODE_NONINTERLACED";
+                       tim2            = "TIM2_BCD", "TIM2_IPC";
+                       cntl            = "CNTL_LCDTFT", "CNTL_BGR", "CNTL_LCDVCOMP(1)";
+                       caps            = "CLCD_CAP_5551", "CLCD_CAP_565", "CLCD_CAP_888";
+                       bpp             = <16>;
+               };
+       };
+};
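
The psci node above advertises the conduit ("smc") and the function IDs a PSCI-aware OS should use. Purely to illustrate those IDs, the fragment below shows the shape of a CPU_ON call from the normal world through a generic SMC64 calling-convention wrapper; none of this is code from this tree, and the function names are invented.

    #include <stdint.h>

    /* Sketch of an SMC64 call: x0 carries the function ID, x1-x3 the
     * arguments, and the result comes back in x0. x4-x17 are treated as
     * clobbered by the callee. */
    static uint64_t smc_call(uint64_t fid, uint64_t a1, uint64_t a2, uint64_t a3)
    {
            register uint64_t x0 __asm__("x0") = fid;
            register uint64_t x1 __asm__("x1") = a1;
            register uint64_t x2 __asm__("x2") = a2;
            register uint64_t x3 __asm__("x3") = a3;

            __asm__ volatile("smc #0"
                             : "+r" (x0), "+r" (x1), "+r" (x2), "+r" (x3)
                             :
                             : "x4", "x5", "x6", "x7", "x8", "x9", "x10", "x11",
                               "x12", "x13", "x14", "x15", "x16", "x17", "memory");
            return x0;
    }

    /* CPU_ON, function ID 0xc4000003 as listed in the cpu_on property above:
     * power on the CPU 'target_mpidr' and have it enter at 'entrypoint'. */
    static int64_t psci_cpu_on(uint64_t target_mpidr, uint64_t entrypoint,
                               uint64_t context_id)
    {
            return (int64_t)smc_call(0xc4000003, target_mpidr, entrypoint,
                                     context_id);
    }
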
diff --git a/fdts/fvp-base-gicv2legacy-psci.dtb b/fdts/fvp-base-gicv2legacy-psci.dtb
new file mode 100644 (file)
index 0000000..227c161
Binary files /dev/null and b/fdts/fvp-base-gicv2legacy-psci.dtb differ
diff --git a/fdts/fvp-base-gicv2legacy-psci.dts b/fdts/fvp-base-gicv2legacy-psci.dts
new file mode 100644 (file)
index 0000000..340ae50
--- /dev/null
@@ -0,0 +1,250 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/dts-v1/;
+
+/memreserve/ 0x80000000 0x00010000;
+
+/ {
+};
+
+/ {
+       model = "FVP Base";
+       compatible = "arm,vfp-base", "arm,vexpress";
+       interrupt-parent = <&gic>;
+       #address-cells = <2>;
+       #size-cells = <2>;
+
+       chosen { };
+
+       aliases {
+               serial0 = &v2m_serial0;
+               serial1 = &v2m_serial1;
+               serial2 = &v2m_serial2;
+               serial3 = &v2m_serial3;
+       };
+
+       psci {
+               compatible = "arm,psci";
+               method = "smc";
+               cpu_suspend = <0xc4000001>;
+               cpu_off = <0x84000002>;
+               cpu_on = <0xc4000003>;
+       };
+
+       cpus {
+               #address-cells = <2>;
+               #size-cells = <0>;
+
+               cpu@0 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x0>;
+                       enable-method = "psci";
+               };
+               cpu@1 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x1>;
+                       enable-method = "psci";
+               };
+               cpu@2 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x2>;
+                       enable-method = "psci";
+               };
+               cpu@3 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x3>;
+                       enable-method = "psci";
+               };
+               cpu@100 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x100>;
+                       enable-method = "psci";
+               };
+               cpu@101 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x101>;
+                       enable-method = "psci";
+               };
+               cpu@102 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x102>;
+                       enable-method = "psci";
+               };
+               cpu@103 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x103>;
+                       enable-method = "psci";
+               };
+       };
+
+       memory@80000000 {
+               device_type = "memory";
+               reg = <0x00000000 0x80000000 0 0x80000000>;
+               /*
+                     <0x00000008 0x80000000 0 0x80000000>;
+               */
+       };
+
+       gic: interrupt-controller@2c001000 {
+               compatible = "arm,cortex-a15-gic", "arm,cortex-a9-gic";
+               #interrupt-cells = <3>;
+               #address-cells = <0>;
+               interrupt-controller;
+               reg = <0x0 0x2c001000 0 0x1000>,
+                     <0x0 0x2c002000 0 0x1000>,
+                     <0x0 0x2c004000 0 0x2000>,
+                     <0x0 0x2c006000 0 0x2000>;
+               interrupts = <1 9 0xf04>;
+       };
+
+       timer {
+               compatible = "arm,armv8-timer";
+               interrupts = <1 13 0xff01>,
+                            <1 14 0xff01>,
+                            <1 11 0xff01>,
+                            <1 10 0xff01>;
+               clock-frequency = <100000000>;
+       };
+
+       timer@2a810000 {
+                       compatible = "arm,armv7-timer-mem";
+                       reg = <0x0 0x2a810000 0x0 0x10000>;
+                       clock-frequency = <100000000>;
+                       #address-cells = <2>;
+                       #size-cells = <2>;
+                       ranges;
+                       frame@2a820000 {
+                               frame-number = <0>;
+                               interrupts = <0 25 4>;
+                               reg = <0x0 0x2a820000 0x0 0x10000>;
+                       };
+       };
+
+       pmu {
+               compatible = "arm,armv8-pmuv3";
+               interrupts = <0 60 4>,
+                            <0 61 4>,
+                            <0 62 4>,
+                            <0 63 4>;
+       };
+
+       smb {
+               compatible = "simple-bus";
+
+               #address-cells = <2>;
+               #size-cells = <1>;
+               ranges = <0 0 0 0x08000000 0x04000000>,
+                        <1 0 0 0x14000000 0x04000000>,
+                        <2 0 0 0x18000000 0x04000000>,
+                        <3 0 0 0x1c000000 0x04000000>,
+                        <4 0 0 0x0c000000 0x04000000>,
+                        <5 0 0 0x10000000 0x04000000>;
+
+               #interrupt-cells = <1>;
+               interrupt-map-mask = <0 0 63>;
+               interrupt-map = <0 0  0 &gic 0  0 4>,
+                               <0 0  1 &gic 0  1 4>,
+                               <0 0  2 &gic 0  2 4>,
+                               <0 0  3 &gic 0  3 4>,
+                               <0 0  4 &gic 0  4 4>,
+                               <0 0  5 &gic 0  5 4>,
+                               <0 0  6 &gic 0  6 4>,
+                               <0 0  7 &gic 0  7 4>,
+                               <0 0  8 &gic 0  8 4>,
+                               <0 0  9 &gic 0  9 4>,
+                               <0 0 10 &gic 0 10 4>,
+                               <0 0 11 &gic 0 11 4>,
+                               <0 0 12 &gic 0 12 4>,
+                               <0 0 13 &gic 0 13 4>,
+                               <0 0 14 &gic 0 14 4>,
+                               <0 0 15 &gic 0 15 4>,
+                               <0 0 16 &gic 0 16 4>,
+                               <0 0 17 &gic 0 17 4>,
+                               <0 0 18 &gic 0 18 4>,
+                               <0 0 19 &gic 0 19 4>,
+                               <0 0 20 &gic 0 20 4>,
+                               <0 0 21 &gic 0 21 4>,
+                               <0 0 22 &gic 0 22 4>,
+                               <0 0 23 &gic 0 23 4>,
+                               <0 0 24 &gic 0 24 4>,
+                               <0 0 25 &gic 0 25 4>,
+                               <0 0 26 &gic 0 26 4>,
+                               <0 0 27 &gic 0 27 4>,
+                               <0 0 28 &gic 0 28 4>,
+                               <0 0 29 &gic 0 29 4>,
+                               <0 0 30 &gic 0 30 4>,
+                               <0 0 31 &gic 0 31 4>,
+                               <0 0 32 &gic 0 32 4>,
+                               <0 0 33 &gic 0 33 4>,
+                               <0 0 34 &gic 0 34 4>,
+                               <0 0 35 &gic 0 35 4>,
+                               <0 0 36 &gic 0 36 4>,
+                               <0 0 37 &gic 0 37 4>,
+                               <0 0 38 &gic 0 38 4>,
+                               <0 0 39 &gic 0 39 4>,
+                               <0 0 40 &gic 0 40 4>,
+                               <0 0 41 &gic 0 41 4>,
+                               <0 0 42 &gic 0 42 4>;
+
+               /include/ "rtsm_ve-motherboard.dtsi"
+       };
+
+       panels {
+               panel@0 {
+                       compatible      = "panel";
+                       mode            = "XVGA";
+                       refresh         = <60>;
+                       xres            = <1024>;
+                       yres            = <768>;
+                       pixclock        = <15748>;
+                       left_margin     = <152>;
+                       right_margin    = <48>;
+                       upper_margin    = <23>;
+                       lower_margin    = <3>;
+                       hsync_len       = <104>;
+                       vsync_len       = <4>;
+                       sync            = <0>;
+                       vmode           = "FB_VMODE_NONINTERLACED";
+                       tim2            = "TIM2_BCD", "TIM2_IPC";
+                       cntl            = "CNTL_LCDTFT", "CNTL_BGR", "CNTL_LCDVCOMP(1)";
+                       caps            = "CLCD_CAP_5551", "CLCD_CAP_565", "CLCD_CAP_888";
+                       bpp             = <16>;
+               };
+       };
+};
diff --git a/fdts/fvp-base-gicv3-psci.dtb b/fdts/fvp-base-gicv3-psci.dtb
new file mode 100644 (file)
index 0000000..678b45b
Binary files /dev/null and b/fdts/fvp-base-gicv3-psci.dtb differ
diff --git a/fdts/fvp-base-gicv3-psci.dts b/fdts/fvp-base-gicv3-psci.dts
new file mode 100644 (file)
index 0000000..98c7487
--- /dev/null
@@ -0,0 +1,250 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/dts-v1/;
+
+/memreserve/ 0x80000000 0x00010000;
+
+/ {
+};
+
+/ {
+       model = "FVP Base";
+       compatible = "arm,vfp-base", "arm,vexpress";
+       interrupt-parent = <&gic>;
+       #address-cells = <2>;
+       #size-cells = <2>;
+
+       chosen { };
+
+       aliases {
+               serial0 = &v2m_serial0;
+               serial1 = &v2m_serial1;
+               serial2 = &v2m_serial2;
+               serial3 = &v2m_serial3;
+       };
+
+       psci {
+               compatible = "arm,psci";
+               method = "smc";
+               cpu_suspend = <0xc4000001>;
+               cpu_off = <0x84000002>;
+               cpu_on = <0xc4000003>;
+       };
+
+       cpus {
+               #address-cells = <2>;
+               #size-cells = <0>;
+
+               cpu@0 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x0>;
+                       enable-method = "psci";
+               };
+               cpu@1 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x1>;
+                       enable-method = "psci";
+               };
+               cpu@2 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x2>;
+                       enable-method = "psci";
+               };
+               cpu@3 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x3>;
+                       enable-method = "psci";
+               };
+               cpu@100 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x100>;
+                       enable-method = "psci";
+               };
+               cpu@101 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x101>;
+                       enable-method = "psci";
+               };
+               cpu@102 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x102>;
+                       enable-method = "psci";
+               };
+               cpu@103 {
+                       device_type = "cpu";
+                       compatible = "arm,armv8";
+                       reg = <0x0 0x103>;
+                       enable-method = "psci";
+               };
+       };
+
+       memory@80000000 {
+               device_type = "memory";
+               reg = <0x00000000 0x80000000 0 0x80000000>;
+               /*
+                     <0x00000008 0x80000000 0 0x80000000>;
+               */
+       };
+
+       gic: interrupt-controller@2f000000 {
+               compatible = "arm,gic-v3";
+               #interrupt-cells = <3>;
+               interrupt-controller;
+               reg = <0x0 0x2f000000 0 0x10000>,       // GICD
+                     <0x0 0x2f100000 0 0x200000>,      // GICR
+                     <0x0 0x2c000000 0 0x2000>,        // GICC
+                     <0x0 0x2c010000 0 0x2000>,        // GICH
+                     <0x0 0x2c02f000 0 0x2000>;        // GICV
+               interrupts = <1 9 4>;
+       };
+
+       timer {
+               compatible = "arm,armv8-timer";
+               interrupts = <1 13 0xff01>,
+                            <1 14 0xff01>,
+                            <1 11 0xff01>,
+                            <1 10 0xff01>;
+               clock-frequency = <100000000>;
+       };
+
+       timer@2a810000 {
+                       compatible = "arm,armv7-timer-mem";
+                       reg = <0x0 0x2a810000 0x0 0x10000>;
+                       clock-frequency = <100000000>;
+                       #address-cells = <2>;
+                       #size-cells = <2>;
+                       ranges;
+                       frame@2a820000 {
+                               frame-number = <0>;
+                               interrupts = <0 25 4>;
+                               reg = <0x0 0x2a820000 0x0 0x10000>;
+                       };
+       };
+
+       pmu {
+               compatible = "arm,armv8-pmuv3";
+               interrupts = <0 60 4>,
+                            <0 61 4>,
+                            <0 62 4>,
+                            <0 63 4>;
+       };
+
+       smb {
+               compatible = "simple-bus";
+
+               #address-cells = <2>;
+               #size-cells = <1>;
+               ranges = <0 0 0 0x08000000 0x04000000>,
+                        <1 0 0 0x14000000 0x04000000>,
+                        <2 0 0 0x18000000 0x04000000>,
+                        <3 0 0 0x1c000000 0x04000000>,
+                        <4 0 0 0x0c000000 0x04000000>,
+                        <5 0 0 0x10000000 0x04000000>;
+
+               #interrupt-cells = <1>;
+               interrupt-map-mask = <0 0 63>;
+               interrupt-map = <0 0  0 &gic 0  0 4>,
+                               <0 0  1 &gic 0  1 4>,
+                               <0 0  2 &gic 0  2 4>,
+                               <0 0  3 &gic 0  3 4>,
+                               <0 0  4 &gic 0  4 4>,
+                               <0 0  5 &gic 0  5 4>,
+                               <0 0  6 &gic 0  6 4>,
+                               <0 0  7 &gic 0  7 4>,
+                               <0 0  8 &gic 0  8 4>,
+                               <0 0  9 &gic 0  9 4>,
+                               <0 0 10 &gic 0 10 4>,
+                               <0 0 11 &gic 0 11 4>,
+                               <0 0 12 &gic 0 12 4>,
+                               <0 0 13 &gic 0 13 4>,
+                               <0 0 14 &gic 0 14 4>,
+                               <0 0 15 &gic 0 15 4>,
+                               <0 0 16 &gic 0 16 4>,
+                               <0 0 17 &gic 0 17 4>,
+                               <0 0 18 &gic 0 18 4>,
+                               <0 0 19 &gic 0 19 4>,
+                               <0 0 20 &gic 0 20 4>,
+                               <0 0 21 &gic 0 21 4>,
+                               <0 0 22 &gic 0 22 4>,
+                               <0 0 23 &gic 0 23 4>,
+                               <0 0 24 &gic 0 24 4>,
+                               <0 0 25 &gic 0 25 4>,
+                               <0 0 26 &gic 0 26 4>,
+                               <0 0 27 &gic 0 27 4>,
+                               <0 0 28 &gic 0 28 4>,
+                               <0 0 29 &gic 0 29 4>,
+                               <0 0 30 &gic 0 30 4>,
+                               <0 0 31 &gic 0 31 4>,
+                               <0 0 32 &gic 0 32 4>,
+                               <0 0 33 &gic 0 33 4>,
+                               <0 0 34 &gic 0 34 4>,
+                               <0 0 35 &gic 0 35 4>,
+                               <0 0 36 &gic 0 36 4>,
+                               <0 0 37 &gic 0 37 4>,
+                               <0 0 38 &gic 0 38 4>,
+                               <0 0 39 &gic 0 39 4>,
+                               <0 0 40 &gic 0 40 4>,
+                               <0 0 41 &gic 0 41 4>,
+                               <0 0 42 &gic 0 42 4>;
+
+               /include/ "rtsm_ve-motherboard.dtsi"
+       };
+
+       panels {
+               panel@0 {
+                       compatible      = "panel";
+                       mode            = "XVGA";
+                       refresh         = <60>;
+                       xres            = <1024>;
+                       yres            = <768>;
+                       pixclock        = <15748>;
+                       left_margin     = <152>;
+                       right_margin    = <48>;
+                       upper_margin    = <23>;
+                       lower_margin    = <3>;
+                       hsync_len       = <104>;
+                       vsync_len       = <4>;
+                       sync            = <0>;
+                       vmode           = "FB_VMODE_NONINTERLACED";
+                       tim2            = "TIM2_BCD", "TIM2_IPC";
+                       cntl            = "CNTL_LCDTFT", "CNTL_BGR", "CNTL_LCDVCOMP(1)";
+                       caps            = "CLCD_CAP_5551", "CLCD_CAP_565", "CLCD_CAP_888";
+                       bpp             = <16>;
+               };
+       };
+};
diff --git a/fdts/rtsm_ve-motherboard.dtsi b/fdts/rtsm_ve-motherboard.dtsi
new file mode 100644 (file)
index 0000000..00e92c5
--- /dev/null
@@ -0,0 +1,264 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+       motherboard {
+               arm,v2m-memory-map = "rs1";
+               compatible = "arm,vexpress,v2m-p1", "simple-bus";
+               #address-cells = <2>; /* SMB chipselect number and offset */
+               #size-cells = <1>;
+               #interrupt-cells = <1>;
+               ranges;
+
+               flash@0,00000000 {
+                       compatible = "arm,vexpress-flash", "cfi-flash";
+                       reg = <0 0x00000000 0x04000000>,
+                             <4 0x00000000 0x04000000>;
+                       bank-width = <4>;
+               };
+
+               vram@2,00000000 {
+                       compatible = "arm,vexpress-vram";
+                       reg = <2 0x00000000 0x00800000>;
+               };
+
+               ethernet@2,02000000 {
+                       compatible = "smsc,lan91c111";
+                       reg = <2 0x02000000 0x10000>;
+                       interrupts = <15>;
+               };
+
+               v2m_clk24mhz: clk24mhz {
+                       compatible = "fixed-clock";
+                       #clock-cells = <0>;
+                       clock-frequency = <24000000>;
+                       clock-output-names = "v2m:clk24mhz";
+               };
+
+               v2m_refclk1mhz: refclk1mhz {
+                       compatible = "fixed-clock";
+                       #clock-cells = <0>;
+                       clock-frequency = <1000000>;
+                       clock-output-names = "v2m:refclk1mhz";
+               };
+
+               v2m_refclk32khz: refclk32khz {
+                       compatible = "fixed-clock";
+                       #clock-cells = <0>;
+                       clock-frequency = <32768>;
+                       clock-output-names = "v2m:refclk32khz";
+               };
+
+               iofpga@3,00000000 {
+                       compatible = "arm,amba-bus", "simple-bus";
+                       #address-cells = <1>;
+                       #size-cells = <1>;
+                       ranges = <0 3 0 0x200000>;
+
+                       v2m_sysreg: sysreg@010000 {
+                               compatible = "arm,vexpress-sysreg";
+                               reg = <0x010000 0x1000>;
+                               gpio-controller;
+                               #gpio-cells = <2>;
+                       };
+
+                       v2m_sysctl: sysctl@020000 {
+                               compatible = "arm,sp810", "arm,primecell";
+                               reg = <0x020000 0x1000>;
+                               clocks = <&v2m_refclk32khz>, <&v2m_refclk1mhz>, <&v2m_clk24mhz>;
+                               clock-names = "refclk", "timclk", "apb_pclk";
+                               #clock-cells = <1>;
+                               clock-output-names = "timerclken0", "timerclken1", "timerclken2", "timerclken3";
+                       };
+
+                       aaci@040000 {
+                               compatible = "arm,pl041", "arm,primecell";
+                               reg = <0x040000 0x1000>;
+                               interrupts = <11>;
+                               clocks = <&v2m_clk24mhz>;
+                               clock-names = "apb_pclk";
+                       };
+
+                       mmci@050000 {
+                               compatible = "arm,pl180", "arm,primecell";
+                               reg = <0x050000 0x1000>;
+                               interrupts = <9 10>;
+                               cd-gpios = <&v2m_sysreg 0 0>;
+                               wp-gpios = <&v2m_sysreg 1 0>;
+                               max-frequency = <12000000>;
+                               vmmc-supply = <&v2m_fixed_3v3>;
+                               clocks = <&v2m_clk24mhz>, <&v2m_clk24mhz>;
+                               clock-names = "mclk", "apb_pclk";
+                       };
+
+                       kmi@060000 {
+                               compatible = "arm,pl050", "arm,primecell";
+                               reg = <0x060000 0x1000>;
+                               interrupts = <12>;
+                               clocks = <&v2m_clk24mhz>, <&v2m_clk24mhz>;
+                               clock-names = "KMIREFCLK", "apb_pclk";
+                       };
+
+                       kmi@070000 {
+                               compatible = "arm,pl050", "arm,primecell";
+                               reg = <0x070000 0x1000>;
+                               interrupts = <13>;
+                               clocks = <&v2m_clk24mhz>, <&v2m_clk24mhz>;
+                               clock-names = "KMIREFCLK", "apb_pclk";
+                       };
+
+                       v2m_serial0: uart@090000 {
+                               compatible = "arm,pl011", "arm,primecell";
+                               reg = <0x090000 0x1000>;
+                               interrupts = <5>;
+                               clocks = <&v2m_clk24mhz>, <&v2m_clk24mhz>;
+                               clock-names = "uartclk", "apb_pclk";
+                       };
+
+                       v2m_serial1: uart@0a0000 {
+                               compatible = "arm,pl011", "arm,primecell";
+                               reg = <0x0a0000 0x1000>;
+                               interrupts = <6>;
+                               clocks = <&v2m_clk24mhz>, <&v2m_clk24mhz>;
+                               clock-names = "uartclk", "apb_pclk";
+                       };
+
+                       v2m_serial2: uart@0b0000 {
+                               compatible = "arm,pl011", "arm,primecell";
+                               reg = <0x0b0000 0x1000>;
+                               interrupts = <7>;
+                               clocks = <&v2m_clk24mhz>, <&v2m_clk24mhz>;
+                               clock-names = "uartclk", "apb_pclk";
+                       };
+
+                       v2m_serial3: uart@0c0000 {
+                               compatible = "arm,pl011", "arm,primecell";
+                               reg = <0x0c0000 0x1000>;
+                               interrupts = <8>;
+                               clocks = <&v2m_clk24mhz>, <&v2m_clk24mhz>;
+                               clock-names = "uartclk", "apb_pclk";
+                       };
+
+                       wdt@0f0000 {
+                               compatible = "arm,sp805", "arm,primecell";
+                               reg = <0x0f0000 0x1000>;
+                               interrupts = <0>;
+                               clocks = <&v2m_refclk32khz>, <&v2m_clk24mhz>;
+                               clock-names = "wdogclk", "apb_pclk";
+                       };
+
+                       v2m_timer01: timer@110000 {
+                               compatible = "arm,sp804", "arm,primecell";
+                               reg = <0x110000 0x1000>;
+                               interrupts = <2>;
+                               clocks = <&v2m_sysctl 0>, <&v2m_sysctl 1>, <&v2m_clk24mhz>;
+                               clock-names = "timclken1", "timclken2", "apb_pclk";
+                       };
+
+                       v2m_timer23: timer@120000 {
+                               compatible = "arm,sp804", "arm,primecell";
+                               reg = <0x120000 0x1000>;
+                               interrupts = <3>;
+                               clocks = <&v2m_sysctl 2>, <&v2m_sysctl 3>, <&v2m_clk24mhz>;
+                               clock-names = "timclken1", "timclken2", "apb_pclk";
+                       };
+
+                       rtc@170000 {
+                               compatible = "arm,pl031", "arm,primecell";
+                               reg = <0x170000 0x1000>;
+                               interrupts = <4>;
+                               clocks = <&v2m_clk24mhz>;
+                               clock-names = "apb_pclk";
+                       };
+
+                       clcd@1f0000 {
+                               compatible = "arm,pl111", "arm,primecell";
+                               reg = <0x1f0000 0x1000>;
+                               interrupts = <14>;
+                               clocks = <&v2m_oscclk1>, <&v2m_clk24mhz>;
+                               clock-names = "clcdclk", "apb_pclk";
+                               mode = "XVGA";
+                               use_dma = <0>;
+                               framebuffer = <0x18000000 0x00180000>;
+                       };
+
+                       virtio_block@0130000 {
+                               compatible = "virtio,mmio";
+                               reg = <0x130000 0x1000>;
+                               interrupts = <0x2a>;
+                       };
+               };
+
+               v2m_fixed_3v3: fixedregulator@0 {
+                       compatible = "regulator-fixed";
+                       regulator-name = "3V3";
+                       regulator-min-microvolt = <3300000>;
+                       regulator-max-microvolt = <3300000>;
+                       regulator-always-on;
+               };
+
+               mcc {
+                       compatible = "arm,vexpress,config-bus", "simple-bus";
+                       arm,vexpress,config-bridge = <&v2m_sysreg>;
+
+                       v2m_oscclk1: osc@1 {
+                               /* CLCD clock */
+                               compatible = "arm,vexpress-osc";
+                               arm,vexpress-sysreg,func = <1 1>;
+                               freq-range = <23750000 63500000>;
+                               #clock-cells = <0>;
+                               clock-output-names = "v2m:oscclk1";
+                       };
+
+                       reset@0 {
+                               compatible = "arm,vexpress-reset";
+                               arm,vexpress-sysreg,func = <5 0>;
+                       };
+
+                       muxfpga@0 {
+                               compatible = "arm,vexpress-muxfpga";
+                               arm,vexpress-sysreg,func = <7 0>;
+                       };
+
+                       shutdown@0 {
+                               compatible = "arm,vexpress-shutdown";
+                               arm,vexpress-sysreg,func = <8 0>;
+                       };
+
+                       reboot@0 {
+                               compatible = "arm,vexpress-reboot";
+                               arm,vexpress-sysreg,func = <9 0>;
+                       };
+
+                       dvimode@0 {
+                               compatible = "arm,vexpress-dvimode";
+                               arm,vexpress-sysreg,func = <11 0>;
+                       };
+               };
+       };
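
The peripherals in this motherboard fragment are addressed through the smb chipselects set up in each .dts above: chipselect 3 (the iofpga) is mapped at 0x1c000000, so uart@090000 (serial0) lands at physical address 0x1c090000. A sketch of how a platform header might record that derivation follows (the V2M_* names are invented here; PL011_BASE itself is the macro pl011.h insists the platform define):

    /* Sketch only: FVP console UART address derived from the motherboard map. */
    #define V2M_IOFPGA_BASE          0x1c000000   /* smb chipselect 3 */
    #define V2M_IOFPGA_UART0_OFFSET  0x090000     /* iofpga uart@090000, serial0 */
    #define PL011_BASE               (V2M_IOFPGA_BASE + V2M_IOFPGA_UART0_OFFSET)

    #include <pl011.h>  /* pl011.h #errors unless PL011_BASE is already defined */
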
diff --git a/include/aarch64/arch.h b/include/aarch64/arch.h
new file mode 100644 (file)
index 0000000..3a23e4f
--- /dev/null
@@ -0,0 +1,315 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __ARCH_H__
+#define __ARCH_H__
+
+#include <bl_common.h>
+
+/*******************************************************************************
+ * MIDR bit definitions
+ ******************************************************************************/
+#define MIDR_PN_MASK           0xfff
+#define MIDR_PN_SHIFT          0x4
+#define MIDR_PN_AEM            0xd0f
+#define MIDR_PN_A57            0xd07
+#define MIDR_PN_A53            0xd03
+
+/*******************************************************************************
+ * MPIDR macros
+ ******************************************************************************/
+#define MPIDR_CPU_MASK         MPIDR_AFFLVL_MASK
+#define MPIDR_CLUSTER_MASK     MPIDR_AFFLVL_MASK << MPIDR_AFFINITY_BITS
+#define MPIDR_AFFINITY_BITS    8
+#define MPIDR_AFFLVL_MASK      0xff
+#define MPIDR_AFF0_SHIFT       0
+#define MPIDR_AFF1_SHIFT       8
+#define MPIDR_AFF2_SHIFT       16
+#define MPIDR_AFF3_SHIFT       32
+#define MPIDR_AFFINITY_MASK    0xff00ffffff
+#define MPIDR_AFFLVL_SHIFT     3
+#define MPIDR_AFFLVL0          0
+#define MPIDR_AFFLVL1          1
+#define MPIDR_AFFLVL2          2
+#define MPIDR_AFFLVL3          3
+/* TODO: Only the first 3 affinity levels are supported for now */
+#define MPIDR_MAX_AFFLVL       2
+
+/* Constant to highlight the assumption that MPIDR allocation starts from 0 */
+#define FIRST_MPIDR            0
+
+/*******************************************************************************
+ * Implementation defined sysreg encodings
+ ******************************************************************************/
+#define CPUECTLR_EL1   S3_1_C15_C2_1
+
+/*******************************************************************************
+ * System register bit definitions
+ ******************************************************************************/
+/* CLIDR definitions */
+#define LOUIS_SHIFT            21
+#define LOC_SHIFT              24
+#define CLIDR_FIELD_WIDTH      3
+
+/* CSSELR definitions */
+#define LEVEL_SHIFT            1
+
+/* D$ set/way op type defines */
+#define DCISW                  0x0
+#define DCCISW                 0x1
+#define DCCSW                  0x2
+
+/* ID_AA64PFR0_EL1 definitions */
+#define ID_AA64PFR0_EL0_SHIFT  0
+#define ID_AA64PFR0_EL1_SHIFT  4
+#define ID_AA64PFR0_EL2_SHIFT  8
+#define ID_AA64PFR0_EL3_SHIFT  12
+#define ID_AA64PFR0_ELX_MASK   0xf
+
+/* ID_PFR1_EL1 definitions */
+#define ID_PFR1_VIRTEXT_SHIFT  12
+#define ID_PFR1_VIRTEXT_MASK   0xf
+#define GET_VIRT_EXT(id)       ((id >> ID_PFR1_VIRTEXT_SHIFT) \
+                                & ID_PFR1_VIRTEXT_MASK)
+
+/* SCTLR definitions */
+#define SCTLR_EL2_RES1  ((1 << 29) | (1 << 28) | (1 << 23) | (1 << 22) | \
+                       (1 << 18) | (1 << 16) | (1 << 11) | (1 << 5) |  \
+                       (1 << 4))
+
+#define SCTLR_EL1_RES1  ((1 << 29) | (1 << 28) | (1 << 23) | (1 << 22) | \
+                       (1 << 11))
+#define SCTLR_M_BIT            (1 << 0)
+#define SCTLR_A_BIT            (1 << 1)
+#define SCTLR_C_BIT            (1 << 2)
+#define SCTLR_SA_BIT           (1 << 3)
+#define SCTLR_B_BIT            (1 << 7)
+#define SCTLR_Z_BIT            (1 << 11)
+#define SCTLR_I_BIT            (1 << 12)
+#define SCTLR_WXN_BIT          (1 << 19)
+#define SCTLR_EXCEPTION_BITS   (0x3 << 6)
+#define SCTLR_EE_BIT           (1 << 25)
+
+/* CPUECTLR definitions */
+#define CPUECTLR_SMP_BIT       (1 << 6)
+
+/* CPACR_EL1 definitions */
+#define CPACR_EL1_FPEN(x)      (x << 20)
+#define CPACR_EL1_FP_TRAP_EL0  0x1
+#define CPACR_EL1_FP_TRAP_ALL  0x2
+#define CPACR_EL1_FP_TRAP_NONE 0x3
+
+/* SCR definitions */
+#define SCR_RES1_BITS          ((1 << 4) | (1 << 5))
+#define SCR_TWE_BIT            (1 << 13)
+#define SCR_TWI_BIT            (1 << 12)
+#define SCR_ST_BIT             (1 << 11)
+#define SCR_RW_BIT             (1 << 10)
+#define SCR_SIF_BIT            (1 << 9)
+#define SCR_HCE_BIT            (1 << 8)
+#define SCR_SMD_BIT            (1 << 7)
+#define SCR_EA_BIT             (1 << 3)
+#define SCR_FIQ_BIT            (1 << 2)
+#define SCR_IRQ_BIT            (1 << 1)
+#define SCR_NS_BIT             (1 << 0)
+
+/* HCR definitions */
+#define HCR_RW_BIT             (1ull << 31)
+#define HCR_AMO_BIT            (1 << 5)
+#define HCR_IMO_BIT            (1 << 4)
+#define HCR_FMO_BIT            (1 << 3)
+
+/* CNTHCTL_EL2 definitions */
+#define EL1PCEN_BIT            (1 << 1)
+#define EL1PCTEN_BIT           (1 << 0)
+
+/* CNTKCTL_EL1 definitions */
+#define EL0PTEN_BIT            (1 << 9)
+#define EL0VTEN_BIT            (1 << 8)
+#define EL0PCTEN_BIT           (1 << 0)
+#define EL0VCTEN_BIT           (1 << 1)
+
+/* CPTR_EL3 definitions */
+#define TCPAC_BIT              (1ull << 31)
+#define TFP_BIT                        (1 << 10)
+
+/* CPSR/SPSR definitions */
+#define DAIF_FIQ_BIT           (1 << 0)
+#define DAIF_IRQ_BIT           (1 << 1)
+#define DAIF_ABT_BIT           (1 << 2)
+#define DAIF_DBG_BIT           (1 << 3)
+#define PSR_DAIF_SHIFT         0x6
+
+/* TCR definitions */
+#define TCR_EL3_RES1           ((1UL << 31) | (1UL << 23))
+
+#define TCR_T0SZ_4GB           32
+
+#define TCR_RGN_INNER_NC       (0x0 << 8)
+#define TCR_RGN_INNER_WBA      (0x1 << 8)
+#define TCR_RGN_INNER_WT       (0x2 << 8)
+#define TCR_RGN_INNER_WBNA     (0x3 << 8)
+
+#define TCR_RGN_OUTER_NC       (0x0 << 10)
+#define TCR_RGN_OUTER_WBA      (0x1 << 10)
+#define TCR_RGN_OUTER_WT       (0x2 << 10)
+#define TCR_RGN_OUTER_WBNA     (0x3 << 10)
+
+#define TCR_SH_NON_SHAREABLE   (0x0 << 12)
+#define TCR_SH_OUTER_SHAREABLE (0x2 << 12)
+#define TCR_SH_INNER_SHAREABLE (0x3 << 12)
+
+#define MODE_RW_64             0x0
+#define MODE_RW_32             0x1
+#define MODE_SP_EL0            0x0
+#define MODE_SP_ELX            0x1
+#define MODE_EL3               0x3
+#define MODE_EL2               0x2
+#define MODE_EL1               0x1
+#define MODE_EL0               0x0
+
+#define MODE_RW_SHIFT          0x4
+#define MODE_EL_SHIFT          0x2
+#define MODE_SP_SHIFT          0x0
+
+#define GET_RW(mode)           ((mode >> MODE_RW_SHIFT) & 0x1)
+#define GET_EL(mode)           ((mode >> MODE_EL_SHIFT) & 0x3)
+#define GET_SP(mode)           ((mode >> MODE_SP_SHIFT) & 0x1)
+#define PSR_MODE(rw, el, sp)   (rw << MODE_RW_SHIFT | el << MODE_EL_SHIFT \
+                                | sp << MODE_SP_SHIFT)
+
+#define SPSR32_EE_BIT          (1 << 9)
+#define SPSR32_T_BIT           (1 << 5)
+
+#define AARCH32_MODE_SVC       0x13
+#define AARCH32_MODE_HYP       0x1a
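A minimal sketch (not taken from the patch) of how PSR_MODE and the GET_* accessors round-trip the mode field of an SPSR for an AArch64 EL1h target:

    #include <arch.h>

    static unsigned int el1h_mode(void)
    {
            unsigned int mode = PSR_MODE(MODE_RW_64, MODE_EL1, MODE_SP_ELX);

            /* The accessors recover the individual fields again */
            if (GET_RW(mode) == MODE_RW_64 && GET_EL(mode) == MODE_EL1 &&
                GET_SP(mode) == MODE_SP_ELX)
                    return mode;

            return 0;       /* unreachable for the constants used above */
    }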
+
+/* Miscellaneous MMU related constants */
+#define NUM_2MB_IN_GB          (1 << 9)
+#define NUM_4K_IN_2MB          (1 << 9)
+
+#define TWO_MB_SHIFT           21
+#define ONE_GB_SHIFT           30
+#define FOUR_KB_SHIFT          12
+
+#define ONE_GB_INDEX(x)                ((x) >> ONE_GB_SHIFT)
+#define TWO_MB_INDEX(x)                ((x) >> TWO_MB_SHIFT)
+#define FOUR_KB_INDEX(x)       ((x) >> FOUR_KB_SHIFT)
+
+#define INVALID_DESC           0x0
+#define BLOCK_DESC             0x1
+#define TABLE_DESC             0x3
+
+#define FIRST_LEVEL_DESC_N     ONE_GB_SHIFT
+#define SECOND_LEVEL_DESC_N    TWO_MB_SHIFT
+#define THIRD_LEVEL_DESC_N     FOUR_KB_SHIFT
+
+#define LEVEL1                 1
+#define LEVEL2                 2
+#define LEVEL3                 3
+
+#define XN                     (1ull << 2)
+#define PXN                    (1ull << 1)
+#define CONT_HINT              (1ull << 0)
+
+#define UPPER_ATTRS(x)         (((x) & 0x7) << 52)
+#define NON_GLOBAL             (1 << 9)
+#define ACCESS_FLAG            (1 << 8)
+#define NSH                    (0x0 << 6)
+#define OSH                    (0x2 << 6)
+#define ISH                    (0x3 << 6)
+
+/*
+ * The AP[1] bit is ignored by hardware and is
+ * treated as if it were 1 at EL2/EL3
+ */
+#define AP_RO                  (0x1 << 5)
+#define AP_RW                  (0x0 << 5)
+
+#define NS                             (0x1 << 3)
+#define ATTR_SO_INDEX                  0x2
+#define ATTR_DEVICE_INDEX              0x1
+#define ATTR_IWBWA_OWBWA_NTR_INDEX     0x0
+#define LOWER_ATTRS(x)                 (((x) & 0xfff) << 2)
+#define ATTR_SO                                (0x0)
+#define ATTR_DEVICE                    (0x4)
+#define ATTR_IWBWA_OWBWA_NTR           (0xff)
+#define MAIR_ATTR_SET(attr, index)     (attr << (index << 3))
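A hedged sketch of how these pieces are intended to fit together: a MAIR value matching the three attribute indices above, and the attributes for a hypothetical normal-memory, inner-shareable, read-write, non-secure block mapping with execution disabled (illustrative only, not the firmware's actual page-table code):

    #include <arch.h>

    /* MAIR value consistent with the index assignments above */
    static const unsigned long mair_value =
            MAIR_ATTR_SET(ATTR_IWBWA_OWBWA_NTR, ATTR_IWBWA_OWBWA_NTR_INDEX) |
            MAIR_ATTR_SET(ATTR_DEVICE, ATTR_DEVICE_INDEX) |
            MAIR_ATTR_SET(ATTR_SO, ATTR_SO_INDEX);

    /* Build a block descriptor for the given (aligned) output address */
    static unsigned long long normal_mem_block_desc(unsigned long long output_address)
    {
            return output_address | BLOCK_DESC | UPPER_ATTRS(XN) |
                   LOWER_ATTRS(ACCESS_FLAG | ISH | AP_RW | NS |
                               ATTR_IWBWA_OWBWA_NTR_INDEX);
    }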
+
+/* Exception Syndrome register bits and bobs */
+#define ESR_EC_SHIFT                   26
+#define ESR_EC_MASK                    0x3f
+#define ESR_EC_LENGTH                  6
+#define EC_UNKNOWN                     0x0
+#define EC_WFE_WFI                     0x1
+#define EC_AARCH32_CP15_MRC_MCR                0x3
+#define EC_AARCH32_CP15_MRRC_MCRR      0x4
+#define EC_AARCH32_CP14_MRC_MCR                0x5
+#define EC_AARCH32_CP14_LDC_STC                0x6
+#define EC_FP_SIMD                     0x7
+#define EC_AARCH32_CP10_MRC            0x8
+#define EC_AARCH32_CP14_MRRC_MCRR      0xc
+#define EC_ILLEGAL                     0xe
+#define EC_AARCH32_SVC                 0x11
+#define EC_AARCH32_HVC                 0x12
+#define EC_AARCH32_SMC                 0x13
+#define EC_AARCH64_SVC                 0x15
+#define EC_AARCH64_HVC                 0x16
+#define EC_AARCH64_SMC                 0x17
+#define EC_AARCH64_SYS                 0x18
+#define EC_IABORT_LOWER_EL             0x20
+#define EC_IABORT_CUR_EL               0x21
+#define EC_PC_ALIGN                    0x22
+#define EC_DABORT_LOWER_EL             0x24
+#define EC_DABORT_CUR_EL               0x25
+#define EC_SP_ALIGN                    0x26
+#define EC_AARCH32_FP                  0x28
+#define EC_AARCH64_FP                  0x2c
+#define EC_SERROR                      0x2f
+
+#define EC_BITS(x)                     (((x) >> ESR_EC_SHIFT) & ESR_EC_MASK)
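As an illustration of the exception-class constants (a sketch, not part of the patch; read_esr_el3() is declared in arch_helpers.h later in this commit):

    #include <arch.h>
    #include <arch_helpers.h>

    /* Classify a synchronous exception taken to EL3 by its exception class */
    static int exception_is_smc(void)
    {
            unsigned int ec = EC_BITS(read_esr_el3());

            return (ec == EC_AARCH64_SMC) || (ec == EC_AARCH32_SMC);
    }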
+
+#ifndef __ASSEMBLY__
+
+/*******************************************************************************
+ * Function prototypes
+ ******************************************************************************/
+
+extern void early_exceptions(void);
+extern void runtime_exceptions(void);
+extern void bl1_arch_setup(void);
+extern void bl2_arch_setup(void);
+extern void bl31_arch_setup(void);
+
+#endif /*__ASSEMBLY__*/
+
+#endif /* __ARCH_H__ */
diff --git a/include/aarch64/arch_helpers.h b/include/aarch64/arch_helpers.h
new file mode 100644 (file)
index 0000000..348d545
--- /dev/null
@@ -0,0 +1,295 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __ARCH_HELPERS_H__
+#define __ARCH_HELPERS_H__
+
+#include <arch.h>
+
+/*******************************************************************************
+ * Generic timer memory mapped registers & offsets
+ ******************************************************************************/
+#define CNTCR_OFF                      0x000
+#define CNTFID_OFF                     0x020
+
+#define CNTCR_EN                       (1 << 0)
+#define CNTCR_HDBG                     (1 << 1)
+#define CNTCR_FCREQ(x)                 (1 << (8 + (x)))
+
+#ifndef __ASSEMBLY__
+
+/*******************************************************************************
+ * TLB maintenance accessor prototypes
+ ******************************************************************************/
+extern void tlbiall(void);
+extern void tlbiallis(void);
+extern void tlbialle1(void);
+extern void tlbialle1is(void);
+extern void tlbialle2(void);
+extern void tlbialle2is(void);
+extern void tlbialle3(void);
+extern void tlbialle3is(void);
+extern void tlbivmalle1(void);
+
+/*******************************************************************************
+ * Cache maintenance accessor prototypes
+ ******************************************************************************/
+extern void dcisw(unsigned long);
+extern void dccisw(unsigned long);
+extern void dccsw(unsigned long);
+extern void dccvac(unsigned long);
+extern void dcivac(unsigned long);
+extern void dccivac(unsigned long);
+extern void dccvau(unsigned long);
+extern void dczva(unsigned long);
+extern void flush_dcache_range(unsigned long, unsigned long);
+extern void inv_dcache_range(unsigned long, unsigned long);
+extern void dcsw_op_louis(unsigned int);
+extern void dcsw_op_all(unsigned int);
+
+/*******************************************************************************
+ * Misc. accessor prototypes
+ ******************************************************************************/
+extern void enable_irq(void);
+extern void enable_fiq(void);
+extern void enable_serror(void);
+
+extern void disable_irq(void);
+extern void disable_fiq(void);
+extern void disable_serror(void);
+
+extern unsigned long read_id_pfr1_el1(void);
+extern unsigned long read_id_aa64pfr0_el1(void);
+extern unsigned long read_current_el(void);
+extern unsigned long read_daif(void);
+extern unsigned long read_spsr(void);
+extern unsigned long read_spsr_el1(void);
+extern unsigned long read_spsr_el2(void);
+extern unsigned long read_spsr_el3(void);
+extern unsigned long read_elr(void);
+extern unsigned long read_elr_el1(void);
+extern unsigned long read_elr_el2(void);
+extern unsigned long read_elr_el3(void);
+
+extern void write_daif(unsigned long);
+extern void write_spsr(unsigned long);
+extern void write_spsr_el1(unsigned long);
+extern void write_spsr_el2(unsigned long);
+extern void write_spsr_el3(unsigned long);
+extern void write_elr(unsigned long);
+extern void write_elr_el1(unsigned long);
+extern void write_elr_el2(unsigned long);
+extern void write_elr_el3(unsigned long);
+
+extern void wfi(void);
+extern void wfe(void);
+extern void rfe(void);
+extern void sev(void);
+extern void dsb(void);
+extern void isb(void);
+
+extern unsigned int get_afflvl_shift(unsigned int);
+extern unsigned int mpidr_mask_lower_afflvls(unsigned long, unsigned int);
+
+extern void eret(unsigned long, unsigned long,
+                unsigned long, unsigned long,
+                unsigned long, unsigned long,
+                unsigned long, unsigned long);
+
+extern unsigned long  smc(unsigned long, unsigned long,
+                         unsigned long, unsigned long,
+                         unsigned long, unsigned long,
+                         unsigned long, unsigned long);
+
+/*******************************************************************************
+ * System register accessor prototypes
+ ******************************************************************************/
+extern unsigned long read_midr(void);
+extern unsigned long read_mpidr(void);
+
+extern unsigned long read_scr(void);
+extern unsigned long read_hcr(void);
+
+extern unsigned long read_vbar(void);
+extern unsigned long read_vbar_el1(void);
+extern unsigned long read_vbar_el2(void);
+extern unsigned long read_vbar_el3(void);
+
+extern unsigned long read_sctlr(void);
+extern unsigned long read_sctlr_el1(void);
+extern unsigned long read_sctlr_el2(void);
+extern unsigned long read_sctlr_el3(void);
+
+extern unsigned long read_actlr(void);
+extern unsigned long read_actlr_el1(void);
+extern unsigned long read_actlr_el2(void);
+extern unsigned long read_actlr_el3(void);
+
+extern unsigned long read_esr(void);
+extern unsigned long read_esr_el1(void);
+extern unsigned long read_esr_el2(void);
+extern unsigned long read_esr_el3(void);
+
+extern unsigned long read_afsr0(void);
+extern unsigned long read_afsr0_el1(void);
+extern unsigned long read_afsr0_el2(void);
+extern unsigned long read_afsr0_el3(void);
+
+extern unsigned long read_afsr1(void);
+extern unsigned long read_afsr1_el1(void);
+extern unsigned long read_afsr1_el2(void);
+extern unsigned long read_afsr1_el3(void);
+
+extern unsigned long read_far(void);
+extern unsigned long read_far_el1(void);
+extern unsigned long read_far_el2(void);
+extern unsigned long read_far_el3(void);
+
+extern unsigned long read_mair(void);
+extern unsigned long read_mair_el1(void);
+extern unsigned long read_mair_el2(void);
+extern unsigned long read_mair_el3(void);
+
+extern unsigned long read_amair(void);
+extern unsigned long read_amair_el1(void);
+extern unsigned long read_amair_el2(void);
+extern unsigned long read_amair_el3(void);
+
+extern unsigned long read_rvbar(void);
+extern unsigned long read_rvbar_el1(void);
+extern unsigned long read_rvbar_el2(void);
+extern unsigned long read_rvbar_el3(void);
+
+extern unsigned long read_rmr(void);
+extern unsigned long read_rmr_el1(void);
+extern unsigned long read_rmr_el2(void);
+extern unsigned long read_rmr_el3(void);
+
+extern unsigned long read_tcr(void);
+extern unsigned long read_tcr_el1(void);
+extern unsigned long read_tcr_el2(void);
+extern unsigned long read_tcr_el3(void);
+
+extern unsigned long read_ttbr0(void);
+extern unsigned long read_ttbr0_el1(void);
+extern unsigned long read_ttbr0_el2(void);
+extern unsigned long read_ttbr0_el3(void);
+
+extern unsigned long read_ttbr1(void);
+extern unsigned long read_ttbr1_el1(void);
+extern unsigned long read_ttbr1_el2(void);
+
+extern unsigned long read_cptr(void);
+extern unsigned long read_cptr_el2(void);
+extern unsigned long read_cptr_el3(void);
+
+extern unsigned long read_cpacr(void);
+extern unsigned long read_cpuectlr(void);
+extern unsigned int read_cntfrq_el0(void);
+extern unsigned long read_cnthctl_el2(void);
+
+extern void write_scr(unsigned long);
+extern void write_hcr(unsigned long);
+extern void write_cpacr(unsigned long);
+extern void write_cntfrq_el0(unsigned int);
+extern void write_cnthctl_el2(unsigned long);
+
+extern void write_vbar(unsigned long);
+extern void write_vbar_el1(unsigned long);
+extern void write_vbar_el2(unsigned long);
+extern void write_vbar_el3(unsigned long);
+
+extern void write_sctlr(unsigned long);
+extern void write_sctlr_el1(unsigned long);
+extern void write_sctlr_el2(unsigned long);
+extern void write_sctlr_el3(unsigned long);
+
+extern void write_actlr(unsigned long);
+extern void write_actlr_el1(unsigned long);
+extern void write_actlr_el2(unsigned long);
+extern void write_actlr_el3(unsigned long);
+
+extern void write_esr(unsigned long);
+extern void write_esr_el1(unsigned long);
+extern void write_esr_el2(unsigned long);
+extern void write_esr_el3(unsigned long);
+
+extern void write_afsr0(unsigned long);
+extern void write_afsr0_el1(unsigned long);
+extern void write_afsr0_el2(unsigned long);
+extern void write_afsr0_el3(unsigned long);
+
+extern void write_afsr1(unsigned long);
+extern void write_afsr1_el1(unsigned long);
+extern void write_afsr1_el2(unsigned long);
+extern void write_afsr1_el3(unsigned long);
+
+extern void write_far(unsigned long);
+extern void write_far_el1(unsigned long);
+extern void write_far_el2(unsigned long);
+extern void write_far_el3(unsigned long);
+
+extern void write_mair(unsigned long);
+extern void write_mair_el1(unsigned long);
+extern void write_mair_el2(unsigned long);
+extern void write_mair_el3(unsigned long);
+
+extern void write_amair(unsigned long);
+extern void write_amair_el1(unsigned long);
+extern void write_amair_el2(unsigned long);
+extern void write_amair_el3(unsigned long);
+
+extern void write_rmr(unsigned long);
+extern void write_rmr_el1(unsigned long);
+extern void write_rmr_el2(unsigned long);
+extern void write_rmr_el3(unsigned long);
+
+extern void write_tcr(unsigned long);
+extern void write_tcr_el1(unsigned long);
+extern void write_tcr_el2(unsigned long);
+extern void write_tcr_el3(unsigned long);
+
+extern void write_ttbr0(unsigned long);
+extern void write_ttbr0_el1(unsigned long);
+extern void write_ttbr0_el2(unsigned long);
+extern void write_ttbr0_el3(unsigned long);
+
+extern void write_ttbr1(unsigned long);
+extern void write_ttbr1_el1(unsigned long);
+extern void write_ttbr1_el2(unsigned long);
+
+extern void write_cptr(unsigned long);
+extern void write_cpuectlr(unsigned long);
+extern void write_cptr_el2(unsigned long);
+extern void write_cptr_el3(unsigned long);
+
+#endif /*__ASSEMBLY__*/
+
+#endif /* __ARCH_HELPERS_H__ */
diff --git a/include/asm_macros.S b/include/asm_macros.S
new file mode 100644 (file)
index 0000000..f7afdfc
--- /dev/null
@@ -0,0 +1,82 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+       .macro  func_prologue
+       stp     x29, x30, [sp, #-0x10]!
+       mov     x29,sp
+       .endm
+
+       .macro  func_epilogue
+       ldp     x29, x30, [sp], #0x10
+       .endm
+
+
+       .macro  dcache_line_size  reg, tmp
+       mrs     \tmp, ctr_el0
+       ubfx    \tmp, \tmp, #16, #4
+       mov     \reg, #4
+       lsl     \reg, \reg, \tmp
+       .endm
+
+
+       .macro  icache_line_size  reg, tmp
+       mrs     \tmp, ctr_el0
+       and     \tmp, \tmp, #0xf
+       mov     \reg, #4
+       lsl     \reg, \reg, \tmp
+       .endm
+
+
+       .macro  exception_entry  func
+       stp     x29, x30, [sp, #-0x10]!
+       bl      \func
+       .endm
+
+
+       .macro  exception_exit  func
+       bl      \func
+       ldp     x29, x30, [sp], #0x10
+       .endm
+
+
+       .macro  smc_check  label
+       bl      read_esr
+       ubfx    x0, x0, #ESR_EC_SHIFT, #ESR_EC_LENGTH
+       cmp     x0, #EC_AARCH64_SMC
+       b.ne    \label
+       .endm
+
+
+       .macro  setup_dcsw_op_args  start_level, end_level, clidr, shift, fw, ls
+       mrs     \clidr, clidr_el1
+       mov     \start_level, xzr
+       ubfx    \end_level, \clidr, \shift, \fw
+       lsl     \end_level, \end_level, \ls
+       .endm
diff --git a/include/bakery_lock.h b/include/bakery_lock.h
new file mode 100644 (file)
index 0000000..6c4ab8f
--- /dev/null
@@ -0,0 +1,55 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BAKERY_LOCK_H__
+#define __BAKERY_LOCK_H__
+
+#include <platform.h>
+
+#define BAKERY_LOCK_MAX_CPUS           PLATFORM_CORE_COUNT
+
+#ifndef __ASSEMBLY__
+typedef struct {
+       volatile int owner;
+       volatile char entering[BAKERY_LOCK_MAX_CPUS];
+       volatile unsigned number[BAKERY_LOCK_MAX_CPUS];
+} bakery_lock;
+
+#define NO_OWNER (-1)
+
+void bakery_lock_init(bakery_lock* bakery);
+/* Check whether a lock is held. Mainly used for debugging purposes. */
+int bakery_lock_held(unsigned long mpidr, const bakery_lock * bakery);
+void bakery_lock_get(unsigned long mpidr, bakery_lock* bakery);
+void bakery_lock_release(unsigned long mpidr, bakery_lock* bakery);
+int bakery_lock_try(unsigned long mpidr, bakery_lock* bakery);
+#endif /*__ASSEMBLY__*/
+
+#endif /* __BAKERY_LOCK_H__ */
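A usage sketch only, under the assumption that the caller passes its own MPIDR as the API requires; the lock name and wrapper function are hypothetical, not part of this patch:

    #include <bakery_lock.h>

    /* Zero-initialised in BSS; expected to be set up once via
     * bakery_lock_init(&console_lock) before first use. */
    static bakery_lock console_lock;

    static void print_under_lock(unsigned long mpidr)
    {
            bakery_lock_get(mpidr, &console_lock);
            /* ... critical section: touch the shared resource ... */
            bakery_lock_release(mpidr, &console_lock);
    }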
diff --git a/include/bl1.h b/include/bl1.h
new file mode 100644 (file)
index 0000000..868ee4f
--- /dev/null
@@ -0,0 +1,52 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BL1_H__
+#define __BL1_H__
+
+#include <bl_common.h>
+
+/******************************************************************************
+ * Function ID of the only SMC that the BL1 exception handlers service.
+ * The chosen value is the first function ID of the ARM SMC64 range.
+ *****************************************************************************/
+#define RUN_IMAGE      0xC0000000
+
+#ifndef __ASSEMBLY__
+
+/******************************************
+ * Function prototypes
+ *****************************************/
+extern void bl1_platform_setup(void);
+extern meminfo bl1_get_sec_mem_layout(void);
+
+#endif /*__ASSEMBLY__*/
+
+#endif /* __BL1_H__ */
diff --git a/include/bl2.h b/include/bl2.h
new file mode 100644 (file)
index 0000000..6fa8721
--- /dev/null
@@ -0,0 +1,48 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BL2_H__
+#define __BL2_H__
+
+#include <bl_common.h>
+
+/******************************************
+ * Data declarations
+ *****************************************/
+extern unsigned long long bl2_entrypoint;
+
+/******************************************
+ * Function prototypes
+ *****************************************/
+extern void bl2_platform_setup(void);
+extern meminfo bl2_get_sec_mem_layout(void);
+extern meminfo bl2_get_ns_mem_layout(void);
+
+#endif /* __BL2_H__ */
diff --git a/include/bl31.h b/include/bl31.h
new file mode 100644 (file)
index 0000000..0d123a4
--- /dev/null
@@ -0,0 +1,51 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BL31_H__
+#define __BL31_H__
+
+#include <bl_common.h>
+
+/*******************************************************************************
+ * Data declarations
+ ******************************************************************************/
+extern unsigned long bl31_entrypoint;
+
+/*******************************************************************************
+ * Function prototypes
+ ******************************************************************************/
+extern void bl31_platform_setup(void);
+extern meminfo bl31_get_sec_mem_layout(void);
+extern el_change_info* bl31_get_next_image_info(unsigned long);
+extern void gic_cpuif_deactivate(unsigned int);
+extern void gic_cpuif_setup(unsigned int);
+extern void gic_pcpu_distif_setup(unsigned int);
+extern void gic_setup(void);
+#endif /* __BL31_H__ */
diff --git a/include/bl_common.h b/include/bl_common.h
new file mode 100644 (file)
index 0000000..58accdb
--- /dev/null
@@ -0,0 +1,134 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __BL_COMMON_H__
+#define __BL_COMMON_H__
+
+#define SECURE         0
+#define NON_SECURE     1
+
+#define UP     1
+#define DOWN   0
+
+/*******************************************************************************
+ * Constants for loading images. When BLx wants to load BLy, it looks at a
+ * meminfo structure to find the extents of free memory. Then depending upon
+ * how it has been configured, it can either load BLy at the top or bottom of
+ * the free memory. These constants indicate the choice.
+ * TODO: Make this configurable while building the trusted firmware.
+ ******************************************************************************/
+#define TOP_LOAD       0x1
+#define BOT_LOAD       !TOP_LOAD
+#define LOAD_MASK      (1 << 0)
+
+/*******************************************************************************
+ * Size of memory for sharing data while changing exception levels.
+ *
+ * There are 2 cases where this memory buffer is used:
+ *
+ *   - when BL1 (running in EL3) passes control to BL2 (running in S-EL1).
+ *     BL1 needs to pass the memory layout to BL2, to allow BL2 to find out
+ *     how much free trusted ram remains;
+ *
+ *   - when BL2 (running in S-EL1) passes control back to BL1 (running in EL3)
+ *     to make it run BL31.  BL2 needs to pass the memory layout, as well as
+ *     information on how to pass control to the non-trusted software image.
+ ******************************************************************************/
+#define EL_CHANGE_MEM_SIZE     (sizeof(meminfo) + sizeof(el_change_info))
+
+
+#ifndef __ASSEMBLY__
+/*******************************************************************************
+ * Structure used for telling the next BL how much of a particular type of
+ * memory is available for its use and how much is already used.
+ ******************************************************************************/
+typedef struct {
+       unsigned long total_base;
+       long total_size;
+       unsigned long free_base;
+       long free_size;
+       unsigned long attr;
+       unsigned long next;
+} meminfo;
+
+typedef struct {
+       unsigned long arg0;
+       unsigned long arg1;
+       unsigned long arg2;
+       unsigned long arg3;
+       unsigned long arg4;
+       unsigned long arg5;
+       unsigned long arg6;
+       unsigned long arg7;
+} aapcs64_params;
+
+/*******************************************************************************
+ * This structure represents the superset of information needed while switching
+ * exception levels. The only two mechanisms for doing so are ERET & SMC. In the
+ * case of an SMC, all members apart from 'aapcs64_params' are ignored. The
+ * 'next' member is a placeholder for the future case where BL2 loads multiple
+ * BL3x images as well as a non-secure image, requiring multiple such structures
+ * to be passed to BL31 in EL3.
+ ******************************************************************************/
+typedef struct {
+       unsigned long entrypoint;
+       unsigned long spsr;
+       unsigned long security_state;
+       aapcs64_params args;
+       unsigned long next;
+} el_change_info;
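The EL_CHANGE_MEM_SIZE comment earlier in this header implies a "meminfo followed by el_change_info" layout for the shared buffer. The sketch below shows one plausible way a consumer could read that buffer; the layout actually used by BL1/BL2 is defined by their implementations, so treat this as illustrative only:

    #include <bl_common.h>

    static void read_shared_buffer(meminfo *mem_out, el_change_info *info_out)
    {
            unsigned char *buf = (unsigned char *) get_el_change_mem_ptr();

            *mem_out  = *(meminfo *) buf;                            /* first: memory layout */
            *info_out = *(el_change_info *) (buf + sizeof(meminfo)); /* then: next image info */
    }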
+
+/*******************************************************************************
+ * Function & variable prototypes
+ ******************************************************************************/
+extern unsigned long page_align(unsigned long, unsigned);
+extern void change_security_state(unsigned int);
+extern int drop_el(aapcs64_params *, unsigned long, unsigned long);
+extern long raise_el(aapcs64_params *);
+extern long change_el(el_change_info *);
+extern unsigned long make_spsr(unsigned long, unsigned long, unsigned long);
+extern void init_bl2_mem_layout(meminfo *,
+                               meminfo *,
+                               unsigned int,
+                               unsigned long) __attribute__((weak));
+extern void init_bl31_mem_layout(const meminfo *,
+                                meminfo *,
+                                unsigned int) __attribute__((weak));
+extern unsigned long load_image(meminfo *, const char *, unsigned int, unsigned long);
+extern int run_image(unsigned long,
+                    unsigned long,
+                    unsigned long,
+                    meminfo *,
+                    void *);
+extern unsigned long *get_el_change_mem_ptr(void);
+
+#endif /*__ASSEMBLY__*/
+
+#endif /* __BL_COMMON_H__ */
diff --git a/include/mmio.h b/include/mmio.h
new file mode 100644 (file)
index 0000000..ecc1f87
--- /dev/null
@@ -0,0 +1,43 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __MMIO_H__
+#define __MMIO_H__
+
+#ifndef __ASSEMBLY__
+
+#include <stdint.h>
+
+extern void mmio_write_32(uintptr_t addr, uint32_t value);
+extern uint32_t mmio_read_32(uintptr_t addr);
+
+#endif /*__ASSEMBLY__*/
+
+#endif /* __MMIO_H__ */
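A small usage sketch of the MMIO accessors; the device base, register offset and enable bit are made-up placeholders, not taken from the patch:

    #include <mmio.h>

    #define DEV_BASE       0x1c090000u     /* hypothetical device base */
    #define DEV_CTRL_OFF   0x30u           /* hypothetical control register offset */
    #define DEV_CTRL_EN    (1u << 0)       /* hypothetical enable bit */

    /* Read-modify-write of a memory-mapped control register */
    static void dev_enable(void)
    {
            uint32_t ctrl = mmio_read_32(DEV_BASE + DEV_CTRL_OFF);

            mmio_write_32(DEV_BASE + DEV_CTRL_OFF, ctrl | DEV_CTRL_EN);
    }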
diff --git a/include/pm.h b/include/pm.h
new file mode 100644 (file)
index 0000000..7a4ef8b
--- /dev/null
@@ -0,0 +1,66 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __PM_H__
+#define __PM_H__
+
+#ifndef __ASSEMBLY__
+
+/*******************************************************************************
+ * Structure populated by platform specific code to export routines which
+ * perform common low level pm functions
+ ******************************************************************************/
+typedef struct {
+       int (*cpu_on)(unsigned long);
+       int (*cpu_off)(unsigned long);
+       int (*cpu_suspend)(unsigned long);
+       int (*affinity_info)(unsigned long, unsigned int);
+} pm_frontend_ops;
+
+/*******************************************************************************
+ * Structure populated by a generic power management API implementation, e.g.
+ * PSCI, to perform API-specific actions after a CPU has been turned on.
+ ******************************************************************************/
+typedef struct {
+       unsigned long (*cpu_off_finisher)(unsigned long);
+       unsigned long (*cpu_suspend_finisher)(unsigned long);
+} pm_backend_ops;
+
+/*******************************************************************************
+ * Function & variable prototypes
+ ******************************************************************************/
+extern pm_frontend_ops *get_pm_frontend_ops(void);
+extern pm_backend_ops *get_pm_backend_ops(void);
+extern void set_pm_frontend_ops(pm_frontend_ops *);
+extern void set_pm_backend_ops(pm_backend_ops *);
+
+#endif /*__ASSEMBLY__*/
+
+#endif /* __PM_H__ */
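A hedged sketch of how a platform might hook into this interface; the fvp_* handler names are hypothetical placeholders assumed to be implemented elsewhere:

    #include <pm.h>

    extern int fvp_cpu_on(unsigned long);
    extern int fvp_cpu_off(unsigned long);
    extern int fvp_cpu_suspend(unsigned long);
    extern int fvp_affinity_info(unsigned long, unsigned int);

    static pm_frontend_ops fvp_pm_ops = {
            fvp_cpu_on,
            fvp_cpu_off,
            fvp_cpu_suspend,
            fvp_affinity_info
    };

    /* Registered once, e.g. from platform setup code during cold boot */
    static void plat_pm_register(void)
    {
            set_pm_frontend_ops(&fvp_pm_ops);
    }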
diff --git a/include/psci.h b/include/psci.h
new file mode 100644 (file)
index 0000000..f63e32c
--- /dev/null
@@ -0,0 +1,166 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __PSCI_H__
+#define __PSCI_H__
+
+/*******************************************************************************
+ * Defines for runtime services func ids
+ ******************************************************************************/
+#define PSCI_VERSION                   0x84000000
+#define PSCI_CPU_SUSPEND_AARCH32       0x84000001
+#define PSCI_CPU_SUSPEND_AARCH64       0xc4000001
+#define PSCI_CPU_OFF                   0x84000002
+#define PSCI_CPU_ON_AARCH32            0x84000003
+#define PSCI_CPU_ON_AARCH64            0xc4000003
+#define PSCI_AFFINITY_INFO_AARCH32     0x84000004
+#define PSCI_AFFINITY_INFO_AARCH64     0xc4000004
+#define PSCI_MIG_AARCH32               0x84000005
+#define PSCI_MIG_AARCH64               0xc4000005
+#define PSCI_MIG_INFO_TYPE             0x84000006
+#define PSCI_MIG_INFO_UP_CPU_AARCH32   0x84000007
+#define PSCI_MIG_INFO_UP_CPU_AARCH64   0xc4000007
+#define PSCI_SYSTEM_OFF                0x84000008
+#define PSCI_SYSTEM_RESET              0x84000009
+
+/*******************************************************************************
+ * PSCI Migrate and friends
+ ******************************************************************************/
+#define PSCI_TOS_UP_MIG_CAP    0
+#define PSCI_TOS_NOT_UP_MIG_CAP        1
+#define PSCI_TOS_NOT_PRESENT   2
+
+/*******************************************************************************
+ * PSCI CPU_SUSPEND 'power_state' parameter specific defines
+ ******************************************************************************/
+#define PSTATE_ID_SHIFT                15
+#define PSTATE_TYPE_SHIFT      16
+#define PSTATE_AFF_LVL_SHIFT   25
+
+#define PSTATE_ID_MASK         0xffff
+#define PSTATE_TYPE_MASK       0x1
+#define PSTATE_AFF_LVL_MASK    0x3
+
+#define psci_get_pstate_id(pstate)     (((pstate) >> PSTATE_ID_SHIFT) & \
+                                        PSTATE_ID_MASK)
+#define psci_get_pstate_type(pstate)   (((pstate) >> PSTATE_TYPE_SHIFT) & \
+                                        PSTATE_TYPE_MASK)
+#define psci_get_pstate_afflvl(pstate) (((pstate) >> PSTATE_AFF_LVL_SHIFT) & \
+                                        PSTATE_AFF_LVL_MASK)
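A minimal sketch of splitting a CPU_SUSPEND 'power_state' argument with the accessors above; the validity check shown is purely illustrative, not the firmware's actual policy:

    #include <arch.h>
    #include <psci.h>

    static int pstate_afflvl_is_sane(unsigned int power_state)
    {
            unsigned int state_id = psci_get_pstate_id(power_state);     /* platform defined */
            unsigned int type     = psci_get_pstate_type(power_state);   /* standby vs. powerdown */
            unsigned int afflvl   = psci_get_pstate_afflvl(power_state); /* deepest affected level */

            (void) state_id;
            (void) type;
            return afflvl <= MPIDR_MAX_AFFLVL;
    }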
+
+/*******************************************************************************
+ * PSCI version
+ ******************************************************************************/
+#define PSCI_MAJOR_VER         (0 << 16)
+#define PSCI_MINOR_VER         0x2
+
+/*******************************************************************************
+ * PSCI error codes
+ ******************************************************************************/
+#define PSCI_E_SUCCESS         0
+#define PSCI_E_NOT_SUPPORTED   -1
+#define PSCI_E_INVALID_PARAMS  -2
+#define PSCI_E_DENIED          -3
+#define PSCI_E_ALREADY_ON      -4
+#define PSCI_E_ON_PENDING      -5
+#define PSCI_E_INTERN_FAIL     -6
+#define PSCI_E_NOT_PRESENT     -7
+#define PSCI_E_DISABLED                -8
+
+/*******************************************************************************
+ * PSCI affinity state related constants. An affinity instance could be present
+ * or absent physically to cater for asymmetric topologies. If present, it can
+ * be in one of the 4 states defined below.
+ ******************************************************************************/
+#define PSCI_STATE_SHIFT       1
+#define PSCI_STATE_MASK                0x7
+#define psci_get_state(x)      (((x) >> PSCI_STATE_SHIFT) & PSCI_STATE_MASK)
+#define psci_set_state(x, y)   do {                                            \
+                                       (x) &= ~(PSCI_STATE_MASK << PSCI_STATE_SHIFT);  \
+                                       (x) |= ((y) & PSCI_STATE_MASK) <<               \
+                                               PSCI_STATE_SHIFT;                       \
+                               } while (0)
+
+#define PSCI_AFF_ABSENT                0x0
+#define PSCI_AFF_PRESENT       0x1
+#define PSCI_STATE_OFF         0x0
+#define PSCI_STATE_ON_PENDING  0x1
+#define PSCI_STATE_SUSPEND     0x2
+#define PSCI_STATE_ON          0x3
+
+/* Number of affinity instances whose state this PSCI implementation can track */
+#define PSCI_NUM_AFFS          32ull
+
+#ifndef __ASSEMBLY__
+/*******************************************************************************
+ * Structure populated by platform specific code to export routines which
+ * perform common low level pm functions
+ ******************************************************************************/
+typedef struct {
+       int (*affinst_standby)(unsigned int);
+       int (*affinst_on)(unsigned long,
+                         unsigned long,
+                         unsigned long,
+                         unsigned int,
+                         unsigned int);
+       int (*affinst_off)(unsigned long, unsigned int, unsigned int);
+       int (*affinst_suspend)(unsigned long,
+                              unsigned long,
+                              unsigned long,
+                              unsigned int,
+                              unsigned int);
+       int (*affinst_on_finish)(unsigned long, unsigned int, unsigned int);
+       int (*affinst_suspend_finish)(unsigned long,
+                                     unsigned int,
+                                     unsigned int);
+} plat_pm_ops;
+
+/*******************************************************************************
+ * Function & Data prototypes
+ ******************************************************************************/
+extern unsigned int psci_version(void);
+extern int psci_cpu_on(unsigned long,
+                      unsigned long,
+                      unsigned long);
+extern int __psci_cpu_suspend(unsigned int, unsigned long, unsigned long);
+extern int __psci_cpu_off(void);
+extern int psci_affinity_info(unsigned long, unsigned int);
+extern int psci_migrate(unsigned int);
+extern unsigned int psci_migrate_info_type(void);
+extern unsigned long psci_migrate_info_up_cpu(void);
+extern void psci_system_off(void);
+extern void psci_system_reset(void);
+extern void psci_aff_on_finish_entry(void);
+extern void psci_aff_suspend_finish_entry(void);
+extern void psci_setup(unsigned long);
+#endif /*__ASSEMBLY__*/
+
+
+#endif /* __PSCI_H__ */
diff --git a/include/runtime_svc.h b/include/runtime_svc.h
new file mode 100644 (file)
index 0000000..ea6accb
--- /dev/null
@@ -0,0 +1,122 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __RUNTIME_SVC_H__
+#define __RUNTIME_SVC_H__
+#include <psci.h>
+
+/*******************************************************************************
+ * Bit definitions inside the function id as per the SMC calling convention
+ ******************************************************************************/
+#define FUNCID_TYPE_SHIFT              31
+#define FUNCID_CC_SHIFT                        30
+#define FUNCID_OWNER_SHIFT             24
+#define FUNCID_NUM_SHIFT               0
+
+#define FUNCID_TYPE_MASK               0x1
+#define FUNCID_CC_MASK                 0x1
+#define FUNCID_OWNER_MASK              0x3f
+#define FUNCID_NUM_MASK                        0xffff
+
+#define GET_SMC_CC(id)                 ((id >> FUNCID_CC_SHIFT) & \
+                                        FUNCID_CC_MASK)
+
+#define SMC_64                         1
+#define SMC_32                         0
+#define SMC_UNK                                0xffffffff
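As an illustration of the field encoding (a sketch, not part of the patch), using PSCI_CPU_ON_AARCH64 from psci.h, which this header already includes:

    #include <runtime_svc.h>

    static void decode_cpu_on_fid(void)
    {
            unsigned int fid   = PSCI_CPU_ON_AARCH64;                             /* 0xc4000003 */
            unsigned int ftype = (fid >> FUNCID_TYPE_SHIFT) & FUNCID_TYPE_MASK;   /* 1: fast call */
            unsigned int cc    = GET_SMC_CC(fid);                                 /* SMC_64 */
            unsigned int owner = (fid >> FUNCID_OWNER_SHIFT) & FUNCID_OWNER_MASK; /* 4: standard service */
            unsigned int fnum  = (fid >> FUNCID_NUM_SHIFT) & FUNCID_NUM_MASK;     /* 3 */

            (void) ftype; (void) cc; (void) owner; (void) fnum;
    }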
+
+/*******************************************************************************
+ * Constants to indicate type of exception to the common exception handler.
+ ******************************************************************************/
+#define SYNC_EXCEPTION_SP_EL0          0x0
+#define IRQ_SP_EL0                     0x1
+#define FIQ_SP_EL0                     0x2
+#define SERROR_SP_EL0                  0x3
+#define SYNC_EXCEPTION_SP_ELX          0x4
+#define IRQ_SP_ELX                     0x5
+#define FIQ_SP_ELX                     0x6
+#define SERROR_SP_ELX                  0x7
+#define SYNC_EXCEPTION_AARCH64         0x8
+#define IRQ_AARCH64                    0x9
+#define FIQ_AARCH64                    0xa
+#define SERROR_AARCH64                 0xb
+#define SYNC_EXCEPTION_AARCH32         0xc
+#define IRQ_AARCH32                    0xd
+#define FIQ_AARCH32                    0xe
+#define SERROR_AARCH32                 0xf
+
+#ifndef __ASSEMBLY__
+
+typedef struct {
+       unsigned long x0;
+       unsigned long x1;
+       unsigned long x2;
+       unsigned long x3;
+       unsigned long x4;
+       unsigned long x5;
+       unsigned long x6;
+       unsigned long x7;
+       unsigned long x8;
+       unsigned long x9;
+       unsigned long x10;
+       unsigned long x11;
+       unsigned long x12;
+       unsigned long x13;
+       unsigned long x14;
+       unsigned long x15;
+       unsigned long x16;
+       unsigned long x17;
+       unsigned long x18;
+       unsigned long x19;
+       unsigned long x20;
+       unsigned long x21;
+       unsigned long x22;
+       unsigned long x23;
+       unsigned long x24;
+       unsigned long x25;
+       unsigned long x26;
+       unsigned long x27;
+       unsigned long x28;
+       unsigned long sp_el0;
+       unsigned long spsr;
+       unsigned long fp;
+       unsigned long lr;
+} gp_regs;
+
+
+/*******************************************************************************
+ * Function & variable prototypes
+ ******************************************************************************/
+extern void runtime_svc_init(unsigned long mpidr);
+
+#endif /*__ASSEMBLY__*/
+
+
+#endif /* __RUNTIME_SVC_H__ */
diff --git a/include/semihosting.h b/include/semihosting.h
new file mode 100644 (file)
index 0000000..b56ff2f
--- /dev/null
@@ -0,0 +1,74 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __SEMIHOSTING_H__
+#define __SEMIHOSTING_H__
+
+#define SEMIHOSTING_SYS_OPEN            0x01
+#define SEMIHOSTING_SYS_CLOSE           0x02
+#define SEMIHOSTING_SYS_WRITE0          0x04
+#define SEMIHOSTING_SYS_WRITEC          0x03
+#define SEMIHOSTING_SYS_WRITE           0x05
+#define SEMIHOSTING_SYS_READ            0x06
+#define SEMIHOSTING_SYS_READC           0x07
+#define SEMIHOSTING_SYS_SEEK            0x0A
+#define SEMIHOSTING_SYS_FLEN            0x0C
+#define SEMIHOSTING_SYS_REMOVE          0x0E
+#define SEMIHOSTING_SYS_SYSTEM          0x12
+#define SEMIHOSTING_SYS_ERRNO           0x13
+
+#define FOPEN_MODE_R                   0x0
+#define FOPEN_MODE_RB                  0x1
+#define FOPEN_MODE_RPLUS               0x2
+#define FOPEN_MODE_RPLUSB              0x3
+#define FOPEN_MODE_W                   0x4
+#define FOPEN_MODE_WB                  0x5
+#define FOPEN_MODE_WPLUS               0x6
+#define FOPEN_MODE_WPLUSB              0x7
+#define FOPEN_MODE_A                   0x8
+#define FOPEN_MODE_AB                  0x9
+#define FOPEN_MODE_APLUS               0xa
+#define FOPEN_MODE_APLUSB              0xb
+
+int semihosting_connection_supported(void);
+int semihosting_file_open(const char *file_name, unsigned int mode);
+int semihosting_file_seek(int file_handle, unsigned int offset);
+int semihosting_file_read(int file_handle, int *length, void *buffer);
+int semihosting_file_write(int file_handle, int *length, void *buffer);
+int semihosting_file_close(int file_handle);
+int semihosting_file_length(int file_handle);
+int semihosting_system(char *command_line);
+int semihosting_get_flen(const char* file_name);
+int semihosting_download_file(const char* file_name, int buf_size, void *buf);
+void semihosting_write_char(char character);
+void semihosting_write_string(char *string);
+char semihosting_read_char(void);
+
+#endif /* __SEMIHOSTING_H__ */
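
As a quick orientation for this API, here is a minimal, hypothetical sketch of how a boot stage might pull an image over the debugger's semihosting channel; the file name, buffer handling and return-value interpretation are assumptions for illustration, not taken from this release.

#include <semihosting.h>

/* Hypothetical helper: load "bl2.bin" into a caller-provided buffer.
 * semihosting_connection_supported() is assumed to return non-zero only
 * when a debugger servicing semihosting calls is attached. */
static int load_image_over_semihosting(void *buf, int buf_size)
{
        if (!semihosting_connection_supported())
                return -1;

        return semihosting_download_file("bl2.bin", buf_size, buf);
}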
diff --git a/include/spinlock.h b/include/spinlock.h
new file mode 100644 (file)
index 0000000..9cc261f
--- /dev/null
@@ -0,0 +1,41 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __SPINLOCK_H__
+#define __SPINLOCK_H__
+
+typedef struct {
+       volatile unsigned int lock;
+} spinlock_t;
+
+void spin_lock(spinlock_t *lock);
+void spin_unlock(spinlock_t *lock);
+
+#endif /* __SPINLOCK_H__ */
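
A short usage sketch, assuming a statically allocated lock is zero-initialised to the unlocked state; the variable and function below are illustrative only.

#include <spinlock.h>

static spinlock_t state_lock;   /* zero-initialised, i.e. unlocked */

static void update_shared_state(void)
{
        spin_lock(&state_lock);
        /* critical section: touch state shared between CPUs here */
        spin_unlock(&state_lock);
}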
diff --git a/lib/arch/aarch64/cache_helpers.S b/lib/arch/aarch64/cache_helpers.S
new file mode 100644 (file)
index 0000000..b8a5608
--- /dev/null
@@ -0,0 +1,233 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch_helpers.h>
+#include <asm_macros.S>
+
+       .globl  dcisw
+       .globl  dccisw
+       .globl  dccsw
+       .globl  dccvac
+       .globl  dcivac
+       .globl  dccivac
+       .globl  dccvau
+       .globl  dczva
+       .globl  flush_dcache_range
+       .globl  inv_dcache_range
+       .globl  dcsw_op_louis
+       .globl  dcsw_op_all
+
+       .section        .text, "ax"; .align 3
+
+dcisw:; .type dcisw, %function
+       dc      isw, x0
+       dsb     sy
+       isb
+       ret
+
+
+dccisw:; .type dccisw, %function
+       dc      cisw, x0
+       dsb     sy
+       isb
+       ret
+
+
+dccsw:; .type dccsw, %function
+       dc      csw, x0
+       dsb     sy
+       isb
+       ret
+
+
+dccvac:; .type dccvac, %function
+       dc      cvac, x0
+       dsb     sy
+       isb
+       ret
+
+
+dcivac:; .type dcivac, %function
+       dc      ivac, x0
+       dsb     sy
+       isb
+       ret
+
+
+dccivac:; .type dccivac, %function
+       dc      civac, x0
+       dsb     sy
+       isb
+       ret
+
+
+dccvau:; .type dccvau, %function
+       dc      cvau, x0
+       dsb     sy
+       isb
+       ret
+
+
+dczva:; .type dczva, %function
+       dc      zva, x0
+       dsb     sy
+       isb
+       ret
+
+
+       /* ------------------------------------------
+        * Clean + invalidate 'x1' bytes of data
+        * cache, starting at the address in 'x0'.
+        * ------------------------------------------
+        */
+flush_dcache_range:; .type flush_dcache_range, %function
+       dcache_line_size x2, x3
+       add     x1, x0, x1
+       sub     x3, x2, #1
+       bic     x0, x0, x3
+flush_loop:
+       dc      civac, x0
+       add     x0, x0, x2
+       cmp     x0, x1
+       b.lo    flush_loop
+       dsb     sy
+       ret
+
+
+       /* ------------------------------------------
+        * Invalidate 'x1' bytes of data cache,
+        * starting at the address in 'x0'.
+        * ------------------------------------------
+        */
+inv_dcache_range:; .type inv_dcache_range, %function
+       dcache_line_size x2, x3
+       add     x1, x0, x1
+       sub     x3, x2, #1
+       bic     x0, x0, x3
+inv_loop:
+       dc      ivac, x0
+       add     x0, x0, x2
+       cmp     x0, x1
+       b.lo    inv_loop
+       dsb     sy
+       ret
+
+
+       /* ---------------------------------------------------
+        * Data cache operations by set/way up to the level
+        * specified. Call with the clidr in x0, the starting
+        * cache level in x10, the last cache level in x3 and
+        * the cache maintenance op (function pointer) in x14.
+        * ---------------------------------------------------
+        */
+dcsw_op:; .type dcsw_op, %function
+all_start_at_level:
+       add     x2, x10, x10, lsr #1            // work out 3x current cache level
+       lsr     x1, x0, x2                      // extract cache type bits from clidr
+       and     x1, x1, #7                      // mask off the bits for the current cache only
+       cmp     x1, #2                          // see what cache we have at this level
+       b.lt    skip                            // skip if no cache, or just i-cache
+       msr     csselr_el1, x10                 // select current cache level in csselr
+       isb                                     // isb to sync the new csselr & ccsidr
+       mrs     x1, ccsidr_el1                  // read the new ccsidr
+       and     x2, x1, #7                      // extract the length of the cache lines
+       add     x2, x2, #4                      // add 4 (line length offset)
+       mov     x4, #0x3ff
+       and     x4, x4, x1, lsr #3              // find the maximum way number (associativity - 1)
+       clz     w5, w4                          // find bit position of way size increment
+       mov     x7, #0x7fff
+       and     x7, x7, x1, lsr #13             // extract the maximum set number (sets - 1)
+loop2:
+       mov     x9, x4                          // create working copy of max way size
+loop3:
+       lsl     x6, x9, x5
+       orr     x11, x10, x6                    // factor way and cache number into x11
+       lsl     x6, x7, x2
+       orr     x11, x11, x6                    // factor index number into x11
+       mov     x12, x0
+       mov     x13, x30 // lr
+       mov     x0, x11
+       blr     x14
+       mov     x0, x12
+       mov     x30, x13 // lr
+       subs    x9, x9, #1                      // decrement the way
+       b.ge    loop3
+       subs    x7, x7, #1                      // decrement the index
+       b.ge    loop2
+skip:
+       add     x10, x10, #2                    // increment cache number
+       cmp     x3, x10
+       b.gt    all_start_at_level
+finished:
+       mov     x10, #0                         // switch back to cache level 0
+       msr     csselr_el1, x10                 // select current cache level in csselr
+       dsb     sy
+       isb
+       ret
+
+
+do_dcsw_op:; .type do_dcsw_op, %function
+       cbz     x3, exit
+       cmp     x0, #DCISW
+       b.eq    dc_isw
+       cmp     x0, #DCCISW
+       b.eq    dc_cisw
+       cmp     x0, #DCCSW
+       b.eq    dc_csw
+dc_isw:
+       mov     x0, x9
+       adr     x14, dcisw
+       b       dcsw_op
+dc_cisw:
+       mov     x0, x9
+       adr     x14, dccisw
+       b       dcsw_op
+dc_csw:
+       mov     x0, x9
+       adr     x14, dccsw
+       b       dcsw_op
+exit:
+       ret
+
+
+dcsw_op_louis:; .type dcsw_op_louis, %function
+       dsb     sy
+       setup_dcsw_op_args x10, x3, x9, #LOUIS_SHIFT, #CLIDR_FIELD_WIDTH, #LEVEL_SHIFT
+       b       do_dcsw_op
+
+
+dcsw_op_all:; .type dcsw_op_all, %function
+       dsb     sy
+       setup_dcsw_op_args x10, x3, x9, #LOC_SHIFT, #CLIDR_FIELD_WIDTH, #LEVEL_SHIFT
+       b       do_dcsw_op
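
For illustration, a hedged sketch of how C code might call the flush_dcache_range() routine exported above, e.g. to publish a value to an observer whose caches are still off; the prototype and the mailbox idea are assumptions for the example, not lifted from this tree.

/* Prototype matching the assembly interface above (x0 = addr, x1 = size). */
extern void flush_dcache_range(unsigned long addr, unsigned long size);

static void publish_value(volatile unsigned long *mailbox, unsigned long val)
{
        *mailbox = val;
        /* Clean + invalidate the cache line(s) holding the mailbox so the
         * update reaches memory before another agent polls it. */
        flush_dcache_range((unsigned long)mailbox, sizeof(*mailbox));
}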
diff --git a/lib/arch/aarch64/misc_helpers.S b/lib/arch/aarch64/misc_helpers.S
new file mode 100644 (file)
index 0000000..8c1f740
--- /dev/null
@@ -0,0 +1,274 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch_helpers.h>
+
+       .globl  enable_irq
+       .globl  disable_irq
+
+       .globl  enable_fiq
+       .globl  disable_fiq
+
+       .globl  enable_serror
+       .globl  disable_serror
+
+       .globl  read_daif
+       .globl  write_daif
+
+       .globl  read_spsr
+       .globl  read_spsr_el1
+       .globl  read_spsr_el2
+       .globl  read_spsr_el3
+
+       .globl  write_spsr
+       .globl  write_spsr_el1
+       .globl  write_spsr_el2
+       .globl  write_spsr_el3
+
+       .globl  read_elr
+       .globl  read_elr_el1
+       .globl  read_elr_el2
+       .globl  read_elr_el3
+
+       .globl  write_elr
+       .globl  write_elr_el1
+       .globl  write_elr_el2
+       .globl  write_elr_el3
+
+       .globl  get_afflvl_shift
+       .globl  mpidr_mask_lower_afflvls
+       .globl  dsb
+       .globl  isb
+       .globl  sev
+       .globl  wfe
+       .globl  wfi
+       .globl  eret
+       .globl  smc
+
+
+       .section        .text, "ax"
+
+get_afflvl_shift:; .type get_afflvl_shift, %function
+       cmp     x0, #3
+       cinc    x0, x0, eq
+       mov     x1, #MPIDR_AFFLVL_SHIFT
+       lsl     x0, x0, x1
+       ret
+
+mpidr_mask_lower_afflvls:; .type mpidr_mask_lower_afflvls, %function
+       cmp     x1, #3
+       cinc    x1, x1, eq
+       mov     x2, #MPIDR_AFFLVL_SHIFT
+       lsl     x2, x1, x2
+       lsr     x0, x0, x2
+       lsl     x0, x0, x2
+       ret
+
+       /* -----------------------------------------------------
+        * Asynchronous exception manipulation accessors
+        * -----------------------------------------------------
+        */
+enable_irq:; .type enable_irq, %function
+       msr     daifclr, #DAIF_IRQ_BIT
+       ret
+
+
+enable_fiq:; .type enable_fiq, %function
+       msr     daifclr, #DAIF_FIQ_BIT
+       ret
+
+
+enable_serror:; .type enable_serror, %function
+       msr     daifclr, #DAIF_ABT_BIT
+       ret
+
+
+disable_irq:; .type disable_irq, %function
+       msr     daifset, #DAIF_IRQ_BIT
+       ret
+
+
+disable_fiq:; .type disable_fiq, %function
+       msr     daifset, #DAIF_FIQ_BIT
+       ret
+
+
+disable_serror:; .type disable_serror, %function
+       msr     daifset, #DAIF_ABT_BIT
+       ret
+
+
+read_daif:; .type read_daif, %function
+       mrs     x0, daif
+       ret
+
+
+write_daif:; .type write_daif, %function
+       msr     daif, x0
+       ret
+
+
+read_spsr:; .type read_spsr, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_spsr_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_spsr_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_spsr_el3
+
+
+read_spsr_el1:; .type read_spsr_el1, %function
+       mrs     x0, spsr_el1
+       ret
+
+
+read_spsr_el2:; .type read_spsr_el2, %function
+       mrs     x0, spsr_el2
+       ret
+
+
+read_spsr_el3:; .type read_spsr_el3, %function
+       mrs     x0, spsr_el3
+       ret
+
+
+write_spsr:; .type write_spsr, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_spsr_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_spsr_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_spsr_el3
+
+
+write_spsr_el1:; .type write_spsr_el1, %function
+       msr     spsr_el1, x0
+       isb
+       ret
+
+
+write_spsr_el2:; .type write_spsr_el2, %function
+       msr     spsr_el2, x0
+       isb
+       ret
+
+
+write_spsr_el3:; .type write_spsr_el3, %function
+       msr     spsr_el3, x0
+       isb
+       ret
+
+
+read_elr:; .type read_elr, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_elr_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_elr_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_elr_el3
+
+
+read_elr_el1:; .type read_elr_el1, %function
+       mrs     x0, elr_el1
+       ret
+
+
+read_elr_el2:; .type read_elr_el2, %function
+       mrs     x0, elr_el2
+       ret
+
+
+read_elr_el3:; .type read_elr_el3, %function
+       mrs     x0, elr_el3
+       ret
+
+
+write_elr:; .type write_elr, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_elr_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_elr_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_elr_el3
+
+
+write_elr_el1:; .type write_elr_el1, %function
+       msr     elr_el1, x0
+       isb
+       ret
+
+
+write_elr_el2:; .type write_elr_el2, %function
+       msr     elr_el2, x0
+       isb
+       ret
+
+
+write_elr_el3:; .type write_elr_el3, %function
+       msr     elr_el3, x0
+       isb
+       ret
+
+
+dsb:; .type dsb, %function
+       dsb     sy
+       ret
+
+
+isb:; .type isb, %function
+       isb
+       ret
+
+
+sev:; .type sev, %function
+       sev
+       ret
+
+
+wfe:; .type wfe, %function
+       wfe
+       ret
+
+
+wfi:; .type wfi, %function
+       wfi
+       ret
+
+
+eret:; .type eret, %function
+       eret
+
+
+smc:; .type smc, %function
+       smc     #0
diff --git a/lib/arch/aarch64/sysreg_helpers.S b/lib/arch/aarch64/sysreg_helpers.S
new file mode 100644 (file)
index 0000000..e68192f
--- /dev/null
@@ -0,0 +1,1154 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch_helpers.h>
+
+       .globl  read_vbar
+       .globl  read_vbar_el1
+       .globl  read_vbar_el2
+       .globl  read_vbar_el3
+       .globl  write_vbar
+       .globl  write_vbar_el1
+       .globl  write_vbar_el2
+       .globl  write_vbar_el3
+
+       .globl  read_sctlr
+       .globl  read_sctlr_el1
+       .globl  read_sctlr_el2
+       .globl  read_sctlr_el3
+       .globl  write_sctlr
+       .globl  write_sctlr_el1
+       .globl  write_sctlr_el2
+       .globl  write_sctlr_el3
+
+       .globl  read_actlr
+       .globl  read_actlr_el1
+       .globl  read_actlr_el2
+       .globl  read_actlr_el3
+       .globl  write_actlr
+       .globl  write_actlr_el1
+       .globl  write_actlr_el2
+       .globl  write_actlr_el3
+
+       .globl  read_esr
+       .globl  read_esr_el1
+       .globl  read_esr_el2
+       .globl  read_esr_el3
+       .globl  write_esr
+       .globl  write_esr_el1
+       .globl  write_esr_el2
+       .globl  write_esr_el3
+
+       .globl  read_afsr0
+       .globl  read_afsr0_el1
+       .globl  read_afsr0_el2
+       .globl  read_afsr0_el3
+       .globl  write_afsr0
+       .globl  write_afsr0_el1
+       .globl  write_afsr0_el2
+       .globl  write_afsr0_el3
+
+       .globl  read_afsr1
+       .globl  read_afsr1_el1
+       .globl  read_afsr1_el2
+       .globl  read_afsr1_el3
+       .globl  write_afsr1
+       .globl  write_afsr1_el1
+       .globl  write_afsr1_el2
+       .globl  write_afsr1_el3
+
+       .globl  read_far
+       .globl  read_far_el1
+       .globl  read_far_el2
+       .globl  read_far_el3
+       .globl  write_far
+       .globl  write_far_el1
+       .globl  write_far_el2
+       .globl  write_far_el3
+
+       .globl  read_mair
+       .globl  read_mair_el1
+       .globl  read_mair_el2
+       .globl  read_mair_el3
+       .globl  write_mair
+       .globl  write_mair_el1
+       .globl  write_mair_el2
+       .globl  write_mair_el3
+
+       .globl  read_amair
+       .globl  read_amair_el1
+       .globl  read_amair_el2
+       .globl  read_amair_el3
+       .globl  write_amair
+       .globl  write_amair_el1
+       .globl  write_amair_el2
+       .globl  write_amair_el3
+
+       .globl  read_rvbar
+       .globl  read_rvbar_el1
+       .globl  read_rvbar_el2
+       .globl  read_rvbar_el3
+
+       .globl  read_rmr
+       .globl  read_rmr_el1
+       .globl  read_rmr_el2
+       .globl  read_rmr_el3
+       .globl  write_rmr
+       .globl  write_rmr_el1
+       .globl  write_rmr_el2
+       .globl  write_rmr_el3
+
+       .globl  read_tcr
+       .globl  read_tcr_el1
+       .globl  read_tcr_el2
+       .globl  read_tcr_el3
+       .globl  write_tcr
+       .globl  write_tcr_el1
+       .globl  write_tcr_el2
+       .globl  write_tcr_el3
+
+       .globl  read_cptr
+       .globl  read_cptr_el2
+       .globl  read_cptr_el3
+       .globl  write_cptr
+       .globl  write_cptr_el2
+       .globl  write_cptr_el3
+
+       .globl  read_ttbr0
+       .globl  read_ttbr0_el1
+       .globl  read_ttbr0_el2
+       .globl  read_ttbr0_el3
+       .globl  write_ttbr0
+       .globl  write_ttbr0_el1
+       .globl  write_ttbr0_el2
+       .globl  write_ttbr0_el3
+
+       .globl  read_ttbr1
+       .globl  read_ttbr1_el1
+       .globl  read_ttbr1_el2
+       .globl  write_ttbr1
+       .globl  write_ttbr1_el1
+       .globl  write_ttbr1_el2
+
+       .globl  read_cpacr
+       .globl  write_cpacr
+
+       .globl  read_cntfrq
+       .globl  write_cntfrq
+
+       .globl  read_cpuectlr
+       .globl  write_cpuectlr
+
+       .globl  read_cnthctl_el2
+       .globl  write_cnthctl_el2
+
+       .globl  read_cntfrq_el0
+       .globl  write_cntfrq_el0
+
+       .globl  read_scr
+       .globl  write_scr
+
+       .globl  read_hcr
+       .globl  write_hcr
+
+       .globl  read_midr
+       .globl  read_mpidr
+
+       .globl  read_current_el
+       .globl  read_id_pfr1_el1
+       .globl  read_id_aa64pfr0_el1
+
+#if SUPPORT_VFP
+       .globl  enable_vfp
+       .globl  read_fpexc
+       .globl  write_fpexc
+#endif
+
+
+       .section        .text, "ax"
+
+read_current_el:; .type read_current_el, %function
+       mrs     x0, CurrentEl
+       ret
+
+
+read_id_pfr1_el1:; .type read_id_pfr1_el1, %function
+       mrs     x0, id_pfr1_el1
+       ret
+
+
+read_id_aa64pfr0_el1:; .type read_id_aa64pfr0_el1, %function
+       mrs     x0, id_aa64pfr0_el1
+       ret
+
+
+       /* -----------------------------------------------------
+        * VBAR accessors
+        * -----------------------------------------------------
+        */
+read_vbar:; .type read_vbar, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_vbar_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_vbar_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_vbar_el3
+
+
+read_vbar_el1:; .type read_vbar_el1, %function
+       mrs     x0, vbar_el1
+       ret
+
+
+read_vbar_el2:; .type read_vbar_el2, %function
+       mrs     x0, vbar_el2
+       ret
+
+
+read_vbar_el3:; .type read_vbar_el3, %function
+       mrs     x0, vbar_el3
+       ret
+
+
+write_vbar:; .type write_vbar, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_vbar_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_vbar_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_vbar_el3
+
+
+write_vbar_el1:; .type write_vbar_el1, %function
+       msr     vbar_el1, x0
+       isb
+       ret
+
+
+write_vbar_el2:; .type write_vbar_el2, %function
+       msr     vbar_el2, x0
+       isb
+       ret
+
+
+write_vbar_el3:; .type write_vbar_el3, %function
+       msr     vbar_el3, x0
+       isb
+       ret
+
+
+       /* -----------------------------------------------------
+        * AFSR0 accessors
+        * -----------------------------------------------------
+        */
+read_afsr0:; .type read_afsr0, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_afsr0_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_afsr0_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_afsr0_el3
+
+
+read_afsr0_el1:; .type read_afsr0_el1, %function
+       mrs     x0, afsr0_el1
+       ret
+
+
+read_afsr0_el2:; .type read_afsr0_el2, %function
+       mrs     x0, afsr0_el2
+       ret
+
+
+read_afsr0_el3:; .type read_afsr0_el3, %function
+       mrs     x0, afsr0_el3
+       ret
+
+
+write_afsr0:; .type write_afsr0, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_afsr0_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_afsr0_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_afsr0_el3
+
+
+write_afsr0_el1:; .type write_afsr0_el1, %function
+       msr     afsr0_el1, x0
+       isb
+       ret
+
+
+write_afsr0_el2:; .type write_afsr0_el2, %function
+       msr     afsr0_el2, x0
+       isb
+       ret
+
+
+write_afsr0_el3:; .type write_afsr0_el3, %function
+       msr     afsr0_el3, x0
+       isb
+       ret
+
+
+       /* -----------------------------------------------------
+        * FAR accessors
+        * -----------------------------------------------------
+        */
+read_far:; .type read_far, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_far_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_far_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_far_el3
+
+
+read_far_el1:; .type read_far_el1, %function
+       mrs     x0, far_el1
+       ret
+
+
+read_far_el2:; .type read_far_el2, %function
+       mrs     x0, far_el2
+       ret
+
+
+read_far_el3:; .type read_far_el3, %function
+       mrs     x0, far_el3
+       ret
+
+
+write_far:; .type write_far, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_far_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_far_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_far_el3
+
+
+write_far_el1:; .type write_far_el1, %function
+       msr     far_el1, x0
+       isb
+       ret
+
+
+write_far_el2:; .type write_far_el2, %function
+       msr     far_el2, x0
+       isb
+       ret
+
+
+write_far_el3:; .type write_far_el3, %function
+       msr     far_el3, x0
+       isb
+       ret
+
+
+       /* -----------------------------------------------------
+        * MAIR accessors
+        * -----------------------------------------------------
+        */
+read_mair:; .type read_mair, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_mair_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_mair_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_mair_el3
+
+
+read_mair_el1:; .type read_mair_el1, %function
+       mrs     x0, mair_el1
+       ret
+
+
+read_mair_el2:; .type read_mair_el2, %function
+       mrs     x0, mair_el2
+       ret
+
+
+read_mair_el3:; .type read_mair_el3, %function
+       mrs     x0, mair_el3
+       ret
+
+
+write_mair:; .type write_mair, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_mair_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_mair_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_mair_el3
+
+
+write_mair_el1:; .type write_mair_el1, %function
+       msr     mair_el1, x0
+       isb
+       ret
+
+
+write_mair_el2:; .type write_mair_el2, %function
+       msr     mair_el2, x0
+       isb
+       ret
+
+
+write_mair_el3:; .type write_mair_el3, %function
+       msr     mair_el3, x0
+       isb
+       ret
+
+
+       /* -----------------------------------------------------
+        * AMAIR accessors
+        * -----------------------------------------------------
+        */
+read_amair:; .type read_amair, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_amair_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_amair_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_amair_el3
+
+
+read_amair_el1:; .type read_amair_el1, %function
+       mrs     x0, amair_el1
+       ret
+
+
+read_amair_el2:; .type read_amair_el2, %function
+       mrs     x0, amair_el2
+       ret
+
+
+read_amair_el3:; .type read_amair_el3, %function
+       mrs     x0, amair_el3
+       ret
+
+
+write_amair:; .type write_amair, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_amair_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_amair_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_amair_el3
+
+
+write_amair_el1:; .type write_amair_el1, %function
+       msr     amair_el1, x0
+       isb
+       ret
+
+
+write_amair_el2:; .type write_amair_el2, %function
+       msr     amair_el2, x0
+       isb
+       ret
+
+
+write_amair_el3:; .type write_amair_el3, %function
+       msr     amair_el3, x0
+       isb
+       ret
+
+
+       /* -----------------------------------------------------
+        * RVBAR accessors
+        * -----------------------------------------------------
+        */
+read_rvbar:; .type read_rvbar, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_rvbar_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_rvbar_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_rvbar_el3
+
+
+read_rvbar_el1:; .type read_rvbar_el1, %function
+       mrs     x0, rvbar_el1
+       ret
+
+
+read_rvbar_el2:; .type read_rvbar_el2, %function
+       mrs     x0, rvbar_el2
+       ret
+
+
+read_rvbar_el3:; .type read_rvbar_el3, %function
+       mrs     x0, rvbar_el3
+       ret
+
+
+       /* -----------------------------------------------------
+        * RMR accessors
+        * -----------------------------------------------------
+        */
+read_rmr:; .type read_rmr, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_rmr_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_rmr_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_rmr_el3
+
+
+read_rmr_el1:; .type read_rmr_el1, %function
+       mrs     x0, rmr_el1
+       ret
+
+
+read_rmr_el2:; .type read_rmr_el2, %function
+       mrs     x0, rmr_el2
+       ret
+
+
+read_rmr_el3:; .type read_rmr_el3, %function
+       mrs     x0, rmr_el3
+       ret
+
+
+write_rmr:; .type write_rmr, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_rmr_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_rmr_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_rmr_el3
+
+
+write_rmr_el1:; .type write_rmr_el1, %function
+       msr     rmr_el1, x0
+       isb
+       ret
+
+
+write_rmr_el2:; .type write_rmr_el2, %function
+       msr     rmr_el2, x0
+       isb
+       ret
+
+
+write_rmr_el3:; .type write_rmr_el3, %function
+       msr     rmr_el3, x0
+       isb
+       ret
+
+
+       /* -----------------------------------------------------
+        * AFSR1 accessors
+        * -----------------------------------------------------
+        */
+read_afsr1:; .type read_afsr1, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_afsr1_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_afsr1_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_afsr1_el3
+
+
+read_afsr1_el1:; .type read_afsr1_el1, %function
+       mrs     x0, afsr1_el1
+       ret
+
+
+read_afsr1_el2:; .type read_afsr1_el2, %function
+       mrs     x0, afsr1_el2
+       ret
+
+
+read_afsr1_el3:; .type read_afsr1_el3, %function
+       mrs     x0, afsr1_el3
+       ret
+
+
+write_afsr1:; .type write_afsr1, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_afsr1_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_afsr1_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_afsr1_el3
+
+
+write_afsr1_el1:; .type write_afsr1_el1, %function
+       msr     afsr1_el1, x0
+       isb
+       ret
+
+
+write_afsr1_el2:; .type write_afsr1_el2, %function
+       msr     afsr1_el2, x0
+       isb
+       ret
+
+
+write_afsr1_el3:; .type write_afsr1_el3, %function
+       msr     afsr1_el3, x0
+       isb
+       ret
+
+
+       /* -----------------------------------------------------
+        * SCTLR accessors
+        * -----------------------------------------------------
+        */
+read_sctlr:; .type read_sctlr, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_sctlr_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_sctlr_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_sctlr_el3
+
+
+read_sctlr_el1:; .type read_sctlr_el1, %function
+       mrs     x0, sctlr_el1
+       ret
+
+
+read_sctlr_el2:; .type read_sctlr_el2, %function
+       mrs     x0, sctlr_el2
+       ret
+
+
+read_sctlr_el3:; .type read_sctlr_el3, %function
+       mrs     x0, sctlr_el3
+       ret
+
+
+write_sctlr:; .type write_sctlr, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_sctlr_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_sctlr_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_sctlr_el3
+
+
+write_sctlr_el1:; .type write_sctlr_el1, %function
+       msr     sctlr_el1, x0
+       dsb     sy
+       isb
+       ret
+
+
+write_sctlr_el2:; .type write_sctlr_el2, %function
+       msr     sctlr_el2, x0
+       dsb     sy
+       isb
+       ret
+
+
+write_sctlr_el3:; .type write_sctlr_el3, %function
+       msr     sctlr_el3, x0
+       dsb     sy
+       isb
+       ret
+
+
+       /* -----------------------------------------------------
+        * ACTLR accessors
+        * -----------------------------------------------------
+        */
+read_actlr:; .type read_actlr, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_actlr_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_actlr_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_actlr_el3
+
+
+read_actlr_el1:; .type read_actlr_el1, %function
+       mrs     x0, actlr_el1
+       ret
+
+
+read_actlr_el2:; .type read_actlr_el2, %function
+       mrs     x0, actlr_el2
+       ret
+
+
+read_actlr_el3:; .type read_actlr_el3, %function
+       mrs     x0, actlr_el3
+       ret
+
+
+write_actlr:; .type write_actlr, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_actlr_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_actlr_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_actlr_el3
+
+
+write_actlr_el1:; .type write_actlr_el1, %function
+       msr     actlr_el1, x0
+       dsb     sy
+       isb
+       ret
+
+
+write_actlr_el2:; .type write_actlr_el2, %function
+       msr     actlr_el2, x0
+       dsb     sy
+       isb
+       ret
+
+
+write_actlr_el3:; .type write_actlr_el3, %function
+       msr     actlr_el3, x0
+       dsb     sy
+       isb
+       ret
+
+
+       /* -----------------------------------------------------
+        * ESR accessors
+        * -----------------------------------------------------
+        */
+read_esr:; .type read_esr, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_esr_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_esr_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_esr_el3
+
+
+read_esr_el1:; .type read_esr_el1, %function
+       mrs     x0, esr_el1
+       ret
+
+
+read_esr_el2:; .type read_esr_el2, %function
+       mrs     x0, esr_el2
+       ret
+
+
+read_esr_el3:; .type read_esr_el3, %function
+       mrs     x0, esr_el3
+       ret
+
+
+write_esr:; .type write_esr, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_esr_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_esr_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_esr_el3
+
+
+write_esr_el1:; .type write_esr_el1, %function
+       msr     esr_el1, x0
+       dsb     sy
+       isb
+       ret
+
+
+write_esr_el2:; .type write_esr_el2, %function
+       msr     esr_el2, x0
+       dsb     sy
+       isb
+       ret
+
+
+write_esr_el3:; .type write_esr_el3, %function
+       msr     esr_el3, x0
+       dsb     sy
+       isb
+       ret
+
+
+       /* -----------------------------------------------------
+        * TCR accessors
+        * -----------------------------------------------------
+        */
+read_tcr:; .type read_tcr, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_tcr_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_tcr_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_tcr_el3
+
+
+read_tcr_el1:; .type read_tcr_el1, %function
+       mrs     x0, tcr_el1
+       ret
+
+
+read_tcr_el2:; .type read_tcr_el2, %function
+       mrs     x0, tcr_el2
+       ret
+
+
+read_tcr_el3:; .type read_tcr_el3, %function
+       mrs     x0, tcr_el3
+       ret
+
+
+write_tcr:; .type write_tcr, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_tcr_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_tcr_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_tcr_el3
+
+
+write_tcr_el1:; .type write_tcr_el1, %function
+       msr     tcr_el1, x0
+       dsb     sy
+       isb
+       ret
+
+
+write_tcr_el2:; .type write_tcr_el2, %function
+       msr     tcr_el2, x0
+       dsb     sy
+       isb
+       ret
+
+
+write_tcr_el3:; .type write_tcr_el3, %function
+       msr     tcr_el3, x0
+       dsb     sy
+       isb
+       ret
+
+
+       /* -----------------------------------------------------
+        * CPTR accessors
+        * -----------------------------------------------------
+        */
+read_cptr:; .type read_cptr, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_cptr_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_cptr_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_cptr_el3
+
+
+read_cptr_el1:; .type read_cptr_el1, %function
+       b       read_cptr_el1                   // CPTR_EL1 does not exist; spin deliberately
+       ret
+
+
+read_cptr_el2:; .type read_cptr_el2, %function
+       mrs     x0, cptr_el2
+       ret
+
+
+read_cptr_el3:; .type read_cptr_el3, %function
+       mrs     x0, cptr_el3
+       ret
+
+
+write_cptr:; .type write_cptr, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_cptr_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_cptr_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_cptr_el3
+
+
+write_cptr_el1:; .type write_cptr_el1, %function
+       b       write_cptr_el1                  // CPTR_EL1 does not exist; spin deliberately
+
+
+write_cptr_el2:; .type write_cptr_el2, %function
+       msr     cptr_el2, x0
+       dsb     sy
+       isb
+       ret
+
+
+write_cptr_el3:; .type write_cptr_el3, %function
+       msr     cptr_el3, x0
+       dsb     sy
+       isb
+       ret
+
+
+       /* -----------------------------------------------------
+        * TTBR0 accessors
+        * -----------------------------------------------------
+        */
+read_ttbr0:; .type read_ttbr0, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_ttbr0_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_ttbr0_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_ttbr0_el3
+
+
+read_ttbr0_el1:; .type read_ttbr0_el1, %function
+       mrs     x0, ttbr0_el1
+       ret
+
+
+read_ttbr0_el2:; .type read_ttbr0_el2, %function
+       mrs     x0, ttbr0_el2
+       ret
+
+
+read_ttbr0_el3:; .type read_ttbr0_el3, %function
+       mrs     x0, ttbr0_el3
+       ret
+
+
+write_ttbr0:; .type write_ttbr0, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_ttbr0_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_ttbr0_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_ttbr0_el3
+
+
+write_ttbr0_el1:; .type write_ttbr0_el1, %function
+       msr     ttbr0_el1, x0
+       isb
+       ret
+
+
+write_ttbr0_el2:; .type write_ttbr0_el2, %function
+       msr     ttbr0_el2, x0
+       isb
+       ret
+
+
+write_ttbr0_el3:; .type write_ttbr0_el3, %function
+       msr     ttbr0_el3, x0
+       isb
+       ret
+
+
+       /* -----------------------------------------------------
+        * TTBR1 accessors
+        * -----------------------------------------------------
+        */
+read_ttbr1:; .type read_ttbr1, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    read_ttbr1_el1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    read_ttbr1_el2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    read_ttbr1_el3
+
+
+read_ttbr1_el1:; .type read_ttbr1_el1, %function
+       mrs     x0, ttbr1_el1
+       ret
+
+
+read_ttbr1_el2:; .type read_ttbr1_el2, %function
+       b       read_ttbr1_el2
+
+
+read_ttbr1_el3:; .type read_ttbr1_el3, %function
+       b       read_ttbr1_el3
+
+
+write_ttbr1:; .type write_ttbr1, %function
+       mrs     x1, CurrentEl
+       cmp     x1, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    write_ttbr1_el1
+       cmp     x1, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    write_ttbr1_el2
+       cmp     x1, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    write_ttbr1_el3
+
+
+write_ttbr1_el1:; .type write_ttbr1_el1, %function
+       msr     ttbr1_el1, x0
+       isb
+       ret
+
+
+write_ttbr1_el2:; .type write_ttbr1_el2, %function
+       b       write_ttbr1_el2
+
+
+write_ttbr1_el3:; .type write_ttbr1_el3, %function
+       b       write_ttbr1_el3
+
+
+read_hcr:; .type read_hcr, %function
+       mrs     x0, hcr_el2
+       ret
+
+
+write_hcr:; .type write_hcr, %function
+       msr     hcr_el2, x0
+       dsb     sy
+       isb
+       ret
+
+
+read_cpacr:; .type read_cpacr, %function
+       mrs     x0, cpacr_el1
+       ret
+
+
+write_cpacr:; .type write_cpacr, %function
+       msr     cpacr_el1, x0
+       ret
+
+
+read_cntfrq_el0:; .type read_cntfrq_el0, %function
+       mrs     x0, cntfrq_el0
+       ret
+
+
+write_cntfrq_el0:; .type write_cntfrq_el0, %function
+       msr     cntfrq_el0, x0
+       ret
+
+
+read_cpuectlr:; .type read_cpuectlr, %function
+       mrs     x0, CPUECTLR_EL1
+       ret
+
+
+write_cpuectlr:; .type write_cpuectlr, %function
+       msr     CPUECTLR_EL1, x0
+       dsb     sy
+       isb
+       ret
+
+
+read_cnthctl_el2:; .type read_cnthctl_el2, %function
+       mrs     x0, cnthctl_el2
+       ret
+
+
+write_cnthctl_el2:; .type write_cnthctl_el2, %function
+       msr     cnthctl_el2, x0
+       ret
+
+
+read_cntfrq:; .type read_cntfrq, %function
+       mrs     x0, cntfrq_el0
+       ret
+
+
+write_cntfrq:; .type write_cntfrq, %function
+       msr     cntfrq_el0, x0
+       ret
+
+
+write_scr:; .type write_scr, %function
+       msr     scr_el3, x0
+       dsb     sy
+       isb
+       ret
+
+
+read_scr:; .type read_scr, %function
+       mrs     x0, scr_el3
+       ret
+
+
+read_midr:; .type read_midr, %function
+       mrs     x0, midr_el1
+       ret
+
+
+read_mpidr:; .type read_mpidr, %function
+       mrs     x0, mpidr_el1
+       ret
+
+
+#if SUPPORT_VFP
+enable_vfp:; .type enable_vfp, %function
+       mrs     x0, cpacr_el1
+       orr     x0, x0, #CPACR_VFP_BITS
+       msr     cpacr_el1, x0
+       mrs     x0, cptr_el3
+       mov     x1, #AARCH64_CPTR_TFP
+       bic     x0, x0, x1
+       msr     cptr_el3, x0
+       ret
+
+
+       // int read_fpexc(void)
+read_fpexc:; .type read_fpexc, %function
+       b       read_fpexc
+       ret
+
+
+       // void write_fpexc(int fpexc)
+write_fpexc:; .type write_fpexc, %function
+       b       write_fpexc
+       ret
+
+#endif
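
To show the intended use of the EL-dispatching accessors, here is an illustrative sketch that installs a vector table and turns on the instruction cache at the current exception level; 'my_vectors' is an assumed symbol for the example, and the SCTLR bit used is the architectural I bit (bit 12).

extern void write_vbar(unsigned long vbar);
extern unsigned long read_sctlr(void);
extern void write_sctlr(unsigned long sctlr);
extern unsigned long my_vectors[];      /* assumed vector table symbol */

static void setup_vectors_and_icache(void)
{
        write_vbar((unsigned long)my_vectors);
        /* SCTLR_ELx.I is bit 12: enable the instruction cache */
        write_sctlr(read_sctlr() | (1UL << 12));
}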
diff --git a/lib/arch/aarch64/tlb_helpers.S b/lib/arch/aarch64/tlb_helpers.S
new file mode 100644 (file)
index 0000000..8377f2c
--- /dev/null
@@ -0,0 +1,111 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch_helpers.h>
+
+       .globl  tlbiall
+       .globl  tlbiallis
+       .globl  tlbialle1
+       .globl  tlbialle1is
+       .globl  tlbialle2
+       .globl  tlbialle2is
+       .globl  tlbialle3
+       .globl  tlbialle3is
+       .globl  tlbivmalle1
+
+
+       .section        .text, "ax"
+
+tlbiall:; .type tlbiall, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    tlbialle1
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    tlbialle2
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    tlbialle3
+
+
+tlbiallis:; .type tlbiallis, %function
+       mrs     x0, CurrentEl
+       cmp     x0, #(MODE_EL1 << MODE_EL_SHIFT)
+       b.eq    tlbialle1is
+       cmp     x0, #(MODE_EL2 << MODE_EL_SHIFT)
+       b.eq    tlbialle2is
+       cmp     x0, #(MODE_EL3 << MODE_EL_SHIFT)
+       b.eq    tlbialle3is
+
+
+tlbialle1:; .type tlbialle1, %function
+       tlbi    alle1
+       dsb     sy
+       isb
+       ret
+
+
+tlbialle1is:; .type tlbialle1is, %function
+       tlbi    alle1is
+       dsb     sy
+       isb
+       ret
+
+
+tlbialle2:; .type tlbialle2, %function
+       tlbi    alle2
+       dsb     sy
+       isb
+       ret
+
+
+tlbialle2is:; .type tlbialle2is, %function
+       tlbi    alle2is
+       dsb     sy
+       isb
+       ret
+
+
+tlbialle3:; .type tlbialle3, %function
+       tlbi    alle3
+       dsb     sy
+       isb
+       ret
+
+
+tlbialle3is:; .type tlbialle3is, %function
+       tlbi    alle3is
+       dsb     sy
+       isb
+       ret
+
+tlbivmalle1:; .type tlbivmalle1, %function
+       tlbi    vmalle1
+       dsb     sy
+       isb
+       ret
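
A hedged example of where these helpers fit: after switching translation tables, stale TLB entries for the current exception level are invalidated. The accessor prototypes are assumptions matching the assembly above, and the surrounding MMU enable/disable sequencing is deliberately omitted.

extern void write_ttbr0(unsigned long ttbr0);
extern void write_tcr(unsigned long tcr);
extern void tlbiall(void);      /* dispatches to TLBI ALLE1/2/3 by current EL */

static void switch_translation_tables(unsigned long ttbr0, unsigned long tcr)
{
        write_tcr(tcr);
        write_ttbr0(ttbr0);
        tlbiall();
}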
diff --git a/lib/mmio.c b/lib/mmio.c
new file mode 100644 (file)
index 0000000..bf35e36
--- /dev/null
@@ -0,0 +1,41 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+
+void mmio_write_32(uintptr_t addr, uint32_t value)
+{
+       *(volatile uint32_t*)addr = value;
+}
+
+uint32_t mmio_read_32(uintptr_t addr)
+{
+       return *(volatile uint32_t*)addr;
+}
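
A typical read-modify-write of a device register using these accessors; the register address and bit position are invented for the example, and the prototypes are assumed to come from mmio.h.

#include <stdint.h>
#include <mmio.h>

#define DEV_CTRL_REG    0x1a000000UL    /* made-up device register address */
#define DEV_CTRL_ENABLE (1U << 0)       /* made-up enable bit */

static void dev_enable(void)
{
        uint32_t ctrl = mmio_read_32(DEV_CTRL_REG);

        mmio_write_32(DEV_CTRL_REG, ctrl | DEV_CTRL_ENABLE);
}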
diff --git a/lib/non-semihosting/ctype.h b/lib/non-semihosting/ctype.h
new file mode 100644 (file)
index 0000000..88e7da1
--- /dev/null
@@ -0,0 +1,60 @@
+/*-
+ * Copyright (c) 1982, 1988, 1991, 1993
+ *     The Regents of the University of California.  All rights reserved.
+ * (c) UNIX System Laboratories, Inc.
+ * All or some portions of this file are derived from material licensed
+ * to the University of California by American Telephone and Telegraph
+ * Co. or Unix System Laboratories, Inc. and are reproduced herein with
+ * the permission of UNIX System Laboratories, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 4. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ * $FreeBSD$
+ */
+
+/*
+ * Portions copyright (c) 2009-2013, ARM Ltd. All rights reserved.
+ * ---------------------------------------------------------------
+ * File: include/lib/ctype.h
+ */
+
+#ifndef _SYS_CTYPE_H_
+#define        _SYS_CTYPE_H_
+
+#define isspace(c)     ((c) == ' ' || ((c) >= '\t' && (c) <= '\r'))
+#define isascii(c)     (((c) & ~0x7f) == 0)
+#define isupper(c)     ((c) >= 'A' && (c) <= 'Z')
+#define islower(c)     ((c) >= 'a' && (c) <= 'z')
+#define isalpha(c)     (isupper(c) || islower(c))
+#define isdigit(c)     ((c) >= '0' && (c) <= '9')
+#define isxdigit(c)    (isdigit(c) \
+                         || ((c) >= 'A' && (c) <= 'F') \
+                         || ((c) >= 'a' && (c) <= 'f'))
+#define isprint(c)     ((c) >= ' ' && (c) <= '~')
+
+#define toupper(c)     ((c) - 0x20 * (((c) >= 'a') && ((c) <= 'z')))
+#define tolower(c)     ((c) + 0x20 * (((c) >= 'A') && ((c) <= 'Z')))
+
+#endif /* !_SYS_CTYPE_H_ */
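
A tiny example built on these macros (the helper and its name are invented): convert a single hex digit to its numeric value, relying on isxdigit()/isdigit()/tolower() exactly as defined above.

#include "ctype.h"

static int hex_digit_value(int c)
{
        if (!isxdigit(c))
                return -1;              /* not a hex digit */
        if (isdigit(c))
                return c - '0';
        return tolower(c) - 'a' + 10;   /* 'a'..'f' / 'A'..'F' */
}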
diff --git a/lib/non-semihosting/mem.c b/lib/non-semihosting/mem.c
new file mode 100644 (file)
index 0000000..bca9ab5
--- /dev/null
@@ -0,0 +1,103 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stddef.h> /* size_t */
+
+/*
+ * Fill @count bytes of memory pointed to by @dst with @val
+ */
+void *memset(void *dst, int val, size_t count)
+{
+       char *ptr = dst;
+
+       while (count--)
+               *ptr++ = val;
+
+       return dst;
+}
+
+/*
+ * Compare @len bytes of @s1 and @s2
+ */
+int memcmp(const void *s1, const void *s2, size_t len)
+{
+       const unsigned char *s = s1;
+       const unsigned char *d = s2;
+       unsigned char dc;
+       unsigned char sc;
+
+       while (len--) {
+               sc = *s++;
+               dc = *d++;
+               if (sc - dc)
+                       return (sc - dc);
+       }
+
+       return 0;
+}
+
+
+/*
+ * Move @len bytes from @src to @dst
+ */
+void *memmove(void *dst, const void *src, size_t len)
+{
+       const char *s = src;
+       char *d = dst;
+
+       if (d < s) {
+               /* Destination starts below the source: copy forwards */
+               while (len--)
+                       *d++ = *s++;
+       } else {
+               /* Copy backwards so overlapping regions are handled safely */
+               while (len--)
+                       d[len] = s[len];
+       }
+
+       return dst;
+}
+
+/*
+ * Copy @len bytes from @src to @dst
+ */
+void *memcpy(void *dst, const void *src, size_t len)
+{
+       return memmove(dst, src, len);
+}
+
+
+/*
+ * Scan @len bytes of @src for value @c
+ */
+void *memchr(const void *src, int c, size_t len)
+{
+       const char *s = src;
+
+       while (len--) {
+               if (*s == c)
+                       return (void *) s;
+               s++;
+       }
+
+       return NULL;
+}
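A short, hypothetical self-check illustrating the byte-wise semantics of the routines above; the function name and values are invented for illustration only:

    /* Illustrative only: exercises memset/memcmp/memchr/memcpy. */
    static int mem_selftest(void)
    {
            char a[8], b[8];

            memset(a, 0xAA, sizeof(a));
            memset(b, 0xAA, sizeof(b));
            if (memcmp(a, b, sizeof(a)) != 0)
                    return -1;
            b[3] = 0x55;
            if (memchr(b, 0x55, sizeof(b)) != &b[3])
                    return -1;
            memcpy(a, b, sizeof(a));            /* delegates to memmove() */
            return memcmp(a, b, sizeof(a));     /* 0 on success */
    }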
diff --git a/lib/non-semihosting/std.c b/lib/non-semihosting/std.c
new file mode 100644 (file)
index 0000000..ea91d5f
--- /dev/null
@@ -0,0 +1,106 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <console.h>
+
+#if defined (__GNUC__)
+
+#include <stddef.h> /* size_t */
+#include <stdarg.h> /* va_list */
+
+// Code from VTB.
+#include "mem.c"
+
+// TODO: add equivalents that operate on device memory, e.g. "memset_io".
+
+// Code from VTB.
+#include "strlen.c"
+
+int puts(const char *s)
+{
+       int count = 0;
+
+       while (*s) {
+               if (console_putc(*s++)) {
+                       count++;
+               } else {
+                       count = EOF; // -1 in stdio.h
+                       break;
+               }
+       }
+       return count;
+}
+
+// From VTB
+#include "ctype.h"
+#include "subr_prf.c"
+
+// Format into a fixed-size stack buffer; long messages are truncated.
+#define PRINT_BUFFER_SIZE 128
+int printf(const char *fmt, ...)
+{
+       char buf[PRINT_BUFFER_SIZE];
+       va_list args;
+
+       va_start(args, fmt);
+       vsnprintf(buf, sizeof(buf) - 1, fmt, args);
+       va_end(args);
+       buf[PRINT_BUFFER_SIZE - 1] = '\0';
+       return puts(buf);
+}
+
+
+// Minimal assert handler: report the failure and halt.
+void __assert_func (const char *file, int l, const char *func, const char *error)
+{
+       printf("ASSERT: %s <%d> : %s\n\r", func, l, error);
+       while(1);
+}
+
+extern void __assert_fail (const char *assertion, const char *file,
+                          unsigned int line, const char *function)
+{
+       printf("ASSERT: %s <%d> : %s\n\r", function, line, assertion);
+       while(1);
+}
+
+
+// Minimal abort handler: report and halt.
+void abort (void)
+{
+       printf("ABORT\n\r");
+       while(1);
+}
+
+
+#else
+#error "No standard library binding defined."
+#endif
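To illustrate the call path the definitions above establish: printf() formats into a 128-byte stack buffer via vsnprintf() (pulled in from subr_prf.c), and puts() then pushes the result one character at a time through console_putc(). A hypothetical caller (function name and message are invented):

    /* Illustrative only: a typical boot-time trace message. */
    static void report_image(const char *name, unsigned long addr)
    {
            /* Output is truncated if it exceeds the print buffer. */
            printf("Loaded %s at 0x%lx\n\r", name, addr);
    }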
diff --git a/lib/non-semihosting/strcmp.c b/lib/non-semihosting/strcmp.c
new file mode 100644 (file)
index 0000000..e5921ba
--- /dev/null
@@ -0,0 +1,49 @@
+/*-
+ * Copyright (c) 1990, 1993
+ *     The Regents of the University of California.  All rights reserved.
+ *
+ * This code is derived from software contributed to Berkeley by
+ * Chris Torek.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 4. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+/*
+ * Portions copyright (c) 2009-2013, ARM Ltd. All rights reserved.
+ * ---------------------------------------------------------------
+ * File: lib/strcmp.c
+ */
+
+/*
+ * Compare strings.
+ */
+int
+strcmp(const char *s1, const char *s2)
+{
+       while (*s1 == *s2++)
+               if (*s1++ == '\0')
+                       return (0);
+       return (*(const unsigned char *)s1 - *(const unsigned char *)(s2 - 1));
+}
diff --git a/lib/non-semihosting/string.c b/lib/non-semihosting/string.c
new file mode 100644 (file)
index 0000000..5bb01a1
--- /dev/null
@@ -0,0 +1,40 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+
+#include "ctype.h"
+
+/* Return pointer to the first non-space character */
+const char *skip_spaces(const char *str)
+{
+       while (isspace(*str))
+               ++str;
+       return str;
+}
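A brief, hypothetical use of skip_spaces(); the wrapper name is invented:

    /* Illustrative only: advance past leading blanks in a config string. */
    static const char *next_field(const char *p)
    {
            return skip_spaces(p);      /* skips ' ' and '\t'..'\r' */
    }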
diff --git a/lib/non-semihosting/strlen.c b/lib/non-semihosting/strlen.c
new file mode 100644 (file)
index 0000000..5c1e7a6
--- /dev/null
@@ -0,0 +1,46 @@
+/*-
+ * Copyright (c) 1990, 1993
+ *     The Regents of the University of California.  All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 4. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+/*
+ * Portions copyright (c) 2009-2013, ARM Ltd. All rights reserved.
+ * ---------------------------------------------------------------
+ * File: lib/strlen.c
+ */
+
+#include <stddef.h>
+
+size_t
+strlen(const char *str)
+{
+       const char *s;
+
+       for (s = str; *s; ++s)
+               ;
+       return (s - str);
+}
diff --git a/lib/non-semihosting/strncmp.c b/lib/non-semihosting/strncmp.c
new file mode 100644 (file)
index 0000000..984b7a0
--- /dev/null
@@ -0,0 +1,52 @@
+/*
+ * Copyright (c) 1989, 1993
+ *     The Regents of the University of California.  All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 4. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+/*
+ * Portions copyright (c) 2009-2013, ARM Ltd. All rights reserved.
+ * ---------------------------------------------------------------
+ * File: lib/strncmp.c
+ */
+
+#include "types.h"
+
+int
+strncmp(const char *s1, const char *s2, size_t n)
+{
+
+       if (n == 0)
+               return (0);
+       do {
+               if (*s1 != *s2++)
+                       return (*(const unsigned char *)s1 -
+                               *(const unsigned char *)(s2 - 1));
+               if (*s1++ == '\0')
+                       break;
+       } while (--n != 0);
+       return (0);
+}
diff --git a/lib/non-semihosting/strncpy.c b/lib/non-semihosting/strncpy.c
new file mode 100644 (file)
index 0000000..56a8a69
--- /dev/null
@@ -0,0 +1,62 @@
+/*-
+ * Copyright (c) 1990, 1993
+ *     The Regents of the University of California.  All rights reserved.
+ *
+ * This code is derived from software contributed to Berkeley by
+ * Chris Torek.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 4. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+/*
+ * Portions copyright (c) 2009-2013, ARM Ltd. All rights reserved.
+ * ---------------------------------------------------------------
+ * File: lib/strncpy.c
+ */
+
+#include "types.h"
+
+/*
+ * Copy src to dst, truncating or null-padding to always copy n bytes.
+ * Return dst.
+ */
+char *
+strncpy(char *dst, const char *src, size_t n)
+{
+       if (n != 0) {
+               char *d = dst;
+               const char *s = src;
+
+               do {
+                       if ((*d++ = *s++) == '\0') {
+                               /* NUL pad the remaining n-1 bytes */
+                               while (--n != 0)
+                                       *d++ = '\0';
+                               break;
+                       }
+               } while (--n != 0);
+       }
+       return (dst);
+}
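A hypothetical illustration of the truncate-or-pad behaviour documented above; note that when src holds n or more characters the destination is not NUL-terminated:

    /* Illustrative only. */
    static void name_copy_example(void)
    {
            char name[8];

            strncpy(name, "bl31", sizeof(name));        /* NUL-padded: "bl31\0\0\0\0" */
            strncpy(name, "trusted_fw", sizeof(name));  /* truncated, no terminator   */
            name[sizeof(name) - 1] = '\0';              /* caller must terminate      */
    }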
diff --git a/lib/non-semihosting/strsep.c b/lib/non-semihosting/strsep.c
new file mode 100644 (file)
index 0000000..1f80af4
--- /dev/null
@@ -0,0 +1,74 @@
+/*-
+ * Copyright (c) 1990, 1993
+ *     The Regents of the University of California.  All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 4. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+/*
+ * Portions copyright (c) 2009-2013, ARM Ltd. All rights reserved.
+ * ---------------------------------------------------------------
+ * File: lib/strsep.c
+ */
+
+#include "types.h"
+
+/*
+ * Get next token from string *stringp, where tokens are possibly-empty
+ * strings separated by characters from delim.
+ *
+ * Writes NULs into the string at *stringp to end tokens.
+ * delim need not remain constant from call to call.
+ * On return, *stringp points past the last NUL written (if there might
+ * be further tokens), or is NULL (if there are definitely no more tokens).
+ *
+ * If *stringp is NULL, strsep returns NULL.
+ */
+char *
+strsep(char **stringp, const char *delim)
+{
+       char *s;
+       const char *spanp;
+       int c, sc;
+       char *tok;
+
+       if ((s = *stringp) == NULL)
+               return (NULL);
+       for (tok = s;;) {
+               c = *s++;
+               spanp = delim;
+               do {
+                       if ((sc = *spanp++) == c) {
+                               if (c == 0)
+                                       s = NULL;
+                               else
+                                       s[-1] = 0;
+                               *stringp = s;
+                               return (tok);
+                       }
+               } while (sc != 0);
+       }
+       /* NOTREACHED */
+}
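A hypothetical sketch of the tokenising behaviour described in the block comment above, splitting a path-like string in place (helper name invented; printf is the one defined in std.c):

    /* Illustrative only: prints each '/'-separated component of a path. */
    static void print_path_components(char *path)
    {
            char *tok;

            while ((tok = strsep(&path, "/")) != NULL) {
                    if (*tok == '\0')
                            continue;           /* empty token, e.g. "//" */
                    printf("component: %s\n\r", tok);
            }
            /* path is now NULL; separators have been overwritten with NULs. */
    }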
diff --git a/lib/non-semihosting/strtol.c b/lib/non-semihosting/strtol.c
new file mode 100644 (file)
index 0000000..4a5a404
--- /dev/null
@@ -0,0 +1,146 @@
+/*-
+ * Copyright (c) 1990, 1993
+ *     The Regents of the University of California.  All rights reserved.
+ *
+ * This code is derived from software contributed to Berkeley by
+ * Chris Torek.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 4. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ * From: @(#)strtol.c  8.1 (Berkeley) 6/4/93
+ */
+
+/*
+ * Portions copyright (c) 2009-2013, ARM Ltd. All rights reserved.
+ * ---------------------------------------------------------------
+ * File: lib/strtol.c
+ */
+
+#include "types.h"
+#include "ctype.h"
+#include "limits.h"
+
+/*
+ * Convert a string to a long integer.
+ *
+ * Ignores `locale' stuff.  Assumes that the upper and lower case
+ * alphabets and digits are each contiguous.
+ */
+static long
+bsd_strtol(const char *nptr, char **endptr, int base)
+{
+       const char *s = nptr;
+       unsigned long acc;
+       unsigned char c;
+       unsigned long cutoff;
+       int neg = 0, any, cutlim;
+
+       /*
+        * Skip white space and pick up leading +/- sign if any.
+        * If base is 0, allow 0x for hex and 0 for octal, else
+        * assume decimal; if base is already 16, allow 0x.
+        */
+       do {
+               c = *s++;
+       } while (isspace(c));
+       if (c == '-') {
+               neg = 1;
+               c = *s++;
+       } else if (c == '+')
+               c = *s++;
+       if ((base == 0 || base == 16) &&
+           c == '0' && (*s == 'x' || *s == 'X')) {
+               c = s[1];
+               s += 2;
+               base = 16;
+       }
+       if (base == 0)
+               base = c == '0' ? 8 : 10;
+
+       /*
+        * Compute the cutoff value between legal numbers and illegal
+        * numbers.  That is the largest legal value, divided by the
+        * base.  An input number that is greater than this value, if
+        * followed by a legal input character, is too big.  One that
+        * is equal to this value may be valid or not; the limit
+        * between valid and invalid numbers is then based on the last
+        * digit.  For instance, if the range for longs is
+        * [-2147483648..2147483647] and the input base is 10,
+        * cutoff will be set to 214748364 and cutlim to either
+        * 7 (neg==0) or 8 (neg==1), meaning that if we have accumulated
+        * a value > 214748364, or equal but the next digit is > 7 (or 8),
+        * the number is too big, and we will return a range error.
+        *
+        * Set any if any `digits' consumed; make it negative to indicate
+        * overflow.
+        */
+       cutoff = neg ? -(unsigned long)LONG_MIN : LONG_MAX;
+       cutlim = cutoff % (unsigned long)base;
+       cutoff /= (unsigned long)base;
+       for (acc = 0, any = 0;; c = *s++) {
+               if (!isascii(c))
+                       break;
+               if (isdigit(c))
+                       c -= '0';
+               else if (isalpha(c))
+                       c -= isupper(c) ? 'A' - 10 : 'a' - 10;
+               else
+                       break;
+               if (c >= base)
+                       break;
+               if (any < 0 || acc > cutoff || (acc == cutoff && c > cutlim))
+                       any = -1;
+               else {
+                       any = 1;
+                       acc *= base;
+                       acc += c;
+               }
+       }
+       if (any < 0) {
+               acc = neg ? LONG_MIN : LONG_MAX;
+       } else if (neg)
+               acc = -acc;
+       if (endptr != 0)
+               *((const char **)endptr) = any ? s - 1 : nptr;
+       return (acc);
+}
+
+int strict_strtol(const char *str, unsigned int base, long *result)
+{
+       if (*str == '-')
+               *result = 0 - bsd_strtol(str + 1, NULL, base);
+       else
+               *result = bsd_strtol(str, NULL, base);
+       return 0;
+}
+
+int strict_strtoul(const char *str, unsigned int base, unsigned long *result)
+{
+       *result = bsd_strtol(str, NULL, base);
+       return 0;
+}
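A hypothetical illustration of the wrapper semantics: unlike strtol(3) there is no endptr and no error reporting; the converted value is simply written through result and 0 is returned:

    /* Illustrative only. */
    static void strtol_examples(void)
    {
            long sval;
            unsigned long uval;

            strict_strtol("-42", 10, &sval);    /* sval == -42           */
            strict_strtol("0x1c", 16, &sval);   /* sval == 28            */
            strict_strtoul("0755", 8, &uval);   /* uval == 493 (octal)   */
    }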
diff --git a/lib/non-semihosting/strtoull.c b/lib/non-semihosting/strtoull.c
new file mode 100644 (file)
index 0000000..e46ef4c
--- /dev/null
@@ -0,0 +1,117 @@
+/*-
+ * Copyright (c) 1992, 1993
+ *     The Regents of the University of California.  All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 4. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+/*
+ * Portions copyright (c) 2009-2013, ARM Ltd. All rights reserved.
+ * ---------------------------------------------------------------
+ * File: lib/strtoull.c
+ */
+
+#include "types.h"
+#include "ctype.h"
+#include "limits.h"
+
+/*
+ * Convert a string to an unsigned long long integer.
+ *
+ * Assumes that the upper and lower case
+ * alphabets and digits are each contiguous.
+ */
+static unsigned long long
+bsd_strtoull(const char *nptr, char **endptr, int base)
+{
+       const char *s;
+       unsigned long long acc;
+       char c;
+       unsigned long long cutoff;
+       int neg, any, cutlim;
+
+       /*
+        * See strtoq for comments as to the logic used.
+        */
+       s = nptr;
+       do {
+               c = *s++;
+       } while (isspace((unsigned char)c));
+       if (c == '-') {
+               neg = 1;
+               c = *s++;
+       } else {
+               neg = 0;
+               if (c == '+')
+                       c = *s++;
+       }
+       if ((base == 0 || base == 16) &&
+           c == '0' && (*s == 'x' || *s == 'X') &&
+           ((s[1] >= '0' && s[1] <= '9') ||
+           (s[1] >= 'A' && s[1] <= 'F') ||
+           (s[1] >= 'a' && s[1] <= 'f'))) {
+               c = s[1];
+               s += 2;
+               base = 16;
+       }
+       if (base == 0)
+               base = c == '0' ? 8 : 10;
+       acc = any = 0;
+
+       cutoff = ULLONG_MAX / base;
+       cutlim = ULLONG_MAX % base;
+       for ( ; ; c = *s++) {
+               if (c >= '0' && c <= '9')
+                       c -= '0';
+               else if (c >= 'A' && c <= 'Z')
+                       c -= 'A' - 10;
+               else if (c >= 'a' && c <= 'z')
+                       c -= 'a' - 10;
+               else
+                       break;
+               if (c >= base)
+                       break;
+               if (any < 0 || acc > cutoff || (acc == cutoff && c > cutlim))
+                       any = -1;
+               else {
+                       any = 1;
+                       acc *= base;
+                       acc += c;
+               }
+       }
+       if (any < 0) {
+               acc = ULLONG_MAX;
+       } else if (neg)
+               acc = -acc;
+       if (endptr != NULL)
+               *endptr = (char *)(any ? s - 1 : nptr);
+       return (acc);
+}
+
+int strict_strtoull(const char *str, unsigned int base, long long *result)
+{
+       *result = bsd_strtoull(str, NULL, base);
+       return 0;
+}
diff --git a/lib/non-semihosting/subr_prf.c b/lib/non-semihosting/subr_prf.c
new file mode 100644 (file)
index 0000000..6e2a1ac
--- /dev/null
@@ -0,0 +1,557 @@
+/*-
+ * Copyright (c) 1986, 1988, 1991, 1993
+ *     The Regents of the University of California.  All rights reserved.
+ * (c) UNIX System Laboratories, Inc.
+ * All or some portions of this file are derived from material licensed
+ * to the University of California by American Telephone and Telegraph
+ * Co. or Unix System Laboratories, Inc. and are reproduced herein with
+ * the permission of UNIX System Laboratories, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 4. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ *     @(#)subr_prf.c  8.3 (Berkeley) 1/21/94
+ */
+
+/*
+ * Portions copyright (c) 2009-2013, ARM Ltd. All rights reserved.
+ * ---------------------------------------------------------------
+ * File: lib/subr_prf.c
+ */
+
+/*
+#include "types.h"
+#include "varargs.h"
+#include "ctype.h"
+#include "string.h"
+*/
+#include <stddef.h>
+#include <sys/types.h>  /* For ssize_t */
+#include <stdint.h>
+#include <string.h>
+
+#include "ctype.h"
+
+typedef uint64_t uintmax_t;
+typedef int64_t intmax_t;
+typedef unsigned char u_char;
+typedef unsigned int u_int;
+typedef int64_t quad_t;
+typedef uint64_t u_quad_t;
+typedef unsigned long u_long;
+typedef unsigned short u_short;
+
+static inline int imax(int a, int b) { return (a > b ? a : b); }
+
+/*
+ * Note that stdarg.h and the ANSI style va_start macro is used for both
+ * ANSI and traditional C compilers.
+ */
+
+#define TOCONS 0x01
+#define TOTTY  0x02
+#define TOLOG  0x04
+
+/* Max number conversion buffer length: a u_quad_t in base 2, plus NUL byte. */
+#define MAXNBUF        (sizeof(intmax_t) * 8 + 1)
+
+struct putchar_arg {
+       int     flags;
+       int     pri;
+       struct  tty *tty;
+       char    *p_bufr;
+       size_t  n_bufr;
+       char    *p_next;
+       size_t  remain;
+};
+
+struct snprintf_arg {
+       char    *str;
+       size_t  remain;
+};
+
+extern int log_open;
+
+static char *ksprintn(char *nbuf, uintmax_t num, int base, int *len, int upper);
+static void  snprintf_func(int ch, void *arg);
+static int kvprintf(char const *fmt, void (*func)(int, void*), void *arg, int radix, va_list ap);
+
+int vsnprintf(char *str, size_t size, const char *format, va_list ap);
+
+static char const hex2ascii_data[] = "0123456789abcdefghijklmnopqrstuvwxyz";
+#define hex2ascii(hex) (hex2ascii_data[hex])
+
+/*
+ * Scaled down version of sprintf(3).
+ */
+int
+sprintf(char *buf, const char *cfmt, ...)
+{
+       int retval;
+       va_list ap;
+
+       va_start(ap, cfmt);
+       retval = kvprintf(cfmt, NULL, (void *)buf, 10, ap);
+       buf[retval] = '\0';
+       va_end(ap);
+       return (retval);
+}
+
+/*
+ * Scaled down version of vsprintf(3).
+ */
+int
+vsprintf(char *buf, const char *cfmt, va_list ap)
+{
+       int retval;
+
+       retval = kvprintf(cfmt, NULL, (void *)buf, 10, ap);
+       buf[retval] = '\0';
+       return (retval);
+}
+
+/*
+ * Scaled down version of snprintf(3).
+ */
+int
+snprintf(char *str, size_t size, const char *format, ...)
+{
+       int retval;
+       va_list ap;
+
+       va_start(ap, format);
+       retval = vsnprintf(str, size, format, ap);
+       va_end(ap);
+       return(retval);
+}
+
+/*
+ * Scaled down version of vsnprintf(3).
+ */
+int
+vsnprintf(char *str, size_t size, const char *format, va_list ap)
+{
+       struct snprintf_arg info;
+       int retval;
+
+       info.str = str;
+       info.remain = size;
+       retval = kvprintf(format, snprintf_func, &info, 10, ap);
+       if (info.remain >= 1)
+               *info.str++ = '\0';
+       return (retval);
+}
+
+static void
+snprintf_func(int ch, void *arg)
+{
+       struct snprintf_arg *const info = arg;
+
+       if (info->remain >= 2) {
+               *info->str++ = ch;
+               info->remain--;
+       }
+}
+
+
+/*
+ * Kernel version which takes radix argument vsnprintf(3).
+ */
+int
+vsnrprintf(char *str, size_t size, int radix, const char *format, va_list ap)
+{
+       struct snprintf_arg info;
+       int retval;
+
+       info.str = str;
+       info.remain = size;
+       retval = kvprintf(format, snprintf_func, &info, radix, ap);
+       if (info.remain >= 1)
+               *info.str++ = '\0';
+       return (retval);
+}
+
+
+/*
+ * Put a NUL-terminated ASCII number (base <= 36) in a buffer in reverse
+ * order; return an optional length and a pointer to the last character
+ * written in the buffer (i.e., the first character of the string).
+ * The buffer pointed to by `nbuf' must have length >= MAXNBUF.
+ */
+static char *
+ksprintn(char *nbuf, uintmax_t num, int base, int *lenp, int upper)
+{
+       char *p, c;
+
+       p = nbuf;
+       *p = '\0';
+       do {
+               c = hex2ascii(num % base);
+               *++p = upper ? toupper(c) : c;
+       } while (num /= base);
+       if (lenp)
+               *lenp = p - nbuf;
+       return (p);
+}
+
+/*
+ * Scaled down version of printf(3).
+ *
+ * Two additional formats:
+ *
+ * The format %b is supported to decode error registers.
+ * Its usage is:
+ *
+ *     printf("reg=%b\n", regval, "<base><arg>*");
+ *
+ * where <base> is the output base expressed as a control character, e.g.
+ * \10 gives octal; \20 gives hex.  Each arg is a sequence of characters,
+ * the first of which gives the bit number to be inspected (origin 1), and
+ * the next characters (up to a control character, i.e. a character <= 32),
+ * give the name of the register.  Thus:
+ *
+ *     kvprintf("reg=%b\n", 3, "\10\2BITTWO\1BITONE\n");
+ *
+ * would produce output:
+ *
+ *     reg=3<BITTWO,BITONE>
+ *
+ * XXX:  %D  -- Hexdump, takes pointer and separator string:
+ *             ("%6D", ptr, ":")   -> XX:XX:XX:XX:XX:XX
+ *             ("%*D", len, ptr, " " -> XX XX XX XX ...
+ */
+int
+kvprintf(char const *fmt, void (*func)(int, void*), void *arg, int radix, va_list ap)
+{
+#define PCHAR(c) {int cc=(c); if (func) (*func)(cc,arg); else *d++ = cc; retval++; }
+       char nbuf[MAXNBUF];
+       char *d;
+       const char *p, *percent, *q;
+       u_char *up;
+       int ch, n;
+       uintmax_t num;
+       int base, lflag, qflag, tmp, width, ladjust, sharpflag, neg, sign, dot;
+       int cflag, hflag, jflag, tflag, zflag;
+       int dwidth, upper;
+       char padc;
+       int stop = 0, retval = 0;
+
+       num = 0;
+       if (!func)
+               d = (char *) arg;
+       else
+               d = NULL;
+
+       if (fmt == NULL)
+               fmt = "(fmt null)\n";
+
+       if (radix < 2 || radix > 36)
+               radix = 10;
+
+       for (;;) {
+               padc = ' ';
+               width = 0;
+               while ((ch = (u_char)*fmt++) != '%' || stop) {
+                       if (ch == '\0')
+                               return (retval);
+                       PCHAR(ch);
+               }
+               percent = fmt - 1;
+               qflag = 0; lflag = 0; ladjust = 0; sharpflag = 0; neg = 0;
+               sign = 0; dot = 0; dwidth = 0; upper = 0;
+               cflag = 0; hflag = 0; jflag = 0; tflag = 0; zflag = 0;
+reswitch:      switch (ch = (u_char)*fmt++) {
+               case '.':
+                       dot = 1;
+                       goto reswitch;
+               case '#':
+                       sharpflag = 1;
+                       goto reswitch;
+               case '+':
+                       sign = 1;
+                       goto reswitch;
+               case '-':
+                       ladjust = 1;
+                       goto reswitch;
+               case '%':
+                       PCHAR(ch);
+                       break;
+               case '*':
+                       if (!dot) {
+                               width = va_arg(ap, int);
+                               if (width < 0) {
+                                       ladjust = !ladjust;
+                                       width = -width;
+                               }
+                       } else {
+                               dwidth = va_arg(ap, int);
+                       }
+                       goto reswitch;
+               case '0':
+                       if (!dot) {
+                               padc = '0';
+                               goto reswitch;
+                       }
+               case '1': case '2': case '3': case '4':
+               case '5': case '6': case '7': case '8': case '9':
+                               for (n = 0;; ++fmt) {
+                                       n = n * 10 + ch - '0';
+                                       ch = *fmt;
+                                       if (ch < '0' || ch > '9')
+                                               break;
+                               }
+                       if (dot)
+                               dwidth = n;
+                       else
+                               width = n;
+                       goto reswitch;
+               case 'b':
+                       num = (u_int)va_arg(ap, int);
+                       p = va_arg(ap, char *);
+                       for (q = ksprintn(nbuf, num, *p++, NULL, 0); *q;)
+                               PCHAR(*q--);
+
+                       if (num == 0)
+                               break;
+
+                       for (tmp = 0; *p;) {
+                               n = *p++;
+                               if (num & (1 << (n - 1))) {
+                                       PCHAR(tmp ? ',' : '<');
+                                       for (; (n = *p) > ' '; ++p)
+                                               PCHAR(n);
+                                       tmp = 1;
+                               } else
+                                       for (; *p > ' '; ++p)
+                                               continue;
+                       }
+                       if (tmp)
+                               PCHAR('>');
+                       break;
+               case 'c':
+                       PCHAR(va_arg(ap, int));
+                       break;
+               case 'D':
+                       up = va_arg(ap, u_char *);
+                       p = va_arg(ap, char *);
+                       if (!width)
+                               width = 16;
+                       while(width--) {
+                               PCHAR(hex2ascii(*up >> 4));
+                               PCHAR(hex2ascii(*up & 0x0f));
+                               up++;
+                               if (width)
+                                       for (q=p;*q;q++)
+                                               PCHAR(*q);
+                       }
+                       break;
+               case 'd':
+               case 'i':
+                       base = 10;
+                       sign = 1;
+                       goto handle_sign;
+               case 'h':
+                       if (hflag) {
+                               hflag = 0;
+                               cflag = 1;
+                       } else
+                               hflag = 1;
+                       goto reswitch;
+               case 'j':
+                       jflag = 1;
+                       goto reswitch;
+               case 'l':
+                       if (lflag) {
+                               lflag = 0;
+                               qflag = 1;
+                       } else
+                               lflag = 1;
+                       goto reswitch;
+               case 'n':
+                       if (jflag)
+                               *(va_arg(ap, intmax_t *)) = retval;
+                       else if (qflag)
+                               *(va_arg(ap, quad_t *)) = retval;
+                       else if (lflag)
+                               *(va_arg(ap, long *)) = retval;
+                       else if (zflag)
+                               *(va_arg(ap, size_t *)) = retval;
+                       else if (hflag)
+                               *(va_arg(ap, short *)) = retval;
+                       else if (cflag)
+                               *(va_arg(ap, char *)) = retval;
+                       else
+                               *(va_arg(ap, int *)) = retval;
+                       break;
+               case 'o':
+                       base = 8;
+                       goto handle_nosign;
+               case 'p':
+                       base = 16;
+                       sharpflag = (width == 0);
+                       sign = 0;
+                       num = (uintptr_t)va_arg(ap, void *);
+                       goto number;
+               case 'q':
+                       qflag = 1;
+                       goto reswitch;
+               case 'r':
+                       base = radix;
+                       if (sign)
+                               goto handle_sign;
+                       goto handle_nosign;
+               case 's':
+                       p = va_arg(ap, char *);
+                       if (p == NULL)
+                               p = "(null)";
+                       if (!dot)
+                               n = strlen (p);
+                       else
+                               for (n = 0; n < dwidth && p[n]; n++)
+                                       continue;
+
+                       width -= n;
+
+                       if (!ladjust && width > 0)
+                               while (width--)
+                                       PCHAR(padc);
+                       while (n--)
+                               PCHAR(*p++);
+                       if (ladjust && width > 0)
+                               while (width--)
+                                       PCHAR(padc);
+                       break;
+               case 't':
+                       tflag = 1;
+                       goto reswitch;
+               case 'u':
+                       base = 10;
+                       goto handle_nosign;
+               case 'X':
+                       upper = 1;
+               case 'x':
+                       base = 16;
+                       goto handle_nosign;
+               case 'y':
+                       base = 16;
+                       sign = 1;
+                       goto handle_sign;
+               case 'z':
+                       zflag = 1;
+                       goto reswitch;
+handle_nosign:
+                       sign = 0;
+                       if (jflag)
+                               num = va_arg(ap, uintmax_t);
+                       else if (qflag)
+                               num = va_arg(ap, u_quad_t);
+                       else if (tflag)
+                               num = va_arg(ap, ptrdiff_t);
+                       else if (lflag)
+                               num = va_arg(ap, u_long);
+                       else if (zflag)
+                               num = va_arg(ap, size_t);
+                       else if (hflag)
+                               num = (u_short)va_arg(ap, int);
+                       else if (cflag)
+                               num = (u_char)va_arg(ap, int);
+                       else
+                               num = va_arg(ap, u_int);
+                       goto number;
+handle_sign:
+                       if (jflag)
+                               num = va_arg(ap, intmax_t);
+                       else if (qflag)
+                               num = va_arg(ap, quad_t);
+                       else if (tflag)
+                               num = va_arg(ap, ptrdiff_t);
+                       else if (lflag)
+                               num = va_arg(ap, long);
+                       else if (zflag)
+                               num = va_arg(ap, ssize_t);
+                       else if (hflag)
+                               num = (short)va_arg(ap, int);
+                       else if (cflag)
+                               num = (char)va_arg(ap, int);
+                       else
+                               num = va_arg(ap, int);
+number:
+                       if (sign && (intmax_t)num < 0) {
+                               neg = 1;
+                               num = -(intmax_t)num;
+                       }
+                       p = ksprintn(nbuf, num, base, &n, upper);
+                       tmp = 0;
+                       if (sharpflag && num != 0) {
+                               if (base == 8)
+                                       tmp++;
+                               else if (base == 16)
+                                       tmp += 2;
+                       }
+                       if (neg)
+                               tmp++;
+
+                       if (!ladjust && padc == '0')
+                               dwidth = width - tmp;
+                       width -= tmp + imax(dwidth, n);
+                       dwidth -= n;
+                       if (!ladjust)
+                               while (width-- > 0)
+                                       PCHAR(' ');
+                       if (neg)
+                               PCHAR('-');
+                       if (sharpflag && num != 0) {
+                               if (base == 8) {
+                                       PCHAR('0');
+                               } else if (base == 16) {
+                                       PCHAR('0');
+                                       PCHAR('x');
+                               }
+                       }
+                       while (dwidth-- > 0)
+                               PCHAR('0');
+
+                       while (*p)
+                               PCHAR(*p--);
+
+                       if (ladjust)
+                               while (width-- > 0)
+                                       PCHAR(' ');
+
+                       break;
+               default:
+                       while (percent < fmt)
+                               PCHAR(*percent++);
+                       /*
+                        * Since we ignore a formatting argument it is no
+                        * longer safe to obey the remaining formatting
+                        * arguments as the arguments will no longer match
+                        * the format specs.
+                        */
+                       stop = 1;
+                       break;
+               }
+       }
+#undef PCHAR
+}
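The %b decoder documented in the comment above is convenient for dumping status registers; a hypothetical use with an invented bit layout:

    /* Illustrative only: bit 1 = ENABLE, bit 5 = SECURE (made-up layout). */
    static void dump_ctrl(unsigned int regval)
    {
            /* For regval == 0x11 this prints: ctrl=11<ENABLE,SECURE> */
            printf("ctrl=%b\n\r", regval, "\20\1ENABLE\5SECURE");
    }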
diff --git a/lib/semihosting/aarch64/semihosting_call.S b/lib/semihosting/aarch64/semihosting_call.S
new file mode 100644 (file)
index 0000000..cc72ec2
--- /dev/null
@@ -0,0 +1,37 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+       .globl  semihosting_call
+
+       .section        .text, "ax"
+
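+       /*
+        * AArch64 semihosting trap: the debugger or model intercepts
+        * HLT #0xf000, taking the operation code in w0 and a pointer to
+        * the parameter block in x1, and returning the result in x0.
+        */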
+semihosting_call:; .type semihosting_call, %function
+       hlt     #0xf000
+       ret
diff --git a/lib/semihosting/semihosting.c b/lib/semihosting/semihosting.c
new file mode 100644 (file)
index 0000000..558973a
--- /dev/null
@@ -0,0 +1,234 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <semihosting.h>
+
+#ifndef SEMIHOSTING_SUPPORTED
+#define SEMIHOSTING_SUPPORTED  1
+#endif
+
+extern int semihosting_call(unsigned int operation,
+                           void *system_block_address);
+
+typedef struct {
+       const char *file_name;
+       unsigned int mode;
+       unsigned int name_length;
+} smh_file_open_block;
+
+typedef struct {
+       int handle;
+       void *buffer;
+       unsigned int length;
+} smh_file_read_write_block;
+
+typedef struct {
+       int handle;
+       unsigned int location;
+} smh_file_seek_block;
+
+typedef struct {
+       char *command_line;
+       unsigned int command_length;
+} smh_system_block;
+
+int semihosting_connection_supported(void)
+{
+       return SEMIHOSTING_SUPPORTED;
+}
+
+int semihosting_file_open(const char *file_name, unsigned int mode)
+{
+       smh_file_open_block open_block;
+
+       open_block.file_name = file_name;
+       open_block.mode = mode;
+       open_block.name_length = strlen(file_name);
+
+       return semihosting_call(SEMIHOSTING_SYS_OPEN,
+                               (void *) &open_block);
+}
+
+int semihosting_file_seek(int file_handle, unsigned int offset)
+{
+       smh_file_seek_block seek_block;
+       int result;
+
+       seek_block.handle = file_handle;
+       seek_block.location = offset;
+
+       result = semihosting_call(SEMIHOSTING_SYS_SEEK,
+                                 (void *) &seek_block);
+
+       if (result)
+               result = semihosting_call(SEMIHOSTING_SYS_ERRNO, 0);
+
+       return result;
+}
+
+int semihosting_file_read(int file_handle, int *length, void *buffer)
+{
+       smh_file_read_write_block read_block;
+       int result = -EINVAL;
+
+       if ((length == NULL) || (buffer == NULL))
+               return result;
+
+       read_block.handle = file_handle;
+       read_block.buffer = buffer;
+       read_block.length = *length;
+
+       result = semihosting_call(SEMIHOSTING_SYS_READ,
+                                 (void *) &read_block);
+
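+       /*
+        * SYS_READ returns the number of bytes *not* read: 0 means the
+        * whole request was satisfied, a value equal to the request
+        * means nothing was read.
+        */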
+       if (result == *length) {
+               return -EINVAL;
+       } else if (result < *length) {
+               *length -= result;
+               return 0;
+       } else
+               return result;
+}
+
+int semihosting_file_write(int file_handle, int *length, void *buffer)
+{
+       smh_file_read_write_block write_block;
+
+       if ((length == NULL) || (buffer == NULL))
+               return -EINVAL;
+
+       write_block.handle = file_handle;
+       write_block.buffer = buffer;
+       write_block.length = *length;
+
+       *length = semihosting_call(SEMIHOSTING_SYS_WRITE,
+                                  (void *) &write_block);
+
+       return *length;
+}
+
+int semihosting_file_close(int file_handle)
+{
+       return semihosting_call(SEMIHOSTING_SYS_CLOSE,
+                               (void *) &file_handle);
+}
+
+int semihosting_file_length(int file_handle)
+{
+       return semihosting_call(SEMIHOSTING_SYS_FLEN,
+                               (void *) &file_handle);
+}
+
+char semihosting_read_char(void)
+{
+       return semihosting_call(SEMIHOSTING_SYS_READC, NULL);
+}
+
+void semihosting_write_char(char character)
+{
+       semihosting_call(SEMIHOSTING_SYS_WRITEC, (void *) &character);
+}
+
+void semihosting_write_string(char *string)
+{
+       semihosting_call(SEMIHOSTING_SYS_WRITE0, (void *) string);
+}
+
+int semihosting_system(char *command_line)
+{
+       smh_system_block system_block;
+
+       system_block.command_line = command_line;
+       system_block.command_length = strlen(command_line);
+
+       return semihosting_call(SEMIHOSTING_SYS_SYSTEM,
+                               (void *) &system_block);
+}
+
+int semihosting_get_flen(const char *file_name)
+{
+       int file_handle, length;
+
+       assert(semihosting_connection_supported());
+
+       file_handle = semihosting_file_open(file_name, FOPEN_MODE_RB);
+       if (file_handle == -1)
+               return file_handle;
+
+       /* Find the length of the file */
+       length = semihosting_file_length(file_handle);
+
+       return semihosting_file_close(file_handle) ? -1 : length;
+}
+
+int semihosting_download_file(const char *file_name,
+                             int buf_size,
+                             void *buf)
+{
+       int ret = -EINVAL, file_handle, length;
+
+       /* Null pointer check */
+       if (!buf)
+               return ret;
+
+       assert(semihosting_connection_supported());
+
+       file_handle = semihosting_file_open(file_name, FOPEN_MODE_RB);
+       if (file_handle == -1)
+               return ret;
+
+       /* Find the actual length of the file */
+       length = semihosting_file_length(file_handle);
+       if (length == -1)
+               goto semihosting_fail;
+
+       /* Signal error if we do not have enough space for the file */
+       if (length > buf_size)
+               goto semihosting_fail;
+
+       /*
+        * A successful read will return 0 in which case we pass back
+        * the actual number of bytes read. Else we pass a negative
+        * value indicating an error.
+        */
+       ret = semihosting_file_read(file_handle, &length, buf);
+       if (ret)
+               goto semihosting_fail;
+       else
+               ret = length;
+
+semihosting_fail:
+       semihosting_file_close(file_handle);
+       return ret;
+}
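
The helper above strings the lower-level calls together: query the file length, check that it fits, read, and return either the byte count or a negative error. A minimal caller might look like the sketch below; the file name, buffer and error header are illustrative assumptions, not part of this patch:

    #include <errno.h>
    #include <semihosting.h>

    /* Sketch: pull an image over semihosting into a caller-supplied buffer. */
    static int example_load_image(void *buf, int buf_size)
    {
            const char *name = "bl2.bin";   /* illustrative file name */
            int length;

            /* Check the host-side size before committing to the download. */
            length = semihosting_get_flen(name);
            if ((length < 0) || (length > buf_size))
                    return -EINVAL;

            /* Returns the number of bytes read, or a negative error code. */
            return semihosting_download_file(name, buf_size, buf);
    }
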
diff --git a/lib/sync/locks/bakery/bakery_lock.c b/lib/sync/locks/bakery/bakery_lock.c
new file mode 100644 (file)
index 0000000..d3c780c
--- /dev/null
@@ -0,0 +1,104 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <string.h>
+
+#include <bakery_lock.h>
+
+#define assert_bakery_entry_valid(entry, bakery) do {  \
+       assert(bakery);                                 \
+       assert(entry < BAKERY_LOCK_MAX_CPUS);           \
+} while(0)
+
+void bakery_lock_init(bakery_lock *bakery)
+{
+       assert(bakery);
+       memset(bakery, 0, sizeof(*bakery));
+       bakery->owner = NO_OWNER;
+}
+
+void bakery_lock_get(unsigned long mpidr, bakery_lock *bakery)
+{
+       unsigned int i, max = 0, my_full_number, his_full_number, entry;
+
+       entry = platform_get_core_pos(mpidr);
+
+       assert_bakery_entry_valid(entry, bakery);
+
+       /* Catch recursive attempts to take the lock under the same entry */
+       assert(bakery->owner != entry);
+
+       /* Get a ticket */
+       bakery->entering[entry] = 1;
+       for (i = 0; i < BAKERY_LOCK_MAX_CPUS; ++i) {
+               if (bakery->number[i] > max) {
+                       max = bakery->number[i];
+               }
+       }
+       ++max;
+       bakery->number[entry] = max;
+       bakery->entering[entry] = 0;
+
+       /* Wait for our turn */
+       my_full_number = (max << 8) + entry;
+       for (i = 0; i < BAKERY_LOCK_MAX_CPUS; ++i) {
+               while (bakery->entering[i]) ;   /* Wait */
+               do {
+                       his_full_number = bakery->number[i];
+                       if (his_full_number) {
+                               his_full_number = (his_full_number << 8) + i;
+                       }
+               } while (his_full_number &&
+                        (his_full_number < my_full_number));
+       }
+
+       bakery->owner = entry;
+}
+
+void bakery_lock_release(unsigned long mpidr, bakery_lock *bakery)
+{
+       unsigned int entry = platform_get_core_pos(mpidr);
+
+       assert_bakery_entry_valid(entry, bakery);
+       assert(bakery_lock_held(mpidr, bakery));
+
+       bakery->owner = NO_OWNER;
+       bakery->number[entry] = 0;
+}
+
+int bakery_lock_held(unsigned long mpidr, const bakery_lock *bakery)
+{
+       unsigned int entry = platform_get_core_pos(mpidr);
+
+       assert_bakery_entry_valid(entry, bakery);
+
+       return bakery->owner == entry;
+}
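
The lock is keyed on the caller's MPIDR, which bakery_lock_get/release map to a per-cpu entry via platform_get_core_pos(). A hedged usage sketch follows; read_mpidr() and the bakery_lock type are assumed to come from the arch helpers and bakery_lock.h, and the lock instance must live in memory that every contending cpu observes coherently:

    #include <bakery_lock.h>

    static bakery_lock example_lock;        /* init once with bakery_lock_init() */

    void example_critical_section(void)
    {
            unsigned long mpidr = read_mpidr();

            bakery_lock_get(mpidr, &example_lock);
            /* ... access the shared resource ... */
            bakery_lock_release(mpidr, &example_lock);
    }
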
diff --git a/lib/sync/locks/exclusive/spinlock.S b/lib/sync/locks/exclusive/spinlock.S
new file mode 100644 (file)
index 0000000..4269d95
--- /dev/null
@@ -0,0 +1,50 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+       .globl  spin_lock
+       .globl  spin_unlock
+
+
+       .section        .text, "ax";
+
+spin_lock:; .type spin_lock, %function
+       mov     w2, #1
+       sevl
+l1:    wfe
+l2:    ldaxr   w1, [x0]
+       cbnz    w1, l1
+       stxr    w1, w2, [x0]
+       cbnz    w1, l2
+       ret
+
+
+spin_unlock:; .type spin_unlock, %function
+       stlr    wzr, [x0]
+       ret
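
spin_lock waits in WFE until the lock word reads zero and then claims it with an exclusive store; spin_unlock releases it with a store-release of zero, which clears other observers' exclusive monitors and so wakes the WFE waiters. A hedged C-side sketch (the spinlock_t typedef and prototypes are assumed to match include/spinlock.h):

    #include <spinlock.h>

    static spinlock_t example_spinlock;     /* zero-initialised == unlocked */

    void example_short_critical_section(void)
    {
            spin_lock(&example_spinlock);
            /* ... keep this region short: waiters are busy-waiting ... */
            spin_unlock(&example_spinlock);
    }
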
diff --git a/license.md b/license.md
new file mode 100644 (file)
index 0000000..7652f10
--- /dev/null
@@ -0,0 +1,26 @@
+Copyright (c) 2013, ARM Limited. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification,
+are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice, this
+  list of conditions and the following disclaimer.
+
+* Redistributions in binary form must reproduce the above copyright notice, this
+  list of conditions and the following disclaimer in the documentation and/or
+  other materials provided with the distribution.
+
+* Neither the name of ARM nor the names of its contributors may be used to
+  endorse or promote products derived from this software without specific prior
+  written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/plat/common/aarch64/platform_helpers.S b/plat/common/aarch64/platform_helpers.S
new file mode 100644 (file)
index 0000000..c574eb9
--- /dev/null
@@ -0,0 +1,141 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch.h>
+#include <platform.h>
+
+
+       .globl  pcpu_dv_mem_stack
+       .weak   platform_get_core_pos
+       .weak   platform_set_stack
+       .weak   platform_is_primary_cpu
+       .weak   platform_set_coherent_stack
+       .weak   platform_check_mpidr
+       .weak   plat_report_exception
+
+       /* -----------------------------------------------------
+        * 512 bytes of coherent stack for each cpu
+        * -----------------------------------------------------
+        */
+#define PCPU_DV_MEM_STACK_SIZE 0x200
+
+
+       .section        .text, "ax"; .align 3
+
+       /* -----------------------------------------------------
+        * unsigned long long platform_set_coherent_stack
+        *                                    (unsigned mpidr);
+        * For a given mpidr, this function sets the stack
+        * pointer to one allocated in device memory. This
+        * stack can be used by C code which enables/disables
+        * the SCTLR.M or SCTLR.C bits, e.g. while powering
+        * down a cpu.
+        * -----------------------------------------------------
+        */
+platform_set_coherent_stack:; .type platform_set_coherent_stack, %function
+       mov     x5, x30 // lr
+       bl      platform_get_core_pos
+       add     x0, x0, #1
+       mov     x1, #PCPU_DV_MEM_STACK_SIZE
+       mul     x0, x0, x1
+       ldr     x1, =pcpu_dv_mem_stack
+       add     sp, x1, x0
+       ret     x5
+
+
+       /* -----------------------------------------------------
+        *  int platform_get_core_pos(int mpidr);
+        *  With this function: CorePos = (ClusterId * 4) +
+        *                                CoreId
+        * -----------------------------------------------------
+        */
+platform_get_core_pos:; .type platform_get_core_pos, %function
+       and     x1, x0, #MPIDR_CPU_MASK
+       and     x0, x0, #MPIDR_CLUSTER_MASK
+       add     x0, x1, x0, LSR #6
+       ret
+
+
+       /* -----------------------------------------------------
+        * unsigned int platform_is_primary_cpu (unsigned int mpid);
+        *
+        * Given the mpidr, say whether this cpu is the primary
+        * cpu (applicable only after a cold boot).
+        * -----------------------------------------------------
+        */
+platform_is_primary_cpu:; .type platform_is_primary_cpu, %function
+       and     x0, x0, #(MPIDR_CLUSTER_MASK | MPIDR_CPU_MASK)
+       cmp     x0, #PRIMARY_CPU
+       cset    x0, eq
+       ret
+
+
+       /* -----------------------------------------------------
+        * void platform_set_stack (int mpidr)
+        * -----------------------------------------------------
+        */
+platform_set_stack:; .type platform_set_stack, %function
+       mov     x9, x30 // lr
+       bl      platform_get_core_pos
+       add     x0, x0, #1
+       mov     x1, #PLATFORM_STACK_SIZE
+       mul     x0, x0, x1
+       ldr     x1, =platform_normal_stacks
+       add     sp, x1, x0
+       ret     x9
+
+       /* -----------------------------------------------------
+        * Placeholder function which should be redefined by
+        * each platform.
+        * -----------------------------------------------------
+        */
+platform_check_mpidr:; .type platform_check_mpidr, %function
+       mov     x0, xzr
+       ret
+
+       /* -----------------------------------------------------
+        * Placeholder function which should be redefined by
+        * each platform.
+        * -----------------------------------------------------
+        */
+plat_report_exception:; .type plat_report_exception, %function
+       ret
+
+       /* -----------------------------------------------------
+        * Per-cpu stacks in device memory.
+        * Used for C code just before power down or right after
+        * power up when the MMU or caches need to be turned on
+        * or off. Each cpu gets a stack of 512 bytes.
+        * -----------------------------------------------------
+        */
+       .section        tzfw_coherent_mem, "aw", %nobits; .align 6
+
+pcpu_dv_mem_stack:
+       /* Zero fill */
+       .space (PLATFORM_CORE_COUNT * PCPU_DV_MEM_STACK_SIZE), 0
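
The weak platform_get_core_pos above is the key to both stack carve-ups: each cpu's stack is found by multiplying its linear position by the per-cpu stack size. In C the default computation is equivalent to the sketch below (the MPIDR field masks are assumptions based on the usual affinity layout):

    /* Equivalent C sketch of the weak platform_get_core_pos above. */
    unsigned int example_core_pos(unsigned long mpidr)
    {
            unsigned int core    = mpidr & 0xff;            /* Aff0 (cpu)     */
            unsigned int cluster = (mpidr >> 8) & 0xff;     /* Aff1 (cluster) */

            /* CorePos = (ClusterId * 4) + CoreId, as the comment above states. */
            return (cluster * 4) + core;
    }
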
diff --git a/plat/fvp/aarch64/bl1_plat_helpers.S b/plat/fvp/aarch64/bl1_plat_helpers.S
new file mode 100644 (file)
index 0000000..d72dc39
--- /dev/null
@@ -0,0 +1,264 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch.h>
+#include <platform.h>
+#include <fvp_pwrc.h>
+#include <gic.h>
+
+       .globl  platform_get_entrypoint
+       .globl  platform_cold_boot_init
+       .globl  plat_secondary_cold_boot_setup
+
+
+       .section        platform_code, "ax"; .align 3
+
+
+       .macro  platform_choose_gicmmap  param1, param2, x_tmp, w_tmp, res
+       ldr     \x_tmp, =VE_SYSREGS_BASE + V2M_SYS_ID
+       ldr     \w_tmp, [\x_tmp]
+       ubfx    \w_tmp, \w_tmp, #SYS_ID_BLD_SHIFT, #SYS_ID_BLD_LENGTH
+       cmp     \w_tmp, #BLD_GIC_VE_MMAP
+       csel    \res, \param1, \param2, eq
+       .endm
+
+       /* -----------------------------------------------------
+        * void plat_secondary_cold_boot_setup (void);
+        *
+        * This function performs any platform-specific actions
+        * needed for a secondary cpu after a cold reset, e.g.
+        * marking the cpu's presence or placing it in a
+        * holding pen.
+        * TODO: Should we read the PSYSR register to make sure
+        * that the power-off request has gone through?
+        * -----------------------------------------------------
+        */
+plat_secondary_cold_boot_setup:; .type plat_secondary_cold_boot_setup, %function
+       bl      read_mpidr
+       mov     x19, x0
+       bl      platform_get_core_pos
+       mov     x20, x0
+
+       /* ---------------------------------------------
+        * Mark this cpu as being present. This is a
+        * SO write. This array will be read using
+        * normal memory so invalidate any prefetched
+        * stale copies first.
+        * ---------------------------------------------
+        */
+       ldr     x1, =TZDRAM_BASE
+       mov     x0, #AFFMAP_OFF
+       add     x1, x0, x1
+       mov     x2, #PLATFORM_CACHE_LINE_SIZE
+       mul     x2, x2, x20
+       add     x0, x1, x2
+       bl      dcivac
+       str     x19, [x1, x2]
+
+       /* ---------------------------------------------
+        * Power down this cpu.
+        * TODO: Do we need to worry about powering the
+        * cluster down as well here? That would need
+        * locks, which we won't have unless an ELF
+        * loader zeroes out the ZI section.
+        * ---------------------------------------------
+        */
+       ldr     x1, =PWRC_BASE
+       str     w19, [x1, #PPOFFR_OFF]
+
+       /* ---------------------------------------------
+        * Deactivate the gic cpu interface as well
+        * ---------------------------------------------
+        */
+       ldr     x0, =VE_GICC_BASE
+       ldr     x1, =BASE_GICC_BASE
+       platform_choose_gicmmap x0, x1, x2, w2, x1
+       mov     w0, #(IRQ_BYP_DIS_GRP1 | FIQ_BYP_DIS_GRP1)
+       orr     w0, w0, #(IRQ_BYP_DIS_GRP0 | FIQ_BYP_DIS_GRP0)
+       str     w0, [x1, #GICC_CTLR]
+
+       /* ---------------------------------------------
+        * There is no sane reason to come out of this
+        * wfi so panic if we do. This cpu will be pow-
+        * ered on and reset by the cpu_on pm api
+        * ---------------------------------------------
+        */
+       dsb     sy
+       wfi
+cb_panic:
+       b       cb_panic
+
+
+       /* -----------------------------------------------------
+        * unsigned long platform_get_entrypoint (unsigned int mpid);
+        *
+        * Main job of this routine is to distinguish between
+        * a cold and warm boot.
+        * On a cold boot the secondaries first wait for the
+        * platform to be initialized after which they are
+        * hotplugged in. The primary proceeds to perform the
+        * platform initialization.
+        * On a warm boot, each cpu jumps to the address in its
+        * mailbox.
+        *
+        * TODO: Not a good idea to save lr in a temp reg
+        * TODO: PSYSR is a common register and should be
+        *      accessed using locks. Since it is not possible
+        *      to use locks immediately after a cold reset,
+        *      we rely on the fact that after a cold
+        *      reset all cpus will read the same WK field.
+        * -----------------------------------------------------
+        */
+platform_get_entrypoint:; .type platform_get_entrypoint, %function
+       mov     x9, x30 // lr
+       mov     x2, x0
+       ldr     x1, =PWRC_BASE
+       str     w2, [x1, #PSYSR_OFF]
+       ldr     w2, [x1, #PSYSR_OFF]
+       ubfx    w2, w2, #PSYSR_WK_SHIFT, #PSYSR_WK_MASK
+       cbnz    w2, warm_reset
+       mov     x0, x2
+       b       exit
+warm_reset:
+       /* ---------------------------------------------
+        * A per-cpu mailbox is maintained in the
+        * trusted DRAM. It is flushed out of the caches
+        * after every update using normal memory, so
+        * it is safe to read it here with SO attributes.
+        * ---------------------------------------------
+        */
+       ldr     x10, =TZDRAM_BASE + MBOX_OFF
+       bl      platform_get_core_pos
+       lsl     x0, x0, #CACHE_WRITEBACK_SHIFT
+       ldr     x0, [x10, x0]
+       cbz     x0, _panic
+exit:
+       ret     x9
+_panic:        b       _panic
+
+
+       /* -----------------------------------------------------
+        * void platform_mem_init (void);
+        *
+        * Zero out the mailbox registers in the TZDRAM. The
+        * mmu is turned off right now and only the primary can
+        * ever execute this code. Secondaries will read the
+        * mailboxes using SO accesses. In short, BL31 will
+        * update the mailboxes after mapping the tzdram as
+        * normal memory. It will flush its copy after update.
+        * BL1 will always read the mailboxes with the MMU off
+        * -----------------------------------------------------
+        */
+platform_mem_init:; .type platform_mem_init, %function
+       ldr     x0, =TZDRAM_BASE + MBOX_OFF
+       stp     xzr, xzr, [x0, #0]
+       stp     xzr, xzr, [x0, #0x10]
+       stp     xzr, xzr, [x0, #0x20]
+       stp     xzr, xzr, [x0, #0x30]
+       ret
+
+
+       /* -----------------------------------------------------
+        * void platform_cold_boot_init (bl1_main function);
+        *
+        * Routine called only by the primary cpu after a cold
+        * boot to perform early platform initialization
+        * -----------------------------------------------------
+        */
+platform_cold_boot_init:; .type platform_cold_boot_init, %function
+       mov     x20, x0
+       bl      platform_mem_init
+       bl      read_mpidr
+       mov     x19, x0
+
+       /* ---------------------------------------------
+        * Give ourselves a small coherent stack to
+        * ease the pain of initializing the MMU and
+        * CCI in assembler
+        * ---------------------------------------------
+        */
+       bl      platform_set_coherent_stack
+
+       /* ---------------------------------------------
+        * Mark this cpu as being present. This is a
+        * SO write. Invalidate any stale copies out of
+        * paranoia as there is no one else around.
+        * ---------------------------------------------
+        */
+       mov     x0, x19
+       bl      platform_get_core_pos
+       mov     x21, x0
+
+       ldr     x1, =TZDRAM_BASE
+       mov     x0, #AFFMAP_OFF
+       add     x1, x0, x1
+       mov     x2, #PLATFORM_CACHE_LINE_SIZE
+       mul     x2, x2, x21
+       add     x0, x1, x2
+       bl      dcivac
+       str     x19, [x1, x2]
+
+       /* ---------------------------------------------
+        * Enable CCI-400 for this cluster. No need
+        * for locks as no other cpu is active at the
+        * moment
+        * ---------------------------------------------
+        */
+       mov     x0, x19
+       bl      cci_enable_coherency
+
+       /* ---------------------------------------------
+        * Architectural init can be generic (e.g.
+        * enabling stack alignment) or platform-
+        * specific (e.g. MMU & page table setup as per
+        * the platform memory map). Perform the latter
+        * here and the former in bl1_main.
+        * ---------------------------------------------
+        */
+       bl      bl1_early_platform_setup
+       bl      bl1_plat_arch_setup
+
+       /* ---------------------------------------------
+        * Give ourselves a stack allocated in Normal
+        * -IS-WBWA memory
+        * ---------------------------------------------
+        */
+       mov     x0, x19
+       bl      platform_set_stack
+
+       /* ---------------------------------------------
+        * Jump to the main function. Returning from it
+        * is a terminal error.
+        * ---------------------------------------------
+        */
+       blr     x20
+
+cb_init_panic:
+       b       cb_init_panic
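
For readability, the cold/warm decision taken by platform_get_entrypoint above can be summarised in C as follows. This is an explanatory sketch only: the register and field macros are the ones used by the assembly, and whether PSYSR_WK_MASK is a mask or a field width is an assumption here.

    /* Sketch: returns 0 on a cold boot, else the warm-boot entrypoint
     * stashed in this cpu's mailbox in trusted DRAM. */
    unsigned long example_get_entrypoint(unsigned long mpidr)
    {
            unsigned int psysr;
            unsigned long *mbox;

            /* Select this cpu in the FVP power controller and read its status. */
            mmio_write_32(PWRC_BASE + PSYSR_OFF, (unsigned int) mpidr);
            psysr = mmio_read_32(PWRC_BASE + PSYSR_OFF);

            /* A zero wake-up reason means a cold reset: no entrypoint yet. */
            if (((psysr >> PSYSR_WK_SHIFT) & PSYSR_WK_MASK) == 0)
                    return 0;

            /* Warm reset: read the per-cpu mailbox programmed by BL31. */
            mbox = (unsigned long *) (TZDRAM_BASE + MBOX_OFF +
                    (platform_get_core_pos(mpidr) << CACHE_WRITEBACK_SHIFT));
            return *mbox;
    }
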
diff --git a/plat/fvp/aarch64/fvp_common.c b/plat/fvp/aarch64/fvp_common.c
new file mode 100644 (file)
index 0000000..762f542
--- /dev/null
@@ -0,0 +1,600 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <platform.h>
+#include <bl_common.h>
+/* Included only for error codes */
+#include <psci.h>
+
+unsigned char platform_normal_stacks[PLATFORM_STACK_SIZE][PLATFORM_CORE_COUNT]
+__attribute__ ((aligned(PLATFORM_CACHE_LINE_SIZE),
+               section("tzfw_normal_stacks")));
+
+/*******************************************************************************
+ * This array holds the characteristics that differ between the three
+ * FVP platforms (Base, A53_A57 & Foundation). It is populated during cold
+ * boot at each boot stage by the primary cpu before enabling the MMU (to allow
+ * CCI configuration) & used thereafter. Each BL has its own copy to allow
+ * independent operation.
+ ******************************************************************************/
+static unsigned long platform_config[CONFIG_LIMIT];
+
+/*******************************************************************************
+ * TODO: Check page table alignment to avoid space wastage
+ ******************************************************************************/
+
+/*******************************************************************************
+ * Level 1 translation tables need 4 entries for the 4GB address space
+ * accessible by the secure firmware. The input address space will be
+ * restricted using the T0SZ settings in the TCR.
+ ******************************************************************************/
+static unsigned long l1_xlation_table[ADDR_SPACE_SIZE >> 30]
+__attribute__ ((aligned((ADDR_SPACE_SIZE >> 30) << 3)));
+
+/*******************************************************************************
+ * Level 2 translation tables describe the first & second GB of the address
+ * space needed to address secure peripherals e.g. trusted ROM and RAM.
+ ******************************************************************************/
+static unsigned long l2_xlation_table[NUM_L2_PAGETABLES][NUM_2MB_IN_GB]
+__attribute__ ((aligned(NUM_2MB_IN_GB << 3)));
+
+/*******************************************************************************
+ * Level 3 translation tables (2 sets) describe the trusted & non-trusted RAM
+ * regions at a granularity of 4K.
+ ******************************************************************************/
+static unsigned long l3_xlation_table[NUM_L3_PAGETABLES][NUM_4K_IN_2MB]
+__attribute__ ((aligned(NUM_4K_IN_2MB << 3)));
+
+/*******************************************************************************
+ * Helper to create a level 1/2 table descriptor which points to a level 2/3
+ * table.
+ ******************************************************************************/
+static unsigned long create_table_desc(unsigned long *next_table_ptr)
+{
+       unsigned long desc = (unsigned long) next_table_ptr;
+
+       /* Clear the last 12 bits */
+       desc >>= FOUR_KB_SHIFT;
+       desc <<= FOUR_KB_SHIFT;
+
+       desc |= TABLE_DESC;
+
+       return desc;
+}
+
+/*******************************************************************************
+ * Helper to create a level 1/2/3 block descriptor which maps the va to addr
+ ******************************************************************************/
+static unsigned long create_block_desc(unsigned long desc,
+                                      unsigned long addr,
+                                      unsigned int level)
+{
+       switch (level) {
+       case LEVEL1:
+               desc |= (addr << FIRST_LEVEL_DESC_N) | BLOCK_DESC;
+               break;
+       case LEVEL2:
+               desc |= (addr << SECOND_LEVEL_DESC_N) | BLOCK_DESC;
+               break;
+       case LEVEL3:
+               desc |= (addr << THIRD_LEVEL_DESC_N) | TABLE_DESC;
+               break;
+       default:
+               assert(0);
+       }
+
+       return desc;
+}
+
+/*******************************************************************************
+ * Helper to create a level 1/2/3 block descriptor which maps the va to output_
+ * addr with Device nGnRE attributes.
+ ******************************************************************************/
+static unsigned long create_device_block(unsigned long output_addr,
+                                        unsigned int level,
+                                        unsigned int ns)
+{
+       unsigned long upper_attrs, lower_attrs, desc;
+
+       lower_attrs = LOWER_ATTRS(ACCESS_FLAG | OSH | AP_RW);
+       lower_attrs |= LOWER_ATTRS(ns | ATTR_DEVICE_INDEX);
+       upper_attrs = UPPER_ATTRS(XN);
+       desc = upper_attrs | lower_attrs;
+
+       return create_block_desc(desc, output_addr, level);
+}
+
+/*******************************************************************************
+ * Helper to create a level 1/2/3 block descriptor which maps the va to output_
+ * addr with inner-shareable normal wbwa read-only memory attributes.
+ ******************************************************************************/
+static unsigned long create_romem_block(unsigned long output_addr,
+                                       unsigned int level,
+                                       unsigned int ns)
+{
+       unsigned long upper_attrs, lower_attrs, desc;
+
+       lower_attrs = LOWER_ATTRS(ACCESS_FLAG | ISH | AP_RO);
+       lower_attrs |= LOWER_ATTRS(ns | ATTR_IWBWA_OWBWA_NTR_INDEX);
+       upper_attrs = UPPER_ATTRS(0ull);
+       desc = upper_attrs | lower_attrs;
+
+       return create_block_desc(desc, output_addr, level);
+}
+
+/*******************************************************************************
+ * Helper to create a level 1/2/3 block descriptor which maps the va to output_
+ * addr with inner-shareable normal wbwa read-write memory attributes.
+ ******************************************************************************/
+static unsigned long create_rwmem_block(unsigned long output_addr,
+                                       unsigned int level,
+                                       unsigned int ns)
+{
+       unsigned long upper_attrs, lower_attrs, desc;
+
+       lower_attrs = LOWER_ATTRS(ACCESS_FLAG | ISH | AP_RW);
+       lower_attrs |= LOWER_ATTRS(ns | ATTR_IWBWA_OWBWA_NTR_INDEX);
+       upper_attrs = UPPER_ATTRS(XN);
+       desc = upper_attrs | lower_attrs;
+
+       return create_block_desc(desc, output_addr, level);
+}
+
+/*******************************************************************************
+ * Create page tables as per the platform memory map. Certain aspects of page
+ * table creation have been abstracted in the above routines. This can be
+ * improved further.
+ * TODO: Move the page table setup helpers into the arch or lib directory
+ *******************************************************************************/
+static unsigned long fill_xlation_tables(meminfo *tzram_layout,
+                                        unsigned long ro_start,
+                                        unsigned long ro_limit,
+                                        unsigned long coh_start,
+                                        unsigned long coh_limit)
+{
+       unsigned long l2_desc, l3_desc;
+       unsigned long *xt_addr = 0, *pt_addr, off = 0;
+       unsigned long trom_start_index, trom_end_index;
+       unsigned long tzram_start_index, tzram_end_index;
+       unsigned long flash0_start_index, flash0_end_index;
+       unsigned long flash1_start_index, flash1_end_index;
+       unsigned long vram_start_index, vram_end_index;
+       unsigned long nsram_start_index, nsram_end_index;
+       unsigned long tdram_start_index, tdram_end_index;
+       unsigned long dram_start_index, dram_end_index;
+       unsigned long dev0_start_index, dev0_end_index;
+       unsigned long dev1_start_index, dev1_end_index;
+       unsigned int idx;
+
+
+       /*****************************************************************
+        * LEVEL1 PAGETABLE SETUP
+        *
+        * Find the start and end indices of the memory & peripherals in the
+        * first level pagetables. These are the main areas we care about.
+        * Also bump the end index by one if it is equal to the start, to
+        * allow for regions which lie completely within a single GB.
+        *****************************************************************/
+       trom_start_index = ONE_GB_INDEX(TZROM_BASE);
+       dev0_start_index = ONE_GB_INDEX(TZRNG_BASE);
+       dram_start_index = ONE_GB_INDEX(DRAM_BASE);
+       dram_end_index = ONE_GB_INDEX(DRAM_BASE + DRAM_SIZE);
+
+       if (dram_end_index == dram_start_index)
+               dram_end_index++;
+
+       /*
+        * Fill up the level1 translation table first
+        */
+       for (idx = 0; idx < (ADDR_SPACE_SIZE >> 30); idx++) {
+
+               /*
+                * Fill up the entry for the TZROM. This will cover
+                * everything in the first GB.
+                */
+               if (idx == trom_start_index) {
+                       xt_addr = &l2_xlation_table[GB1_L2_PAGETABLE][0];
+                       l1_xlation_table[idx] = create_table_desc(xt_addr);
+                       continue;
+               }
+
+               /*
+                * Mark the second gb as device
+                */
+               if (idx == dev0_start_index) {
+                       xt_addr = &l2_xlation_table[GB2_L2_PAGETABLE][0];
+                       l1_xlation_table[idx] = create_table_desc(xt_addr);
+                       continue;
+               }
+
+               /*
+                * Fill up the block entry for the DRAM with Normal
+                * inner-WBWA outer-WBWA non-transient attributes.
+                * This will cover 2-4GB. Note that the accesses are
+                * marked as non-secure.
+                */
+               if ((idx >= dram_start_index) && (idx < dram_end_index)) {
+                       l1_xlation_table[idx] = create_rwmem_block(idx, LEVEL1,
+                                                                  NS);
+                       continue;
+               }
+
+               assert(0);
+       }
+
+
+       /*****************************************************************
+        * LEVEL2 PAGETABLE SETUP
+        *
+        * Find the start and end indices of the memory & peripherals in the
+        * second level pagetables.
+        ******************************************************************/
+
+       /* Initializations for the 1st GB */
+       trom_start_index = TWO_MB_INDEX(TZROM_BASE);
+       trom_end_index = TWO_MB_INDEX(TZROM_BASE + TZROM_SIZE);
+       if (trom_end_index == trom_start_index)
+               trom_end_index++;
+
+       tdram_start_index = TWO_MB_INDEX(TZDRAM_BASE);
+       tdram_end_index = TWO_MB_INDEX(TZDRAM_BASE + TZDRAM_SIZE);
+       if (tdram_end_index == tdram_start_index)
+               tdram_end_index++;
+
+       flash0_start_index = TWO_MB_INDEX(FLASH0_BASE);
+       flash0_end_index = TWO_MB_INDEX(FLASH0_BASE + TZROM_SIZE);
+       if (flash0_end_index == flash0_start_index)
+               flash0_end_index++;
+
+       flash1_start_index = TWO_MB_INDEX(FLASH1_BASE);
+       flash1_end_index = TWO_MB_INDEX(FLASH1_BASE + FLASH1_SIZE);
+       if (flash1_end_index == flash1_start_index)
+               flash1_end_index++;
+
+       vram_start_index = TWO_MB_INDEX(VRAM_BASE);
+       vram_end_index = TWO_MB_INDEX(VRAM_BASE + VRAM_SIZE);
+       if (vram_end_index == vram_start_index)
+               vram_end_index++;
+
+       dev0_start_index = TWO_MB_INDEX(DEVICE0_BASE);
+       dev0_end_index = TWO_MB_INDEX(DEVICE0_BASE + DEVICE0_SIZE);
+       if (dev0_end_index == dev0_start_index)
+               dev0_end_index++;
+
+       dev1_start_index = TWO_MB_INDEX(DEVICE1_BASE);
+       dev1_end_index = TWO_MB_INDEX(DEVICE1_BASE + DEVICE1_SIZE);
+       if (dev1_end_index == dev1_start_index)
+               dev1_end_index++;
+
+       /* Since the size is < 2M this is a single index */
+       tzram_start_index = TWO_MB_INDEX(tzram_layout->total_base);
+       nsram_start_index = TWO_MB_INDEX(NSRAM_BASE);
+
+       /*
+        * Fill up the level2 translation table for the first GB next
+        */
+       for (idx = 0; idx < NUM_2MB_IN_GB; idx++) {
+
+               l2_desc = INVALID_DESC;
+               xt_addr = &l2_xlation_table[GB1_L2_PAGETABLE][idx];
+
+               /* Block entries for 64M of trusted Boot ROM */
+               if ((idx >= trom_start_index) && (idx < trom_end_index))
+                       l2_desc = create_romem_block(idx, LEVEL2, 0);
+
+               /* Single L3 page table entry for 256K of TZRAM */
+               if (idx == tzram_start_index) {
+                       pt_addr = &l3_xlation_table[TZRAM_PAGETABLE][0];
+                       l2_desc = create_table_desc(pt_addr);
+               }
+
+               /* Block entries for 32M of trusted DRAM */
+               if ((idx >= tdram_start_index) && (idx < tdram_end_index))
+                       l2_desc = create_rwmem_block(idx, LEVEL2, 0);
+
+               /* Block entries for 64M of aliased trusted Boot ROM */
+               if ((idx >= flash0_start_index) && (idx < flash0_end_index))
+                       l2_desc = create_romem_block(idx, LEVEL2, 0);
+
+               /* Block entries for 64M of flash1 */
+               if ((idx >= flash1_start_index) && (idx < flash1_end_index))
+                       l2_desc = create_romem_block(idx, LEVEL2, 0);
+
+               /* Block entries for 32M of VRAM */
+               if ((idx >= vram_start_index) && (idx < vram_end_index))
+                       l2_desc = create_rwmem_block(idx, LEVEL2, 0);
+
+               /* Block entries for the DEVICE0 peripherals in the first GB */
+               if ((idx >= dev0_start_index) && (idx < dev0_end_index))
+                       l2_desc = create_device_block(idx, LEVEL2, 0);
+
+               /* Block entries for the DEVICE1 peripherals in the first GB */
+               if ((idx >= dev1_start_index) && (idx < dev1_end_index))
+                       l2_desc = create_device_block(idx, LEVEL2, 0);
+
+               /* Single L3 page table entry for 64K of NSRAM */
+               if (idx == nsram_start_index) {
+                       pt_addr = &l3_xlation_table[NSRAM_PAGETABLE][0];
+                       l2_desc = create_table_desc(pt_addr);
+               }
+
+               *xt_addr = l2_desc;
+       }
+
+
+       /*
+        * Initializations for the 2nd GB. Mark everything as device
+        * for the time being as the memory map is not final. Each
+        * index will need to be offset to allow absolute values.
+        */
+       off = NUM_2MB_IN_GB;
+       for (idx = off; idx < (NUM_2MB_IN_GB + off); idx++) {
+               l2_desc = create_device_block(idx, LEVEL2, 0);
+               xt_addr = &l2_xlation_table[GB2_L2_PAGETABLE][idx - off];
+               *xt_addr = l2_desc;
+       }
+
+
+       /*****************************************************************
+        * LEVEL3 PAGETABLE SETUP
+        * The following setup assumes knowledge of the scatter file. This
+        * should be reasonable as this is platform-specific code.
+        *****************************************************************/
+
+       /* Fill up the level3 pagetable for the trusted SRAM. */
+       tzram_start_index = FOUR_KB_INDEX(tzram_layout->total_base);
+       tzram_end_index = FOUR_KB_INDEX(tzram_layout->total_base +
+                                       tzram_layout->total_size);
+       if (tzram_end_index == tzram_start_index)
+               tzram_end_index++;
+
+       /*
+        * Reusing trom* to mark RO memory. BLX_STACKS follows BLX_RO in the
+        * scatter file. Using BLX_RO$$Limit does not work as it might not
+        * cross the page boundary thus leading to truncation of valid RO
+        * memory
+        */
+       trom_start_index = FOUR_KB_INDEX(ro_start);
+       trom_end_index = FOUR_KB_INDEX(ro_limit);
+       if (trom_end_index == trom_start_index)
+               trom_end_index++;
+
+       /*
+        * Reusing dev* to mark coherent device memory. $$Limit works here
+        * because the coherent memory section is known to be 4KB in size.
+        */
+       dev0_start_index = FOUR_KB_INDEX(coh_start);
+       dev0_end_index = FOUR_KB_INDEX(coh_limit);
+       if (dev0_end_index == dev0_start_index)
+               dev0_end_index++;
+
+
+       /* Each index will need to be offset to allow absolute values */
+       off = FOUR_KB_INDEX(TZRAM_BASE);
+       for (idx = off; idx < (NUM_4K_IN_2MB + off); idx++) {
+
+               l3_desc = INVALID_DESC;
+               xt_addr = &l3_xlation_table[TZRAM_PAGETABLE][idx - off];
+
+               if (idx >= tzram_start_index && idx < tzram_end_index)
+                       l3_desc = create_rwmem_block(idx, LEVEL3, 0);
+
+               if (idx >= trom_start_index && idx < trom_end_index)
+                       l3_desc = create_romem_block(idx, LEVEL3, 0);
+
+               if (idx >= dev0_start_index && idx < dev0_end_index)
+                       l3_desc = create_device_block(idx, LEVEL3, 0);
+
+               *xt_addr = l3_desc;
+       }
+
+       /* Fill up the level3 pagetable for the non-trusted SRAM. */
+       nsram_start_index = FOUR_KB_INDEX(NSRAM_BASE);
+       nsram_end_index = FOUR_KB_INDEX(NSRAM_BASE + NSRAM_SIZE);
+       if (nsram_end_index == nsram_start_index)
+               nsram_end_index++;
+
+       /* Each index will need to be offset to allow absolute values */
+       off = FOUR_KB_INDEX(NSRAM_BASE);
+       for (idx = off; idx < (NUM_4K_IN_2MB + off); idx++) {
+
+               l3_desc = INVALID_DESC;
+               xt_addr = &l3_xlation_table[NSRAM_PAGETABLE][idx - off];
+
+               if (idx >= nsram_start_index && idx < nsram_end_index)
+                       l3_desc = create_rwmem_block(idx, LEVEL3, NS);
+
+               *xt_addr = l3_desc;
+       }
+
+       return (unsigned long) l1_xlation_table;
+}
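
All of the index arithmetic above works on granule numbers: ONE_GB_INDEX, TWO_MB_INDEX and FOUR_KB_INDEX select the level 1, 2 and 3 slot that a physical address falls into. The real macros live in the platform headers; a plausible sketch of their shape, with a worked example, is:

    /* Illustrative only: slot containing addr at each translation granularity. */
    #define EXAMPLE_ONE_GB_INDEX(addr)   ((addr) >> 30)
    #define EXAMPLE_TWO_MB_INDEX(addr)   ((addr) >> 21)
    #define EXAMPLE_FOUR_KB_INDEX(addr)  ((addr) >> 12)

    /* e.g. a 32MB region based at 0x06000000 occupies 2MB slots
     * [0x06000000 >> 21, 0x08000000 >> 21) = [48, 64), i.e. 16 block entries. */
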
+
+/*******************************************************************************
+ * Enable the MMU assuming that the pagetables have already been created
+ *******************************************************************************/
+void enable_mmu(void)
+{
+       unsigned long mair, tcr, ttbr, sctlr;
+       unsigned long current_el = read_current_el();
+
+       /* Set the attributes in the right indices of the MAIR */
+       mair = MAIR_ATTR_SET(ATTR_DEVICE, ATTR_DEVICE_INDEX);
+       mair |= MAIR_ATTR_SET(ATTR_IWBWA_OWBWA_NTR,
+                                 ATTR_IWBWA_OWBWA_NTR_INDEX);
+       write_mair(mair);
+
+       /*
+        * Set TCR bits as well. Inner & outer WBWA & shareable + T0SZ = 32
+        */
+       tcr = TCR_SH_INNER_SHAREABLE | TCR_RGN_OUTER_WBA |
+                 TCR_RGN_INNER_WBA | TCR_T0SZ_4GB;
+       if (GET_EL(current_el) == MODE_EL3) {
+               tcr |= TCR_EL3_RES1;
+               /* Invalidate all TLBs */
+               tlbialle3();
+       } else {
+               /* Invalidate EL1 TLBs */
+               tlbivmalle1();
+       }
+
+       write_tcr(tcr);
+
+       /* Set TTBR bits as well */
+       assert(((unsigned long)l1_xlation_table & (sizeof(l1_xlation_table) - 1)) == 0);
+       ttbr = (unsigned long) l1_xlation_table;
+       write_ttbr0(ttbr);
+
+       sctlr = read_sctlr();
+       sctlr |= SCTLR_WXN_BIT | SCTLR_M_BIT | SCTLR_I_BIT;
+       sctlr |= SCTLR_A_BIT | SCTLR_C_BIT;
+       write_sctlr(sctlr);
+
+       return;
+}
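
The T0SZ value is what ties the TCR programming to the 4-entry level 1 table: with T0SZ = 32 the regime translates an input address space of 2^(64 - 32) bytes, i.e. 4GB, and each level 1 entry covers 1GB. A one-line worked check (sketch; TCR_T0SZ_4GB is assumed to carry the value 32):

    /* Worked check of the sizing used by enable_mmu() and l1_xlation_table. */
    #define EXAMPLE_T0SZ          32
    #define EXAMPLE_ADDR_SPACE    (1ULL << (64 - EXAMPLE_T0SZ))   /* 4GB */
    #define EXAMPLE_L1_ENTRIES    (EXAMPLE_ADDR_SPACE >> 30)      /* 4   */
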
+
+void disable_mmu(void)
+{
+       /* Zero out the MMU related registers */
+       write_mair(0);
+       write_tcr(0);
+       write_ttbr0(0);
+       write_sctlr(0);
+
+       /* Invalidate TLBs of the CurrentEL */
+       tlbiall();
+
+       /* Flush the caches */
+       dcsw_op_all(DCCISW);
+
+       return;
+}
+
+/*******************************************************************************
+ * Set up the pagetables as per the platform memory map & initialize the MMU
+ *******************************************************************************/
+void configure_mmu(meminfo *mem_layout,
+                  unsigned long ro_start,
+                  unsigned long ro_limit,
+                  unsigned long coh_start,
+                  unsigned long coh_limit)
+{
+       fill_xlation_tables(mem_layout,
+                           ro_start,
+                           ro_limit,
+                           coh_start,
+                           coh_limit);
+       enable_mmu();
+       return;
+}
+
+/* Simple routine which returns a configuration variable value */
+unsigned long platform_get_cfgvar(unsigned int var_id)
+{
+       assert(var_id < CONFIG_LIMIT);
+       return platform_config[var_id];
+}
+
+/*******************************************************************************
+ * A single boot loader stack is expected to work on both the Foundation FVP
+ * models and the two flavours of the Base FVP models (AEMv8 & Cortex). The
+ * SYS_ID register provides a mechanism for detecting the differences between
+ * these platforms. This information is stored in a per-BL array to allow the
+ * code to take the correct path.
+ ******************************************************************************/
+int platform_config_setup(void)
+{
+       unsigned int rev, hbi, bld, arch, sys_id, midr_pn;
+
+       sys_id = mmio_read_32(VE_SYSREGS_BASE + V2M_SYS_ID);
+       rev = (sys_id >> SYS_ID_REV_SHIFT) & SYS_ID_REV_MASK;
+       hbi = (sys_id >> SYS_ID_HBI_SHIFT) & SYS_ID_HBI_MASK;
+       bld = (sys_id >> SYS_ID_BLD_SHIFT) & SYS_ID_BLD_MASK;
+       arch = (sys_id >> SYS_ID_ARCH_SHIFT) & SYS_ID_ARCH_MASK;
+
+       assert(rev == REV_FVP);
+       assert(arch == ARCH_MODEL);
+
+       /*
+        * The build field in the SYS_ID tells which variant of the GIC
+        * memory map is implemented by the model.
+        */
+       switch (bld) {
+       case BLD_GIC_VE_MMAP:
+               platform_config[CONFIG_GICD_ADDR] = VE_GICD_BASE;
+               platform_config[CONFIG_GICC_ADDR] = VE_GICC_BASE;
+               platform_config[CONFIG_GICH_ADDR] = VE_GICH_BASE;
+               platform_config[CONFIG_GICV_ADDR] = VE_GICV_BASE;
+               break;
+       case BLD_GIC_A53A57_MMAP:
+               platform_config[CONFIG_GICD_ADDR] = BASE_GICD_BASE;
+               platform_config[CONFIG_GICC_ADDR] = BASE_GICC_BASE;
+               platform_config[CONFIG_GICH_ADDR] = BASE_GICH_BASE;
+               platform_config[CONFIG_GICV_ADDR] = BASE_GICV_BASE;
+               break;
+       default:
+               assert(0);
+       }
+
+       /*
+        * The hbi field in the SYS_ID is 0x020 for the Base FVP & 0x010
+        * for the Foundation FVP.
+        */
+       switch (hbi) {
+       case HBI_FOUNDATION:
+               platform_config[CONFIG_MAX_AFF0] = 4;
+               platform_config[CONFIG_MAX_AFF1] = 1;
+               platform_config[CONFIG_CPU_SETUP] = 0;
+               platform_config[CONFIG_BASE_MMAP] = 0;
+               break;
+       case HBI_FVP_BASE:
+               midr_pn = (read_midr() >> MIDR_PN_SHIFT) & MIDR_PN_MASK;
+               if ((midr_pn == MIDR_PN_A57) || (midr_pn == MIDR_PN_A53))
+                       platform_config[CONFIG_CPU_SETUP] = 1;
+               else
+                       platform_config[CONFIG_CPU_SETUP] = 0;
+
+               platform_config[CONFIG_MAX_AFF0] = 4;
+               platform_config[CONFIG_MAX_AFF1] = 2;
+               platform_config[CONFIG_BASE_MMAP] = 1;
+               break;
+       default:
+               assert(0);
+       }
+
+       return 0;
+}
+
+unsigned long plat_get_ns_image_entrypoint(void)
+{
+       return NS_IMAGE_OFFSET;
+}
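
Once platform_config_setup() has run, later code reads the detected values back through platform_get_cfgvar() instead of re-probing SYS_ID. A hedged usage sketch (the GIC driver entry point named here is illustrative, not part of this patch):

    /* Sketch: a boot stage handing the detected GIC bases to its GIC driver. */
    void example_gic_setup(void)
    {
            unsigned long gicd_base = platform_get_cfgvar(CONFIG_GICD_ADDR);
            unsigned long gicc_base = platform_get_cfgvar(CONFIG_GICC_ADDR);

            example_gic_driver_init(gicd_base, gicc_base);  /* hypothetical call */
    }
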
diff --git a/plat/fvp/aarch64/fvp_helpers.S b/plat/fvp/aarch64/fvp_helpers.S
new file mode 100644 (file)
index 0000000..5cb0660
--- /dev/null
@@ -0,0 +1,57 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <arch.h>
+#include <platform.h>
+
+       .globl  plat_report_exception
+
+       .section platform_code, "ax"
+
+       /* ---------------------------------------------
+        * void plat_report_exception(unsigned int type)
+        * Function to report an unhandled exception
+        * with platform-specific means.
+        * On FVP platform, it updates the LEDs
+        * to indicate where we are
+        * ---------------------------------------------
+        */
+plat_report_exception:
+       mrs     x1, CurrentEL
+       lsr     x1, x1, #MODE_EL_SHIFT
+       lsl     x1, x1, #SYS_LED_EL_SHIFT
+       lsl     x0, x0, #SYS_LED_EC_SHIFT
+       mov     x2, #(SECURE << SYS_LED_SS_SHIFT)
+       orr     x0, x0, x2
+       orr     x0, x0, x1
+       mov     x1, #VE_SYSREGS_BASE
+       add     x1, x1, #V2M_SYS_LED
+       str     x0, [x1]
+       ret
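
The word written to V2M_SYS_LED packs three fields, so the LED state encodes which exception was taken, from which security state, and at which exception level. In C the composition is roughly the following (sketch; shift macros as used in the assembly above):

    /* Sketch of the LED word composed by plat_report_exception. */
    static unsigned int example_led_word(unsigned int exception_type,
                                         unsigned int current_el)
    {
            return (exception_type << SYS_LED_EC_SHIFT) |   /* exception class */
                   (SECURE         << SYS_LED_SS_SHIFT) |   /* security state  */
                   (current_el     << SYS_LED_EL_SHIFT);    /* exception level */
    }
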
diff --git a/plat/fvp/bl1_plat_setup.c b/plat/fvp/bl1_plat_setup.c
new file mode 100644 (file)
index 0000000..7131f7a
--- /dev/null
@@ -0,0 +1,169 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <platform.h>
+#include <bl1.h>
+#include <console.h>
+
+/*******************************************************************************
+ * Declarations of linker-defined symbols which will help us find the layout
+ * of the trusted SRAM
+ ******************************************************************************/
+#if defined (__GNUC__)
+extern unsigned long __FIRMWARE_ROM_START__;
+extern unsigned long __FIRMWARE_ROM_SIZE__;
+extern unsigned long __FIRMWARE_DATA_START__;
+extern unsigned long __FIRMWARE_DATA_SIZE__;
+extern unsigned long __FIRMWARE_BSS_START__;
+extern unsigned long __FIRMWARE_BSS_SIZE__;
+extern unsigned long __DATA_RAM_START__;
+extern unsigned long __DATA_RAM_SIZE__;
+extern unsigned long __BSS_RAM_START__;
+extern unsigned long __BSS_RAM_SIZE__;
+extern unsigned long __FIRMWARE_RAM_STACKS_START__;
+extern unsigned long __FIRMWARE_RAM_STACKS_SIZE__;
+extern unsigned long __FIRMWARE_RAM_PAGETABLES_START__;
+extern unsigned long __FIRMWARE_RAM_PAGETABLES_SIZE__;
+extern unsigned long __FIRMWARE_RAM_COHERENT_START__;
+extern unsigned long __FIRMWARE_RAM_COHERENT_SIZE__;
+
+#define BL1_COHERENT_MEM_BASE  (&__FIRMWARE_RAM_COHERENT_START__)
+#define BL1_COHERENT_MEM_LIMIT \
+       ((unsigned long long)&__FIRMWARE_RAM_COHERENT_START__ + \
+        (unsigned long long)&__FIRMWARE_RAM_COHERENT_SIZE__)
+
+#define BL1_FIRMWARE_RAM_GLOBALS_ZI_BASE \
+       (unsigned long)(&__BSS_RAM_START__)
+#define BL1_FIRMWARE_RAM_GLOBALS_ZI_LENGTH \
+       (unsigned long)(&__FIRMWARE_BSS_SIZE__)
+
+#define BL1_FIRMWARE_RAM_COHERENT_ZI_BASE \
+       (unsigned long)(&__FIRMWARE_RAM_COHERENT_START__)
+#define BL1_FIRMWARE_RAM_COHERENT_ZI_LENGTH\
+       (unsigned long)(&__FIRMWARE_RAM_COHERENT_SIZE__)
+
+#define BL1_NORMAL_RAM_BASE (unsigned long)(&__BSS_RAM_START__)
+#define BL1_NORMAL_RAM_LIMIT \
+       ((unsigned long)&__FIRMWARE_RAM_COHERENT_START__ +      \
+        (unsigned long)&__FIRMWARE_RAM_COHERENT_SIZE__)
+#else
+ #error "Unknown compiler."
+#endif
+
+
+/* Data structure which holds the extents of the trusted SRAM for BL1*/
+static meminfo bl1_tzram_layout = {0};
+
+meminfo bl1_get_sec_mem_layout(void)
+{
+       return bl1_tzram_layout;
+}
+
+/*******************************************************************************
+ * Perform any BL1-specific platform actions.
+ ******************************************************************************/
+void bl1_early_platform_setup(void)
+{
+       unsigned long bl1_normal_ram_base;
+       unsigned long bl1_coherent_ram_limit;
+       unsigned long tzram_limit = TZRAM_BASE + TZRAM_SIZE;
+
+       /*
+        * Initialize extents of the bl1 sections as per the platform
+        * defined values.
+        */
+       bl1_normal_ram_base  = BL1_NORMAL_RAM_BASE;
+       bl1_coherent_ram_limit = BL1_NORMAL_RAM_LIMIT;
+
+       /*
+        * Calculate how much RAM BL1 is using and how much remains free.
+        * This also includes a rudimentary mechanism to detect whether the
+        * BL1 data is loaded at the top or the bottom of memory.
+        * TODO: add support for discontiguous chunks of free RAM if
+        *       needed. Might need dynamic memory allocation support
+        *       etc.
+        *       This also assumes that the coherent memory section is the
+        *       last and the globals section the first in the scatter file.
+        */
+       bl1_tzram_layout.total_base = TZRAM_BASE;
+       bl1_tzram_layout.total_size = TZRAM_SIZE;
+
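+       /*
+        * If the coherent section ends exactly at the top of trusted SRAM then
+        * BL1's read-write data sits at the top of memory and everything below
+        * it down to TZRAM_BASE is free; otherwise BL1 sits at the bottom and
+        * the free region starts just above its coherent section.
+        */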
+       if (bl1_coherent_ram_limit == tzram_limit) {
+               bl1_tzram_layout.free_base = TZRAM_BASE;
+               bl1_tzram_layout.free_size = bl1_normal_ram_base - TZRAM_BASE;
+       } else {
+               bl1_tzram_layout.free_base = bl1_coherent_ram_limit;
+               bl1_tzram_layout.free_size =
+                       tzram_limit - bl1_coherent_ram_limit;
+       }
+}
+
+/*******************************************************************************
+ * Perform any remaining BL1 platform setup that can be done once coherency
+ * and the MMU have been enabled: zero the coherent ZI region, enable the
+ * system level generic timer and initialize the console.
+ ******************************************************************************/
+void bl1_platform_setup(void)
+{
+       /*
+        * This should zero out our coherent stacks as well but we don't care
+        * as they are not being used right now.
+        */
+       memset((void *) BL1_FIRMWARE_RAM_COHERENT_ZI_BASE, 0,
+              (size_t) BL1_FIRMWARE_RAM_COHERENT_ZI_LENGTH);
+
+       /* Enable and initialize the System level generic timer */
+       mmio_write_32(SYS_CNTCTL_BASE + CNTCR_OFF, CNTCR_EN);
+
+       /* Initialize the console */
+       console_init();
+
+       return;
+}
+
+/*******************************************************************************
+ * Perform the very early platform specific architecture setup here. At the
+ * moment this only initializes the MMU in a quick and dirty way. The later
+ * architectural setup (bl1_arch_setup()) does not do anything platform
+ * specific.
+ ******************************************************************************/
+void bl1_plat_arch_setup(void)
+{
+       configure_mmu(&bl1_tzram_layout,
+               TZROM_BASE,                     /* Read_only region start */
+               TZROM_BASE + TZROM_SIZE,        /* Read_only region limit */
+               /* Coherent region start */
+               BL1_FIRMWARE_RAM_COHERENT_ZI_BASE,
+               /* Coherent region limit */
+               BL1_FIRMWARE_RAM_COHERENT_ZI_BASE +
+                       BL1_FIRMWARE_RAM_COHERENT_ZI_LENGTH);
+}
diff --git a/plat/fvp/bl2_plat_setup.c b/plat/fvp/bl2_plat_setup.c
new file mode 100644 (file)
index 0000000..e38f00b
--- /dev/null
@@ -0,0 +1,147 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <platform.h>
+#include <bl2.h>
+#include <bl_common.h>
+
+/*******************************************************************************
+ * Declarations of linker defined symbols which will help us find the layout
+ * of trusted SRAM
+ ******************************************************************************/
+#if defined (__GNUC__)
+extern unsigned long __BL2_RO_BASE__;
+extern unsigned long __BL2_STACKS_BASE__;
+extern unsigned long __BL2_COHERENT_RAM_BASE__;
+extern unsigned long __BL2_RW_BASE__;
+
+#define BL2_RO_BASE            __BL2_RO_BASE__
+#define BL2_STACKS_BASE                __BL2_STACKS_BASE__
+#define BL2_COHERENT_RAM_BASE  __BL2_COHERENT_RAM_BASE__
+#define BL2_RW_BASE            __BL2_RW_BASE__
+
+#else
+ #error "Unknown compiler."
+#endif
+
+/* Pointer to memory visible to both BL2 and BL31 for passing data */
+extern unsigned char **bl2_el_change_mem_ptr;
+
+/* Data structure which holds the extents of the trusted SRAM for BL2 */
+static meminfo bl2_tzram_layout
+__attribute__ ((aligned(PLATFORM_CACHE_LINE_SIZE),
+               section("tzfw_coherent_mem"))) = {0};
+
+/* Data structure which holds the extents of the non-trusted DRAM for BL2 */
+static meminfo dram_layout = {0};
+
+meminfo bl2_get_sec_mem_layout(void)
+{
+       return bl2_tzram_layout;
+}
+
+meminfo bl2_get_ns_mem_layout(void)
+{
+       return dram_layout;
+}
+
+/*******************************************************************************
+ * BL1 has passed the extents of the trusted SRAM that should be visible to BL2
+ * in x0. This memory layout is sitting at the base of the free trusted SRAM.
+ * Copy it to a safe location before it is reclaimed by later BL2 functionality.
+ ******************************************************************************/
+void bl2_early_platform_setup(meminfo *mem_layout,
+                             void *data)
+{
+       /* Setup the BL2 memory layout */
+       bl2_tzram_layout.total_base = mem_layout->total_base;
+       bl2_tzram_layout.total_size = mem_layout->total_size;
+       bl2_tzram_layout.free_base = mem_layout->free_base;
+       bl2_tzram_layout.free_size = mem_layout->free_size;
+       bl2_tzram_layout.attr = mem_layout->attr;
+       bl2_tzram_layout.next = 0;
+
+       /* Initialize the platform config for future decision making */
+       platform_config_setup();
+
+       return;
+}
+
+/*******************************************************************************
+ * Not much to do here apart from finding out the extents of non-trusted DRAM
+ * which will be used for loading the non-trusted software images. We are
+ * relying on pre-initialized ZI memory so there is nothing to zero out as in
+ * BL1. This is because BL2 is a raw PIC binary whose load address is
+ * determined at runtime; the ZI section might be lost if it is not already there.
+ ******************************************************************************/
+void bl2_platform_setup()
+{
+       dram_layout.total_base = DRAM_BASE;
+       dram_layout.total_size = DRAM_SIZE;
+       dram_layout.free_base = DRAM_BASE;
+       dram_layout.free_size = DRAM_SIZE;
+       dram_layout.attr = 0;
+
+       /* Use the Trusted DRAM for passing args to BL31 */
+       bl2_el_change_mem_ptr = (unsigned char **) TZDRAM_BASE;
+
+       return;
+}
+
+/*******************************************************************************
+ * Perform the very early platform specific architectural setup here. At the
+ * moment this only initializes the MMU in a quick and dirty way.
+ ******************************************************************************/
+void bl2_plat_arch_setup()
+{
+       unsigned long sctlr;
+
+       /* Enable instruction cache. */
+       sctlr = read_sctlr();
+       sctlr |= SCTLR_I_BIT;
+       write_sctlr(sctlr);
+
+       /*
+        * Very simple exception vectors which assert if any exception other
+        * than a single SMC call from BL2 to pass control to BL31 in EL3 is
+        * received.
+        */
+       write_vbar((unsigned long) early_exceptions);
+
+       configure_mmu(&bl2_tzram_layout,
+                     (unsigned long) &BL2_RO_BASE,
+                     (unsigned long) &BL2_STACKS_BASE,
+                     (unsigned long) &BL2_COHERENT_RAM_BASE,
+                     (unsigned long) &BL2_RW_BASE);
+       return;
+}
diff --git a/plat/fvp/bl31_plat_setup.c b/plat/fvp/bl31_plat_setup.c
new file mode 100644 (file)
index 0000000..6c8635f
--- /dev/null
@@ -0,0 +1,424 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <platform.h>
+#include <bl31.h>
+#include <bl_common.h>
+#include <pl011.h>
+#include <bakery_lock.h>
+#include <cci400.h>
+#include <gic.h>
+#include <fvp_pwrc.h>
+
+/*******************************************************************************
+ * Declarations of linker defined symbols which will help us find the layout
+ * of trusted SRAM
+ ******************************************************************************/
+#if defined (__GNUC__)
+extern unsigned long __BL31_RO_BASE__;
+extern unsigned long __BL31_STACKS_BASE__;
+extern unsigned long __BL31_COHERENT_RAM_BASE__;
+extern unsigned long __BL31_RW_BASE__;
+
+#define BL31_RO_BASE           __BL31_RO_BASE__
+#define BL31_STACKS_BASE       __BL31_STACKS_BASE__
+#define BL31_COHERENT_RAM_BASE __BL31_COHERENT_RAM_BASE__
+#define BL31_RW_BASE           __BL31_RW_BASE__
+
+#else
+ #error "Unknown compiler."
+#endif
+
+/*******************************************************************************
+ * This data structure holds the information copied by BL31 from BL2 to pass
+ * control to the non-trusted software images. A per-cpu entry was created so
+ * that the same structure could be used in the warm boot path, but that is not
+ * the case right now. Persisting with this approach for the time being.
+ * TODO: Can this be moved out of device memory?
+ ******************************************************************************/
+el_change_info ns_entry_info[PLATFORM_CORE_COUNT]
+__attribute__ ((aligned(PLATFORM_CACHE_LINE_SIZE),
+               section("tzfw_coherent_mem"))) = {0};
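+/*
+ * The array above is kept in the coherent memory section, presumably so that
+ * it can be accessed while the data caches are disabled, e.g. early in the
+ * warm boot path.
+ */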
+
+/* Data structure which holds the extents of the trusted SRAM for BL31 */
+static meminfo bl31_tzram_layout
+__attribute__ ((aligned(PLATFORM_CACHE_LINE_SIZE),
+               section("tzfw_coherent_mem"))) = {0};
+
+meminfo bl31_get_sec_mem_layout(void)
+{
+       return bl31_tzram_layout;
+}
+
+/*******************************************************************************
+ * Return information about passing control to the non-trusted software images
+ * to common code. TODO: In the initial architecture, the image after BL31
+ * will always run in the non-secure state. In the final architecture there
+ * will be a series of images and this function will need enhancing then.
+ ******************************************************************************/
+el_change_info *bl31_get_next_image_info(unsigned long mpidr)
+{
+       return &ns_entry_info[platform_get_core_pos(mpidr)];
+}
+
+/*******************************************************************************
+ * Perform any BL31 specific platform actions. Here we copy parameters passed
+ * by the calling EL (S-EL1 in BL2 & S-EL3 in BL1) before they are lost
+ * (potentially). This is done before the MMU is initialized so that the memory
+ * layout can be used while creating page tables.
+ ******************************************************************************/
+void bl31_early_platform_setup(meminfo *mem_layout,
+                              void *data,
+                              unsigned long mpidr)
+{
+       el_change_info *image_info = (el_change_info *) data;
+       unsigned int lin_index = platform_get_core_pos(mpidr);
+
+       /* Setup the BL31 memory layout */
+       bl31_tzram_layout.total_base = mem_layout->total_base;
+       bl31_tzram_layout.total_size = mem_layout->total_size;
+       bl31_tzram_layout.free_base = mem_layout->free_base;
+       bl31_tzram_layout.free_size = mem_layout->free_size;
+       bl31_tzram_layout.attr = mem_layout->attr;
+       bl31_tzram_layout.next = 0;
+
+       /* Save information about jumping into the NS world */
+       ns_entry_info[lin_index].entrypoint = image_info->entrypoint;
+       ns_entry_info[lin_index].spsr = image_info->spsr;
+       ns_entry_info[lin_index].args = image_info->args;
+       ns_entry_info[lin_index].security_state = image_info->security_state;
+       ns_entry_info[lin_index].next = image_info->next;
+
+       /* Initialize the platform config for future decision making */
+       platform_config_setup();
+}
+
+/*******************************************************************************
+ * Initialize the GIC, configure the CLCD, allow access to the system counter
+ * timer module, initialize the power controller and discover the platform
+ * topology.
+ ******************************************************************************/
+void bl31_platform_setup()
+{
+       unsigned int reg_val;
+
+        /* Initialize the gic cpu and distributor interfaces */
+        gic_setup();
+
+       /*
+        * TODO: Configure the CLCD before handing control to
+        * linux. Need to see if a separate driver is needed
+        * instead.
+        */
+       mmio_write_32(VE_SYSREGS_BASE + V2M_SYS_CFGDATA, 0);
+       mmio_write_32(VE_SYSREGS_BASE + V2M_SYS_CFGCTRL,
+                     (1ull << 31) | (1 << 30) | (7 << 20) | (0 << 16));
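+       /*
+        * The write above is assumed to decode as a V2M config transaction:
+        * bit[31] starts the transfer, bit[30] marks it as a write, bits[25:20]
+        * carry the function (7, presumably the video output mux) and
+        * bits[17:16] the site (0), with the payload taken from SYS_CFGDATA.
+        */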
+
+       /* Allow access to the System counter timer module */
+       reg_val = (1 << CNTACR_RPCT_SHIFT) | (1 << CNTACR_RVCT_SHIFT);
+       reg_val |= (1 << CNTACR_RFRQ_SHIFT) | (1 << CNTACR_RVOFF_SHIFT);
+       reg_val |= (1 << CNTACR_RWVT_SHIFT) | (1 << CNTACR_RWPT_SHIFT);
+       mmio_write_32(SYS_TIMCTL_BASE + CNTACR_BASE(0), reg_val);
+       mmio_write_32(SYS_TIMCTL_BASE + CNTACR_BASE(1), reg_val);
+
+       reg_val = (1 << CNTNSAR_NS_SHIFT(0)) | (1 << CNTNSAR_NS_SHIFT(1));
+       mmio_write_32(SYS_TIMCTL_BASE + CNTNSAR, reg_val);
+
+       /* Initialize the power controller */
+       fvp_pwrc_setup();
+
+        /* Topologies are best known to the platform. */
+       plat_setup_topology();
+}
+
+/*******************************************************************************
+ * Perform the very early platform specific architectural setup here. At the
+ * moment this only initializes the MMU in a quick and dirty way.
+ ******************************************************************************/
+void bl31_plat_arch_setup()
+{
+       unsigned long sctlr;
+
+       /* Enable instruction cache. */
+       sctlr = read_sctlr();
+       sctlr |= SCTLR_I_BIT;
+       write_sctlr(sctlr);
+
+       write_vbar((unsigned long) runtime_exceptions);
+       configure_mmu(&bl31_tzram_layout,
+                     (unsigned long) &BL31_RO_BASE,
+                     (unsigned long) &BL31_STACKS_BASE,
+                     (unsigned long) &BL31_COHERENT_RAM_BASE,
+                     (unsigned long) &BL31_RW_BASE);
+}
+
+/*******************************************************************************
+ * TODO: Move GIC setup to a separate file in case it is needed by other BL
+ * stages or ELs
+ * TODO: Revisit if priorities are being set such that no non-secure interrupt
+ * can have a higher priority than a secure one as recommended in the GICv2 spec
+ *******************************************************************************/
+
+/*******************************************************************************
+ * This function does some minimal GICv3 configuration. The Firmware itself does
+ * not fully support GICv3 at this time and relies on GICv2 emulation as
+ * provided by GICv3. This function allows software (like Linux) in later stages
+ * to use full GICv3 features.
+ *******************************************************************************/
+void gicv3_cpuif_setup(void)
+{
+       unsigned int scr_val, val, base;
+
+       /*
+        * When CPUs come out of reset they have their GICR_WAKER.ProcessorSleep
+        * bit set. In order to allow interrupts to get routed to the CPU we
+        * need to clear this bit if set and wait for GICR_WAKER.ChildrenAsleep
+        * to clear (GICv3 Architecture specification 5.4.23).
+        * GICR_WAKER is NOT banked per CPU, so compute the correct
+        * Redistributor base address for this CPU.
+        *
+        * TODO:
+        * For GICv4 we also need to adjust the Base address based on
+        * GICR_TYPER.VLPIS
+        */
+       base = BASE_GICR_BASE +
+               (platform_get_core_pos(read_mpidr()) << GICR_PCPUBASE_SHIFT);
+       val = gicr_read_waker(base);
+
+       val &= ~WAKER_PS;
+       gicr_write_waker(base, val);
+       dsb();
+
+       /* We need to wait for ChildrenAsleep to clear. */
+       val = gicr_read_waker(base);
+       while (val & WAKER_CA) {
+               val = gicr_read_waker(base);
+       }
+
+       /*
+        * We need to set SCR_EL3.NS in order to see GICv3 non-secure state.
+        * Restore SCR_EL3.NS again before exit.
+        */
+       scr_val = read_scr();
+       write_scr(scr_val | SCR_NS_BIT);
+
+       /*
+        * By default EL2 and NS-EL1 software should be able to enable GICv3
+        * System register access without any configuration at EL3. But it turns
+        * out that GICC PMR as set in GICv2 mode does not affect GICv3 mode. So
+        * we need to set it here again. In order to do that we need to enable
+        * register access. We leave it enabled as it should be fine and might
+        * prevent problems with later software trying to access GIC System
+        * Registers.
+        */
+       val = read_icc_sre_el3();
+       write_icc_sre_el3(val | ICC_SRE_EN | ICC_SRE_SRE);
+
+       val = read_icc_sre_el2();
+       write_icc_sre_el2(val | ICC_SRE_EN | ICC_SRE_SRE);
+
+       write_icc_pmr_el1(MAX_PRI_VAL);
+
+       /* Restore SCR_EL3 */
+       write_scr(scr_val);
+}
+
+/*******************************************************************************
+ * This function does some minimal GICv3 configuration when cores go
+ * down.
+ *******************************************************************************/
+void gicv3_cpuif_deactivate(void)
+{
+       unsigned int val, base;
+
+       /*
+        * When taking CPUs down we need to set GICR_WAKER.ProcessorSleep and
+        * wait for GICR_WAKER.ChildrenAsleep to get set.
+        * (GICv3 Architecture specification 5.4.23).
+        * GICR_WAKER is NOT banked per CPU, so compute the correct
+        * Redistributor base address for this CPU.
+        *
+        * TODO:
+        * For GICv4 we also need to adjust the Base address based on
+        * GICR_TYPER.VLPIS
+        */
+       base = BASE_GICR_BASE +
+               (platform_get_core_pos(read_mpidr()) << GICR_PCPUBASE_SHIFT);
+       val = gicr_read_waker(base);
+       val |= WAKER_PS;
+       gicr_write_waker(base, val);
+       dsb();
+
+       /* We need to wait for ChildrenAsleep to get set. */
+       val = gicr_read_waker(base);
+       while ((val & WAKER_CA) == 0) {
+               val = gicr_read_waker(base);
+       }
+}
+
+
+/*******************************************************************************
+ * Enable secure interrupts and use FIQs to route them. Disable legacy bypass
+ * and set the priority mask register to allow all interrupts to trickle in.
+ ******************************************************************************/
+void gic_cpuif_setup(unsigned int gicc_base)
+{
+       unsigned int val;
+
+       val = gicc_read_iidr(gicc_base);
+
+       /*
+        * If this is a GICv3 implementation we need to do a bit of additional
+        * setup. We want to keep the default GICv2 behaviour but allow the
+        * next stage to enable full GICv3 features.
+        */
+       if (((val >> GICC_IIDR_ARCH_SHIFT) & GICC_IIDR_ARCH_MASK) >= 3) {
+               gicv3_cpuif_setup();
+       }
+
+       val = ENABLE_GRP0 | FIQ_EN | FIQ_BYP_DIS_GRP0;
+       val |= IRQ_BYP_DIS_GRP0 | FIQ_BYP_DIS_GRP1 | IRQ_BYP_DIS_GRP1;
+
+       gicc_write_pmr(gicc_base, MAX_PRI_VAL);
+       gicc_write_ctlr(gicc_base, val);
+}
+
+/*******************************************************************************
+ * Place the cpu interface in a state where it can never make a cpu exit wfi
+ * as a result of an asserted interrupt. This is critical for powering down a cpu.
+ ******************************************************************************/
+void gic_cpuif_deactivate(unsigned int gicc_base)
+{
+       unsigned int val;
+
+       /* Disable secure, non-secure interrupts and disable their bypass */
+       val = gicc_read_ctlr(gicc_base);
+       val &= ~(ENABLE_GRP0 | ENABLE_GRP1);
+       val |= FIQ_BYP_DIS_GRP1 | FIQ_BYP_DIS_GRP0;
+       val |= IRQ_BYP_DIS_GRP0 | IRQ_BYP_DIS_GRP1;
+       gicc_write_ctlr(gicc_base, val);
+
+       val = gicc_read_iidr(gicc_base);
+
+       /*
+        * If this is a GICv3 implementation we need to do a bit of additional
+        * setup: make sure the RDIST is put to sleep.
+        */
+       if (((val >> GICC_IIDR_ARCH_SHIFT) & GICC_IIDR_ARCH_MASK) >= 3) {
+               gicv3_cpuif_deactivate();
+       }
+}
+
+/*******************************************************************************
+ * Per cpu gic distributor setup which will be done by all cpus after a cold
+ * boot/hotplug. This marks out the secure interrupts & enables them.
+ ******************************************************************************/
+void gic_pcpu_distif_setup(unsigned int gicd_base)
+{
+       gicd_write_igroupr(gicd_base, 0, ~0);
+
+       gicd_clr_igroupr(gicd_base, IRQ_SEC_PHY_TIMER);
+       gicd_clr_igroupr(gicd_base, IRQ_SEC_SGI_0);
+       gicd_clr_igroupr(gicd_base, IRQ_SEC_SGI_1);
+       gicd_clr_igroupr(gicd_base, IRQ_SEC_SGI_2);
+       gicd_clr_igroupr(gicd_base, IRQ_SEC_SGI_3);
+       gicd_clr_igroupr(gicd_base, IRQ_SEC_SGI_4);
+       gicd_clr_igroupr(gicd_base, IRQ_SEC_SGI_5);
+       gicd_clr_igroupr(gicd_base, IRQ_SEC_SGI_6);
+       gicd_clr_igroupr(gicd_base, IRQ_SEC_SGI_7);
+
+       gicd_set_ipriorityr(gicd_base, IRQ_SEC_PHY_TIMER, MAX_PRI_VAL);
+       gicd_set_ipriorityr(gicd_base, IRQ_SEC_SGI_0, MAX_PRI_VAL);
+       gicd_set_ipriorityr(gicd_base, IRQ_SEC_SGI_1, MAX_PRI_VAL);
+       gicd_set_ipriorityr(gicd_base, IRQ_SEC_SGI_2, MAX_PRI_VAL);
+       gicd_set_ipriorityr(gicd_base, IRQ_SEC_SGI_3, MAX_PRI_VAL);
+       gicd_set_ipriorityr(gicd_base, IRQ_SEC_SGI_4, MAX_PRI_VAL);
+       gicd_set_ipriorityr(gicd_base, IRQ_SEC_SGI_5, MAX_PRI_VAL);
+       gicd_set_ipriorityr(gicd_base, IRQ_SEC_SGI_6, MAX_PRI_VAL);
+       gicd_set_ipriorityr(gicd_base, IRQ_SEC_SGI_7, MAX_PRI_VAL);
+
+       gicd_set_isenabler(gicd_base, IRQ_SEC_PHY_TIMER);
+       gicd_set_isenabler(gicd_base, IRQ_SEC_SGI_0);
+       gicd_set_isenabler(gicd_base, IRQ_SEC_SGI_1);
+       gicd_set_isenabler(gicd_base, IRQ_SEC_SGI_2);
+       gicd_set_isenabler(gicd_base, IRQ_SEC_SGI_3);
+       gicd_set_isenabler(gicd_base, IRQ_SEC_SGI_4);
+       gicd_set_isenabler(gicd_base, IRQ_SEC_SGI_5);
+       gicd_set_isenabler(gicd_base, IRQ_SEC_SGI_6);
+       gicd_set_isenabler(gicd_base, IRQ_SEC_SGI_7);
+}
+
+/*******************************************************************************
+ * Global gic distributor setup which will be done by the primary cpu after a
+ * cold boot. It marks out the secure SPIs, PPIs & SGIs and enables them. It
+ * then enables the secure GIC distributor interface.
+ ******************************************************************************/
+void gic_distif_setup(unsigned int gicd_base)
+{
+       unsigned int ctr, num_ints, ctlr;
+
+       /* Disable the distributor before going further */
+       ctlr = gicd_read_ctlr(gicd_base);
+       ctlr &= ~(ENABLE_GRP0 | ENABLE_GRP1);
+       gicd_write_ctlr(gicd_base, ctlr);
+
+       /*
+        * Mark all interrupts as non-secure to begin with. The number of
+        * IGROUPR registers to program is the IT_LINES_NO field of GICD_TYPER
+        * plus one.
+        */
+       num_ints = gicd_read_typer(gicd_base) & IT_LINES_NO_MASK;
+       num_ints++;
+       for (ctr = 0; ctr < num_ints; ctr++)
+               gicd_write_igroupr(gicd_base, ctr << IGROUPR_SHIFT, ~0);
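+       /*
+        * Writing ~0 above marks all 32 interrupts covered by each IGROUPR
+        * register as Group 1 (non-secure); the secure interrupts are moved
+        * back to Group 0 individually below.
+        */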
+
+       /* Configure secure interrupts now */
+       gicd_clr_igroupr(gicd_base, IRQ_TZ_WDOG);
+       gicd_set_ipriorityr(gicd_base, IRQ_TZ_WDOG, MAX_PRI_VAL);
+       gicd_set_itargetsr(gicd_base, IRQ_TZ_WDOG,
+                          platform_get_core_pos(read_mpidr()));
+       gicd_set_isenabler(gicd_base, IRQ_TZ_WDOG);
+       gic_pcpu_distif_setup(gicd_base);
+
+       gicd_write_ctlr(gicd_base, ctlr | ENABLE_GRP0);
+}
+
+void gic_setup(void)
+{
+       unsigned int gicd_base, gicc_base;
+
+       gicd_base = platform_get_cfgvar(CONFIG_GICD_ADDR);
+       gicc_base = platform_get_cfgvar(CONFIG_GICC_ADDR);
+
+       gic_cpuif_setup(gicc_base);
+       gic_distif_setup(gicd_base);
+}
diff --git a/plat/fvp/fvp_pm.c b/plat/fvp/fvp_pm.c
new file mode 100644 (file)
index 0000000..9621319
--- /dev/null
@@ -0,0 +1,372 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <assert.h>
+#include <arch_helpers.h>
+#include <console.h>
+#include <platform.h>
+#include <bl_common.h>
+#include <bl31.h>
+#include <bakery_lock.h>
+#include <cci400.h>
+#include <gic.h>
+#include <fvp_pwrc.h>
+/* Only included for error codes */
+#include <psci.h>
+
+/*******************************************************************************
+ * FVP handler called when an affinity instance is about to be turned on. The
+ * level and mpidr determine the affinity instance.
+ ******************************************************************************/
+int fvp_affinst_on(unsigned long mpidr,
+                  unsigned long sec_entrypoint,
+                  unsigned long ns_entrypoint,
+                  unsigned int afflvl,
+                  unsigned int state)
+{
+       int rc = PSCI_E_SUCCESS;
+       unsigned long linear_id;
+       mailbox *fvp_mboxes;
+       unsigned int psysr;
+
+       if (ns_entrypoint < DRAM_BASE) {
+               rc = PSCI_E_INVALID_PARAMS;
+               goto exit;
+       }
+
+       /*
+        * It's possible to turn on only affinity level 0 i.e. a cpu
+        * on the FVP. Ignore any other affinity level.
+        */
+       if (afflvl != MPIDR_AFFLVL0)
+               goto exit;
+
+       /*
+        * Ensure that we do not cancel an inflight power off request
+        * for the target cpu. That would leave it in a zombie wfi.
+        * Wait for it to power off, program the jump address for the
+        * target cpu and then program the power controller to turn
+        * that cpu on
+        */
+       do {
+               psysr = fvp_pwrc_read_psysr(mpidr);
+       } while (psysr & PSYSR_AFF_L0);
+
+       linear_id = platform_get_core_pos(mpidr);
+       fvp_mboxes = (mailbox *) (TZDRAM_BASE + MBOX_OFF);
+       fvp_mboxes[linear_id].value = sec_entrypoint;
+       flush_dcache_range((unsigned long) &fvp_mboxes[linear_id],
+                          sizeof(unsigned long));
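+       /*
+        * The flush above ensures that the target cpu, which starts executing
+        * with its caches disabled, reads the entrypoint just written to its
+        * mailbox from main memory.
+        */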
+
+       fvp_pwrc_write_pponr(mpidr);
+
+exit:
+       return rc;
+}
+
+/*******************************************************************************
+ * FVP handler called when an affinity instance is about to be turned off. The
+ * level and mpidr determine the affinity instance. The 'state' arg. allows the
+ * platform to decide whether the cluster is being turned off and take apt
+ * actions.
+ *
+ * CAUTION: This function is called with coherent stacks so that caches can be
+ * turned off, flushed and coherency disabled. There is no guarantee that caches
+ * will remain turned on across calls to this function as each affinity level is
+ * dealt with. So do not write & read global variables across calls. It would
+ * be wise to flush any write to a global variable to prevent unpredictable results.
+ ******************************************************************************/
+int fvp_affinst_off(unsigned long mpidr,
+                   unsigned int afflvl,
+                   unsigned int state)
+{
+       int rc = PSCI_E_SUCCESS;
+       unsigned int gicc_base, ectlr;
+       unsigned long cpu_setup;
+
+       switch (afflvl) {
+       case MPIDR_AFFLVL1:
+               if (state == PSCI_STATE_OFF) {
+                       /*
+                        * Disable coherency if this cluster is to be
+                        * turned off
+                        */
+                       cci_disable_coherency(mpidr);
+
+                       /*
+                        * Program the power controller to turn the
+                        * cluster off
+                        */
+                       fvp_pwrc_write_pcoffr(mpidr);
+
+               }
+               break;
+
+       case MPIDR_AFFLVL0:
+               if (state == PSCI_STATE_OFF) {
+
+                       /*
+                        * Take this cpu out of intra-cluster coherency if
+                        * the FVP flavour supports the SMP bit.
+                        */
+                       cpu_setup = platform_get_cfgvar(CONFIG_CPU_SETUP);
+                       if (cpu_setup) {
+                               ectlr = read_cpuectlr();
+                               ectlr &= ~CPUECTLR_SMP_BIT;
+                               write_cpuectlr(ectlr);
+                       }
+
+                       /*
+                        * Prevent interrupts from spuriously waking up
+                        * this cpu
+                        */
+                       gicc_base = platform_get_cfgvar(CONFIG_GICC_ADDR);
+                       gic_cpuif_deactivate(gicc_base);
+
+                       /*
+                        * Program the power controller to power this
+                        * cpu off
+                        */
+                       fvp_pwrc_write_ppoffr(mpidr);
+               }
+               break;
+
+       default:
+               assert(0);
+       }
+
+       return rc;
+}
+
+/*******************************************************************************
+ * FVP handler called when an affinity instance is about to be suspended. The
+ * level and mpidr determine the affinity instance. The 'state' arg. allows the
+ * platform to decide whether the cluster is being turned off and take apt
+ * actions.
+ *
+ * CAUTION: This function is called with coherent stacks so that caches can be
+ * turned off, flushed and coherency disabled. There is no guarantee that caches
+ * will remain turned on across calls to this function as each affinity level is
+ * dealt with. So do not write & read global variables across calls. It would
+ * be wise to flush any write to a global variable to prevent unpredictable results.
+ ******************************************************************************/
+int fvp_affinst_suspend(unsigned long mpidr,
+                       unsigned long sec_entrypoint,
+                       unsigned long ns_entrypoint,
+                       unsigned int afflvl,
+                       unsigned int state)
+{
+       int rc = PSCI_E_SUCCESS;
+       unsigned int gicc_base, ectlr;
+       unsigned long cpu_setup, linear_id;
+       mailbox *fvp_mboxes;
+
+       /* Cannot allow NS world to execute trusted firmware code */
+       if (ns_entrypoint < DRAM_BASE) {
+               rc = PSCI_E_INVALID_PARAMS;
+               goto exit;
+       }
+
+       switch (afflvl) {
+       case MPIDR_AFFLVL1:
+               if (state == PSCI_STATE_OFF) {
+                       /*
+                        * Disable coherency if this cluster is to be
+                        * turned off
+                        */
+                       cci_disable_coherency(mpidr);
+
+                       /*
+                        * Program the power controller to turn the
+                        * cluster off
+                        */
+                       fvp_pwrc_write_pcoffr(mpidr);
+
+               }
+               break;
+
+       case MPIDR_AFFLVL0:
+               if (state == PSCI_STATE_OFF) {
+                       /*
+                        * Take this cpu out of intra-cluster coherency if
+                        * the FVP flavour supports the SMP bit.
+                        */
+                       cpu_setup = platform_get_cfgvar(CONFIG_CPU_SETUP);
+                       if (cpu_setup) {
+                               ectlr = read_cpuectlr();
+                               ectlr &= ~CPUECTLR_SMP_BIT;
+                               write_cpuectlr(ectlr);
+                       }
+
+                       /* Program the jump address for the target cpu */
+                       linear_id = platform_get_core_pos(mpidr);
+                       fvp_mboxes = (mailbox *) (TZDRAM_BASE + MBOX_OFF);
+                       fvp_mboxes[linear_id].value = sec_entrypoint;
+                       flush_dcache_range((unsigned long) &fvp_mboxes[linear_id],
+                                          sizeof(unsigned long));
+
+                       /*
+                        * Prevent interrupts from spuriously waking up
+                        * this cpu
+                        */
+                       gicc_base = platform_get_cfgvar(CONFIG_GICC_ADDR);
+                       gic_cpuif_deactivate(gicc_base);
+
+                       /*
+                        * Program the power controller to power this
+                        * cpu off and enable wakeup interrupts.
+                        */
+                       fvp_pwrc_write_pwkupr(mpidr);
+                       fvp_pwrc_write_ppoffr(mpidr);
+               }
+               break;
+
+       default:
+               assert(0);
+       }
+
+exit:
+       return rc;
+}
+
+/*******************************************************************************
+ * FVP handler called when an affinity instance has just been powered on after
+ * being turned off earlier. The level and mpidr determine the affinity
+ * instance. The 'state' arg. allows the platform to decide whether the cluster
+ * was turned off prior to wakeup and do what's necessary to set it up
+ * correctly.
+ ******************************************************************************/
+int fvp_affinst_on_finish(unsigned long mpidr,
+                         unsigned int afflvl,
+                         unsigned int state)
+{
+       int rc = PSCI_E_SUCCESS;
+       unsigned long linear_id, cpu_setup;
+       mailbox *fvp_mboxes;
+       unsigned int gicd_base, gicc_base, reg_val, ectlr;
+
+       switch (afflvl) {
+
+       case MPIDR_AFFLVL1:
+               /* Enable coherency if this cluster was off */
+               if (state == PSCI_STATE_OFF)
+                       cci_enable_coherency(mpidr);
+               break;
+
+       case MPIDR_AFFLVL0:
+               /*
+                * Ignore the state passed for a cpu. It could only have
+                * been off if we are here.
+                */
+
+               /*
+                * Turn on intra-cluster coherency if the FVP flavour supports
+                * it.
+                */
+               cpu_setup = platform_get_cfgvar(CONFIG_CPU_SETUP);
+               if (cpu_setup) {
+                       ectlr = read_cpuectlr();
+                       ectlr |= CPUECTLR_SMP_BIT;
+                       write_cpuectlr(ectlr);
+               }
+
+               /* Zero the jump address in the mailbox for this cpu */
+               fvp_mboxes = (mailbox *) (TZDRAM_BASE + MBOX_OFF);
+               linear_id = platform_get_core_pos(mpidr);
+               fvp_mboxes[linear_id].value = 0;
+               flush_dcache_range((unsigned long) &fvp_mboxes[linear_id],
+                                  sizeof(unsigned long));
+
+               gicd_base = platform_get_cfgvar(CONFIG_GICD_ADDR);
+               gicc_base = platform_get_cfgvar(CONFIG_GICC_ADDR);
+
+               /* Enable the gic cpu interface */
+               gic_cpuif_setup(gicc_base);
+
+               /* TODO: This setup is needed only after a cold boot */
+               gic_pcpu_distif_setup(gicd_base);
+
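+               /*
+                * The counter timer module programming below mirrors the cold
+                * boot setup done in bl31_platform_setup().
+                */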
+               /* Allow access to the System counter timer module */
+               reg_val = (1 << CNTACR_RPCT_SHIFT) | (1 << CNTACR_RVCT_SHIFT);
+               reg_val |= (1 << CNTACR_RFRQ_SHIFT) | (1 << CNTACR_RVOFF_SHIFT);
+               reg_val |= (1 << CNTACR_RWVT_SHIFT) | (1 << CNTACR_RWPT_SHIFT);
+               mmio_write_32(SYS_TIMCTL_BASE + CNTACR_BASE(0), reg_val);
+               mmio_write_32(SYS_TIMCTL_BASE + CNTACR_BASE(1), reg_val);
+
+               reg_val = (1 << CNTNSAR_NS_SHIFT(0)) |
+                       (1 << CNTNSAR_NS_SHIFT(1));
+               mmio_write_32(SYS_TIMCTL_BASE + CNTNSAR, reg_val);
+
+               break;
+
+       default:
+               assert(0);
+       }
+
+       return rc;
+}
+
+/*******************************************************************************
+ * FVP handler called when an affinity instance has just been powered on after
+ * having been suspended earlier. The level and mpidr determine the affinity
+ * instance.
+ * TODO: At the moment we reuse the on finisher and reinitialize the secure
+ * context. Need to implement a separate suspend finisher.
+ ******************************************************************************/
+int fvp_affinst_suspend_finish(unsigned long mpidr,
+                              unsigned int afflvl,
+                              unsigned int state)
+{
+       return fvp_affinst_on_finish(mpidr, afflvl, state);
+}
+
+
+/*******************************************************************************
+ * Export the platform handlers to enable psci to invoke them
+ ******************************************************************************/
+static plat_pm_ops fvp_plat_pm_ops = {
+       0,
+       fvp_affinst_on,
+       fvp_affinst_off,
+       fvp_affinst_suspend,
+       fvp_affinst_on_finish,
+       fvp_affinst_suspend_finish,
+};
+
+/*******************************************************************************
+ * Export the platform specific power ops & initialize the fvp power controller
+ ******************************************************************************/
+int platform_setup_pm(plat_pm_ops **plat_ops)
+{
+       *plat_ops = &fvp_plat_pm_ops;
+       return 0;
+}
diff --git a/plat/fvp/fvp_topology.c b/plat/fvp/fvp_topology.c
new file mode 100644 (file)
index 0000000..20f3324
--- /dev/null
@@ -0,0 +1,241 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <assert.h>
+#include <platform.h>
+#include <fvp_pwrc.h>
+/* TODO: Reusing psci error codes & state information. Get our own! */
+#include <psci.h>
+
+/* We treat '255' as an invalid affinity instance */
+#define AFFINST_INVAL  0xff
+
+/*******************************************************************************
+ * We support 3 flavours of the FVP: Foundation, Base AEM & Base Cortex. Each
+ * flavour has a different topology. The common bit is that there can be a max.
+ * of 2 clusters (affinity 1) and 4 cpus (affinity 0) per cluster. So we define
+ * a tree-like data structure which caters to these maximum bounds. It simply
+ * marks the absent affinity level instances as PSCI_AFF_ABSENT e.g. there is no
+ * cluster 1 on the Foundation FVP. The 'data' field is currently unused.
+ ******************************************************************************/
+typedef struct {
+       unsigned char sibling;
+       unsigned char child;
+       unsigned char state;
+       unsigned int data;
+} affinity_info;
+
+/*******************************************************************************
+ * The following two data structures store the topology tree for the fvp. There
+ * is a separate array for each affinity level i.e. cpus and clusters. The child
+ * and sibling references allow traversal inside and in between the two arrays.
+ ******************************************************************************/
+static affinity_info fvp_aff1_topology_map[PLATFORM_CLUSTER_COUNT];
+static affinity_info fvp_aff0_topology_map[PLATFORM_CORE_COUNT];
+
+/* Simple global variable to safeguard us from stupidity */
+static unsigned int topology_setup_done;
+
+/*******************************************************************************
+ * This function implements a part of the critical interface between the psci
+ * generic layer and the platform to allow the former to detect the platform
+ * topology. psci queries the platform to determine how many affinity instances
+ * are present at a particular level for a given mpidr e.g. consider a dual
+ * cluster platform where each cluster has 4 cpus. A call to this function with
+ * (0, 0x100) will return the number of cpus implemented under cluster 1 i.e. 4.
+ * Similarly a call with (1, 0x100) will return 2 i.e. the number of clusters.
+ * This is because we are effectively asking how many affinity level 1 instances
+ * are implemented under affinity level 2 instance 0.
+ ******************************************************************************/
+unsigned int plat_get_aff_count(unsigned int aff_lvl,
+                               unsigned long mpidr)
+{
+       unsigned int aff_count = 1, ctr;
+       unsigned char parent_aff_id;
+
+       assert(topology_setup_done == 1);
+
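+       /* aff_count starts at 1; the sibling walks below only count additional instances */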
+       switch (aff_lvl) {
+       case 3:
+       case 2:
+               /*
+                * Assert if the parent affinity instance is not 0.
+                * This also takes care of level 3 in an obfuscated way
+                */
+               parent_aff_id = (mpidr >> MPIDR_AFF3_SHIFT) & MPIDR_AFFLVL_MASK;
+               assert(parent_aff_id == 0);
+
+               /*
+                * Report that we implement a single instance of
+                * affinity levels 2 & 3 which are AFF_ABSENT
+                */
+               break;
+       case 1:
+               /* Assert if the parent affinity instance is not 0. */
+               parent_aff_id = (mpidr >> MPIDR_AFF2_SHIFT) & MPIDR_AFFLVL_MASK;
+               assert(parent_aff_id == 0);
+
+               /* Fetch the starting index in the aff1 array */
+               for (ctr = 0;
+                    fvp_aff1_topology_map[ctr].sibling != AFFINST_INVAL;
+                    ctr = fvp_aff1_topology_map[ctr].sibling) {
+                       aff_count++;
+               }
+
+               break;
+       case 0:
+               /* Assert if the cluster id is anything apart from 0 or 1 */
+               parent_aff_id = (mpidr >> MPIDR_AFF1_SHIFT) & MPIDR_AFFLVL_MASK;
+               assert(parent_aff_id < PLATFORM_CLUSTER_COUNT);
+
+               /* Fetch the starting index in the aff0 array */
+               for (ctr = fvp_aff1_topology_map[parent_aff_id].child;
+                    fvp_aff0_topology_map[ctr].sibling != AFFINST_INVAL;
+                    ctr = fvp_aff0_topology_map[ctr].sibling) {
+                       aff_count++;
+               }
+
+               break;
+       default:
+               assert(0);
+       }
+
+       return aff_count;
+}
+
+/*******************************************************************************
+ * This function implements a part of the critical interface between the psci
+ * generic layer and the platform to allow the former to detect the state of an
+ * affinity instance in the platform topology. psci queries the platform to
+ * determine whether an affinity instance is present or absent. This caters for
+ * topologies where an intermediate affinity level instance is missing e.g.
+ * consider a platform which implements a single cluster with 4 cpus and there
+ * is another cpu sitting directly on the interconnect along with the cluster.
+ * The mpidrs of the cluster would range from 0x0-0x3. The mpidr of the single
+ * cpu would be 0x100 to highlight that it does not belong to cluster 0. Cluster
+ * 1 is however missing but needs to be accounted for to reach this single cpu in
+ * the topology tree. Hence it will be marked as PSCI_AFF_ABSENT. This is not
+ * applicable to the FVP but depicted as an example.
+ ******************************************************************************/
+unsigned int plat_get_aff_state(unsigned int aff_lvl,
+                               unsigned long mpidr)
+{
+       unsigned int aff_state = PSCI_AFF_ABSENT, idx;
+       idx = (mpidr >> MPIDR_AFF1_SHIFT) & MPIDR_AFFLVL_MASK;
+
+       assert(topology_setup_done == 1);
+
+       switch (aff_lvl) {
+       case 3:
+       case 2:
+               /* Report affinity levels 2 & 3 as absent */
+               break;
+       case 1:
+               aff_state = fvp_aff1_topology_map[idx].state;
+               break;
+       case 0:
+               /*
+                * First get start index of the aff0 in its array & then add
+                * to it the affinity id that we want the state of
+                */
+               idx = fvp_aff1_topology_map[idx].child;
+               idx += (mpidr >> MPIDR_AFF0_SHIFT) & MPIDR_AFFLVL_MASK;
+               aff_state = fvp_aff0_topology_map[idx].state;
+               break;
+       default:
+               assert(0);
+       }
+
+       return aff_state;
+}
+
+/*******************************************************************************
+ * Handy optimization to prevent the psci implementation from traversing through
+ * affinity levels which are not present while detecting the platform topology.
+ ******************************************************************************/
+int plat_get_max_afflvl()
+{
+       return MPIDR_AFFLVL1;
+}
+
+/*******************************************************************************
+ * This function populates the FVP specific topology information depending upon
+ * the FVP flavour it is running on. We construct all the mpidrs we can handle
+ * and rely on the PWRC.PSYSR to flag absent cpus when their status is queried.
+ ******************************************************************************/
+int plat_setup_topology()
+{
+       unsigned char aff0, aff1, aff_state, aff0_offset = 0;
+       unsigned long mpidr;
+
+       topology_setup_done = 0;
+
+       for (aff1 = 0; aff1 < PLATFORM_CLUSTER_COUNT; aff1++) {
+
+               fvp_aff1_topology_map[aff1].child = aff0_offset;
+               fvp_aff1_topology_map[aff1].sibling = aff1 + 1;
+
+               for (aff0 = 0; aff0 < PLATFORM_MAX_CPUS_PER_CLUSTER; aff0++) {
+
+                       mpidr = aff1 << MPIDR_AFF1_SHIFT;
+                       mpidr |= aff0 << MPIDR_AFF0_SHIFT;
+
+                       if (fvp_pwrc_read_psysr(mpidr) != PSYSR_INVALID) {
+                               /*
+                                * Presence of even a single aff0 indicates
+                                * presence of parent aff1 on the FVP.
+                                */
+                               aff_state = PSCI_AFF_PRESENT;
+                               fvp_aff1_topology_map[aff1].state =
+                                       PSCI_AFF_PRESENT;
+                       } else {
+                               aff_state = PSCI_AFF_ABSENT;
+                       }
+
+                       fvp_aff0_topology_map[aff0_offset].child = AFFINST_INVAL;
+                       fvp_aff0_topology_map[aff0_offset].state = aff_state;
+                       fvp_aff0_topology_map[aff0_offset].sibling =
+                               aff0_offset + 1;
+
+                       /* Increment the absolute number of aff0s traversed */
+                       aff0_offset++;
+               }
+
+               /* Tie-off the last aff0 sibling to AFFINST_INVAL to avoid overflow */
+               fvp_aff0_topology_map[aff0_offset - 1].sibling = AFFINST_INVAL;
+       }
+
+       /* Tie-off the last aff1 sibling to AFFINST_INVAL to avoid overflow */
+       fvp_aff1_topology_map[aff1 - 1].sibling = AFFINST_INVAL;
+
+       topology_setup_done = 1;
+       return 0;
+}
diff --git a/plat/fvp/platform.h b/plat/fvp/platform.h
new file mode 100644 (file)
index 0000000..21a7912
--- /dev/null
@@ -0,0 +1,341 @@
+/*
+ * Copyright (c) 2013, ARM Limited. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ *
+ * Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * Neither the name of ARM nor the names of its contributors may be used
+ * to endorse or promote products derived from this software without specific
+ * prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __PLATFORM_H__
+#define __PLATFORM_H__
+
+#include <arch.h>
+#include <mmio.h>
+#include <psci.h>
+#include <bl_common.h>
+
+
+/*******************************************************************************
+ * Platform binary types for linking
+ ******************************************************************************/
+#define PLATFORM_LINKER_FORMAT          "elf64-littleaarch64"
+#define PLATFORM_LINKER_ARCH            aarch64
+
+/*******************************************************************************
+ * Generic platform constants
+ ******************************************************************************/
+#define PLATFORM_STACK_SIZE            0x800
+
+#define FIRMWARE_WELCOME_STR           "Booting trusted firmware boot loader stage 1\n\r"
+#define BL2_IMAGE_NAME                 "bl2.bin"
+#define BL31_IMAGE_NAME                        "bl31.bin"
+#define NS_IMAGE_OFFSET                        FLASH0_BASE
+
+#define PLATFORM_CACHE_LINE_SIZE       64
+#define PLATFORM_CLUSTER_COUNT         2ull
+#define PLATFORM_CLUSTER0_CORE_COUNT   4
+#define PLATFORM_CLUSTER1_CORE_COUNT   4
+#define PLATFORM_CORE_COUNT             (PLATFORM_CLUSTER1_CORE_COUNT + \
+                                        PLATFORM_CLUSTER0_CORE_COUNT)
+#define PLATFORM_MAX_CPUS_PER_CLUSTER  4
+#define PRIMARY_CPU                    0x0
+
+/* Constants for accessing platform configuration */
+#define CONFIG_GICD_ADDR               0
+#define CONFIG_GICC_ADDR               1
+#define CONFIG_GICH_ADDR               2
+#define CONFIG_GICV_ADDR               3
+#define CONFIG_MAX_AFF0                4
+#define CONFIG_MAX_AFF1                5
+/* Indicate whether the CPUECTLR SMP bit should be enabled. */
+#define CONFIG_CPU_SETUP               6
+#define CONFIG_BASE_MMAP               7
+#define CONFIG_LIMIT                   8
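These indices are intended for `platform_get_cfgvar()`, declared near the end of this header, which returns the value of the selected configuration variable for the platform the firmware is running on. A minimal usage sketch follows; the `example_gic_bases()` wrapper is hypothetical.

```c
#include <platform.h>

/* Hypothetical caller: look up the GIC distributor and cpu interface base
 * addresses selected for this platform configuration at runtime. */
static void example_gic_bases(unsigned long *gicd_base, unsigned long *gicc_base)
{
	*gicd_base = platform_get_cfgvar(CONFIG_GICD_ADDR);
	*gicc_base = platform_get_cfgvar(CONFIG_GICC_ADDR);
}
```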
+
+/*******************************************************************************
+ * Platform memory map related constants
+ ******************************************************************************/
+#define TZROM_BASE             0x00000000
+#define TZROM_SIZE             0x04000000
+
+#define TZRAM_BASE             0x04000000
+#define TZRAM_SIZE             0x40000
+
+#define FLASH0_BASE            0x08000000
+#define FLASH0_SIZE            TZROM_SIZE
+
+#define FLASH1_BASE            0x0c000000
+#define FLASH1_SIZE            0x04000000
+
+#define PSRAM_BASE             0x14000000
+#define PSRAM_SIZE             0x04000000
+
+#define VRAM_BASE              0x18000000
+#define VRAM_SIZE              0x02000000
+
+/* Aggregate of all devices in the first GB */
+#define DEVICE0_BASE           0x1a000000
+#define DEVICE0_SIZE           0x12200000
+
+#define DEVICE1_BASE           0x2f000000
+#define DEVICE1_SIZE           0x200000
+
+#define NSRAM_BASE             0x2e000000
+#define NSRAM_SIZE             0x10000
+
+/* Location of trusted dram on the base fvp */
+#define TZDRAM_BASE            0x06000000
+#define TZDRAM_SIZE            0x02000000
+#define MBOX_OFF               0x1000
+#define AFFMAP_OFF             0x1200
+
+#define DRAM_BASE              0x80000000ull
+#define DRAM_SIZE              0x80000000ull
+
+#define PCIE_EXP_BASE          0x40000000
+#define TZRNG_BASE             0x7fe60000
+#define TZNVCTR_BASE           0x7fe70000
+#define TZROOTKEY_BASE         0x7fe80000
+
+/* Memory mapped Generic timer interfaces  */
+#define SYS_CNTCTL_BASE                0x2a430000
+#define SYS_CNTREAD_BASE       0x2a800000
+#define SYS_TIMCTL_BASE                0x2a810000
+
+/* Counter timer module offsets */
+#define CNTNSAR                        0x4
+#define CNTNSAR_NS_SHIFT(x)    (x)
+
+#define CNTACR_BASE(x)         (0x40 + ((x) << 2))
+#define CNTACR_RPCT_SHIFT      0x0
+#define CNTACR_RVCT_SHIFT      0x1
+#define CNTACR_RFRQ_SHIFT      0x2
+#define CNTACR_RVOFF_SHIFT     0x3
+#define CNTACR_RWVT_SHIFT      0x4
+#define CNTACR_RWPT_SHIFT      0x5
+
+/* V2M motherboard system registers & offsets */
+#define VE_SYSREGS_BASE                0x1c010000
+#define V2M_SYS_ID                     0x0
+#define V2M_SYS_LED                    0x8
+#define V2M_SYS_CFGDATA                0xa0
+#define V2M_SYS_CFGCTRL                0xa4
+
+/*
+ * V2M sysled bit definitions. The values written to this
+ * register are defined in arch.h & runtime_svc.h. Only
+ * used by the primary cpu to diagnose any cold boot issues.
+ *
+ * SYS_LED[0]   - Security state (S=0/NS=1)
+ * SYS_LED[2:1] - Exception Level (EL3-EL0)
+ * SYS_LED[7:3] - Exception Class (Sync/Async & origin)
+ *
+ */
+#define SYS_LED_SS_SHIFT               0x0
+#define SYS_LED_EL_SHIFT               0x1
+#define SYS_LED_EC_SHIFT               0x3
+
+#define SYS_LED_SS_MASK                0x1
+#define SYS_LED_EL_MASK                0x3
+#define SYS_LED_EC_MASK                0x1f
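To illustrate the SYS_LED encoding described above, a diagnostic value could be packed from these shifts and masks and written through the V2M sysreg block. This is only a sketch: `example_report_state()` and its parameters are placeholders, not the firmware's `plat_report_exception()` implementation.

```c
#include <mmio.h>
#include <platform.h>

/* Sketch: pack security state, exception level and exception class into the
 * SYS_LED layout described above and write it to the V2M sysreg block. */
static void example_report_state(unsigned int security_state,
				 unsigned int el, unsigned int ec)
{
	unsigned int led;

	led = ((security_state & SYS_LED_SS_MASK) << SYS_LED_SS_SHIFT) |
	      ((el & SYS_LED_EL_MASK) << SYS_LED_EL_SHIFT) |
	      ((ec & SYS_LED_EC_MASK) << SYS_LED_EC_SHIFT);

	mmio_write_32(VE_SYSREGS_BASE + V2M_SYS_LED, led);
}
```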
+
+/* V2M sysid register bits */
+#define SYS_ID_REV_SHIFT       27
+#define SYS_ID_HBI_SHIFT       16
+#define SYS_ID_BLD_SHIFT       12
+#define SYS_ID_ARCH_SHIFT      8
+#define SYS_ID_FPGA_SHIFT      0
+
+#define SYS_ID_REV_MASK        0xf
+#define SYS_ID_HBI_MASK        0xfff
+#define SYS_ID_BLD_MASK        0xf
+#define SYS_ID_ARCH_MASK       0xf
+#define SYS_ID_FPGA_MASK       0xff
+
+#define SYS_ID_BLD_LENGTH      4
+
+#define REV_FVP                0x0
+#define HBI_FVP_BASE           0x020
+#define HBI_FOUNDATION         0x010
+
+#define BLD_GIC_VE_MMAP        0x0
+#define BLD_GIC_A53A57_MMAP    0x1
+
+#define ARCH_MODEL             0x1
+
+/* FVP Power controller base address*/
+#define PWRC_BASE              0x1c100000
+
+/*******************************************************************************
+ * Platform specific per-affinity states. The distinction between off and
+ * suspend is made so that a suspended cpu can still be reported as on,
+ * e.g. by the affinity_info psci call.
+ ******************************************************************************/
+#define PLATFORM_MAX_AFF0      4
+#define PLATFORM_MAX_AFF1      2
+#define PLAT_AFF_UNK           0xff
+
+#define PLAT_AFF0_OFF          0x0
+#define PLAT_AFF0_ONPENDING    0x1
+#define PLAT_AFF0_SUSPEND      0x2
+#define PLAT_AFF0_ON           0x3
+
+#define PLAT_AFF1_OFF          0x0
+#define PLAT_AFF1_ONPENDING    0x1
+#define PLAT_AFF1_SUSPEND      0x2
+#define PLAT_AFF1_ON           0x3
+
+/*******************************************************************************
+ * BL2 specific defines.
+ ******************************************************************************/
+#define BL2_BASE                       0x0402D000
+
+/*******************************************************************************
+ * BL31 specific defines.
+ ******************************************************************************/
+#define BL31_BASE                      0x0400E000
+
+/*******************************************************************************
+ * Platform specific page table and MMU setup constants
+ ******************************************************************************/
+#define EL3_ADDR_SPACE_SIZE            (1ull << 32)
+#define EL3_NUM_PAGETABLES             2
+#define EL3_TROM_PAGETABLE             0
+#define EL3_TRAM_PAGETABLE             1
+
+#define ADDR_SPACE_SIZE                        (1ull << 32)
+
+#define NUM_L2_PAGETABLES              2
+#define GB1_L2_PAGETABLE               0
+#define GB2_L2_PAGETABLE               1
+
+#define NUM_L3_PAGETABLES              2
+#define TZRAM_PAGETABLE                        0
+#define NSRAM_PAGETABLE                        1
+
+/*******************************************************************************
+ * CCI-400 related constants
+ ******************************************************************************/
+#define CCI400_BASE                    0x2c090000
+#define CCI400_SL_IFACE_CLUSTER0       3
+#define CCI400_SL_IFACE_CLUSTER1       4
+#define CCI400_SL_IFACE_INDEX(mpidr)   (mpidr & MPIDR_CLUSTER_MASK ? \
+                                        CCI400_SL_IFACE_CLUSTER1 :   \
+                                        CCI400_SL_IFACE_CLUSTER0)
+
+/*******************************************************************************
+ * GIC-400 & interrupt handling related constants
+ ******************************************************************************/
+/* VE compatible GIC memory map */
+#define VE_GICD_BASE                   0x2c001000
+#define VE_GICC_BASE                   0x2c002000
+#define VE_GICH_BASE                   0x2c004000
+#define VE_GICV_BASE                   0x2c006000
+
+/* Base FVP compatible GIC memory map */
+#define BASE_GICD_BASE                 0x2f000000
+#define BASE_GICR_BASE                 0x2f100000
+#define BASE_GICC_BASE                 0x2c000000
+#define BASE_GICH_BASE                 0x2c010000
+#define BASE_GICV_BASE                 0x2c02f000
+
+#define IRQ_TZ_WDOG                    56
+#define IRQ_SEC_PHY_TIMER              29
+#define IRQ_SEC_SGI_0                  8
+#define IRQ_SEC_SGI_1                  9
+#define IRQ_SEC_SGI_2                  10
+#define IRQ_SEC_SGI_3                  11
+#define IRQ_SEC_SGI_4                  12
+#define IRQ_SEC_SGI_5                  13
+#define IRQ_SEC_SGI_6                  14
+#define IRQ_SEC_SGI_7                  15
+#define IRQ_SEC_SGI_8                  16
+
+/*******************************************************************************
+ * PL011 related constants
+ ******************************************************************************/
+#define PL011_BASE                     0x1c090000
+
+/*******************************************************************************
+ * Declarations and constants to access the mailboxes safely. Each mailbox is
+ * aligned on the biggest cache line size in the platform. This is known only
+ * to the platform as it might have a combination of integrated and external
+ * caches. Such alignment ensures that two mailboxes do not sit on the same
+ * cache line at any cache level. Mailboxes could belong to different
+ * cpus/clusters and be written while protected by different locks; without
+ * this alignment such writes could corrupt a neighbouring mailbox.
+ ******************************************************************************/
+#define CACHE_WRITEBACK_SHIFT   6
+#define CACHE_WRITEBACK_GRANULE (1 << CACHE_WRITEBACK_SHIFT)
+
+#ifndef __ASSEMBLY__
+
+typedef volatile struct {
+       unsigned long value
+       __attribute__((__aligned__(CACHE_WRITEBACK_GRANULE)));
+} mailbox;
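Given the `TZDRAM_BASE` and `MBOX_OFF` constants defined earlier in this header, a plausible sketch of publishing a warm-boot entrypoint through a cpu's mailbox is shown below. The `example_set_mailbox()` helper and the linear indexing via `platform_get_core_pos()` are illustrative assumptions, not the firmware's actual code.

```c
#include <platform.h>

/* Sketch: treat trusted DRAM at TZDRAM_BASE + MBOX_OFF as an array of
 * cache-line-aligned mailboxes, one per cpu, and publish an entrypoint. */
static void example_set_mailbox(unsigned long mpidr, unsigned long entrypoint)
{
	mailbox *mboxes = (mailbox *)(TZDRAM_BASE + MBOX_OFF);

	mboxes[platform_get_core_pos(mpidr)].value = entrypoint;
	/* Because each mailbox occupies its own cache line, this write can be
	 * cleaned or invalidated without disturbing any neighbouring mailbox. */
}
```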
+
+/*******************************************************************************
+ * Function and variable prototypes
+ ******************************************************************************/
+extern unsigned long *bl1_normal_ram_base;
+extern unsigned long *bl1_normal_ram_len;
+extern unsigned long *bl1_normal_ram_limit;
+extern unsigned long *bl1_normal_ram_zi_base;
+extern unsigned long *bl1_normal_ram_zi_len;
+
+extern unsigned long *bl1_coherent_ram_base;
+extern unsigned long *bl1_coherent_ram_len;
+extern unsigned long *bl1_coherent_ram_limit;
+extern unsigned long *bl1_coherent_ram_zi_base;
+extern unsigned long *bl1_coherent_ram_zi_len;
+extern unsigned long warm_boot_entrypoint;
+
+extern void bl1_plat_arch_setup(void);
+extern void bl2_plat_arch_setup(void);
+extern void bl31_plat_arch_setup(void);
+extern int platform_setup_pm(plat_pm_ops **);
+extern unsigned int platform_get_core_pos(unsigned long mpidr);
+extern void disable_mmu(void);
+extern void enable_mmu(void);
+extern void configure_mmu(meminfo *,
+                         unsigned long,
+                         unsigned long,
+                         unsigned long,
+                         unsigned long);
+extern unsigned long platform_get_cfgvar(unsigned int);
+extern int platform_config_setup(void);
+extern void plat_report_exception(unsigned long);
+extern unsigned long plat_get_ns_image_entrypoint(void);
+
+/* Declarations for fvp_topology.c */
+extern int plat_setup_topology(void);
+extern int plat_get_max_afflvl(void);
+extern unsigned int plat_get_aff_count(unsigned int, unsigned long);
+extern unsigned int plat_get_aff_state(unsigned int, unsigned long);
+
+#endif /*__ASSEMBLY__*/
+
+#endif /* __PLATFORM_H__ */
diff --git a/readme.md b/readme.md
new file mode 100644 (file)
index 0000000..5b076d1
--- /dev/null
+++ b/readme.md
@@ -0,0 +1,125 @@
+ARM Trusted Firmware - version 0.2
+==================================
+
+ARM Trusted Firmware provides a reference implementation of secure world
+software for [ARMv8], including Exception Level 3 (EL3) software. This first
+release focuses on support for ARM's [Fixed Virtual Platforms (FVPs)] [FVP].
+
+The intent is to provide a reference implementation of various ARM interface
+standards, such as the Power State Coordination Interface ([PSCI]), Trusted
+Board Boot Requirements (TBBR) and [Secure Monitor] [TEE-SMC] code. As far as
+possible the code is designed for reuse or porting to other ARMv8 model and
+hardware platforms.
+
+This is the first release as source code: an initial prototype was
+previously made available in binary form in the [Linaro AArch64 OpenEmbedded
+Engineering Build] [AArch64 LEB] to support the new FVP Base platform
+models from ARM.
+
+ARM will continue development in collaboration with interested parties to
+provide a full reference implementation of PSCI, TBBR and Secure Monitor code
+to the benefit of all developers working with ARMv8 TrustZone software.
+
+
+License
+-------
+
+The software is provided under a BSD 3-Clause [license]. Certain source files
+are derived from FreeBSD code: the original license is included in these
+source files.
+
+
+This Release
+------------
+
+This software is an early implementation of the Trusted Firmware. Only
+limited functionality is provided at present and it has not been optimized or
+subjected to extended robustness or stress testing.
+
+### Functionality
+
+*   Initial implementation of a subset of the Trusted Board Boot Requirements
+    Platform Design Document (PDD).
+
+*   Initializes the secure world (for example, exception vectors, control
+    registers, GIC and interrupts for the platform), before transitioning into
+    the normal world.
+
+*   Supports both GICv2 and GICv3 initialization for use by normal world
+    software.
+
+*   Starts the normal world at the highest available Exception Level: EL2
+    if available, otherwise EL1.
+
+*   Handles SMCs (Secure Monitor Calls) conforming to the [SMC Calling
+    Convention PDD] [SMCCC].
+
+*   Handles SMCs relating to the [Power State Coordination Interface PDD] [PSCI]
+    for the Secondary CPU Boot and CPU hotplug use-cases.
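
As an illustration of the SMC handling listed above, normal world software could request that a secondary cpu be powered on with a PSCI `CPU_ON` call roughly as sketched below. The inline-assembly wrapper is hypothetical, and the `0xc4000003` SMC64 function ID is taken from the PSCI specification rather than from code shown here; register usage follows the [SMCCC] convention (function ID in `x0`, arguments in `x1`-`x3`, result in `x0`).

```c
#include <stdint.h>

#define PSCI_CPU_ON_AARCH64	0xc4000003

/* Sketch: issue an SMC following the SMC Calling Convention register usage.
 * Registers x4-x17 are conservatively treated as clobbered by the call. */
static int64_t example_psci_cpu_on(uint64_t target_mpidr, uint64_t entrypoint,
				   uint64_t context_id)
{
	register uint64_t x0 __asm__("x0") = PSCI_CPU_ON_AARCH64;
	register uint64_t x1 __asm__("x1") = target_mpidr;
	register uint64_t x2 __asm__("x2") = entrypoint;
	register uint64_t x3 __asm__("x3") = context_id;

	__asm__ volatile("smc #0"
			 : "+r" (x0)
			 : "r" (x1), "r" (x2), "r" (x3)
			 : "x4", "x5", "x6", "x7", "x8", "x9", "x10", "x11",
			   "x12", "x13", "x14", "x15", "x16", "x17", "memory");

	return (int64_t)x0;
}
```

On success the call returns 0 and the target cpu enters the firmware's warm boot path before being handed control at the supplied entrypoint.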
+
+For a full list of updated functionality and implementation details, please
+see the [User Guide]. The [Change Log] provides details of changes made
+since the last release.
+
+### Platforms
+
+This release of the Trusted Firmware has been tested on the following ARM
+[FVP]s (64-bit versions only):
+
+*   `FVP_Base_AEMv8A-AEMv8A` (Version 5.1 build 8).
+*   `FVP_Base_Cortex-A57x4-A53x4` (Version 5.1 build 8).
+*   `FVP_Base_Cortex-A57x1-A53x1` (Version 5.1 build 8).
+
+These models can be licensed from ARM: see [www.arm.com/fvp] [FVP].
+
+### Still to Come
+
+*   Complete implementation of the [PSCI] specification.
+
+*   Secure memory, Secure monitor, Test Secure OS & Secure interrupts.
+
+*   Booting the firmware from a block device.
+
+*   Completing the currently experimental GICv3 support.
+
+For a full list of detailed issues in the current code, please see the [Change
+Log].
+
+
+Getting Started
+---------------
+
+Get the Trusted Firmware source code from
+[GitHub](https://www.github.com/ARM-software/arm-trusted-firmware).
+
+See the [User Guide] for instructions on how to install, build and use
+the Trusted Firmware with the ARM [FVP]s.
+
+See the [Porting Guide] as well for information about how to use this
+software on another ARMv8 platform.
+
+### Feedback and support
+
+ARM welcomes any feedback on the Trusted Firmware. Please send feedback using
+the [GitHub issue tracker](
+https://github.com/ARM-software/arm-trusted-firmware/issues).
+
+ARM licensees may contact ARM directly via their partner managers.
+
+
+- - - - - - - - - - - - - - - - - - - - - - - - - -
+
+_Copyright (c) 2013 ARM Ltd. All rights reserved._
+
+
+[License]:       license.md "BSD license for ARM Trusted Firmware"
+[Change Log]:    ./docs/change-log.md
+[User Guide]:    ./docs/user-guide.md
+[Porting Guide]: ./docs/porting-guide.md
+
+[ARMv8]:         http://www.arm.com/products/processors/armv8-architecture.php "ARMv8 Architecture"
+[FVP]:           http://www.arm.com/fvp "ARM's Fixed Virtual Platforms"
+[PSCI]:          http://infocenter.arm.com/help/topic/com.arm.doc.den0022b/index.html "Power State Coordination Interface PDD (ARM DEN 0022B.b)"
+[SMCCC]:         http://infocenter.arm.com/help/topic/com.arm.doc.den0028a/index.html "SMC Calling Convention PDD (ARM DEN 0028A)"
+[TEE-SMC]:       http://www.arm.com/products/processors/technologies/trustzone/tee-smc.php "Secure Monitor and TEEs"
+[AArch64 LEB]:   http://releases.linaro.org/13.09/openembedded/aarch64 "Linaro AArch64 OpenEmbedded ARM Fast Model 13.09 Release"