openwrt/staging/blogic.git
Sam Bobroff [Mon, 19 Mar 2018 02:48:55 +0000 (13:48 +1100)]
powerpc/eeh: Clarify arguments to eeh_reset_device()

It is currently difficult to understand the behaviour of
eeh_reset_device() due to the way its parameters are used. In
particular, when 'bus' is NULL, its value is still needed, so the
same value is looked up again locally under a different name
('frozen_bus'), but the behaviour changes.

To clarify this, add a new parameter 'driver_eeh_aware', and have the
caller set it when it would have passed NULL for 'bus' and always pass
a value for 'bus'. Then change any test that was on 'bus' to one on
'!driver_eeh_aware' and replace uses of 'frozen_bus' with 'bus'.

Also update the function's comment.

This should not change behaviour.
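
As a rough standalone sketch of the pattern described above (replacing
a NULL-signalling pointer with an explicit boolean flag), using purely
hypothetical names rather than the real eeh_reset_device() code:

  #include <stdbool.h>
  #include <stdio.h>

  struct bus { const char *name; };

  /* Before: passing bus == NULL secretly meant "the driver is EEH aware",
   * so the callee had to look the bus up again under another name.
   * All names here are hypothetical. */
  static void reset_device_old(struct bus *bus, struct bus *frozen_bus)
  {
      struct bus *b = bus ? bus : frozen_bus;

      printf("%s reset of %s\n", bus ? "plain" : "driver-aware", b->name);
  }

  /* After: the caller always passes the bus and states its intent. */
  static void reset_device_new(struct bus *bus, bool driver_eeh_aware)
  {
      printf("%s reset of %s\n",
             driver_eeh_aware ? "driver-aware" : "plain", bus->name);
  }

  int main(void)
  {
      struct bus pci = { "pci0001:00" };

      reset_device_old(NULL, &pci);
      reset_device_new(&pci, true);
      return 0;
  }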

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sam Bobroff [Mon, 19 Mar 2018 02:47:02 +0000 (13:47 +1100)]
powerpc/eeh: Rename frozen_bus to bus in eeh_handle_normal_event()

The name "frozen_bus" is misleading: it's not necessarily frozen, it's
just the PE's PCI bus.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sam Bobroff [Mon, 19 Mar 2018 02:46:51 +0000 (13:46 +1100)]
powerpc/eeh: Remove misleading test in eeh_handle_normal_event()

Remove a test that checks if "frozen_bus" is NULL: it cannot have
changed since it was tested at the start of the function, so the
condition must be true here.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sam Bobroff [Mon, 19 Mar 2018 02:46:40 +0000 (13:46 +1100)]
powerpc/eeh: Fix misleading comment in __eeh_addr_cache_get_device()

Commit "0ba178888b05 powerpc/eeh: Remove reference to PCI device"
removed a call to pci_dev_get() from __eeh_addr_cache_get_device() but
did not update the comment to match.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sam Bobroff [Mon, 19 Mar 2018 02:46:30 +0000 (13:46 +1100)]
powerpc/eeh: Manage EEH_PE_RECOVERING inside eeh_handle_normal_event()

Currently the EEH_PE_RECOVERING flag for a PE is managed by both the
caller and callee of eeh_handle_normal_event() (among other places not
considered here). This is complicated by the fact that the PE may
or may not have been invalidated by the call.

So move the callee's handling into eeh_handle_normal_event(), which
clarifies it and allows the return type to be changed to void (because
it no longer needs to indicate that the PE has been invalidated).

This should not change behaviour except in eeh_event_handler() where
it was previously possible to cause eeh_pe_state_clear() to be called
on an invalid PE, which is now avoided.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sam Bobroff [Mon, 19 Mar 2018 02:46:20 +0000 (13:46 +1100)]
powerpc/eeh: Remove eeh_handle_event()

The function eeh_handle_event(pe) does nothing other than switching
between calling eeh_handle_normal_event(pe) and
eeh_handle_special_event(). However, it is only called in two places,
one where pe can't be NULL and the other where it must be NULL (see
eeh_event_handler()), so it does nothing but obscure the flow of
control.

So, remove it.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Tue, 13 Feb 2018 05:51:35 +0000 (16:51 +1100)]
powerpc/powernv/npu: Do not try invalidating 32bit table when 64bit table is enabled

GPUs and the corresponding NVLink bridges get different PEs as they
have separate translation validation entries (TVEs). We put these PEs
to the same IOMMU group so they cannot be passed through separately.
So iommu_table_group_ops::set_window/unset_window for GPUs also set
the tables on the NPU PEs, which means that the iommu_table's list of
attached PEs (iommu_table_group_link) has both GPU and NPU PEs linked.
This list is used for TCE cache invalidation.

The problem is that the NPU PE has just a single TVE and can be
programmed to point to either a 32bit or a 64bit window, while the GPU
PE has two (like any other PCI device). So we end up with a 32bit
iommu_table struct linked to both PEs, even though only the 64bit TCE
table cache can be invalidated on the NPU. Relatively recent skiboot
detects this and prints errors.

This changes the GPU's iommu_table_group_ops::set_window/unset_window
to make sure that the NPU PE is only linked to the table actually used
by the hardware. If an IOMMU group uses two tables, the NPU PE will
use the last programmed one, which with the current use scenarios is
expected to be the 64bit one.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Thu, 1 Feb 2018 05:07:25 +0000 (16:07 +1100)]
powerpc/mm: Fix typo in comments

Fixes: 912cc87a6 "powerpc/mm/radix: Add LPID based tlb flush helpers"
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Tue, 9 Jan 2018 05:52:14 +0000 (16:52 +1100)]
powerpc/lpar/debug: Initialize flags before printing debug message

With DEBUG enabled, there is a compile error:
"error: ‘flags’ is used uninitialized in this function".

This moves the pr_devel() a little further down, to where @flags is
initialized.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Tue, 9 Jan 2018 05:45:20 +0000 (16:45 +1100)]
powerpc/init: Do not advertise radix during client-architecture-support

Currently the pseries kernel advertises radix MMU support even if
the actual support is disabled via the CONFIG_PPC_RADIX_MMU option.

This adds a check for CONFIG_PPC_RADIX_MMU to avoid advertising radix
to the hypervisor.

Suggested-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mauricio Faria de Oliveira [Fri, 9 Mar 2018 20:45:58 +0000 (17:45 -0300)]
powerpc/mm: Fix section mismatch warning in stop_machine_change_mapping()

Fix the warning messages for stop_machine_change_mapping(), and a number
of other affected functions in its call chain.

All modified functions are under CONFIG_MEMORY_HOTPLUG, so __meminit
is okay (keeps them / does not discard them).

Boot-tested on powernv/power9/radix-mmu and pseries/power8/hash-mmu.

    $ make -j$(nproc) CONFIG_DEBUG_SECTION_MISMATCH=y vmlinux
    ...
      MODPOST vmlinux.o
    WARNING: vmlinux.o(.text+0x6b130): Section mismatch in reference from the function stop_machine_change_mapping() to the function .meminit.text:create_physical_mapping()
    The function stop_machine_change_mapping() references
    the function __meminit create_physical_mapping().
    This is often because stop_machine_change_mapping lacks a __meminit
    annotation or the annotation of create_physical_mapping is wrong.

    WARNING: vmlinux.o(.text+0x6b13c): Section mismatch in reference from the function stop_machine_change_mapping() to the function .meminit.text:create_physical_mapping()
    The function stop_machine_change_mapping() references
    the function __meminit create_physical_mapping().
    This is often because stop_machine_change_mapping lacks a __meminit
    annotation or the annotation of create_physical_mapping is wrong.
    ...

Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 27 Mar 2018 12:01:53 +0000 (23:01 +1100)]
powerpc/64s: Wire up cpu_show_spectre_v2()

Add a definition for cpu_show_spectre_v2() to override the generic
version. This has several permutations; though in practice some may
not occur, we cater for any combination.

The most verbose is:

  Mitigation: Indirect branch serialisation (kernel only), Indirect
  branch cache disabled, ori31 speculation barrier enabled

We don't treat the ori31 speculation barrier as a mitigation on its
own, because it has to be *used* by code in order to be a mitigation
and we don't know if userspace is doing that. So if that's all we see,
we say:

  Vulnerable, ori31 speculation barrier enabled
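
A minimal standalone sketch of how such a report string can be composed
from boolean state, assuming hypothetical flag names rather than the
kernel's actual implementation:

  #include <stdbool.h>
  #include <stdio.h>
  #include <string.h>

  /* Hypothetical flags standing in for the firmware-provided state. */
  struct spectre_v2_state {
      bool btb_flush;       /* indirect branch serialisation (kernel only) */
      bool count_cache_dis; /* indirect branch cache disabled */
      bool ori31_barrier;   /* ori31 speculation barrier enabled */
  };

  static void show_spectre_v2(const struct spectre_v2_state *s, char *buf,
                              size_t len)
  {
      if (s->btb_flush || s->count_cache_dis)
          snprintf(buf, len, "Mitigation:%s%s%s%s",
                   s->btb_flush ?
                       " Indirect branch serialisation (kernel only)" : "",
                   s->btb_flush && s->count_cache_dis ? "," : "",
                   s->count_cache_dis ? " Indirect branch cache disabled" : "",
                   s->ori31_barrier ?
                       ", ori31 speculation barrier enabled" : "");
      else if (s->ori31_barrier)
          snprintf(buf, len, "Vulnerable, ori31 speculation barrier enabled");
      else
          snprintf(buf, len, "Vulnerable");
  }

  int main(void)
  {
      struct spectre_v2_state s = { true, true, true };
      char buf[128];

      show_spectre_v2(&s, buf, sizeof(buf));
      puts(buf);    /* prints the most verbose form quoted above */
      return 0;
  }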

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 27 Mar 2018 12:01:52 +0000 (23:01 +1100)]
powerpc/64s: Wire up cpu_show_spectre_v1()

Add a definition for cpu_show_spectre_v1() to override the generic
version. Currently this just prints "Not affected" or "Vulnerable"
based on the firmware flag.

Although the kernel does have array_index_nospec() in a few places, we
haven't yet audited all the powerpc code to see where it's necessary,
so for now we don't list that as a mitigation.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 27 Mar 2018 12:01:51 +0000 (23:01 +1100)]
powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()

Now that we have the security flags we can simplify the code in
pseries_setup_rfi_flush() because the security flags have pessimistic
defaults.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 27 Mar 2018 12:01:50 +0000 (23:01 +1100)]
powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()

Now that we have the security flags we can significantly simplify the
code in pnv_setup_rfi_flush(), because we can use the flags instead of
checking device tree properties and because the security flags have
pessimistic defaults.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 27 Mar 2018 12:01:49 +0000 (23:01 +1100)]
powerpc/64s: Enhance the information in cpu_show_meltdown()

Now that we have the security feature flags we can make the
information displayed in the "meltdown" file more informative.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 27 Mar 2018 12:01:48 +0000 (23:01 +1100)]
powerpc/64s: Move cpu_show_meltdown()

This landed in setup_64.c for no good reason other than we had nowhere
else to put it. Now that we have a security-related file, that is a
better place for it so move it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 27 Mar 2018 12:01:47 +0000 (23:01 +1100)]
powerpc/powernv: Set or clear security feature flags

Now that we have feature flags for security related things, set or
clear them based on what we see in the device tree provided by
firmware.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 27 Mar 2018 12:01:46 +0000 (23:01 +1100)]
powerpc/pseries: Set or clear security feature flags

Now that we have feature flags for security related things, set or
clear them based on what we receive from the hypercall.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 27 Mar 2018 12:01:44 +0000 (23:01 +1100)]
powerpc: Add security feature flags for Spectre/Meltdown

This commit adds security feature flags to reflect the settings we
receive from firmware regarding Spectre/Meltdown mitigations.

The feature names reflect the names we are given by firmware on bare
metal machines. See the hostboot source for details.

Arguably these could be firmware features, but that then requires them
to be read early in boot so they're available prior to asm feature
patching, but we don't actually want to use them for patching. We may
also want to dynamically update them in future, which would be
incompatible with the way firmware features work (at the moment at
least). So for now just make them separate flags.
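
A sketch of what such a flag word and its helpers might look like; the
flag names here are assumptions, not necessarily the kernel's:

  #include <stdbool.h>
  #include <stdio.h>

  /* Hypothetical flag bits; the real kernel names may differ. */
  #define SEC_FTR_L1D_FLUSH_HV     (1ULL << 0)
  #define SEC_FTR_COUNT_CACHE_DIS  (1ULL << 1)
  #define SEC_FTR_SPEC_BAR_ORI31   (1ULL << 2)

  /* Pessimistic default: assume every mitigation is needed until
   * firmware says otherwise. */
  static unsigned long long security_features =
      SEC_FTR_L1D_FLUSH_HV | SEC_FTR_COUNT_CACHE_DIS | SEC_FTR_SPEC_BAR_ORI31;

  static void security_ftr_set(unsigned long long f)   { security_features |= f; }
  static void security_ftr_clear(unsigned long long f) { security_features &= ~f; }
  static bool security_ftr_enabled(unsigned long long f)
  {
      return (security_features & f) == f;
  }

  int main(void)
  {
      /* Firmware (device tree or hcall) says the ori31 barrier isn't needed. */
      security_ftr_clear(SEC_FTR_SPEC_BAR_ORI31);

      printf("L1D flush: %d, ori31 barrier: %d\n",
             security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV),
             security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31));
      return 0;
  }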

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 27 Mar 2018 12:01:45 +0000 (23:01 +1100)]
powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags

Add some additional values which have been defined for the
H_GET_CPU_CHARACTERISTICS hypercall.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 14 Mar 2018 22:40:42 +0000 (19:40 -0300)]
powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration

We might have migrated to a machine that uses a different flush type,
or doesn't need flushing at all.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mauricio Faria de Oliveira [Wed, 14 Mar 2018 22:40:41 +0000 (19:40 -0300)]
powerpc/rfi-flush: Differentiate enabled and patched flush types

Currently the rfi-flush messages print 'Using <type> flush' for all
enabled_flush_types, but that is not necessarily true -- as now the
fallback flush is always enabled on pseries, but the fixup function
overwrites its nop/branch slot with other flush types, if available.

So, replace the 'Using <type> flush' messages with '<type> flush is
available'.

Also, print the patched flush types in the fixup function, so users
can know what is (not) being used (e.g., the slower, fallback flush,
or no flush type at all if flush is disabled via the debugfs switch).

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 14 Mar 2018 22:40:40 +0000 (19:40 -0300)]
powerpc/rfi-flush: Always enable fallback flush on pseries

This ensures the fallback flush area is always allocated on pseries,
so that if an LPAR is migrated from a patched to an unpatched system,
it is possible to enable the fallback flush in the target system.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 14 Mar 2018 22:40:39 +0000 (19:40 -0300)]
powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again

For PowerVM migration we want to be able to call setup_rfi_flush()
again after we've migrated the partition.

To support that we need to check that we're not trying to allocate the
fallback flush area after memblock has gone away (i.e., boot-time only).

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 14 Mar 2018 22:40:38 +0000 (19:40 -0300)]
powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs code

rfi_flush_enable() includes a check to see if we're already
enabled (or disabled), and in that case does nothing.

But that means calling setup_rfi_flush() a 2nd time doesn't actually
work, which is a bit confusing.

Move that check into the debugfs code, where it really belongs.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Madhavan Srinivasan [Sun, 4 Mar 2018 11:56:28 +0000 (17:26 +0530)]
powerpc/perf: Add blacklisted events for Power9 DD2.2

These events either do not count, or do not count correctly, so to
prevent user confusion, block counting them at all.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
[mpe: Change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Madhavan Srinivasan [Sun, 4 Mar 2018 11:56:27 +0000 (17:26 +0530)]
powerpc/perf: Add blacklisted events for Power9 DD2.1

These events either do not count, or do not count correctly, so to
prevent user confusion, block counting them at all.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
[mpe: Change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Madhavan Srinivasan [Sun, 4 Mar 2018 11:56:26 +0000 (17:26 +0530)]
powerpc/perf: Infrastructure to support addition of blacklisted events

Introduce code to support addition of blacklisted events for a
processor version. Blacklisted events are events that are known to not
count correctly on that CPU revision, and so should be prevented from
being counted so as to avoid user confusion.

A pointer to the blacklist and an 'int' holding the number of events
are added to 'struct power_pmu', along with a generic function that
loops through the list to validate a given event. The generic function
'is_event_blacklisted' is called in power_pmu_event_init() to detect
and reject such events early.
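
A simplified standalone sketch of that check; field names and event
codes here are illustrative, not taken from the actual patch:

  #include <stdbool.h>
  #include <stdio.h>

  /* Simplified stand-in for the relevant part of struct power_pmu. */
  struct power_pmu {
      const int *blacklist_ev;   /* events known not to count correctly */
      int n_blacklist_ev;
  };

  static bool is_event_blacklisted(const struct power_pmu *pmu, int ev)
  {
      for (int i = 0; i < pmu->n_blacklist_ev; i++)
          if (pmu->blacklist_ev[i] == ev)
              return true;
      return false;
  }

  int main(void)
  {
      static const int p9_blacklist[] = { 0x300fc, 0x3515e };  /* made-up codes */
      struct power_pmu pmu = { p9_blacklist, 2 };

      /* In event init, reject blacklisted events before they are counted. */
      printf("0x300fc blacklisted: %d\n", is_event_blacklisted(&pmu, 0x300fc));
      return 0;
  }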

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Madhavan Srinivasan [Wed, 21 Mar 2018 11:40:26 +0000 (17:10 +0530)]
powerpc/perf: Prevent kernel address leak via perf_get_data_addr()

Sampled Data Address Register (SDAR) is a 64-bit register that
contains the effective address of the storage operand of an
instruction that was being executed, possibly out-of-order, at or
around the time that the Performance Monitor alert occurred.

In certain scenarios the SDAR can contain a kernel address even for
userspace-only sampling. Add checks to prevent it.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Madhavan Srinivasan [Wed, 21 Mar 2018 11:40:25 +0000 (17:10 +0530)]
powerpc/perf: Prevent kernel address leak to userspace via BHRB buffer

The current Branch History Rolling Buffer (BHRB) code does not check
for any privilege levels before updating the data from BHRB. This
could leak kernel addresses to userspace even when profiling only with
userspace privileges. Add proper checks to prevent it.
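
A standalone sketch of the kind of check involved, with a hypothetical
kernel-address boundary standing in for the real helper:

  #include <stdbool.h>
  #include <stdio.h>

  /* Hypothetical boundary: on ppc64 kernel addresses have the top bits set. */
  #define KERNEL_ADDR_START 0xc000000000000000ULL

  static bool is_kernel_addr(unsigned long long addr)
  {
      return addr >= KERNEL_ADDR_START;
  }

  /* Only record a branch address if the requested privilege level allows it. */
  static bool may_record(unsigned long long addr, bool want_kernel)
  {
      if (is_kernel_addr(addr) && !want_kernel)
          return false;   /* would leak a kernel address to userspace */
      return true;
  }

  int main(void)
  {
      printf("%d\n", may_record(0xc000000000abcdefULL, false)); /* 0: dropped */
      printf("%d\n", may_record(0x0000000010001234ULL, false)); /* 1: kept */
      return 0;
  }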

Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 21 Mar 2018 11:40:24 +0000 (17:10 +0530)]
powerpc/perf: Fix kernel address leak via sampling registers

Current code in power_pmu_disable() does not clear the sampling
registers like Sampling Instruction Address Register (SIAR) and
Sampling Data Address Register (SDAR) after disabling the PMU. Since
these are userspace readable and could contain kernel addresses, add
code to explicitly clear the content of these registers.

Also add a "context synchronizing instruction" to enforce no further
updates to these registers as suggested by Power ISA v3.0B. From
section 9.4, on page 1108:

  "If an mtspr instruction is executed that changes the value of a
  Performance Monitor register other than SIAR, SDAR, and SIER, the
  change is not guaranteed to have taken effect until after a
  subsequent context synchronizing instruction has been executed (see
  Chapter 11. "Synchronization Requirements for Context Alterations"
  on page 1133)."

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
[mpe: Massage change log and add ISA reference]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Paul Mackerras [Thu, 16 Feb 2017 05:03:39 +0000 (16:03 +1100)]
powerpc/64: Call H_REGISTER_PROC_TBL when running as a HPT guest on POWER9

On POWER9, since commit cc3d2940133d ("powerpc/64: Enable use of radix
MMU under hypervisor on POWER9", 2017-01-30), we set both the radix and
HPT bits in the client-architecture-support (CAS) vector, which tells
the hypervisor that we can do either radix or HPT.  According to PAPR,
if we use this combination we are promising to do a H_REGISTER_PROC_TBL
hcall later on to let the hypervisor know whether we are doing radix
or HPT.  We currently do this call if we are doing radix but not if
we are doing HPT.  If the hypervisor is able to support both radix
and HPT guests, it would be entitled to defer allocation of the HPT
until the H_REGISTER_PROC_TBL call, and to fail any attempts to create
HPTEs until the H_REGISTER_PROC_TBL call.  Thus we need to do a
H_REGISTER_PROC_TBL call when we are doing HPT; otherwise we may
crash at boot time.

This adds the code to call H_REGISTER_PROC_TBL in this case, before
we attempt to create any HPT entries using H_ENTER.

Fixes: cc3d2940133d ("powerpc/64: Enable use of radix MMU under hypervisor on POWER9")
Cc: stable@vger.kernel.org # v4.11+
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Fri, 23 Mar 2018 21:43:18 +0000 (08:43 +1100)]
Merge branch 'topic/ppc-kvm' into next

This brings in two series from Paul, one of which touches KVM code and
may need to be merged into the kvm-ppc tree to resolve conflicts.

Paul Mackerras [Wed, 21 Mar 2018 10:32:03 +0000 (21:32 +1100)]
KVM: PPC: Book3S HV: Work around TEXASR bug in fake suspend state

This works around a hardware bug in "Nimbus" POWER9 DD2.2 processors,
where the contents of the TEXASR can get corrupted while a thread is
in fake suspend state.  The workaround is for the instruction emulation
code to use the value saved at the most recent guest exit in real
suspend mode.  We achieve this by simply not saving the TEXASR into
the vcpu struct on an exit in fake suspend state.  We also have to
take care to set the orig_texasr field only on guest exit in real
suspend state.

This also means that on guest entry in fake suspend state, TEXASR
will be restored to the value it had on the last exit in real suspend
state, effectively counteracting any hardware-caused corruption.  This
works because TEXASR may not be written in suspend state.

With this, the guest might see the wrong values in TEXASR if it reads
it while in suspend state, but will see the correct value in
non-transactional state (e.g. after a treclaim), and treclaim will
work correctly.

With this workaround, the code will actually run slightly faster, and
will operate correctly on systems without the TEXASR bug (since TEXASR
may not be written in suspend state, and is only changed by failure
recording, which will have already been done before we get into fake
suspend state).  Therefore these changes are not made subject to a CPU
feature bit.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Suraj Jitindar Singh [Wed, 21 Mar 2018 10:32:02 +0000 (21:32 +1100)]
KVM: PPC: Book3S HV: Work around XER[SO] bug in fake suspend mode

This works around a hardware bug in "Nimbus" POWER9 DD2.2 processors,
where a treclaim performed in fake suspend mode can cause subsequent
reads from the XER register to return inconsistent values for the SO
(summary overflow) bit.  The inconsistent SO bit state can potentially
be observed on any thread in the core.  We have to do the treclaim
because that is the only way to get the thread out of suspend state
(fake or real) and into non-transactional state.

The workaround for the bug is to force the core into SMT4 mode before
doing the treclaim.  This patch adds the code to do that, conditional
on the CPU_FTR_P9_TM_XER_SO_BUG feature bit.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Paul Mackerras [Wed, 21 Mar 2018 10:32:01 +0000 (21:32 +1100)]
KVM: PPC: Book3S HV: Work around transactional memory bugs in POWER9

POWER9 has hardware bugs relating to transactional memory and thread
reconfiguration (changes to hardware SMT mode).  Specifically, the core
does not have enough storage to store a complete checkpoint of all the
architected state for all four threads.  The DD2.2 version of POWER9
includes hardware modifications designed to allow hypervisor software
to implement workarounds for these problems.  This patch implements
those workarounds in KVM code so that KVM guests see a full, working
transactional memory implementation.

The problems center around the use of TM suspended state, where the
CPU has a checkpointed state but execution is not transactional.  The
workaround is to implement a "fake suspend" state, which looks to the
guest like suspended state but the CPU does not store a checkpoint.
In this state, any instruction that would cause a transition to
transactional state (rfid, rfebb, mtmsrd, tresume) or would use the
checkpointed state (treclaim) causes a "soft patch" interrupt (vector
0x1500) to the hypervisor so that it can be emulated.  The trechkpt
instruction also causes a soft patch interrupt.

On POWER9 DD2.2, we avoid returning to the guest in any state which
would require a checkpoint to be present.  The trechkpt in the guest
entry path which would normally create that checkpoint is replaced by
either a transition to fake suspend state, if the guest is in suspend
state, or a rollback to the pre-transactional state if the guest is in
transactional state.  Fake suspend state is indicated by a flag in the
PACA plus a new bit in the PSSCR.  The new PSSCR bit is write-only and
reads back as 0.

On exit from the guest, if the guest is in fake suspend state, we still
do the treclaim instruction as we would in real suspend state, in order
to get into non-transactional state, but we do not save the resulting
register state since there was no checkpoint.

Emulation of the instructions that cause a softpatch interrupt is
handled in two paths.  If the guest is in real suspend mode, we call
kvmhv_p9_tm_emulation_early() to handle the cases where the guest is
transitioning to transactional state.  This is called before we do the
treclaim in the guest exit path; because we haven't done treclaim, we
can get back to the guest with the transaction still active.  If the
instruction is a case that kvmhv_p9_tm_emulation_early() doesn't
handle, or if the guest is in fake suspend state, then we proceed to
do the complete guest exit path and subsequently call
kvmhv_p9_tm_emulation() in host context with the MMU on.  This handles
all the cases including the cases that generate program interrupts
(illegal instruction or TM Bad Thing) and facility unavailable
interrupts.

The emulation is reasonably straightforward and is mostly concerned
with checking for exception conditions and updating the state of
registers such as MSR and CR0.  The treclaim emulation takes care to
ensure that the TEXASR register gets updated as if it were the guest
treclaim instruction that had done failure recording, not the treclaim
done in hypervisor state in the guest exit path.

With this, the KVM_CAP_PPC_HTM capability returns true (1) even if
transactional memory is not available to host userspace.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Paul Mackerras [Wed, 21 Mar 2018 10:32:00 +0000 (21:32 +1100)]
powerpc/powernv: Provide a way to force a core into SMT4 mode

POWER9 processors up to and including "Nimbus" v2.2 have hardware
bugs relating to transactional memory and thread reconfiguration.
One of these bugs has a workaround which is to get the core into
SMT4 state temporarily.  This workaround is only needed when
running bare-metal.

This patch provides a function which gets the core into SMT4 mode
by preventing threads from going to a stop state, and waking up
those which are already in a stop state.  Once at least 3 threads
are not in a stop state, the core will be in SMT4 and we can
continue.

To do this, we add a "dont_stop" flag to the paca to tell the
thread not to go into a stop state.  If this flag is set,
power9_idle_stop() just returns immediately with a return value
of 0.  The pnv_power9_force_smt4_catch() function does the following:

1. Set the dont_stop flag for each thread in the core, except
   ourselves (in fact we use an atomic_inc() in case more than
   one thread is calling this function concurrently).
2. See how many threads are awake, indicated by their
   requested_psscr field in the paca being 0.  If this is at
   least 3, skip to step 5.
3. Send a doorbell interrupt to each thread that was seen as
   being in a stop state in step 2.
4. Until at least 3 threads are awake, scan the threads to which
   we sent a doorbell interrupt and check if they are awake now.

This relies on the following properties:

- Once dont_stop is non-zero, requested_psscr can't go from zero to
  non-zero, except transiently (and without the thread doing stop).
- requested_psscr being zero guarantees that the thread isn't in
  a state-losing stop state where thread reconfiguration could occur.
- Doing stop with a PSSCR value of 0 won't be a state-losing stop
  and thus won't allow thread reconfiguration.
- Once threads_per_core/2 + 1 (i.e. 3) threads are awake, the core
  must be in SMT4 mode, since SMT modes are powers of 2.

This does add a sync to power9_idle_stop(), which is necessary to
provide the correct ordering between setting requested_psscr and
checking dont_stop.  The overhead of the sync should be unnoticeable
compared to the latency of going into and out of a stop state.

Because some objected to incurring this extra latency on systems where
the XER[SO] bug is not relevant, I have put the test in
power9_idle_stop inside a feature section.  This means that
pnv_power9_force_smt4_catch() WILL NOT WORK correctly on systems
without the CPU_FTR_P9_TM_XER_SO_BUG feature bit set, and will
probably hang the system.

In order to cater for uses where the caller has an operation that
has to be done while the core is in SMT4, the core continues to be
kept in SMT4 after pnv_power9_force_smt4_catch() function returns,
until the pnv_power9_force_smt4_release() function is called.
It undoes the effect of step 1 above and allows the other threads
to go into a stop state.
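
A rough standalone sketch of the catch/release logic described in the
steps above; the field and function names are illustrative, and the
real kernel code also handles the doorbell delivery and memory
ordering details discussed here:

  #include <stdatomic.h>
  #include <stdbool.h>

  /* Per-thread state, loosely modelled on the paca fields described above.
   * Names and types are illustrative only. */
  struct thread_state {
      atomic_int dont_stop;                  /* non-zero: don't enter stop */
      _Atomic unsigned long requested_psscr; /* 0 means the thread is awake */
  };

  #define THREADS_PER_CORE 4

  /* Stub: in the kernel this would send a doorbell to wake the thread. */
  static void send_doorbell(int thread) { (void)thread; }

  static void force_smt4_catch(struct thread_state t[], int me)
  {
      int awake;

      /* Step 1: forbid the sibling threads from entering a stop state. */
      for (int i = 0; i < THREADS_PER_CORE; i++)
          if (i != me)
              atomic_fetch_add(&t[i].dont_stop, 1);

      do {
          /* Step 2: count awake threads (requested_psscr == 0). */
          awake = 0;
          for (int i = 0; i < THREADS_PER_CORE; i++)
              if (atomic_load(&t[i].requested_psscr) == 0)
                  awake++;

          /* Steps 3-4: poke the sleepers and re-check until at least
           * THREADS_PER_CORE/2 + 1 (i.e. 3) threads are awake. */
          if (awake < THREADS_PER_CORE / 2 + 1)
              for (int i = 0; i < THREADS_PER_CORE; i++)
                  if (atomic_load(&t[i].requested_psscr) != 0)
                      send_doorbell(i);
      } while (awake < THREADS_PER_CORE / 2 + 1);
  }

  static void force_smt4_release(struct thread_state t[], int me)
  {
      /* Undo step 1 so the siblings may enter stop states again. */
      for (int i = 0; i < THREADS_PER_CORE; i++)
          if (i != me)
              atomic_fetch_sub(&t[i].dont_stop, 1);
  }

  int main(void)
  {
      /* All threads start awake, so the catch returns immediately here. */
      struct thread_state core[THREADS_PER_CORE] = { { 0, 0 } };

      force_smt4_catch(core, 0);
      force_smt4_release(core, 0);
      return 0;
  }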

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Paul Mackerras [Wed, 21 Mar 2018 10:31:59 +0000 (21:31 +1100)]
powerpc: Add CPU feature bits for TM bug workarounds on POWER9 v2.2

This adds a CPU feature bit which is set for POWER9 "Nimbus" DD2.2
processors which will be used to enable the hypervisor to assist
hardware with the handling of checkpointed register values while the
CPU is in suspend state, in order to work around hardware bugs.  The
hardware assistance for these workarounds introduced a new hardware
bug relating to the XER[SO] bit.  We add a separate feature bit for
this bug in case future chips fix it while still requiring the
hypervisor assistance with suspend state.

When the dt_cpu_ftrs subsystem is in use, the software assistance can
be enabled using a "tm-suspend-hypervisor-assist" node in the device
tree, and a "tm-suspend-xer-so-bug" node enables the workarounds for
the XER[SO] bug.  In the absence of such nodes, a quirk enables both
for POWER9 "Nimbus" DD2.2 processors.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Paul Mackerras [Mon, 19 Mar 2018 21:46:13 +0000 (08:46 +1100)]
powerpc: Free up CPU feature bits on 64-bit machines

This moves all the CPU feature bits that are only used on 32-bit
machines to the top 20 bits of the CPU feature word and arranges
for them to be defined only in 32-bit builds.  The features that
are common to 32-bit and 64-bit machines are moved to bits 0-11
of the CPU feature word.  This means that for 64-bit platforms,
bits 44-63 can now be used for new features that only exist on
64-bit machines.  (These bit numbers are counting from the right,
i.e. the LSB is bit 0.)

Because CPU_FTR_L3_DISABLE_NAP moved from the low 16 bits to the high
16 bits, we have to adjust some assembly code.  Also, CPU_FTR_EMB_HV
moved from the high 16 bits to the low 16 bits.

Note that CPU_FTR_REAL_LE only applies to 64-bit chips, because only
64-bit chips (POWER6, 7, 8, 9) have a true little-endian mode that is
a CPU execution mode as opposed to being a page attribute.

With this we now have 20 free CPU feature bits on 64-bit machines.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Paul Mackerras [Mon, 19 Mar 2018 21:46:12 +0000 (08:46 +1100)]
powerpc: Book E: Remove unused CPU_FTR_L2CSR bit

The CPU_FTR_L2CSR bit is never tested anywhere, so let's reclaim the
bit.

The last usage was removed in 86d63363defc ("powerpc/e500mc: Remove
dead L2 flushing code in idle_e500.S") (Jun 2015).

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Paul Mackerras [Mon, 19 Mar 2018 21:46:11 +0000 (08:46 +1100)]
powerpc: Use feature bit for RTC presence rather than timebase presence

All PowerPC CPUs other than the original PPC601 have a timebase
register rather than the "real-time clock" (RTC) register that the
PPC601 (and the original POWER and POWER2 CPUs) had.  Currently
we have a CPU feature bit to indicate the presence of the timebase,
but it makes more sense to use a bit to indicate the unusual
situation rather than the common situation.  This therefore defines
a CPU_FTR_USE_RTC bit in place of the CPU_FTR_USE_TB bit, and
arranges for it to be set on PPC601 systems.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Rob Herring [Wed, 28 Feb 2018 22:44:06 +0000 (16:44 -0600)]
powerpc: dts: replace 'linux,stdout-path' with 'stdout-path'

'linux,stdout-path' has been deprecated for some time in favor of
'stdout-path'. Now dtc will warn on occurrences of 'linux,stdout-path'.
Search and replace all of the occurrences with 'stdout-path'.

Signed-off-by: Rob Herring <robh@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 6 Mar 2018 13:24:58 +0000 (23:24 +1000)]
selftests/powerpc: Add process creation benchmark

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Add SPDX, and fixup formatting]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Markus Elfring [Thu, 19 Jan 2017 16:15:30 +0000 (17:15 +0100)]
powerpc: Use sizeof(*foo) rather than sizeof(struct foo)

It's slightly less error-prone to use sizeof(*foo) rather than
specifying the type.
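
For illustration, the same idiom in a standalone snippet (using plain
malloc() rather than the kernel allocators):

  #include <stdlib.h>

  struct foo { int a, b; };

  int main(void)
  {
      /* If the type of 'p' ever changes, this allocation stays correct... */
      struct foo *p = malloc(sizeof(*p));

      /* ...whereas this one silently keeps allocating the old size. */
      struct foo *q = malloc(sizeof(struct foo));

      free(p);
      free(q);
      return 0;
  }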

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
[mpe: Consolidate into one patch, rewrite change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Matt Brown [Thu, 20 Jul 2017 06:25:14 +0000 (16:25 +1000)]
powerpc: Remove unused flush_dcache_phys_range()

The flush_dcache_phys_range() function is no longer used in the
kernel. The last usage was removed in c40785ad305b ("powerpc/dart: Use
a cachable DART").

This patch removes the function and declaration.

Signed-off-by: Matt Brown <matthew.brown.dev@gmail.com>
[mpe: Munge change log, include commit that removed last user]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Matt Brown [Fri, 4 Aug 2017 03:42:33 +0000 (13:42 +1000)]
lib/raid6: Build proper raid6test files on powerpc

Previously the raid6 test Makefile did not build the POWER-specific
files (altivec and vpermxor). This patch fixes the bug, so that all
appropriate files for powerpc are built.

This patch also fixes the missing and mismatched ifdef statements to
allow the altivec.uc file to be built correctly.

Signed-off-by: Matt Brown <matthew.brown.dev@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Matt Brown [Fri, 4 Aug 2017 03:42:32 +0000 (13:42 +1000)]
lib/raid6/altivec: Add vpermxor implementation for raid6 Q syndrome

This patch uses the vpermxor instruction to optimise the raid6 Q
syndrome. This instruction was made available with POWER8, ISA version
2.07. It allows for both vperm and vxor instructions to be done in a
single instruction. This has been tested for correctness on a ppc64le
vm with a basic RAID6 setup containing 5 drives.

The performance benchmarks are from the raid6test in the
/lib/raid6/test directory. These results are from an IBM Firestone
machine with ppc64le architecture. The benchmark results show a 35%
speed increase over the best existing algorithm for powerpc (altivec).
The raid6test has also been run on a big-endian ppc64 vm to ensure it
also works for big-endian architectures.

Performance benchmarks:
  raid6: altivecx4 gen() 18773 MB/s
  raid6: altivecx8 gen() 19438 MB/s

  raid6: vpermxor4 gen() 25112 MB/s
  raid6: vpermxor8 gen() 26279 MB/s

Signed-off-by: Matt Brown <matthew.brown.dev@gmail.com>
Reviewed-by: Daniel Axtens <dja@axtens.net>
[mpe: Add VPERMXOR macro so we can build with old binutils]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexandre Belloni [Fri, 16 Feb 2018 23:43:23 +0000 (00:43 +0100)]
powerpc/5200: dts: digsy_mtc.dts: fix rv3029 compatible

The proper compatible for rv3029 is microcrystal,rv3029.

Acked-by: Anatolij Gustschin <agust@denx.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexandre Belloni [Wed, 21 Feb 2018 21:46:33 +0000 (22:46 +0100)]
powerpc/time: stop validating rtc_time in .read_time

The RTC core is always calling rtc_valid_tm after the read_time callback.
It is not necessary to call it just before returning from the callback.

Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sukadev Bhattiprolu [Sat, 10 Feb 2018 03:49:27 +0000 (19:49 -0800)]
powerpc/vas: Add a couple of trace points

Add a couple of trace points in the VAS driver

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
[mpe: Add SPDX tag to new header]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sukadev Bhattiprolu [Fri, 9 Feb 2018 17:49:06 +0000 (11:49 -0600)]
powerpc/vas: Fix cleanup when VAS is not configured

When VAS is not configured, unregister the platform driver. Also simplify
cleanup by delaying vas debugfs init until we know VAS is configured.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mark Hairgrove [Sat, 10 Feb 2018 03:20:06 +0000 (19:20 -0800)]
powerpc/npu-dma.c: Fix crash after __mmu_notifier_register failure

pnv_npu2_init_context wasn't checking the return code from
__mmu_notifier_register. If __mmu_notifier_register failed, the
npu_context was still assigned to the mm and the caller wasn't given any
indication that things went wrong. Later on pnv_npu2_destroy_context would
be called, which in turn called mmu_notifier_unregister and dropped
mm->mm_count without having incremented it in the first place. This led to
various forms of corruption like mm use-after-free and mm double-free.

__mmu_notifier_register can fail with EINTR if a signal is pending, so
this case can be frequent.

This patch calls opal_npu_destroy_context on the failure paths, and makes
sure not to assign mm->context.npu_context until past the failure points.

Signed-off-by: Mark Hairgrove <mhairgrove@nvidia.com>
Acked-By: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Lombard [Tue, 20 Feb 2018 13:48:56 +0000 (14:48 +0100)]
cxl: Fix timebase synchronization status on P9

The PSL Timebase register is updated by the PSL to maintain the
timebase.

On P9, the Timebase value is only provided by the CAPP, reflecting what
was received the last time a timebase request was performed.

The timebase requests are initiated through the adapter configuration
or application registers.

The specific sysfs entry "/sys/class/cxl/cardxx/psl_timebase_synced"
is now dynamically updated according to the content of the PSL Timebase
register.

Fixes: f24be42aab37 ("cxl: Add psl9 specific code")
Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
Reviewed-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 7 Mar 2018 01:37:17 +0000 (11:37 +1000)]
powerpc/mm/slice: remove radix calls to the slice code

This is a tidy up which removes radix MMU calls into the slice
code.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 7 Mar 2018 01:37:16 +0000 (11:37 +1000)]
powerpc/mm/slice: Use const pointers to cached slice masks where possible

The slice_mask cache was a basic conversion which copied the slice
mask into the caller's structures, because that's how the original code
worked. In most cases the pointer can be used directly instead, saving
a copy and an on-stack structure.

On POWER8, this increases vfork+exec+exit performance by 0.3%
and reduces time to mmap+munmap a 64kB page by 2%.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 7 Mar 2018 01:37:15 +0000 (11:37 +1000)]
powerpc/mm/slice: remove dead code

This code is never compiled in, and it gets broken by the next
patch, so remove it.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 7 Mar 2018 01:37:14 +0000 (11:37 +1000)]
powerpc/mm/slice: Switch to 3-operand slice bitops helpers

This converts the slice_mask bit operation helpers to be the usual
3-operand kind, which allows 2 inputs to set a different output
without an extra copy, which is used in the next patch.

Adds slice_copy_mask, which will be used in the next patch.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 7 Mar 2018 01:37:13 +0000 (11:37 +1000)]
powerpc/mm/slice: implement slice_check_range_fits

Rather than building slice masks from a range and then using them to
check for fit in a candidate mask, implement slice_check_range_fits(),
which checks whether a range fits in a mask directly.

This allows several structures to be removed from stacks. Also, we
don't expect a huge range in most of these cases, so building and
comparing a full mask is going to be more expensive than testing just
one or two bits of the range.

On POWER8, this increases vfork+exec+exit performance by 0.3%
and reduces time to mmap+munmap a 64kB page by 5%.
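
A simplified standalone sketch of the idea, using a single 64-bit word
as the slice mask rather than the kernel's separate low/high slice
bitmaps:

  #include <stdbool.h>
  #include <stdio.h>

  /* Simplified model: one bit per slice in a single 64-bit word. */
  typedef unsigned long long slice_mask_t;

  /* Check whether slices [start, start + len) are all set in the mask,
   * testing only the bits of the range instead of building a second mask. */
  static bool slice_check_range_fits(slice_mask_t available,
                                     unsigned int start, unsigned int len)
  {
      if (start + len > 64)
          return false;

      for (unsigned int i = start; i < start + len; i++)
          if (!(available & (1ULL << i)))
              return false;
      return true;
  }

  int main(void)
  {
      slice_mask_t mask = 0x0fULL;   /* slices 0-3 available */

      printf("%d %d\n",
             slice_check_range_fits(mask, 1, 2),   /* 1: fits */
             slice_check_range_fits(mask, 3, 2));  /* 0: slice 4 not available */
      return 0;
  }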

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 7 Mar 2018 01:37:12 +0000 (11:37 +1000)]
powerpc/mm/slice: implement a slice mask cache

Calculating the slice mask can become a significant overhead for
get_unmapped_area. This patch adds a struct slice_mask for
each page size in the mm_context, and keeps these in sync with
the slices psize arrays and slb_addr_limit.

On Book3S/64 this adds 288 bytes to the mm_context_t for the
slice mask caches.

On POWER8, this increases vfork+exec+exit performance by 9.9%
and reduces time to mmap+munmap a 64kB page by 28%.

Reduces time to mmap+munmap by about 10% on 8xx.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 7 Mar 2018 01:37:11 +0000 (11:37 +1000)]
powerpc/mm/slice: pass pointers to struct slice_mask where possible

Pass around const pointers to struct slice_mask where possible, rather
than copies of slice_mask, to reduce stack and call overhead.

checkstack.pl gives, before:
0x00000d1c slice_get_unmapped_area [slice.o]: 592
0x00001864 is_hugepage_only_range [slice.o]: 448
0x00000754 slice_find_area_topdown [slice.o]: 400
0x00000484 slice_find_area_bottomup.isra.1 [slice.o]: 272
0x000017b4 slice_set_range_psize [slice.o]: 224
0x00000a4c slice_find_area [slice.o]: 128
0x00000160 slice_check_fit [slice.o]: 112

after:
0x00000ad0 slice_get_unmapped_area [slice.o]: 448
0x00001464 is_hugepage_only_range [slice.o]: 288
0x000006c0 slice_find_area [slice.o]: 144
0x0000016c slice_check_fit [slice.o]: 128
0x00000528 slice_find_area_bottomup.isra.2 [slice.o]: 128
0x000013e4 slice_set_range_psize [slice.o]: 128

This increases vfork+exec+exit performance by 1.5%.

Reduces time to mmap+munmap a 64kB page by 17%.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 7 Mar 2018 01:37:10 +0000 (11:37 +1000)]
powerpc/mm/slice: tidy lpsizes and hpsizes update loops

Make these loops look the same, and change their form so the
important part is not wrapped over so many lines.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 7 Mar 2018 01:37:09 +0000 (11:37 +1000)]
powerpc/mm/slice: Simplify and optimise slice context initialisation

The slice state of an mm gets zeroed and then initialised upon exec.
This is now the only caller of slice_set_user_psize, so that function
can be removed and replaced with a faster, simplified approach that
requires no locking or checking of existing state.

This speeds up vfork+exec+exit performance on POWER8 by 3%.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 8 Mar 2018 02:54:42 +0000 (13:54 +1100)]
powerpc/xmon: Move empty plpar_set_ciabr() into plpar_wrappers.h

Now that plpar_wrappers.h has an #ifdef PSERIES we can move the empty
version of plpar_set_ciabr() which xmon wants into there.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 8 Mar 2018 02:54:41 +0000 (13:54 +1100)]
powerpc: Rename plapr routines to plpar

Back in 2013 we added some hypercall wrappers which misspelled
"plpar" (P-series Logical PARtition) as "plapr".

Visually they're hard to distinguish and it almost doesn't matter, but
it is confusing when grepping to miss some calls because of the typo.

They've also started spreading, so before they take over let's fix
them all to be "plpar".

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 8 Mar 2018 02:54:40 +0000 (13:54 +1100)]
powerpc/pseries: Make plpar_wrappers.h safe to include when PSERIES=n

Currently plpar_wrappers.h is not always safe to include when
CONFIG_PPC_PSERIES=n; whether it is depends on other config options
and so on.

Fix that by wrapping the entire content in an ifdef.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 8 Mar 2018 02:54:39 +0000 (13:54 +1100)]
powerpc/pseries: Move smp_query_cpu_stopped() etc. out of plpar_wrappers.h

smp_query_cpu_stopped() and related #defines are currently in
plpar_wrappers.h. The function actually does an RTAS call, not an
hcall, and basically has nothing to do with plpar_wrappers.h.

Move it into pseries.h, where it can easily be used by the only two
callers in pseries/smp.c and pseries/hotplug-cpu.c.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mathieu Malaterre [Thu, 8 Mar 2018 11:31:59 +0000 (22:31 +1100)]
powerpc/32: Add missing prototypes for (early|machine)_init()

early_init() and machine_init() have no prototype, add one in
asm-prototypes.h.

Fixes the following warnings (treated as error in W=1):
  arch/powerpc/kernel/setup_32.c:68:30: error: no previous prototype for ‘early_init’
  arch/powerpc/kernel/setup_32.c:99:21: error: no previous prototype for ‘machine_init’

Signed-off-by: Mathieu Malaterre <malat@debian.org>
[mpe: Move them to asm-prototypes.h, drop other functions]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mathieu Malaterre [Wed, 7 Mar 2018 20:32:55 +0000 (21:32 +0100)]
powerpc/32: Make some functions static

These functions can all be static, make it so.

Signed-off-by: Mathieu Malaterre <malat@debian.org>
[mpe: Combine a patch of Mathieu's with some other static conversions]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mathieu Malaterre [Fri, 2 Mar 2018 19:50:51 +0000 (20:50 +0100)]
powerpc: Avoid comparison of unsigned long >= 0 in __access_ok()

Rewrite the function-like macro as a regular static inline function to
avoid a warning during macro expansion.

Fix warning (treated as error in W=1):
./arch/powerpc/include/asm/uaccess.h:52:35: error: comparison of unsigned expression >= 0 is always true
   (((size) == 0) || (((size) - 1) <= ((segment).seg - (addr)))))
                                   ^
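
A generic standalone sketch of the macro-to-static-inline rewrite; it
is simplified, and the real __access_ok() also checks the address
against the segment and uses kernel types:

  #include <stdbool.h>

  /* Macro form (not the real kernel macro): the comparison is expanded
   * textually, so the compiler warns about it for some argument
   * combinations even though the branch is never reached. */
  #define MY_ACCESS_OK(addr, size, seg_limit) \
      (((size) == 0) || (((size) - 1) <= ((seg_limit) - (addr))))

  /* Static inline form: same check, but evaluated as ordinary code,
   * which avoids the spurious warning during macro expansion. */
  static inline bool my_access_ok(unsigned long addr, unsigned long size,
                                  unsigned long seg_limit)
  {
      return size == 0 || size - 1 <= seg_limit - addr;
  }

  int main(void)
  {
      unsigned long limit = 0x0000400000000000UL;

      return !(MY_ACCESS_OK(0x1000UL, 16UL, limit) &&
               my_access_ok(0x1000UL, 16UL, limit));
  }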

Suggested-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mathieu Malaterre [Wed, 7 Mar 2018 20:34:35 +0000 (21:34 +0100)]
powerpc: Avoid comparison of unsigned long >= 0 in pfn_valid()

Rewrite the comparison, since all values compared are of type
`unsigned long`.

Instead of relying on the unsigned arithmetic and rewriting the
original code (as originally suggested by Segher Boessenkool
<segher@kernel.crashing.org>) as:

  #define pfn_valid(pfn) \
               (((pfn) - ARCH_PFN_OFFSET) < (max_mapnr - ARCH_PFN_OFFSET))

Prefer a static inline function to make code as readable as possible.

Fix a warning (treated as error in W=1):
  arch/powerpc/include/asm/page.h:129:32: error: comparison of unsigned expression >= 0 is always true [-Werror=type-limits]
  #define pfn_valid(pfn)  ((pfn) >= ARCH_PFN_OFFSET && (pfn) < max_mapnr)
                                  ^
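
A standalone sketch of the static inline form; ARCH_PFN_OFFSET and
max_mapnr are stand-ins here, not the kernel's symbols:

  #include <stdbool.h>

  #define ARCH_PFN_OFFSET 0UL                   /* stand-in; zero triggers the warning */
  static unsigned long max_mapnr = 1UL << 20;   /* stand-in */

  /* Macro version: "(pfn) >= ARCH_PFN_OFFSET" expands to ">= 0", which is
   * always true for an unsigned value, hence the -Wtype-limits warning. */
  #define MY_PFN_VALID(pfn)  ((pfn) >= ARCH_PFN_OFFSET && (pfn) < max_mapnr)

  /* Static inline version: comparing through a local variable keeps the
   * same check but avoids the warning, and reads more clearly. */
  static inline bool my_pfn_valid(unsigned long pfn)
  {
      unsigned long min_pfn = ARCH_PFN_OFFSET;

      return pfn >= min_pfn && pfn < max_mapnr;
  }

  int main(void)
  {
      return !(MY_PFN_VALID(42UL) && my_pfn_valid(42UL));
  }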

Suggested-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mathieu Malaterre [Fri, 2 Mar 2018 19:49:18 +0000 (20:49 +0100)]
powerpc/prom: Remove warning on array size when empty

When none of CONFIG_ALTIVEC, CONFIG_VSX or CONFIG_PPC64 is defined,
the array feature_properties is defined as an empty array, which in
turn triggers the following warning (treated as an error with W=1):

  arch/powerpc/kernel/prom.c: In function ‘check_cpu_feature_properties’:
  arch/powerpc/kernel/prom.c:298:16: error: comparison of unsigned expression < 0 is always false
    for (i = 0; i < ARRAY_SIZE(feature_properties); ++i, ++fp) {
                  ^

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mathieu Malaterre [Sun, 25 Feb 2018 17:22:35 +0000 (18:22 +0100)]
powerpc: Add missing prototypes for ppc_select() & ppc_fadvise64_64()

Add missing prototypes for ppc_select() & ppc_fadvise64_64() to header
asm-prototypes.h. Fix the following warnings (treated as errors in W=1):

  arch/powerpc/kernel/syscalls.c:87:1: error: no previous prototype for ‘ppc_select’
  arch/powerpc/kernel/syscalls.c:119:6: error: no previous prototype for ‘ppc_fadvise64_64’

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mathieu Malaterre [Sun, 25 Feb 2018 17:22:34 +0000 (18:22 +0100)]
powerpc: Add missing prototypes for hw_breakpoint_handler() & arch_unregister_hw_breakpoint()

In commit 5aae8a537080 ("powerpc, hw_breakpoints: Implement
hw_breakpoints for 64-bit server processors") the functions
hw_breakpoint_handler() and arch_unregister_hw_breakpoint() were added
without prototypes in the hw_breakpoint.h header.

Fix the following warning(s) (treated as error in W=1):
  arch/powerpc/kernel/hw_breakpoint.c:106:6: error: no previous prototype for ‘arch_unregister_hw_breakpoint’
  arch/powerpc/kernel/hw_breakpoint.c:209:5: error: no previous prototype for ‘hw_breakpoint_handler’

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mathieu Malaterre [Sun, 25 Feb 2018 17:22:33 +0000 (18:22 +0100)]
powerpc: Add missing prototypes for sys_sigreturn() & sys_rt_sigreturn()

Two functions did not have prototypes in the signal.h header. Fix the
following two warnings (treated as errors in W=1):

  arch/powerpc/kernel/signal_32.c:1135:6: error: no previous prototype for ‘sys_rt_sigreturn’
  arch/powerpc/kernel/signal_32.c:1422:6: error: no previous prototype for ‘sys_sigreturn’

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Add missing prototype for sys_debug_setcontext()
Mathieu Malaterre [Sun, 25 Feb 2018 17:22:32 +0000 (18:22 +0100)]
powerpc: Add missing prototype for sys_debug_setcontext()

In commit 81e7009ea46c ("powerpc: merge ppc signal.c and ppc64
signal32.c") the function sys_debug_setcontext was added without a
prototype.

Fix compilation warning (treated as error in W=1):
  arch/powerpc/kernel/signal_32.c:1227:5: error: no previous prototype for ‘sys_debug_setcontext’

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Add missing prototype for init_IRQ()
Mathieu Malaterre [Sun, 25 Feb 2018 17:22:31 +0000 (18:22 +0100)]
powerpc: Add missing prototype for init_IRQ()

The function init_IRQ() was added without a prototype declared in the
irq.h header. Fix the following warning (treated as error in W=1):

  arch/powerpc/kernel/irq.c:662:13: error: no previous prototype for ‘init_IRQ’

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Add missing prototype for arch_irq_work_raise()
Mathieu Malaterre [Sun, 25 Feb 2018 17:22:29 +0000 (18:22 +0100)]
powerpc: Add missing prototype for arch_irq_work_raise()

In commit 4f8b50bbbe63 ("irq_work, ppc: Fix up arch hooks") a new
function arch_irq_work_raise() was added without a prototype in the
irq_work.h header.

Fix the following warning (treated as error in W=1):
  arch/powerpc/kernel/time.c:523:6: error: no previous prototype for ‘arch_irq_work_raise’

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Add missing prototype for arch_dup_task_struct()
Mathieu Malaterre [Sun, 25 Feb 2018 17:22:28 +0000 (18:22 +0100)]
powerpc: Add missing prototype for arch_dup_task_struct()

In commit 55ccf3fe3f9a ("fork: move the real prepare_to_copy() users to
arch_dup_task_struct()") a new arch_dup_task_struct() was added without
a prototype declared in the thread_info.h header. Fix the following
warning (treated as error in W=1):

  arch/powerpc/kernel/process.c:1609:5: error: no previous prototype for ‘arch_dup_task_struct’

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Add missing prototype for time_init()
Mathieu Malaterre [Sun, 25 Feb 2018 17:22:27 +0000 (18:22 +0100)]
powerpc: Add missing prototype for time_init()

The function time_init did not have a prototype defined in the time.h
header. Fix the following warning (treated as error in W=1):

  arch/powerpc/kernel/time.c:1068:13: error: no previous prototype for ‘time_init’

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Add missing prototype for hdec_interrupt
Mathieu Malaterre [Sun, 25 Feb 2018 17:22:26 +0000 (18:22 +0100)]
powerpc: Add missing prototype for hdec_interrupt

In commit dabe859ec636 ("powerpc: Give hypervisor decrementer interrupts
their own handler") a function with an empty body was added, but no
prototype was declared. Fix the following warning (treated as error in
W=1):

  arch/powerpc/kernel/time.c:629:6: error: no previous prototype for ‘hdec_interrupt’

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Add missing prototype for slb_miss_bad_addr()
Mathieu Malaterre [Sun, 25 Feb 2018 17:22:25 +0000 (18:22 +0100)]
powerpc: Add missing prototype for slb_miss_bad_addr()

In commit f0f558b131db ("powerpc/mm: Preserve CFAR value on SLB miss caused
by access to bogus address"), the function slb_miss_bad_addr() was added
without a prototype. This commit adds it.

Fix a warning (treated as error in W=1):
  arch/powerpc/kernel/traps.c:1498:6: error: no previous prototype for ‘slb_miss_bad_addr’

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/kernel: Make function __giveup_fpu() static
Mathieu Malaterre [Sun, 25 Feb 2018 17:22:23 +0000 (18:22 +0100)]
powerpc/kernel: Make function __giveup_fpu() static

__giveup_fpu() is never called outside process.c, so it can be static.
That also means we don't need an empty definition in switch_to.h.
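
The core of the change is simply (a sketch; the argument type is taken
from the existing definition in process.c and is an assumption here):

  -void __giveup_fpu(struct task_struct *tsk)
  +static void __giveup_fpu(struct task_struct *tsk)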

Signed-off-by: Mathieu Malaterre <malat@debian.org>
[mpe: Also drop the empty version, rewrite change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/embedded6xx: Make functions flipper_pic_init() & ug_udbg_putc() static
Mathieu Malaterre [Sun, 25 Feb 2018 17:22:22 +0000 (18:22 +0100)]
powerpc/embedded6xx: Make functions flipper_pic_init() & ug_udbg_putc() static

Change the signatures of two functions, adding the static keyword, to
prevent the following two warnings (treated as errors with W=1):

  arch/powerpc/platforms/embedded6xx/flipper-pic.c:135:28: error: no previous prototype for ‘flipper_pic_init’
  arch/powerpc/platforms/embedded6xx/usbgecko_udbg.c:172:6: error: no previous prototype for ‘ug_udbg_putc’

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/32: Mark both tmp variables as unused
Mathieu Malaterre [Sun, 25 Feb 2018 17:22:19 +0000 (18:22 +0100)]
powerpc/32: Mark both tmp variables as unused

Since the value of `tmp` is never intended to be read, declare both
`tmp` variables as unused. Fix the following warnings (treated as
errors in W=1):

  arch/powerpc/kernel/signal_32.c: In function ‘sys_swapcontext’:
  arch/powerpc/kernel/signal_32.c:1048:16: error: variable ‘tmp’ set but not used
  arch/powerpc/kernel/signal_32.c: In function ‘sys_debug_setcontext’:
  arch/powerpc/kernel/signal_32.c:1234:16: error: variable ‘tmp’ set but not used
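
One common way to express this (a sketch; the exact annotation used may
differ) is the __maybe_unused attribute, which suppresses both
-Wunused-variable and -Wunused-but-set-variable:

  unsigned long tmp __maybe_unused;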

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/32: Move the inline keyword at the beginning of function declaration
Mathieu Malaterre [Sun, 25 Feb 2018 17:22:17 +0000 (18:22 +0100)]
powerpc/32: Move the inline keyword at the beginning of function declaration

The inline keyword was not at the beginning of the function
declarations. Fix the following warnings (treated as errors in W=1):

  arch/powerpc/lib/sstep.c:283:1: error: ‘inline’ is not at beginning of declaration
   static int nokprobe_inline copy_mem_in(u8 *dest, unsigned long ea, int nb,
  arch/powerpc/lib/sstep.c:388:1: error: ‘inline’ is not at beginning of declaration
   static int nokprobe_inline copy_mem_out(u8 *dest, unsigned long ea, int nb,
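
The fix is to move the inline-bearing keyword ahead of the return type,
for example:

  -static int nokprobe_inline copy_mem_in(u8 *dest, unsigned long ea, int nb,
  +static nokprobe_inline int copy_mem_in(u8 *dest, unsigned long ea, int nb,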

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/epapr: Move register keyword at the beginning of declaration
Mathieu Malaterre [Wed, 31 Jan 2018 07:54:43 +0000 (08:54 +0100)]
powerpc/epapr: Move register keyword at the beginning of declaration

Fix the warnings for all of the `register unsigned long` variables
(r0, r3-r12) that appear during a W=1 compilation:

./arch/powerpc/include/asm/epapr_hcalls.h:479:2: warning: ‘register’ is not at beginning of declaration [-Wold-style-declaration]
  unsigned long register r[\d] asm("r[\d]");
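
The fix is the same reordering for each affected declaration, for
example (representative sketch):

  -	unsigned long register r11 asm("r11");
  +	register unsigned long r11 asm("r11");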

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoPCI/hotplug: ppc: correct a php_slot usage after free
Simon Guo [Wed, 7 Mar 2018 08:46:04 +0000 (16:46 +0800)]
PCI/hotplug: ppc: correct a php_slot usage after free

In pnv_php_unregister_one(), pnv_php_put_slot() may kfree() the
php_slot structure, but pci_hp_deregister() is called after it and
still uses the php_slot reference.

This patch moves pnv_php_put_slot() to the end of the function.
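
A sketch of the reordering (other cleanup in the function is elided,
and the field name used in the deregister call is an assumption):

  -	pnv_php_put_slot(php_slot);
  	pci_hp_deregister(&php_slot->slot);
  	...
  +	pnv_php_put_slot(php_slot);	/* may free php_slot, so do it last */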

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/powernv/mce: Don't silently restart the machine
Balbir Singh [Thu, 8 Mar 2018 00:36:06 +0000 (11:36 +1100)]
powerpc/powernv/mce: Don't silently restart the machine

On an MCE the current code restarts the machine via ppc_md.restart().
Previously this path was extremely unlikely to be taken, because a
skiboot call was made first and that resulted in a checkstop for
analysis.

With newer skiboot, on P9 we don't checkstop the box by default;
instead we return to the kernel to extract useful information at the
time of the MCE. We still get that information, but this patch converts
the restart into a panic(), so that, if configured, a dump can be taken
and the potential issue causing the MCE can be tracked down and
debugged.
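
The change itself is essentially (the panic message below is
illustrative only):

  -	ppc_md.restart(NULL);
  +	panic("PowerNV Unrecovered Machine Check");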

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Stewart Smith <stewart@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agocxl: read PHB indications from the device tree
Philippe Bergheaud [Fri, 2 Mar 2018 09:56:12 +0000 (10:56 +0100)]
cxl: read PHB indications from the device tree

Configure the P9 XSL_DSNCTL register with the PHB indications found in
the device tree, or else fall back to the legacy hard-coded values.
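
A minimal sketch of the read-with-fallback pattern (the property name,
cell layout and legacy defaults below are purely illustrative
assumptions, not the actual bindings):

  u64 phb_ind[3];	/* e.g. capi, asn and nbw indications */

  if (of_property_read_u64_array(np, "ibm,phb-indications", phb_ind,
  				 ARRAY_SIZE(phb_ind))) {
  	/* property absent: fall back to the legacy hard-coded values */
  	phb_ind[0] = CAPIIND_LEGACY;
  	phb_ind[1] = ASNIND_LEGACY;
  	phb_ind[2] = NBWIND_LEGACY;
  }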

Signed-off-by: Philippe Bergheaud <felix@linux.vnet.ibm.com>
Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Reviewed-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/powernv: Enable tunneled operations
Philippe Bergheaud [Fri, 2 Mar 2018 09:56:11 +0000 (10:56 +0100)]
powerpc/powernv: Enable tunneled operations

P9 supports PCI tunneled operations (atomics and as_notify). This patch
adds support for tunneled operations on powernv, with a new API to be
called by device drivers (a usage sketch follows the list):

pnv_pci_enable_tunnel()
   Enable tunnel operations and tell the driver the 16-bit ASN
   indication used by the kernel.

pnv_pci_disable_tunnel()
   Disable tunnel operations.

pnv_pci_set_tunnel_bar()
   Tell the kernel the Tunnel BAR Response address used by the driver.
   This function uses two new OPAL calls, as the PBCQ Tunnel BAR
   register is configured by skiboot.

pnv_pci_get_as_notify_info()
   Return the ASN info of the thread to be woken up.
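
A rough driver-side usage sketch (prototypes, arguments and error
handling are assumptions, shown only to illustrate the call flow):

  /* pdev is the device; tbr_addr is the driver's Tunnel BAR Response address */
  u64 asnind;
  int rc;

  rc = pnv_pci_enable_tunnel(pdev, &asnind);	/* get the ASN indication */
  if (!rc)
  	rc = pnv_pci_set_tunnel_bar(pdev, tbr_addr, 1);	/* enable TBR */

  /* ... tunneled operations in flight ... */

  pnv_pci_set_tunnel_bar(pdev, tbr_addr, 0);	/* disable TBR */
  pnv_pci_disable_tunnel(pdev);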

Signed-off-by: Philippe Bergheaud <felix@linux.vnet.ibm.com>
Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Reviewed-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/powernv/npu: Fix deadlock in mmio_invalidate()
Alistair Popple [Fri, 2 Mar 2018 05:18:45 +0000 (16:18 +1100)]
powerpc/powernv/npu: Fix deadlock in mmio_invalidate()

When sending TLB invalidates to the NPU we need to send extra flushes
due to a hardware issue. The original implementation locked all the
ATSD MMIO registers sequentially, then unlocked and relocked each of
them sequentially to do the extra flush.

This introduced a deadlock: one thread can hold one ATSD register while
waiting for a second register to be freed, while another thread holds
that second register and waits for the first to be freed.

For example if there are two threads and two ATSD registers:

  Thread A              Thread B
  --------------------------------------
  Acquire 1
  Acquire 2
  Release 1             Acquire 1
  Wait 1                Wait 2

Both threads will be stuck waiting to acquire a register resulting in an
RCU stall warning or soft lockup.

This patch solves the deadlock by refactoring the code to ensure registers
are not released between flushes and to ensure all registers are either
acquired or released together and in order.
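
Conceptually the new rule looks like this (a sketch; helper names and
data structures are illustrative, not the exact code):

  /* acquire every ATSD register, in a fixed order, up front ... */
  for (i = 0; i < nmmio; i++)
  	acquire_atsd_reg(npu_context, &mmio_atsd_reg[i]);

  /* ... issue the invalidate plus the extra flushes on all of them ... */

  /* ... and only then release them, again together and in order */
  for (i = 0; i < nmmio; i++)
  	release_atsd_reg(&mmio_atsd_reg[i]);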

Fixes: bbd5ff50afff ("powerpc/powernv/npu-dma: Add explicit flush when sending an ATSD")
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/8xx: fix cpm_cascade() dual end of interrupt
Christophe Leroy [Tue, 27 Feb 2018 11:25:55 +0000 (12:25 +0100)]
powerpc/8xx: fix cpm_cascade() dual end of interrupt

cpm_cascade() doesn't have to call eoi(), as it is already called by
handle_fasteoi_irq().

And cpm_get_irq() will always return an unsigned int, so the test is
useless.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm: Drop the function native_register_proc_table()
Anshuman Khandual [Sat, 3 Mar 2018 09:24:02 +0000 (14:54 +0530)]
powerpc/mm: Drop the function native_register_proc_table()

This is left over from the segment table implementation and is no
longer called from anywhere, so just drop it.

Suggested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agocxl: Check if PSL data-cache is available before issue flush request
Vaibhav Jain [Thu, 15 Feb 2018 15:49:24 +0000 (21:19 +0530)]
cxl: Check if PSL data-cache is available before issue flush request

PSL9D doesn't have a data-cache that needs to be flushed before
resetting the card. However, when cxl tries to flush the data-cache on
such a card, it times out, as the PSL_Control register never indicates
that the flush operation has completed due to the missing data-cache.
This is usually indicated in the kernel logs with this message:

"WARNING: cache flush timed out"

To fix this, the patch checks the PSL_Debug register CDC field (bit
27), which indicates the absence of a data-cache, and sets a flag
'no_data_cache' in 'struct cxl_native' to record this. When
cxl_data_cache_flush() is called it checks the flag and, if set, bails
out early without requesting a data-cache flush from the PSL.
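
In sketch form (register accessors, bit definitions and the flag's
exact location are assumptions):

  /* at PSL init time */
  psl_debug = cxl_p1_read(adapter, CXL_PSL9_DEBUG);
  adapter->native->no_data_cache = !!(psl_debug & CXL_PSL_DEBUG_CDC);

  /* in cxl_data_cache_flush(): bail out early if there is no cache */
  if (adapter->native->no_data_cache)
  	return 0;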

Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agocxl: Remove function write_timebase_ctrl_psl9() for PSL9
Vaibhav Jain [Thu, 15 Feb 2018 06:19:36 +0000 (11:49 +0530)]
cxl: Remove function write_timebase_ctrl_psl9() for PSL9

For PSL9 the contents of the PSL_TB_CTLSTAT register have changed and
the whole register is now read-only. Hence we no longer need an sl_ops
implementation of 'write_timebase_ctrl' to populate this register for
PSL9.

This patch therefore removes write_timebase_ctrl_psl9() and its
references from the code.

Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agocxl: Enable NORST bit in PSL_DEBUG register for PSL9
Vaibhav Jain [Fri, 9 Feb 2018 04:09:16 +0000 (09:39 +0530)]
cxl: Enable NORST bit in PSL_DEBUG register for PSL9

Enable the NORST bit by default for debug AFU images, to prevent the
AFU trace data from being reset on a PCI link drop. For production AFU
images this bit is always ignored and the PSL gets reconfigured anyway,
thereby resetting the trace data, so setting the bit for non-debug
images has no impact.

Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Reviewed-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/xmon: Clear all breakpoints when xmon is disabled via debugfs
Vaibhav Jain [Sun, 4 Mar 2018 17:31:32 +0000 (23:01 +0530)]
powerpc/xmon: Clear all breakpoints when xmon is disabled via debugfs

Presently, when xmon is disabled via debugfs, any existing instruction
or data-access breakpoints are not disabled. This may lead to a kernel
oops when those breakpoints are hit, as the necessary debugger hooks
aren't installed.

Hence this patch introduces a new function, clear_all_bpt(), which is
called when xmon is disabled via debugfs. It unpatches/clears all the
trap and ciabr/dabr based breakpoints.
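
In outline (a sketch only; xmon's breakpoint bookkeeping is assumed,
and the actual unpatching of trap instructions and clearing of the
hardware ciabr/dabr registers is elided):

  static void clear_all_bpt(void)
  {
  	int i;

  	/* mark every instruction breakpoint slot as disabled */
  	for (i = 0; i < NBPTS; ++i)
  		bpts[i].enabled = 0;

  	/* likewise for the instruction/data address breakpoints */
  	iabr = NULL;
  	dabr.enabled = 0;
  }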

Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
[mpe: Fix build break when CONFIG_DEBUG_FS=n]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/xmon: Setup debugger hooks when first break-point is set
Vaibhav Jain [Sun, 4 Mar 2018 17:30:25 +0000 (23:00 +0530)]
powerpc/xmon: Setup debugger hooks when first break-point is set

Presently the sysrq key for xmon ('x') is registered during kernel init
irrespective of the value of the kernel parameter 'xmon'. Thus xmon is
enabled even if 'xmon=off' is passed on the kernel command line.
However this doesn't enable the kernel debugger hooks needed for
instruction or data breakpoints, so when a breakpoint is hit with
xmon=off a kernel oops of the form below is reported:

  Oops: Exception in kernel mode, sig: 5 [#1]
  < snip >
  Trace/breakpoint trap

To fix this, the patch checks for and enables the debugger hooks when
an instruction or data breakpoint is set via the xmon console.

Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
[mpe: Just printf directly, no need for static const char[]]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoselftests/powerpc: Skip tm-unavailable if TM is not enabled
Gustavo Romero [Mon, 5 Mar 2018 20:48:55 +0000 (15:48 -0500)]
selftests/powerpc: Skip tm-unavailable if TM is not enabled

Some processor revisions do not support transactional memory, and
additionally kernel support can be disabled. In either case the
tm-unavailable test should be skipped, otherwise it will fail with
a SIGILL.

This commit also makes the selftest run through the test harness, as is
done for the other TM selftests.

Finally, it avoids using "ping" as a thread name since it's
ambiguous and can be confusing when shown, for instance,
in a kernel backtrace log.
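
The skip itself can be expressed with the usual powerpc selftest
helpers, roughly as below (a sketch; the test entry point name is
illustrative):

  #include "tm.h"	/* have_htm() */
  #include "utils.h"	/* SKIP_IF(), test_harness() */

  int tm_unavailable_test(void)
  {
  	/* skip, rather than SIGILL, when HTM is not usable */
  	SKIP_IF(!have_htm());

  	/* ... rest of the test ... */
  	return 0;
  }

  int main(int argc, char **argv)
  {
  	return test_harness(tm_unavailable_test, "tm_unavailable_test");
  }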

Fixes: 77fad8bfb1d2 ("selftests/powerpc: Check FP/VEC on exception in TM")
Signed-off-by: Gustavo Romero <gromero@linux.vnet.ibm.com>
Reviewed-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>