Commit Graph

72 Commits

Richard Henderson
a22a2a8b71 target/arm: Introduce core_to_aa64_mmu_idx
If by context we know that we're in AArch64 mode, we need not
test for M-profile when reconstructing the full ARMMMUIdx.
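
A minimal sketch of such a helper, assuming the existing ARM_MMU_IDX_A flag
that marks A-profile core indexes:

    static inline ARMMMUIdx core_to_aa64_mmu_idx(int mmu_idx)
    {
        /* AArch64 is always A-profile, so the M-profile branch taken
         * by core_to_arm_mmu_idx() can be skipped entirely. */
        return mmu_idx | ARM_MMU_IDX_A;
    }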

Backports commit 20dc67c947a691fa9df05e76aec6df50204b4b94 from qemu
2020-04-30 05:58:59 -04:00
Peter Maydell
cef6f3e72c target/arm: Move DBGDIDR into ARMISARegisters
We're going to want to read the DBGDIDR register from KVM in
a subsequent commit, which means it needs to be in the
ARMISARegisters sub-struct. Move it.

Backports commit 4426d3617d64922d97b74ed22e67e33b6fb7de0a from qemu
2020-03-21 18:29:01 -04:00
Peter Maydell
a6c9c87a5d target/arm: Stop assuming DBGDIDR always exists
The AArch32 DBGDIDR defines properties like the number of
breakpoints, watchpoints and context-matching comparators. On an
AArch64 CPU, the register may not even exist if AArch32 is not
supported at EL1.

Currently we hard-code use of DBGDIDR to identify the number of
breakpoints etc; this works for all our TCG CPUs, but will break if
we ever add an AArch64-only CPU. We also have an assert() that the
AArch32 and AArch64 registers match, which currently works only by
luck for KVM because we don't populate either of these ID registers
from the KVM vCPU and so they are both zero.

Clean this up so we have functions for finding the number
of breakpoints, watchpoints and context comparators which look
in the appropriate ID register.
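
A sketch of one such accessor, assuming DBGDIDR has been moved into the
ARMISARegisters struct (see the commit above) and using QEMU's
FIELD_EX32/FIELD_EX64 field-extraction macros:

    static inline int arm_num_brps(ARMCPU *cpu)
    {
        /* Read whichever ID register is architecturally guaranteed
         * to exist for this CPU; both fields encode "count - 1". */
        if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
            return FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, BRPS) + 1;
        } else {
            return FIELD_EX32(cpu->isar.dbgdidr, DBGDIDR, BRPS) + 1;
        }
    }

arm_num_wrps() and arm_num_ctx_cmps() follow the same pattern with the
WRPS and CTX_CMPS fields.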

This allows us to drop the "check that AArch64 and AArch32 agree
on the number of breakpoints etc" asserts:
* we no longer look at the AArch32 versions unless that's the
right place to be looking
* it's valid to have a CPU (eg AArch64-only) where they don't match
* we shouldn't have been asserting the validity of ID registers
in a codepath used with KVM anyway

Backports commit 88ce6c6ee85d902f59dc65afc3ca86b34f02b9ed from qemu
2020-03-21 18:26:24 -04:00
Peter Maydell
e63f70f980 target/arm: Add _aa32_ to isar_feature functions testing 32-bit ID registers
Enforce a convention that an isar_feature function that tests a
32-bit ID register always has _aa32_ in its name, and one that
tests a 64-bit ID register always has _aa64_ in its name.
We already follow this except for three cases: thumb_div,
arm_div and jazelle, which all need _aa32_ adding.
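
For example, the renamed Jazelle test would read roughly like this
(sketch; assumes QEMU's FIELD_EX32 macro and the ID_ISAR1 field name):

    static inline bool isar_feature_aa32_jazelle(const ARMISARegisters *id)
    {
        /* _aa32_ because this inspects the 32-bit ID_ISAR1 register. */
        return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
    }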

(As noted in the comment, isar_feature_aa32_fp16_arith()
is an exception in that it currently tests ID_AA64PFR0_EL1,
but will switch to MVFR1 once we've properly implemented
FP16 for AArch32.)

Backports commit 873b73c0c891ec20adacc7bd1ae789294334d675 from qemu
2020-03-21 18:08:23 -04:00
Richard Henderson
0131e804fb target/arm: Split out aa64_va_parameter_tbi, aa64_va_parameter_tbid
For the purpose of rebuild_hflags_a64, we do not need to compute
all of the va parameters, only tbi. Moreover, we can compute them
in a form that is more useful for storing in hflags.
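
A sketch of the tbi extraction (bit positions per the v8 TCR layout;
regime_has_2_ranges() is the predicate added earlier in this series):

    static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
    {
        if (regime_has_2_ranges(mmu_idx)) {
            /* Two VA halves: TBI0 and TBI1, keep both bits. */
            return extract64(tcr, 37, 2);
        } else if (mmu_idx == ARMMMUIdx_Stage2) {
            return 0; /* VTCR_EL2 has no TBI field. */
        } else {
            /* Replicate the single TBI bit so we always have 2 bits. */
            return extract32(tcr, 20, 1) * 3;
        }
    }

aa64_va_parameter_tbid does the same starting from the TBID bits.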

This eliminates the need for aa64_va_parameter_both, so fold that
into aa64_va_parameter. The remaining calls to aa64_va_parameter
are in get_phys_addr_lpae and in pauth_helper.c.

This reduces the total cpu consumption of aa64_va_parameter in a
kernel boot plus a kvm guest kernel boot from 3% to 0.5%.

Backports commit b830a5ee82e66f54697dcc6450fe9239b7412d13 from qemu
2020-03-21 18:04:39 -04:00
Richard Henderson
5b5050c6ca target/arm: Update MSR access to UAO
Backports commit 9eeb7a1c9531cb3574bfe2c36eb7624802c3ec00 from qemu
2020-03-21 17:48:01 -04:00
Richard Henderson
aad0621f96 target/arm: Enforce PAN semantics in get_S1prot
If we have a PAN-enforcing mmu_idx, set prot == 0 if user_rw != 0.
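
Sketched as it might sit in get_S1prot() (regime_is_pan() and
simple_ap_to_rw_prot_is_user() are helper names from this series; treat
the exact shape as illustrative):

    if (is_user) {
        prot_rw = user_rw;
    } else if (user_rw && regime_is_pan(env, mmu_idx)) {
        /* PAN forbids privileged data access to user-accessible
         * pages; it does not affect instruction fetch. */
        prot_rw = 0;
    } else {
        prot_rw = simple_ap_to_rw_prot_is_user(ap, false);
    }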

Backports commit 81636b70c226dc27d7ebc8dedbcec26166d23085 from qemu
2020-03-21 17:35:55 -04:00
Richard Henderson
35fab80c57 target/arm: Update MSR access for PAN
For aarch64, there's a dedicated msr (imm, reg) insn.
For aarch32, this is done via msr to cpsr. Writes from el0
are ignored, which is already handled by the CPSR_USER mask.

Backports commit 220f508f49c5f49fb771d5105f991c19ffede3f7 from qemu
2020-03-21 17:33:16 -04:00
Richard Henderson
50bb867a6f target/arm: Introduce aarch64_pstate_valid_mask
Use this along the exception return path, where we previously
accepted any value.

Backports commit 140845111809cd6fd57ccde93867b48cc56ffc74 from qemu
2020-03-21 17:26:00 -04:00
Richard Henderson
e4a7a089f0 target/arm: Mask CPSR_J when Jazelle is not enabled
The J bit signals Jazelle mode, and so of course is RES0
when the feature is not enabled.

Backports commit f062d1447f2a80e7a5f593b8cb5ac7cab5e16eb0 from qemu
2020-03-21 17:17:50 -04:00
Richard Henderson
ca2bb77ab3 target/arm: Split out aarch32_cpsr_valid_mask
Split this helper out of msr_mask in translate.c. At the same time,
transform the negative reductive logic to positive accumulative logic.
It will be usable along the exception paths.
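
The "positive accumulative" form starts from the bits valid on every CPU
and ORs in more as features are present, roughly (abbreviated sketch;
the CPSR_* masks and ARM_FEATURE_* flags are the existing ones):

    static uint32_t aarch32_cpsr_valid_mask(uint64_t features,
                                            const ARMISARegisters *id)
    {
        uint32_t valid = CPSR_M | CPSR_AIF | CPSR_IL | CPSR_NZCV;

        if ((features >> ARM_FEATURE_V4T) & 1) {
            valid |= CPSR_T;
        }
        if ((features >> ARM_FEATURE_THUMB2) & 1) {
            valid |= CPSR_IT;
        }
        /* ... Q, GE, E and J accumulate the same way ... */
        return valid;
    }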

While touching msr_mask, fix up formatting.

Backports commit 4f9584ed4bba8a57a3cb2fa48a682725005d530a from qemu
2020-03-21 17:16:20 -04:00
Richard Henderson
7aaf0d442b target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled
To implement PAN, we will want to swap, for short periods
of time, to a different privileged mmu_idx. In addition,
we cannot do this with flushing alone, because the AT*
instructions have both PAN and PAN-less versions.

Add the ARMMMUIdx*_PAN constants where necessary next to
the corresponding ARMMMUIdx* constant.

Backports commit 452ef8cb8c7b06f44a30a3c3a54d3be82c4aef59 from qemu
2020-03-21 17:12:16 -04:00
Richard Henderson
ed5a4950fd target/arm: Add arm_mmu_idx_is_stage1_of_2
Use a common predicate for querying stage1-ness.
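
A sketch of the predicate over the renamed enumerators:

    static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
    {
        switch (mmu_idx) {
        case ARMMMUIdx_Stage1_E0:
        case ARMMMUIdx_Stage1_E1:
            return true;
        default:
            return false;
        }
    }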

Backports commit fee7aa46edd76f06c3dc176abb8fd05b365efce6 from qemu
2020-03-21 16:56:03 -04:00
Richard Henderson
3d583cd45f target/arm: Split out arm_mmu_idx_el
Backports commit 164690b29f9eaf69fe641859bc9f8954f12e691d from qemu
2020-03-21 15:22:01 -04:00
Richard Henderson
f4397b0212 target/arm: Add regime_has_2_ranges
Create a predicate to indicate whether the regime has
both positive and negative addresses.
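
A sketch: the regimes with both a low (TTBR0) and a high (TTBR1) VA range
are the EL1&0 and EL2&0 ones, in all their variants:

    static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
    {
        switch (mmu_idx) {
        case ARMMMUIdx_Stage1_E0:
        case ARMMMUIdx_Stage1_E1:
        case ARMMMUIdx_E10_0:
        case ARMMMUIdx_E10_1:
        case ARMMMUIdx_E20_0:
        case ARMMMUIdx_E20_2:
        case ARMMMUIdx_SE10_0:
        case ARMMMUIdx_SE10_1:
            return true;
        default:
            return false;
        }
    }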

Backports commit 339370b90d067345b69585ddf4b668fa01f41d67 from qemu
2020-03-21 15:14:11 -04:00
Richard Henderson
0318d7af99 target/arm: Reorganize ARMMMUIdx
Prepare for, but do not yet implement, the EL2&0 regime.
This involves adding the new MMUIdx enumerators and adjusting
some of the MMUIdx related predicates to match.

Backports commit b9f6033c1a5fb7da55ed353794db8ec064f78bb2 from qemu.
2020-03-21 15:10:05 -04:00
Richard Henderson
153d7aadd5 target/arm: Rename ARMMMUIdx_S1E2 to ARMMMUIdx_E2
This is part of a reorganization to the set of mmu_idx.
The non-secure EL2 regime only has a single stage translation;
there is no point in pointing out that the idx is for stage1.

Backports commit e013b7411339342aac8d986c5d5e329e1baee8e1 from qemu
2020-03-21 14:42:23 -04:00
Richard Henderson
f45ab0614e target/arm: Rename ARMMMUIdx*_S1E3 to ARMMMUIdx*_SE3
This is part of a reorganization to the set of mmu_idx.
The EL3 regime only has a single stage translation, and
is always secure.

Backports commit 127b2b086303296289099a6fb10bbc51077f1d53 from qemu
2020-03-21 14:38:44 -04:00
Richard Henderson
1a672fc3b1 target/arm: Rename ARMMMUIdx_S1SE[01] to ARMMMUIdx_SE10_[01]
This is part of a reorganization to the set of mmu_idx.
This emphasizes that they apply to the Secure EL1&0 regime.

Backports commit fba37aedecb82506c62a1f9e81d066b4fd04e443 from qemu
2020-03-21 14:35:28 -04:00
Richard Henderson
31837384b3 target/arm: Rename ARMMMUIdx_S1NSE* to ARMMMUIdx_Stage1_E*
This is part of a reorganization to the set of mmu_idx.
The EL1&0 regime is the only one that uses 2-stage translation.
Spelling out Stage avoids confusion with Secure.

Backports commit 2859d7b590760283a7b5aef40b723e9dfd7c98ba from qemu
2020-03-21 14:20:31 -04:00
Richard Henderson
b62b4c4f35 target/arm: Rename ARMMMUIdx_S2NS to ARMMMUIdx_Stage2
The EL1&0 regime is the only one that uses 2-stage translation.

Backports commit 97fa9350017e647151dd1dc212f1bbca0294dba7 from qemu
2020-03-21 14:15:35 -04:00
Richard Henderson
ec05f22e82 target/arm: Rename ARMMMUIdx*_S12NSE* to ARMMMUIdx*_E10_*
This is part of a reorganization to the set of mmu_idx.
This emphasizes that they apply to the EL1&0 regime.

The ultimate goal is

-- Non-secure regimes:
ARMMMUIdx_E10_0,
ARMMMUIdx_E20_0,
ARMMMUIdx_E10_1,
ARMMMUIdx_E2,
ARMMMUIdx_E20_2,

-- Secure regimes:
ARMMMUIdx_SE10_0,
ARMMMUIdx_SE10_1,
ARMMMUIdx_SE3,

-- Helper mmu_idx for non-secure EL1&0 stage1 and stage2
ARMMMUIdx_Stage2,
ARMMMUIdx_Stage1_E0,
ARMMMUIdx_Stage1_E1,

The 'S' prefix is reserved for "Secure". Unless otherwise specified,
each mmu_idx represents all stages of translation.

Backports commit 01b98b686460b3a0fb47125882e4f8d4268ac1b6 from qemu
2020-03-21 14:09:15 -04:00
Philippe Mathieu-Daudé
fa2a772c7b
target/arm: Declare some M-profile functions publicly
In the next commit we will split the M-profile functions from this
file. Some functions will be called from outside helper.c. Declare them in
the "internals.h" header.

Backports commit 787a7e76c2e93a48c47b324fea592c9910a70483 from qemu
2019-08-08 15:37:01 -04:00
Philippe Mathieu-Daudé
4eacf0ce98
target/arm: Restrict PSCI to TCG
Under KVM, the kernel gets the HVC call and handles the PSCI requests.

Backports commit 21fbea8c8af72817d8e21570fa4edbfae417341b from qemu
2019-08-08 15:32:19 -04:00
Philippe Mathieu-Daudé
91e264823e
target/arm: Move TLB related routines to tlb_helper.c
These routines are TCG specific.
The arm_deliver_fault() function is only used within the new
helper. Make it static.

Backports commit e21b551cb652663f2f2405a64d63ef6b4a1042b7 from qemu
2019-08-08 15:24:26 -04:00
Philippe Mathieu-Daudé
1af5deaf52
target/arm: Declare get_phys_addr() function publicly
In the next commit we will split the TLB related routines of
this file, and this function will also be called in the new
file. Declare it in the "internals.h" header.

Backports commit ebae861fc6c385a7bcac72dde4716be06e6776f1 from qemu
2019-08-08 15:14:45 -04:00
Richard Henderson
31ecdb5341
target/arm: Convert to CPUClass::tlb_fill
Backports commit 7350d553b5066abdc662045d7db5cdb73d0f9d53 from qemu
2019-05-16 16:55:12 -04:00
Richard Henderson
60742608f5
target/arm: Split helper_msr_i_pstate into 3
The EL0+UMA check is unique to DAIF. While SPSel had avoided the
check by nature of already checking EL >= 1, the other post v8.0
extensions to MSR (imm) allow EL0 and do not require UMA. Avoid
the unconditional write to pc and use raise_exception_ra to unwind.

Backports commit ff730e9666a716b669ac4a8ca7c521177d1d2b15 from qemu
2019-03-05 22:45:11 -05:00
Peter Maydell
8124b9f975
target/arm: Compute TB_FLAGS for TBI for user-only
Enables, but does not turn on, TBI for CONFIG_USER_ONLY.

Backports commit c47eaf9fc2af68cfbdbd9ae31f8e2e5ebb7022b4 from qemu
2019-02-05 17:43:11 -05:00
Richard Henderson
88193cf7c3
target/arm: Default handling of BTYPE during translation
The branch target exception for guarded pages has high priority,
and only 8 instructions are valid for that case. Perform this
check before doing any other decode.

Clear BTYPE after all insns that neither set BTYPE nor exit via
exception (DISAS_NORETURN).

Not yet handled are insns that exit via DISAS_NORETURN for some
other reason, like direct branches.

Backports commit 51bf0d7aa91a9d4e2563240a42e6cb705cef84aa from qemu
2019-02-05 17:05:31 -05:00
Richard Henderson
028aef155a
target/arm: Decode TBID from TCR
Use TBID in aa64_va_parameters depending on the data parameter.
This automatically updates all existing users of the function.

Backports commit 8220af7e4d34c858898fbfe55943aeea8f4e875f from qemu
2019-01-22 16:27:37 -05:00
Richard Henderson
b99e2f920b
target/arm: Add aa64_va_parameters_both
We will want to check TBI for I and D simultaneously.

Backports commit e737ed2ad8c14b4b82ed241646ffa370d29d0937 from qemu
2019-01-22 16:25:12 -05:00
Richard Henderson
23b162f2fb
target/arm: Export aa64_va_parameters to internals.h
We need to reuse this from helper-a64.c. Provide a stub
definition for CONFIG_USER_ONLY. This matches the stub
definitions that we removed for arm_regime_tbi{0,1} before.

Backports commit bf0be433878935e824479e8ae890493e1fb646ed from qemu
2019-01-22 16:22:57 -05:00
Richard Henderson
b6415f7a4b
target/arm: Create ARMVAParameters and helpers
Split out functions to extract the virtual address parameters.
Let the functions choose T0 or T1 address space half, if present.
Extract (most of) the control bits that vary between EL or Tx.
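
The extracted parameters end up grouped in a small struct which the
helpers return by value; roughly (field set is a sketch of this point in
the series):

    typedef struct ARMVAParameters {
        unsigned tsz    : 8;
        unsigned select : 1;
        bool tbi        : 1;
        bool epd        : 1;
        bool hpd        : 1;
        bool using16k   : 1;
        bool using64k   : 1;
    } ARMVAParameters;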

Backports commit ba97be9f4a4ecaf16a1454dc669e5f3d935d3b63 from qemu
2019-01-22 16:17:16 -05:00
Richard Henderson
377bd123bd
target/arm: Introduce arm_stage1_mmu_idx
While we could expose stage_1_mmu_idx, the combination is
probably going to be more useful.

Backports commit 64be86ab1b5ef10b660a4230ee7f27c0da499043 from qemu
2019-01-22 16:08:37 -05:00
Richard Henderson
9743787d0f
target/arm: Introduce arm_mmu_idx
The pattern

ARMMMUIdx mmu_idx = core_to_arm_mmu_idx(env, cpu_mmu_index(env, false));

is computing the full ARMMMUIdx, stripping off the ARM bits,
and then putting them back.

Avoid the extra two steps with the appropriate helper function.
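
In callers this collapses to, roughly:

    /* Before: strip the ARM-specific bits off the core index, then
     * put them straight back. */
    ARMMMUIdx mmu_idx = core_to_arm_mmu_idx(env, cpu_mmu_index(env, false));

    /* After: compute the full ARMMMUIdx directly. */
    ARMMMUIdx mmu_idx = arm_mmu_idx(env);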

Backports commit 50494a279dab22a015aba9501a94fcc3cd52140e from qemu
2019-01-22 16:06:34 -05:00
Richard Henderson
e6196b2040
target/arm: Add PAuth helpers
The cryptographic internals are stubbed out for now,
but the enable and trap bits are checked.

Backports commit 0d43e1a2d29a05f7b0d5629caaff18733cbdf3bb from qemu
2019-01-22 15:27:15 -05:00
Richard Henderson
1f7d228c8a
target/arm: Introduce raise_exception_ra
This path uses cpu_loop_exit_restore to unwind current processor state.

Backports commit 7469f6c696d74ad3b22b67c08e1e8f79e2b5d3d6 from qemu
2019-01-22 15:20:06 -05:00
Peter Maydell
8b69824de7
target/arm: Move id_aa64mmfr* to ARMISARegisters
At the same time, define the fields for these registers,
and use those defines in arm_pamax().
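
With the fields defined, arm_pamax() can map the PARange field through a
small table, along these lines (values per the v8 PARange encoding):

    unsigned int arm_pamax(ARMCPU *cpu)
    {
        static const unsigned int pamax_map[] = {
            [0] = 32, [1] = 36, [2] = 40, [3] = 42, [4] = 44, [5] = 48,
        };
        unsigned int parange =
            FIELD_EX64(cpu->isar.id_aa64mmfr0, ID_AA64MMFR0, PARANGE);

        /* id_aa64mmfr0 is read-only, so an out-of-range value is a
         * CPU-model bug rather than guest-controllable input. */
        assert(parange < ARRAY_SIZE(pamax_map));
        return pamax_map[parange];
    }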

Backports commit 3dc91ddbc68391f934bf6945853e99cf6810fc00 from qemu
2018-12-18 04:03:50 -05:00
Peter Maydell
61c0f40ac3
target/arm: Hyp mode R14 is shared with User and System
Hyp mode is an exception to the general rule that each AArch32
mode has its own r13, r14 and SPSR -- it has a banked r13 and
SPSR but shares its r14 with User and System mode. We were
incorrectly implementing it as banked, which meant that on
entry to Hyp mode r14 was 0 rather than the USR/SYS r14.

We provide a new function r14_bank_number() which is like
the existing bank_number() but provides the index into
env->banked_r14[]; bank_number() provides the index to use
for env->banked_r13[] and env->banked_cpsr[].
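
A sketch of the new helper (BANK_USRSYS being the shared User/System
bank index):

    /* Hyp shares R14 with User and System, so route it to the USR/SYS
     * slot; every other mode keeps its own banked R14. */
    static inline int r14_bank_number(int mode)
    {
        return (mode == ARM_CPU_MODE_HYP) ? BANK_USRSYS : bank_number(mode);
    }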

All the points in the code that were using bank_number()
to index into env->banked_r14[] are updated for consistency:
* switch_mode() -- this is the only place where we fix
an actual bug
* aarch64_sync_32_to_64() and aarch64_sync_64_to_32():
no behavioural change as we already special-cased Hyp R14
* kvm32.c: no behavioural change since the guest can't ever
be in Hyp mode, but conceptually the right thing to do
* msr_banked()/mrs_banked(): we can never get to the case
that accesses banked_r14[] with tgtmode == ARM_CPU_MODE_HYP,
so no behavioural change

Backports commit 593cfa2b637b92d37eef949653840dc065cdb960 from qemu
2018-11-16 21:58:29 -05:00
Peter Maydell
92bf8ee620
target/arm: Correctly implement handling of HCR_EL2.{VI,VF}
In commit 8a0fc3a29fc2315325400 we tried to implement HCR_EL2.{VI,VF},
but we got it wrong and had to revert it.

In that commit we implemented them as simply tracking whether there
is a pending virtual IRQ or virtual FIQ. This is not correct -- these
bits cause a software-generated VIRQ/VFIQ, which is distinct from
whether there is a hardware-generated VIRQ/VFIQ caused by the
external interrupt controller. So we need to track separately
the HCR_EL2 bit state and the external virq/vfiq line state, and
OR the two together to get the actual pending VIRQ/VFIQ state.
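
Conceptually the pending state becomes an OR of the two sources; a
sketch (member and constant names indicative, not exact):

    /* Software-generated VIRQ (HCR_EL2.VI) ORed with the hardware
     * virtual IRQ line driven by the interrupt controller. */
    bool virq_pending = (env->cp15.hcr_el2 & HCR_VI) ||
                        (env->irq_line_state & CPU_INTERRUPT_VIRQ);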

Fixes: 8a0fc3a29fc2315325400c738f807d0d4ae0ab7f

Backports commit 89430fc6f80a5aef1d4cbd6fc26b40c30793786c from qemu
2018-11-16 21:53:53 -05:00
Peter Maydell
d60fe610bb
target/arm: Report correct syndrome for FP/SIMD traps to Hyp mode
For traps of FP/SIMD instructions to AArch32 Hyp mode, the syndrome
provided in HSR has more information than is reported to AArch64.
Specifically, there are extra fields TA and coproc which indicate
whether the trapped instruction was FP or SIMD. Add this extra
information to the syndromes we construct, and mask it out when
taking the exception to AArch64.

Backports commit 4be42f4013fa1a9df47b48aae5148767bed8e80c from qemu
2018-11-10 09:36:41 -05:00
Peter Maydell
075bac4d57
target/arm: Get IL bit correct for v7 syndrome values
For the v7 version of the Arm architecture, the IL bit in
syndrome register values where the field is not valid was
defined to be UNK/SBZP. In v8 this is RES1, which is what
QEMU currently implements. Handle the desired v7 behaviour
by squashing the IL bit for the affected cases:
* EC == EC_UNCATEGORIZED
* prefetch aborts
* data aborts where ISV is 0

(The fourth case listed in the v8 Arm ARM DDI 0487C.a in
section G7.2.70, "illegal state exception", can't happen
on a v7 CPU.)

This deals with a corner case noted in a comment.

Backports commit 2ed08180db096ea5e44573529b85e09b1ed10b08 from qemu
2018-11-10 09:29:13 -05:00
Peter Maydell
99516b43a3
target/arm: New utility function to extract EC from syndrome
Create and use a utility function to extract the EC field
from a syndrome, rather than open-coding the shift.
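
The helper is tiny, something like:

    static inline uint32_t syn_get_ec(uint32_t syn)
    {
        /* The exception class lives in the top bits of the syndrome. */
        return syn >> ARM_EL_EC_SHIFT;
    }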

Backports commit 64b91e3f890a8c221b65c6820a5ee39107ee40f5 from qemu
2018-11-10 09:28:23 -05:00
Lioncash
a0358202a7
target/arm: Improve debug logging of AArch32 exception return
For AArch32, exception return happens through certain kinds
of CPSR write. We don't currently have any CPU_LOG_INT logging
of these events (unlike AArch64, where we log in the ERET
instruction). Add some suitable logging.

This will log exception returns like this:
Exception return from AArch32 hyp to usr PC 0x80100374

paralleling the existing logging in the exception_return
helper for AArch64 exception returns:
Exception return from AArch64 EL2 to AArch64 EL0 PC 0x8003045c
Exception return from AArch64 EL2 to AArch32 EL0 PC 0x8003045c

(Note that an AArch32 exception return can only be
AArch32->AArch32, never to AArch64.)

Backports commit 81e3728407bf4a12f83e14fd410d5f0a7d29b5b4 from qemu
2018-11-10 09:09:52 -05:00
Lioncash
cb935d868e
target/arm: Add v8M stack limit checks on NS function calls
2018-10-08 14:15:15 -04:00
Peter Maydell
ca5d7b8fd2
target/arm: Add v8M stack checks on ADD/SUB/MOV of SP
Add code to insert calls to a helper function to do the stack
limit checking when we handle these forms of instruction
that write to SP:
* ADD (SP plus immediate)
* ADD (SP plus register)
* SUB (SP minus immediate)
* SUB (SP minus register)
* MOV (register)

Backports commit 5520318939fea5d659bf808157cd726cb967b761 from qemu
2018-10-08 14:15:15 -04:00
Peter Maydell
b2146058c3
target/arm: Move v7m_using_psp() to internals.h
We're going to want v7m_using_psp() in op_helper.c in the
next patch, so move it from helper.c to internals.h.
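
The moved helper, roughly as it reads in internals.h:

    static inline bool v7m_using_psp(CPUARMState *env)
    {
        /* Handler mode always uses the main stack; in Thread mode the
         * CONTROL.SPSEL bit (banked by security state) selects it. */
        return !arm_v7m_is_handler_mode(env) &&
            env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK;
    }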

Backports commit 5529bf188d996391ff52a0e1801daf9c6a6bfcb0 from qemu
2018-10-08 14:15:15 -04:00
Richard Henderson
66ffb372e7
target/arm: Pass TCGMemOpIdx to sve memory helpers
There is quite a lot of code required to compute cpu_mem_index,
or even put together the full TCGMemOpIdx. This can easily be
done at translation time.

Backports commit 500d04843ba953dc4560e44f04001efec38c14a6 from qemu
2018-10-08 14:15:15 -04:00
Aaron Lindsay
99a0be89a8
target/arm: Add pre-EL change hooks
Because the design of the PMU requires that the counter values be
converted between their delta and guest-visible forms for mode
filtering, an additional hook which occurs before the EL is changed is
necessary.

Backports commit b5c53d1b3886387874f8c8582b205aeb3e4c3df6 from qemu
2018-04-26 09:21:54 -04:00