Commit Graph

4095 Commits

Igor Mammedov
6f265062ef
qom: add helper macro DEFINE_TYPES()
DEFINE_TYPES() will help to simplify the following routine patterns:

static void foo_register_types(void)
{
    type_register_static(&foo1_type_info);
    type_register_static(&foo2_type_info);
    ...
}

type_init(foo_register_types)

or

static void foo_register_types(void)
{
    int i;

    for (i = 0; i < ARRAY_SIZE(type_infos); i++) {
        type_register_static(&type_infos[i]);
    }
}

type_init(foo_register_types)

with a single line

DEFINE_TYPES(type_infos)

where the types have static definitions that can be consolidated into
a single array of TypeInfo structures.
It saves us ~6-10 LOC per use case and helps to replace the
imperative foo_register_types() with a declarative style of
type registration.
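
For illustration, one plausible expansion of the macro, building on the
type_register_static_array() helper introduced in the commit below (the
generated function name here is illustrative, not necessarily the exact
definition in the patch):

#define DEFINE_TYPES(type_array)                                      \
static void do_qemu_init_ ## type_array(void)                         \
{                                                                     \
    type_register_static_array(type_array, ARRAY_SIZE(type_array));   \
}                                                                     \
type_init(do_qemu_init_ ## type_array)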

Backports commit 38b5d79b2e8cf6085324066d84e8bb3b3bbe8548 from qemu
2018-03-05 03:51:54 -05:00
Igor Mammedov
c97583fc42
qom: introduce type_register_static_array()
It will help to remove code duplication when registering static
types in places that have an open-coded loop to perform batch
type registration.
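
A minimal sketch of such a helper, assuming it simply wraps the
existing type_register_static() (the actual patch may differ):

void type_register_static_array(const TypeInfo *infos, int nr_infos)
{
    int i;

    for (i = 0; i < nr_infos; i++) {
        type_register_static(&infos[i]);
    }
}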

Backports commit aa04c9d20704fa5b9ab239d5111adbcce5f49808 from qemu
2018-03-05 03:49:50 -05:00
Peter Maydell
0c06666800
target/arm: Implement SG instruction corner cases
The common situation of the SG instruction is that it is
executed from S&NSC memory by a CPU in NS state. That case
is handled by v7m_handle_execute_nsc(). However the instruction
also has defined behaviour in a couple of other cases:
* SG instruction in NS memory (behaves as a NOP)
* SG in S memory but CPU already secure (clears IT bits and
does nothing else)
* SG instruction in v8M without Security Extension (NOP)

These can be implemented in translate.c.
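
A hedged sketch of the translate.c handling (SG's 32-bit encoding is
0xE97FE97F; the field and flag names follow QEMU conventions, but the
real decode is more involved):

if (arm_dc_feature(s, ARM_FEATURE_M) && insn == 0xe97fe97f) {
    /* SG, v8M only */
    if (s->v8m_secure) {
        /* CPU already secure: clear the IT bits, nothing else */
        s->condexec_cond = 0;
        s->condexec_mask = 0;
    }
    /* Otherwise (NS memory, or no Security Extension): NOP */
    return;
}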

Backports commit 76eff04d166b8fe747adbe82de8b7e060e668ff9 from qemu
2018-03-05 03:47:20 -05:00
Peter Maydell
272427b4a0
target/arm: Support some Thumb insns being always unconditional
A few Thumb instructions are always unconditional even inside an
IT block (as opposed to being UNPREDICTABLE if used inside an
IT block): BKPT, the v8M SG instruction, and the A profile
HLT (debug halt) instruction.

This means we need to suppress the jump-over-instruction-on-condfail
code generation (though the IT state still advances as usual and
subsequent insns in the IT block may be conditional).
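
One plausible shape for that suppression (a sketch: the predicate name
matches the commit's intent, and HLT is omitted for brevity):

static bool thumb_insn_is_unconditional(DisasContext *s, uint32_t insn)
{
    /* These must execute even inside an IT block, so the decoder
     * must not emit the jump-over-instruction-on-condfail code. */
    return (insn & 0xff00) == 0xbe00 ||   /* BKPT */
           insn == 0xe97fe97f;            /* SG (32-bit, v8M) */
}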

Backports commit dcf14dfb704519846f396a376339ebdb93eaf049 from qemu
2018-03-05 03:46:10 -05:00
Peter Maydell
7a293cd7cc
target-arm: Simplify insn_crosses_page()
Recent changes have left insn_crosses_page() more complicated
than it needed to be:
* it's only called from thumb_tr_translate_insn() so we know
for certain that we're looking at a Thumb insn
* the caller's check for dc->pc >= dc->next_page_start - 3
means that dc->pc can't possibly be 4 aligned, so there's
no need to check that (the check was partly there to ensure
that we didn't treat an ARM insn as Thumb, I think)
* we now have thumb_insn_is_16bit() which lets us do a precise
check of the length of the next insn, rather than open-coding
an inaccurate check

Simplify it down to just loading the first half of the insn
and calling thumb_insn_is_16bit() on it.
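
The simplified function then reduces to something like this sketch
(argument and field names per QEMU's Thumb translator):

static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
{
    /* Return true if the insn at s->pc might cross a page boundary.
     * The caller only invokes us within 4 bytes of the page end, so
     * we cross exactly when the first halfword starts a 32-bit insn. */
    uint16_t insn = arm_lduw_code(env, s->pc, s->sctlr_b);

    return !thumb_insn_is_16bit(s, insn);
}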

Backports commit 5b8d7289e9e92a0d7bcecb93cd189e245fef10cd from qemu
2018-03-05 03:44:54 -05:00
Peter Maydell
96f86f472a
target/arm: Pull Thumb insn word loads up to top level
Refactor the Thumb decode to do the loads of the instruction words at
the top level rather than only loading the second half of a 32-bit
Thumb insn in the middle of the decode.

This is simple apart from the awkward case of Thumb1, where the
BL/BLX prefix and suffix instructions live in what in Thumb2 is the
32-bit insn space. To handle these we decode enough to identify
whether we're looking at a prefix/suffix that we handle as a 16 bit
insn, or a prefix that we're going to merge with the following suffix
to consider as a 32 bit insn. The translation of the 16 bit cases
then moves from disas_thumb2_insn() to disas_thumb_insn().

The refactoring has the benefit that we don't need to pass the
CPUARMState* down into the decoder code any more, but the major
reason for doing this is that some Thumb instructions must be always
unconditional regardless of the IT state bits, so we need to know the
whole insn before we emit the "skip this insn if the IT bits and cond
state tell us to" code. (The always unconditional insns are BKPT,
HLT and SG; the last of these is 32 bits.)

Backports commit 296e5a0a6c393553079a641c50521ae33ff89324 from qemu
2018-03-05 03:43:38 -05:00
Peter Maydell
b85d617bda
target-arm: Don't check for "Thumb2 or M profile" for not-Thumb1
The code which implements the Thumb1 split BL/BLX instructions
is guarded by a check on "not M or THUMB2". All we really need
to check here is "not THUMB2" (and we assume that elsewhere too,
e.g. in the ARCH(6T2) test that UNDEFs the Thumb2 insns).

This doesn't change behaviour because all M profile cores
have Thumb2 and so ARM_FEATURE_M implies ARM_FEATURE_THUMB2.
(v6M implements a very restricted subset of Thumb2, but we
can cross that bridge when we get to it with appropriate
feature bits.)

Backports commit 6b8acf256df09c8a8dd7dcaa79b06eaff4ad63f7 from qemu
2018-03-05 03:34:48 -05:00
Peter Maydell
ee9b8a20c9
target/arm: Implement secure function return
Secure function return happens when a non-secure function has been
called using BLXNS and so has a particular magic LR value (either
0xfefffffe or 0xfeffffff). The function return via BX behaves
specially when the new PC value is this magic value, in the same
way that exception returns are handled.

Adjust our BX excret guards so that they recognize the function
return magic number as well, and perform the function-return
unstacking in do_v7m_exception_exit().
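
The adjusted guard amounts to matching the two magic values, which
differ only in bit 0 (a sketch; the constant name is illustrative):

#define FNC_RETURN_MIN_MAGIC 0xfefffffeu

if ((dest & ~1u) == FNC_RETURN_MIN_MAGIC) {
    /* Secure function return: unstack in do_v7m_exception_exit() */
}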

Backports commit d02a8698d7ae2bfed3b11fe5b064cb0aa406863b from qemu
2018-03-05 03:33:42 -05:00
Peter Maydell
e312993f1f
target/arm: Implement BLXNS
Implement the BLXNS instruction, which allows secure code to
call non-secure code.

Backports commit 3e3fa230e3b8ffe119f14ba57a6bc677a411be57 from qemu
2018-03-05 03:31:59 -05:00
Peter Maydell
2c4578f46e
target/arm: Implement SG instruction
Implement the SG instruction, which we emulate 'by hand' in the
exception handling code path.

Backports commit 333e10c51ef5876ced26f77b61b69ce0f83161a9 from qemu
2018-03-05 03:28:28 -05:00
Peter Maydell
19ecd4f732
target/arm: Add M profile secure MMU index values to get_a32_user_mem_index()
Add the M profile secure MMU index values to the switch in
get_a32_user_mem_index() so that LDRT/STRT work correctly
rather than asserting at translate time.

Backports commit b9f587d62cebed427206539750ebf59bde4df422 from qemu
2018-03-05 03:25:54 -05:00
Jiang Biao
60ef6d016d
tcg/mips: delete commented out extern keyword
Backports commit 8df8d529ed958de4e23dcbf38bd34eff1a4716f2 from qemu
2018-03-05 03:24:25 -05:00
Emilio G. Cota
239e9771df
tcg: define TCG_HIGHWATER
Will come in handy very soon.
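
The definition itself is just a safety margin, in bytes, left free at
the end of the code generation buffer:

#define TCG_HIGHWATER 1024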

Backports commit a505785cd221994dd3713bde860861869a059940 from qemu
2018-03-05 03:22:27 -05:00
Emilio G. Cota
8552d95c52
exec-all: extract tb->tc_* into a separate struct tc_tb
In preparation for adding tc.size, so that we can keep track of
TBs using the binary search tree implementation from glib.

Backports commit e7e168f41364c6e83d0f75fc1b3ce7f9c41ccf76 from qemu
2018-03-05 02:57:22 -05:00
Emilio G. Cota
7bd35ab27d
translate-all: define and use DEBUG_TB_CHECK_GATE
This prevents bit rot by ensuring the debug code is compiled when
building a user-mode target.

Unfortunately the helpers are user-mode-only so we cannot fully
get rid of the ifdef checks. Add a comment to explain this.

Backports commit 6eb062abd66611333056633899d3f09c2e795f4c from qemu
2018-03-05 02:51:46 -05:00
Emilio G. Cota
f778c02bbc
translate-all: define and use DEBUG_TB_INVALIDATE_GATE
This gets rid of an ifdef check while ensuring that the debug code
is compiled, which prevents bit rot.

Backports commit dae9e03aed8e652f5dce2e5cab05dff83aa193b8 from qemu
2018-03-05 02:50:24 -05:00
Emilio G. Cota
5fc83f3eb2
exec-all: introduce TB_PAGE_ADDR_FMT
And fix the following warning when DEBUG_TB_INVALIDATE is enabled
in translate-all.c:

CC mipsn32-linux-user/accel/tcg/translate-all.o
/data/src/qemu/accel/tcg/translate-all.c: In function ‘tb_alloc_page’:
/data/src/qemu/accel/tcg/translate-all.c:1201:16: error: format ‘%lx’ expects argument of type ‘long unsigned int’, but argument 2 has type ‘tb_page_addr_t {aka unsigned int}’ [-Werror=format=]
printf("protecting code page: 0x" TARGET_FMT_lx "\n",
^
cc1: all warnings being treated as errors
/data/src/qemu/rules.mak:66: recipe for target 'accel/tcg/translate-all.o' failed
make[1]: *** [accel/tcg/translate-all.o] Error 1
Makefile:328: recipe for target 'subdir-mipsn32-linux-user' failed
make: *** [subdir-mipsn32-linux-user] Error 2
cota@flamenco:/data/src/qemu/build ((18f3fe1...) *$)$
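
The fix is to give tb_page_addr_t a matching format macro next to its
typedef, so printf formats always agree with the type (a sketch along
the lines of the upstream commit):

#if defined(CONFIG_USER_ONLY)
typedef abi_ulong tb_page_addr_t;
#define TB_PAGE_ADDR_FMT TARGET_ABI_FMT_lx
#else
typedef ram_addr_t tb_page_addr_t;
#define TB_PAGE_ADDR_FMT RAM_ADDR_FMT
#endif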

Backports commit 67a5b5d2f6eb6d3b980570223ba5c478487ddb6f from qemu
2018-03-05 02:49:44 -05:00
Emilio G. Cota
79f53cf97b
translate-all: define and use DEBUG_TB_FLUSH_GATE
This gets rid of some ifdef checks while ensuring that the debug code
is compiled, which prevents bit rot.
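
The gate idiom looks roughly like this (a sketch; the real patch
applies it to each debug printf in translate-all.c):

#ifdef DEBUG_TB_FLUSH
#define DEBUG_TB_FLUSH_GATE 1
#else
#define DEBUG_TB_FLUSH_GATE 0
#endif

/* Always compiled (so it cannot bit-rot), dead-code-eliminated when 0 */
if (DEBUG_TB_FLUSH_GATE) {
    printf("qemu: flushing translation buffer\n");
}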

Backports commit 424079c13b692cfcd08866bc9ffec77b887fed4e from qemu
2018-03-05 02:48:16 -05:00
Emilio G. Cota
b4a7d8b773
exec-all: bring tb->invalid into tb->cflags
This gets rid of a hole in struct TranslationBlock.

Backports commit 84f1c148da2b35fbb5a436597872765257e8914e from qemu
2018-03-05 02:46:21 -05:00
Emilio G. Cota
210d13ec49
tcg: consolidate TB lookups in tb_lookup__cpu_state
This avoids duplicating code. cpu_exec_step will also use the
new common function once we integrate parallel_cpus into tb->cflags.

Note that in this commit we also fix a race, described by Richard Henderson
during review. Think of this scenario with threads A and B:

(A) Lookup succeeds for TB in hash without tb_lock
(B) Sets the TB's tb->invalid flag
(B) Removes the TB from tb_htable
(B) Clears all CPU's tb_jmp_cache
(A) Store TB into local tb_jmp_cache

Given that order of events, (A) will keep executing that invalid TB until
another flush of its tb_jmp_cache happens, which in theory might never happen.
We can fix this by checking the tb->invalid flag every time we look up a TB
from tb_jmp_cache, so that in the above scenario, next time we try to find
that TB in tb_jmp_cache, we won't, and will therefore be forced to look it
up in tb_htable.
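
A simplified sketch of the consolidated lookup with that fix (names
follow the QEMU tree; the real function also recomputes the CPU state):

static inline TranslationBlock *tb_lookup__cpu_state(CPUState *cpu,
    target_ulong pc, target_ulong cs_base, uint32_t flags)
{
    unsigned hash = tb_jmp_cache_hash_func(pc);
    TranslationBlock *tb = atomic_rcu_read(&cpu->tb_jmp_cache[hash]);

    if (likely(tb && tb->pc == pc && tb->cs_base == cs_base &&
               tb->flags == flags && !atomic_read(&tb->invalid))) {
        return tb;   /* fast path now rejects invalidated TBs */
    }
    tb = tb_htable_lookup(cpu, pc, cs_base, flags);
    if (tb) {
        atomic_set(&cpu->tb_jmp_cache[hash], tb);
    }
    return tb;
}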

Performance-wise, I measured a small improvement when booting debian-arm.
Note that inlining pays off:

Performance counter stats for 'taskset -c 0 qemu-system-arm \
-machine type=virt -nographic -smp 1 -m 4096 \
-netdev user,id=unet,hostfwd=tcp::2222-:22 \
-device virtio-net-device,netdev=unet \
-drive file=jessie.qcow2,id=myblock,index=0,if=none \
-device virtio-blk-device,drive=myblock \
-kernel kernel.img -append console=ttyAMA0 root=/dev/vda1 \
-name arm,debug-threads=on -smp 1' (10 runs):

Before:
18714.917392 task-clock # 0.952 CPUs utilized ( +- 0.95% )
23,142 context-switches # 0.001 M/sec ( +- 0.50% )
1 CPU-migrations # 0.000 M/sec
10,558 page-faults # 0.001 M/sec ( +- 0.95% )
53,957,727,252 cycles # 2.883 GHz ( +- 0.91% ) [83.33%]
24,440,599,852 stalled-cycles-frontend # 45.30% frontend cycles idle ( +- 1.20% ) [83.33%]
16,495,714,424 stalled-cycles-backend # 30.57% backend cycles idle ( +- 0.95% ) [66.66%]
76,267,572,582 instructions # 1.41 insns per cycle
12,692,186,323 branches # 678.186 M/sec ( +- 0.92% ) [83.35%]
263,486,879 branch-misses # 2.08% of all branches ( +- 0.73% ) [83.34%]

19.648474449 seconds time elapsed ( +- 0.82% )

After, w/ inline (this patch):
18471.376627 task-clock # 0.955 CPUs utilized ( +- 0.96% )
23,048 context-switches # 0.001 M/sec ( +- 0.48% )
1 CPU-migrations # 0.000 M/sec
10,708 page-faults # 0.001 M/sec ( +- 0.81% )
53,208,990,796 cycles # 2.881 GHz ( +- 0.98% ) [83.34%]
23,941,071,673 stalled-cycles-frontend # 44.99% frontend cycles idle ( +- 0.95% ) [83.34%]
16,161,773,848 stalled-cycles-backend # 30.37% backend cycles idle ( +- 0.76% ) [66.67%]
75,786,269,766 instructions # 1.42 insns per cycle
12,573,617,143 branches # 680.708 M/sec ( +- 1.34% ) [83.33%]
260,235,550 branch-misses # 2.07% of all branches ( +- 0.66% ) [83.33%]

19.340502161 seconds time elapsed ( +- 0.56% )

After, w/o inline:
18791.253967 task-clock # 0.954 CPUs utilized ( +- 0.78% )
23,230 context-switches # 0.001 M/sec ( +- 0.42% )
1 CPU-migrations # 0.000 M/sec
10,563 page-faults # 0.001 M/sec ( +- 1.27% )
54,168,674,622 cycles # 2.883 GHz ( +- 0.80% ) [83.34%]
24,244,712,629 stalled-cycles-frontend # 44.76% frontend cycles idle ( +- 1.37% ) [83.33%]
16,288,648,572 stalled-cycles-backend # 30.07% backend cycles idle ( +- 0.95% ) [66.66%]
77,659,755,503 instructions # 1.43 insns per cycle
12,922,780,045 branches # 687.702 M/sec ( +- 1.06% ) [83.34%]
261,962,386 branch-misses # 2.03% of all branches ( +- 0.71% ) [83.35%]

19.700174670 seconds time elapsed ( +- 0.56% )

Backports commit f6bb84d53110398f4899c19dab4e0fe9908ec060 from qemu
2018-03-05 02:42:46 -05:00
Emilio G. Cota
5fae6dd433
tcg: remove addr argument from lookup_tb_ptr
It is unlikely that we will ever want to call this helper passing
an argument other than the current PC. So just remove the argument,
and use the pc we already get from cpu_get_tb_cpu_state.

This change paves the way to having a common "tb_lookup" function.

Backports commit 7f11636dbee89b0e4d03e9e2b96e14649a7db778 from qemu
2018-03-05 02:16:34 -05:00
Emilio G. Cota
68ddc0cb08
exec-all: fix typos in TranslationBlock's documentation
Backports commit eb5e2b9e3b141de0c435eedc31c26cbbdefbee1b from qemu
2018-03-05 02:10:28 -05:00
Emilio G. Cota
921c71a9ee
cpu-exec: rename have_tb_lock to acquired_tb_lock in tb_find
Reusing the have_tb_lock name, which is also defined in
translate-all.c, makes code review unnecessarily hard.

Avoid potential confusion by renaming the local have_tb_lock variable
to something else.

Backports commit 841710c78e022cbc1f798cced035789725702dac from qemu
2018-03-05 02:09:42 -05:00
Emilio G. Cota
f1d6630893
tcg/mips: constify tcg_target_callee_save_regs
Backports commit d453ec78251d03cbd4ffc28dbf6070931c8ae469 from qemu
2018-03-05 02:08:36 -05:00
Emilio G. Cota
3cf23eb256
tcg/i386: constify tcg_target_callee_save_regs
Backports commit e268f4c036d2b47a4f8bf293c1371b328e03ca04 from qemu
2018-03-05 02:08:02 -05:00
Todd Eisenberger
75bdfd85a7
x86: Correct translation of some rdgsbase and wrgsbase encodings
It looks like there was a transcription error when writing this code
initially. The code previously only decoded a src or dst of rax.
This resolves https://bugs.launchpad.net/qemu/+bug/1719984.

Backports commit e0dd5fd41a1a38766009f442967fab700d2d0550 from qemu
2018-03-05 02:05:26 -05:00
Philippe Mathieu-Daudé
0cb01a52bd
qom/cpu: move cpu_model null check to cpu_class_by_name()
and clean every implementation.

Backports commit 8301ea444abb49f7b7fb939b09c1e23b22977f30 from qemu
2018-03-05 02:02:29 -05:00
Peter Maydell
059f238f11
target/arm: Factor out "get mmuidx for specified security state"
For the SG instruction and secure function return we are going
to want to do memory accesses using the MMU index of the CPU
in secure state, even though the CPU is currently in non-secure
state. Write arm_v7m_mmu_idx_for_secstate() to do this job,
and use it in cpu_mmu_index().

Backports commit b81ac0eb6315e602b18439961e0538538e4aed4f from qemu
2018-03-05 02:00:23 -05:00
Peter Maydell
6958a4763d
target/arm: Fix calculation of secure mm_idx values
In cpu_mmu_index() we try to do this:
    if (env->v7m.secure) {
        mmu_idx += ARMMMUIdx_MSUser;
    }
but it will give the wrong answer, because ARMMMUIdx_MSUser
includes the 0x40 ARM_MMU_IDX_M field, and so does the
mmu_idx we're adding to, and we'll end up with 0x8n rather
than 0x4n. This error is then nullified by the call to
arm_to_core_mmu_idx() which masks out the high part, but
we're about to factor out the code that calculates the
ARMMMUIdx values so it can be used without passing it through
arm_to_core_mmu_idx(), so fix this bug first.

Backports commit fe768788d29597ee56fc11ba2279d502c2617457 from qemu
2018-03-05 01:58:42 -05:00
Peter Maydell
7988aec017
target/arm: Implement security attribute lookups for memory accesses
Implement the security attribute lookups for memory accesses
in the get_phys_addr() functions, causing these to generate
various kinds of SecureFault for bad accesses.

The major subtlety in this code relates to handling of the
case when the security attributes the SAU assigns to the
address don't match the current security state of the CPU.

In the ARM ARM pseudocode for validating instruction
accesses, the security attributes of the address determine
whether the Secure or NonSecure MPU state is used. At face
value, handling this would require us to encode the relevant
bits of state into mmu_idx for both S and NS at once, which
would result in our needing 16 mmu indexes. Fortunately we
don't actually need to do this because a mismatch between
address attributes and CPU state means either:
* some kind of fault (usually a SecureFault, but in theory
perhaps a UserFault for unaligned access to Device memory)
* execution of the SG instruction in NS state from a
Secure & NonSecure code region

The purpose of SG is simply to flip the CPU into Secure
state, so we can handle it by emulating execution of that
instruction directly in arm_v7m_cpu_do_interrupt(), which
means we can treat all the mismatch cases as "throw an
exception" and we don't need to encode the state of the
other MPU bank into our mmu_idx values.

This commit doesn't include the actual emulation of SG;
it also doesn't include implementation of the IDAU, which
is a per-board way to specify hard-coded memory attributes
for addresses, which override the CPU-internal SAU if they
specify a more secure setting than the SAU is programmed to.

Backports commit 35337cc391245f251bfb9134f181c33e6375d6c1 from qemu
2018-03-05 01:57:07 -05:00
Peter Maydell
f9b4381ce0
nvic: Implement Security Attribution Unit registers
Implement the register interface for the SAU: SAU_CTRL,
SAU_TYPE, SAU_RNR, SAU_RBAR and SAU_RLAR. None of the
actual behaviour is implemented here; registers just
read back as written.

When the CPU definition for Cortex-M33 is eventually
added, its initfn will set cpu->sau_sregion, in the same
way that we currently set cpu->pmsav7_dregion for the
M3 and M4.

The number of SAU regions is typically a configurable
CPU parameter, but this patch doesn't provide a
QEMU CPU property for it. We can easily add one when
we have a board that requires it.

Backports commit 9901c576f6c02d43206e5faaf6e362ab7ea83246 from qemu
2018-03-05 01:55:11 -05:00
Peter Maydell
3da3a3fb41
target/arm: Add v8M support to exception entry code
Add support for v8M and in particular the security extension
to the exception entry code. This requires changes to:
* calculation of the exception-return magic LR value
* push the callee-saves registers in certain cases
* clear registers when taking non-secure exceptions to avoid
leaking information from the interrupted secure code
* switch to the correct security state on entry
* use the vector table for the security state we're targeting

Backports commit d3392718e1fcf0859fb7c0774a8e946bacb8419c from qemu
2018-03-05 01:51:22 -05:00
Peter Maydell
39466771d6
target/arm: Add support for restoring v8M additional state context
For v8M, exceptions from Secure to Non-Secure state will save
callee-saved registers to the exception frame as well as the
caller-saved registers. Add support for unstacking these
registers in exception exit when necessary.

Backports commit 907bedb3f3ce134c149599bd9cb61856d811b8ca from qemu
2018-03-05 01:47:25 -05:00
Peter Maydell
2feecbac0d
target/arm: Update excret sanity checks for v8M
In v8M, more bits are defined in the exception-return magic
values; update the code that checks these so we accept
the v8M values when the CPU permits them.

Backports commit bfb2eb52788b9605ef2fc9bc72683d4299117fde from qemu
2018-03-05 01:44:33 -05:00
Peter Maydell
33d2358c91
target/arm: Add new-in-v8M SFSR and SFAR
Add the new M profile Secure Fault Status Register
and Secure Fault Address Register.

Backports commit bed079da04dd9e0e249b9bc22bca8dce58b67f40 from qemu
2018-03-05 01:42:52 -05:00
Peter Maydell
7af730ed3e
target/arm: Don't warn about exception return with PC low bit set for v8M
In the v8M architecture, return from an exception to a PC which
has bit 0 set is not UNPREDICTABLE; it is defined that bit 0
is discarded [R_HRJH]. Restrict our complaint about this to v7M.

Backports commit 4e4259d3c574a8e89c3af27bcb84bc19a442efb1 from qemu
2018-03-05 01:41:51 -05:00
Peter Maydell
2aea283c4f
target/arm: Warn about restoring to unaligned stack
Attempting to do an exception return with an exception frame that
is not 8-aligned is UNPREDICTABLE in v8M; warn about this.
(It is not UNPREDICTABLE in v7M, and our implementation can
handle the merely-4-aligned case fine, so we don't need to
do anything except warn.)

Backports commit cb484f9a6e790205e69d9a444c3e353a3a1cfd84 from qemu
2018-03-05 01:40:40 -05:00
Peter Maydell
5063ca11ab
target/arm: Check for xPSR mismatch usage faults earlier for v8M
ARM v8M specifies that the INVPC usage fault for mismatched
xPSR exception field and handler mode bit should be checked
before updating the PSR and SP, so that the fault is taken
with the existing stack frame rather than by pushing a new one.
Perform this check in the right place for v8M.

Since v7M specifies in its pseudocode that this usage fault
check should happen later, we have to retain the original
code for that check rather than being able to merge the two.
(The distinction is architecturally visible but only in
very obscure corner cases like attempting an invalid exception
return with an exception frame in read only memory.)

Backports commit 224e0c300a0098fb577a03bd29d774d0769f632a from qemu
2018-03-05 01:39:39 -05:00
Peter Maydell
6f08acdcfe
target/arm: Restore SPSEL to correct CONTROL register on exception return
On exception return for v8M, the SPSEL bit in the EXC_RETURN magic
value should be restored to the SPSEL bit in the CONTROL register
bank specified by the EXC_RETURN.ES bit.

Add write_v7m_control_spsel_for_secstate() which behaves like
write_v7m_control_spsel() but allows the caller to specify which
CONTROL bank to use, reimplement write_v7m_control_spsel() in
terms of it, and use it in exception return.

Backports commit 3f0cddeee1f266d43c956581f3050058360a810d from qemu
2018-03-05 01:35:17 -05:00
Peter Maydell
0bb50b9a7e
target/arm: Restore security state on exception return
Now that we can handle the CONTROL.SPSEL bit not necessarily being
in sync with the current stack pointer, we can restore the correct
security state on exception return. This happens before we start
to read registers off the stack frame, but after we have taken
possible usage faults for bad exception return magic values and
updated CONTROL.SPSEL.

Backports commit 3919e60b6efd9a86a0e6ba637aa584222855ac3a from qemu
2018-03-05 01:31:58 -05:00
Peter Maydell
c7b5fccfb8
target/arm: Prepare for CONTROL.SPSEL being nonzero in Handler mode
In the v7M architecture, there is an invariant that if the CPU is
in Handler mode then the CONTROL.SPSEL bit cannot be nonzero.
This in turn means that the current stack pointer is always
indicated by CONTROL.SPSEL, even though Handler mode always uses
the Main stack pointer.

In v8M, this invariant is removed, and CONTROL.SPSEL may now
be nonzero in Handler mode (though Handler mode still always
uses the Main stack pointer). In preparation for this change,
change how we handle this bit: rename switch_v7m_sp() to
the now more accurate write_v7m_control_spsel(), and make it
check both the handler mode state and the SPSEL bit.

Note that this implicitly changes the point at which we switch
active SP on exception exit from before we pop the exception
frame to after it.
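
A sketch of the resulting stack-pointer check (per the commit text: in
v7M, Handler mode with SPSEL set cannot happen, but in v8M it can, so
both conditions must be consulted):

static bool v7m_using_psp(CPUARMState *env)
{
    /* Handler mode always uses the Main stack; in Thread mode the
     * CONTROL.SPSEL bit of the current security state decides. */
    return !arm_v7m_is_handler_mode(env) &&
           (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK);
}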

Backports commit de2db7ec894f11931932ca78cd14a8d2b1389d5b from qemu
2018-03-05 01:29:54 -05:00
Peter Maydell
8036c5b3de
target/arm: Don't switch to target stack early in v7M exception return
Currently our M profile exception return code switches to the
target stack pointer relatively early in the process, before
it tries to pop the exception frame off the stack. This is
awkward for v8M for two reasons:
* in v8M the process vs main stack pointer is not selected
purely by the value of CONTROL.SPSEL, so updating SPSEL
and relying on that to switch to the right stack pointer
won't work
* the stack we should be reading the stack frame from and
the stack we will eventually switch to might not be the
same if the guest is doing strange things

Change our exception return code to use a 'frame pointer'
to read the exception frame rather than assuming that we
can switch the live stack pointer this early.

Backports commit 5b5223997c04b769bb362767cecb5f7ec382c5f0 from qemu
2018-03-05 01:26:05 -05:00
Jan Kiszka
ae16a26c20
arm: Fix SMC reporting to EL2 when QEMU provides PSCI
This properly forwards SMC events to EL2 when PSCI is provided by QEMU
itself and, thus, ARM_FEATURE_EL3 is off.

Found and tested with the Jailhouse hypervisor. Solution based on
suggestions by Peter Maydell.

Backports commit 77077a83006c3c9bdca496727f1735a3c5c5355d from qemu
2018-03-05 01:19:22 -05:00
Peter Xu
0741c3880a
qom: provide root container for internal objs
We have object_get_objects_root() to keep user-created objects, but
no place for objects that will be used internally. Create such a
container for internal objects.

Backports commit 7c47c4ead75d0b733ee8f2f51fd1de0644cc1308 from qemu
2018-03-05 01:16:50 -05:00
KONRAD Frederic
61ecdd1032
memory: avoid a name clash with access macro
This avoids a name clash with the access macro on Windows 64:

make
CHK version_gen.h
CC aarch64-softmmu/memory.o
/home/konrad/qemu/memory.c: In function 'access_with_adjusted_size':
/home/konrad/qemu/memory.c:591:73: error: macro "access" passed 7 arguments, \
but takes just 2
(size - access_size - i) * 8, access_mask, attrs);
^

Backports commit 05e015f73c3b5c50c237d3d8e555e25cfa543a5c from qemu
2018-03-05 01:13:01 -05:00
Peter Xu
4956effd11
bitmap: provide to_le/from_le helpers
Provide helpers to convert bitmaps to little-endian format. They can
be used when we want to send a bitmap over the network to other hosts.

One thing to mention is that these helpers only solve the problem of
endianness; they do not solve the problem of differing word sizes
across machines (bitmaps managing the same number of bits may be
malloc'd with different sizes). So we need to take care of the size
alignment issue in the callers for now.
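
A usage sketch, assuming the helpers take (dst, src, nbits) as the
commit's API suggests:

#define NBITS 4096
unsigned long dirty[BITS_TO_LONGS(NBITS)];
unsigned long wire[BITS_TO_LONGS(NBITS)];

bitmap_to_le(wire, dirty, NBITS);    /* sender: host order -> LE */
/* ... transmit 'wire' ... */
bitmap_from_le(dirty, wire, NBITS);  /* receiver: LE -> host order */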

Backports commit d7788151a0807d5d2d410e3f8944d8c8a651f8d2 from qemu
2018-03-05 01:11:13 -05:00
Peter Xu
3d5fa79305
bitmap: introduce bitmap_count_one()
Count how many bits are set in the bitmap.
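
A minimal sketch of the counting loop, assuming nbits is a multiple of
BITS_PER_LONG (ctpopl() is QEMU's population-count helper; the real
version also has to mask off any final partial word):

long bitmap_count_one(const unsigned long *bitmap, long nbits)
{
    long i, count = 0;

    for (i = 0; i < BITS_TO_LONGS(nbits); i++) {
        count += ctpopl(bitmap[i]);
    }
    return count;
}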

Backports commit fc7deeea26af3d08f45bad85b8bd3fc3d790a090 from qemu
2018-03-05 01:08:29 -05:00
Peter Xu
296c30aaca
bitmap: remove BITOP_WORD()
We have BIT_WORD(). It's the same.
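
For reference, the surviving macro simply maps a bit number to its
word index, as defined in bitops.h:

#define BIT_WORD(nr) ((nr) / BITS_PER_LONG)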

Backports commit ab089e058eb443a9a11ade4d3fc57ced1947bfa3 from qemu
2018-03-05 01:07:02 -05:00
Peter Maydell
f0569ba11a
target/arm: Remove out of date ARM ARM section references in A64 decoder
In the A64 decoder, we have a lot of references to section numbers
from version A.a of the v8A ARM ARM (DDI0487). This version of the
document is now long obsolete (we are currently on revision B.a),
and various intervening versions renumbered all the sections.

The most recent B.a version of the document doesn't assign
section numbers at all to the individual instruction classes
in the way that the various A.x versions did. The simplest thing
to do is just to delete all the out of date C.x.x references.

Backports commit 4ce31af4aeb8471f6a913de7c59d3bde1fc4f03d from qemu
2018-03-05 01:05:53 -05:00
Peter Maydell
72dadc6518
target/arm: Handle banking in negative-execution-priority check in cpu_mmu_index()
Now that we have a banked FAULTMASK register and banked exceptions,
we can implement the correct check in cpu_mmu_index() for whether
the MPU_CTRL.HFNMIENA bit's effect should apply. This bit causes
handlers which have requested a negative execution priority to run
with the MPU disabled. In v8M the test has to check this for the
current security state and so takes account of banking.

Backports relevant part of commit 5d4791991d4de12e83d44738417c9e964167b6e8 from qemu
2018-03-05 00:54:28 -05:00