Commit Graph

130 Commits

Author SHA1 Message Date
Richard Henderson
d609ab30c2
target-sparc: Use global registers for the register window
Via indirection off cpu_regwptr.
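
A simplified sketch of what this looks like in target-sparc/translate.c: the
window registers become TCG globals based on the regwptr pointer rather than
fixed host registers. This is a fragment, not complete code, and the exact
helper names and signatures may differ slightly in this tree.

/* regwptr itself is a global loaded from env ... */
cpu_regwptr = tcg_global_mem_new_ptr(cpu_env,
                                     offsetof(CPUSPARCState, regwptr),
                                     "regwptr");
/* ... and the current window's registers are globals based on it. */
for (i = 8; i < 32; ++i) {
    cpu_regs[i] = tcg_global_mem_new(cpu_regwptr,
                                     (i - 8) * sizeof(target_ulong),
                                     gregnames[i]);
}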

Backports commit d2dc4069e046deeccc4dca0f73c3077ac22ba43f from qemu
2018-02-20 20:34:42 -05:00
Richard Henderson
3653771265
tcg: Allocate indirect_base temporaries in a different order
Since we've not got liveness analysis for indirect bases,
placing them at the end of the call-saved registers makes
it more likely that they'll stay live.


Backports commit 91478cefaaf2fa678e56df8635b34957f4d5d565 from qemu
2018-02-20 19:46:59 -05:00
Richard Henderson
bf385eba3c
tcg: Implement indirect memory registers
That is, global_mem registers whose base is another global_mem
register, rather than a fixed register.

Backports commit b3915dbbdcdb2e04753f3d34a1b0865eea005069 from qemu
2018-02-20 19:20:01 -05:00
Richard Henderson
9299329349
tcg: Work around clang bug wrt enum ranges, part 2
A previous patch changed the type of REG from int
to enum TCGReg, which provokes the following bug in clang:

https://llvm.org/bugs/show_bug.cgi?id=16154

Backports commit 869938ae2a284fe730cb6f807ea0f9e324e0f87c from qemu
2018-02-20 19:12:49 -05:00
Richard Henderson
8bc3037864
target-i386: Implement BNDMK
Backports commit 149b427b32de358c3bd5bc064c50acca6e9ff78f from qemu
2018-02-20 14:02:31 -05:00
Richard Henderson
65a78ebb26
target-i386: Deconstruct the cpu_T array
All references to cpu_T are done with a constant index. It aids
readability to decompose the array into two scalar variables.
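
Roughly, the change amounts to the following (simplified; cpu_T0 and cpu_T1
are the scalar names the i386 translator ends up using):

/* before: an array only ever indexed by the constants 0 and 1 */
static TCGv cpu_T[2];
tcg_gen_mov_tl(cpu_T[0], cpu_T[1]);

/* after: two plain scalar temporaries */
static TCGv cpu_T0, cpu_T1;
tcg_gen_mov_tl(cpu_T0, cpu_T1);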

Backports commit 1d1cc4d0f481b2939c7e9f6606e571b2fc81971a from qemu
2018-02-20 11:02:34 -05:00
Richard Henderson
092c7bea97
target-i386: Access segs via TCG registers
Having segs[].base as a register significantly improves code
generation for real and protected modes, particularly for TBs
that have multiple memory references where the segment base
can be held in a hard register through the TB.

Backports commit 3558f8055f37a34762b7a2a0f02687e6eeab893d from qemu
2018-02-20 10:02:37 -05:00
Richard Henderson
292c67109a
tcg: Introduce temp_load
Unify all of the places that realize a temporary into a register.

Backports commit 40ae5c62ebaaf7d9d3b93b88c2d32bf6342f7889 from qemu
2018-02-19 11:44:01 -05:00
Richard Henderson
c821ffd989
tcg: Change temp_save argument to TCGTemp
Backports commit b13eb728d33deaa53efc0dcef557da998e6ec40e from qemu
2018-02-19 11:39:04 -05:00
Richard Henderson
2c3ad57215
tcg: Change temp_sync argument to TCGTemp
Backports commit 12b9b11a2743002232098afb41810f1c0cb211a0 from qemu
2018-02-19 11:37:12 -05:00
Richard Henderson
82a4e93629
tcg: Change temp_dead argument to TCGTemp
Backports commit f8bf00f1028a00a7978e9175da53944de95b9fcb from qemu
2018-02-19 11:34:17 -05:00
Richard Henderson
daf837956c
tcg: Change reg_to_temp to TCGTemp pointer
Backports commit f8b2f202344b362b1e676688f838d6b7c08f1975 from qemu
2018-02-19 11:30:26 -05:00
Richard Henderson
cf59e51811
tcg: Work around clang bug wrt enum ranges
A subsequent patch will change the type of REG from int
to enum TCGReg, which provokes the following bug in clang:

https://llvm.org/bugs/show_bug.cgi?id=16154

Backports commit c8074023204e8e8a213399961ab56e2814aa6116 from qemu
2018-02-19 11:23:19 -05:00
Richard Henderson
7cb5f2fed8
tcg: Tidy temporary allocation
In particular, make sure the memory is memset before use.
Continues the increased use of TCGTemp pointers instead of
integer indices where appropriate.

Backports commit 7ca4b752feaab647b0c1a147bd3815fcdb479a59 from qemu
2018-02-19 11:17:45 -05:00
Richard Henderson
45f9ddf970
tcg: Remove tcg_get_arg_str_i32/64
Backports commit e4ce0d4eb774eb2a8b6a27cd8a6f1d75e05c21ae from qemu
2018-02-19 02:07:04 -05:00
Richard Henderson
12577dfcc0
tcg: More use of TCGReg where appropriate
Backports commit b66386623176e0b0f3bd270640bdb8ac8431c732 from qemu
2018-02-19 02:06:08 -05:00
Emilio G. Cota
e7a7d8c508
tcg: optimise memory layout of TCGTemp
This brings down the size of the struct from 56 to 32 bytes on 64-bit,
and to 20 bytes on 32-bit. This leads to memory savings:

Before:
$ find . -name 'tcg.o' | xargs size
   text    data     bss     dec     hex filename
  41131   29800      88   71019   1156b ./aarch64-softmmu/tcg/tcg.o
  37969   29416      96   67481   10799 ./x86_64-linux-user/tcg/tcg.o
  39354   28816      96   68266   10aaa ./arm-linux-user/tcg/tcg.o
  40802   29096      88   69986   11162 ./arm-softmmu/tcg/tcg.o
  39417   29672      88   69177   10e39 ./x86_64-softmmu/tcg/tcg.o

After:
$ find . -name 'tcg.o' | xargs size
   text    data     bss     dec     hex filename
  40883   29800      88   70771   11473 ./aarch64-softmmu/tcg/tcg.o
  37473   29416      96   66985   105a9 ./x86_64-linux-user/tcg/tcg.o
  38858   28816      96   67770   108ba ./arm-linux-user/tcg/tcg.o
  40554   29096      88   69738   1106a ./arm-softmmu/tcg/tcg.o
  39169   29672      88   68929   10d41 ./x86_64-softmmu/tcg/tcg.o

Note that using an entire byte for some enums that need less than
that wastes a few bits (noticeable in 32 bits, where we use
20 bytes instead of 16) but avoids extraction code, which overall
is a win--I've tested several variations of the patch, and the appended
is the best performer for OpenSSL's bntest by a very small margin:

Before:
$ taskset -c 0 perf stat -r 15 -- x86_64-linux-user/qemu-x86_64 img/bntest-x86_64 >/dev/null
[...]
 Performance counter stats for 'x86_64-linux-user/qemu-x86_64 img/bntest-x86_64' (15 runs):

      10538.479833 task-clock (msec)  # 0.999 CPUs utilized  ( +-  0.38% )
               772 context-switches   # 0.073 K/sec          ( +-  2.03% )
                 0 cpu-migrations     # 0.000 K/sec          ( +-100.00% )
             2,207 page-faults        # 0.209 K/sec          ( +-  0.08% )
      10.552871687 seconds time elapsed                      ( +-  0.39% )

After:
$ taskset -c 0 perf stat -r 15 -- x86_64-linux-user/qemu-x86_64 img/bntest-x86_64 >/dev/null
 Performance counter stats for 'x86_64-linux-user/qemu-x86_64 img/bntest-x86_64' (15 runs):

      10459.968847 task-clock (msec)  # 0.999 CPUs utilized  ( +-  0.30% )
               739 context-switches   # 0.071 K/sec          ( +-  1.71% )
                 0 cpu-migrations     # 0.000 K/sec          ( +- 68.14% )
             2,204 page-faults        # 0.211 K/sec          ( +-  0.10% )
      10.473900411 seconds time elapsed                      ( +-  0.30% )
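
A standalone sketch of the trade-off described above; the field names are
illustrative, not the actual TCGTemp definition:

#include <stdint.h>

struct TempBitPacked {          /* smallest possible, but every access
                                   needs shift/mask extraction code */
    unsigned reg          : 8;
    unsigned base_type    : 2;
    unsigned type         : 2;
    unsigned val_type     : 2;
    unsigned fixed_reg    : 1;
    unsigned mem_coherent : 1;
};

struct TempByteFields {         /* one byte per field: wastes a few bits,
                                   but reads and writes are plain byte
                                   loads/stores with no extraction code */
    uint8_t reg;
    uint8_t base_type;
    uint8_t type;
    uint8_t val_type;
    uint8_t fixed_reg;
    uint8_t mem_coherent;
};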

Backports commit 00c8fa9ffeee7458e5ed62c962faf638156c18da from qemu
2018-02-19 02:03:01 -05:00
Richard Henderson
c507f16702
tcg: Remove lingering references to gen_opc_buf
Three in comments and one in code in the stub tcg_liveness_analysis.

Backports commit 201577059331b8b3aef221ee2ed594deb99d6631 from qemu
2018-02-19 01:42:55 -05:00
Richard Henderson
8dbf46ca82
tcg: Respect highwater in tcg_out_tb_finalize
Undo the workaround at b17a6d3390f87620735f7efb03bb1c96682ff449.

If there are lots of memory operations in a TB, the slow path code
can exceed the highwater reservation. Add a check within the loop.

Backports commit 23dceda62a3643f734b7aa474fa6052593ae1a70 from qemu
2018-02-19 01:40:20 -05:00
Peter Maydell
4ca19f2cd6
tcg: Clean up includes
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.

This commit was created with scripts/clean-includes.

Backports commit 757e725b58c57d3ebb66a31fd2210df977a12154 from qemu
2018-02-19 01:04:30 -05:00
John Clarke
5c57445f08
tcg: Fix highwater check
A simple typo in the variable to use when comparing vs the highwater mark.
Reports are that qemu can in fact segfault occasionally due to this mistake.

Backports commit 644da9b39e477caa80bab69d2847dfcb468f0d33 from qemu
2018-02-17 18:53:18 -05:00
Lioncash
a2e7d86ccf
tcg/mips: Support r6 SEL{NE, EQ}Z instead of MOVN/MOVZ
Extend MIPS movcond implementation to support the SELNEZ/SELEQZ
instructions introduced in MIPS r6 (where MOVN/MOVZ have been removed).

Whereas the "MOVN/MOVZ rd, rs, rt" instructions have the following
semantics:
rd = [!]rt ? rs : rd

The "SELNEZ/SELEQZ rd, rs, rt" instructions are slightly different:
rd = [!]rt ? rs : 0

First we ensure that, if one of the movcond input values is zero, it comes
last (we can swap the input arguments if we invert the condition).
This is so that it can exactly match one of the SELNEZ/SELEQZ
instructions and avoid the need to emit the other one.

Otherwise we emit the opposite instruction first into a temporary
register, and OR that into the result:
SELNEZ/SELEQZ TMP1, v2, c1
SELEQZ/SELNEZ ret, v1, c1
OR ret, ret, TMP1

Which does the following:
ret = cond ? v1 : v2
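
A standalone C model of that sequence, checking that the two opposite
selections OR-ed together really give cond ? v1 : v2; this only illustrates
the identity and is not backend code:

#include <assert.h>
#include <stdint.h>

static uint32_t selnez(uint32_t rs, uint32_t rt) { return rt != 0 ? rs : 0; }
static uint32_t seleqz(uint32_t rs, uint32_t rt) { return rt == 0 ? rs : 0; }

int main(void)
{
    uint32_t v1 = 0x1234, v2 = 0xabcd;
    for (uint32_t c1 = 0; c1 < 2; c1++) {
        uint32_t tmp1 = seleqz(v2, c1);   /* SELEQZ TMP1, v2, c1  */
        uint32_t ret  = selnez(v1, c1);   /* SELNEZ ret,  v1, c1  */
        ret |= tmp1;                      /* OR     ret, ret, TMP1 */
        assert(ret == (c1 ? v1 : v2));    /* ret = cond ? v1 : v2  */
    }
    return 0;
}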

Backports commit 137d63902faf4960081856db9242cbaf234a23af from qemu
2018-02-17 15:24:04 -05:00
James Hogan
e71d19df81
tcg/mips: Support r6 multiply/divide encodings
MIPSr6 adds several new integer multiply, divide, and modulo
instructions, and removes several pre-r6 encodings, along with the HI/LO
registers which were the implicit operands of some of those
instructions. Update TCG to use the new instructions when built for r6.

The new instructions actually map much more directly to the TCG ops, as
they provide only a single 32-bit half of the result, in a normal
general-purpose register instead of HI or LO.

The mulu2_i32 and muls2_i32 operations are no longer appropriate for r6,
so they are removed from the TCG opcode table. This is because they
would need to emit two separate host instructions anyway (for the high
and low half of the result), which TCG can arrange automatically for us
in the absence of mulu2_i32/muls2_i32 by splitting them into mul_i32 and
mul*h_i32 TCG ops.
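
A standalone illustration of that split: the full 64-bit product can always
be reassembled from a separate low half (mul_i32) and high half (muluh_i32),
which is the expansion TCG falls back to when mulu2_i32 is absent:

#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint32_t a = 0xdeadbeef, b = 0x12345678;
    uint64_t full = (uint64_t)a * b;                    /* reference result   */
    uint32_t lo = a * b;                                /* what mul_i32 gives */
    uint32_t hi = (uint32_t)(((uint64_t)a * b) >> 32);  /* what muluh_i32 gives */
    assert(full == (((uint64_t)hi << 32) | lo));
    return 0;
}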

Backports commit bc6d0c22b09a72897d9db4482076f89e7de97400 from qemu
2018-02-17 15:24:04 -05:00
James Hogan
9dac598855
tcg/mips: Support r6 JR encoding
MIPSr6 encodes JR as JALR with zero as the link register, and the pre-r6
JR encoding is removed. Update TCG to use the new encoding when built
for r6.

We still use the old encoding for pre-r6, so as not to confuse return
prediction stack hardware which may detect only particular encodings of
the return instruction.

Backports commit 6e0d096989be52c2b945fc83a9bd15d887bbdb47 from qemu
2018-02-17 15:24:04 -05:00
James Hogan
7f1bc28513
tcg/mips: Add use_mips32r6_instructions definition
Add definition use_mips32r6_instructions to the MIPS TCG backend which
is constant 1 when built for MIPS release 6. This will be used to decide
between pre-R6 and R6 instruction encodings.

Backports commit ce14bd4d469f3a14f6cbfceb6360aee066a60d72 from qemu
2018-02-17 15:24:04 -05:00
James Hogan
9d3a2feea0
tcg-opc.h: Simplify insn_start def
We already have a TLADDR_ARGS definition, so rearrange the order
slightly and use it in the definition of insn_start, instead of
having an #ifdef.

Backports commit c0e40dbdcc291c85faa289a53be60b7b1b7c7598 from qemu
2018-02-17 15:24:03 -05:00
Richard Henderson
d167379211
tcg/ppc: Prefer mask over andi.
Prefer the instruction that isn't required to modify cr0.

Backports commit 1e1df962e325e18a5188c4814cd1a10215a48f79 from qemu
2018-02-17 15:24:03 -05:00
Richard Henderson
3c3dee3747
tcg/ppc: Revise goto_tb implementation
Restrict the size of code_gen_buffer to 2GB on ppc64, which
lets us assert that everything is reachable with addis+addi
from tb_ret_addr. This lets us use a max of 4 insns for goto_tb
instead of 7.

Emit the indirect branch portion of goto_tb up front, which
means we only have to update two insns to update any link.
With a 64-bit store, we can update the link atomically, which
may be required in future.

Backports commit 5bfd75a35c11dd3aa61c73d0d2cd88137c31519c from qemu
2018-02-17 15:24:03 -05:00
Richard Henderson
13ad21a21f
tcg/ppc: Adjust exit_tb for change in prologue placement
Moving the prologue to the beginning of the code_gen_buffer
changes the direction of the "return" branch, so the logic
needs to change to match.

Backports commit 70f897bdc4ce4101ec008317d43090f532bfb07d from qemu
2018-02-17 15:24:03 -05:00
Richard Henderson
bdf667fd4e
tcg: Check for overflow via highwater mark
We currently pre-compute a worst-case code size for any TB, which
works out to be 122kB. Since the average TB size is near 1kB, this
wastes quite a lot of storage.

Instead, check for overflow in between generating code for each opcode.
The overhead of the check isn't measurable and wastage is minimized.
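
A minimal sketch of the shape of such a check; the names here are
illustrative, not the exact tcg.c code. The output pointer is compared, once
per opcode, against a highwater mark placed one worst-case opcode's worth
below the true end of the buffer:

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint8_t *code_ptr;        /* current emit position                    */
    uint8_t *code_highwater;  /* buffer end minus a one-opcode safety gap */
} GenCtx;

/* Returns false once the output pointer has passed the highwater mark,
   signalling the caller to abandon this TB rather than overrun the buffer. */
static bool still_room(const GenCtx *s)
{
    return s->code_ptr <= s->code_highwater;
}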

Backports commit b125f9dc7bd68cd4c57189db4da83b0620b28a72 from qemu
2018-02-17 15:24:00 -05:00
Richard Henderson
19a3c7e03f
tcg: Emit prologue to the beginning of code_gen_buffer
By putting the prologue at the end, we risk overwriting the prologue
should our estimate of maximum TB size prove too small. Given the
two different placements of the call to tcg_prologue_init,
move the high water mark computation into tcg_prologue_init.

Backports commit 8163b74938d8b7d12e70597c4553dd0dc49443d5 from qemu
2018-02-17 15:24:00 -05:00
Richard Henderson
532877a366
tcg: Remove tcg_gen_code_search_pc
It's no longer used, so tidy up everything reached by it.

Backports commit 04fe64000162c45d8974da9ca4d266f8d0e67eb7 from qemu
2018-02-17 15:24:00 -05:00
Richard Henderson
a5ac288135
tcg: Remove gen_intermediate_code_pc
It is no longer used, so tidy up everything reached by it.
This includes the gen_opc_* arrays, the search_pc parameter
and the inline gen_intermediate_code_internal functions.

Backports commit 4e5e1215156662b2b153255c49d4640d82c5568b from qemu
2018-02-17 15:23:59 -05:00
Richard Henderson
66de6cc37c
tcg: Save insn data and use it in cpu_restore_state_from_tb
We can now restore state without retranslation.

Backports commit fca8a500d519a56abeaedf8073167a61d3c6b9c4 from qemu
2018-02-17 15:23:59 -05:00
Richard Henderson
1cbd175736
tcg: Pass data argument to restore_state_to_opc
The gen_opc_* arrays are already redundant with the data stored in
the insn_start arguments. Transition restore_state_to_opc to use
data from the latter.

Backports commit bad729e272387de7dbfa3ec4319036552fc6c107 from qemu
2018-02-17 15:23:58 -05:00
Lioncash
b115c5509d
tcg: Add TCG_MAX_INSNS
Adjust all translators to respect it.

Backports commit 190ce7fbc79fd0883a6170d7f30da59d366e6830 from qemu
2018-02-17 15:23:58 -05:00
Richard Henderson
2c1ae7a408
target-sparc: Remove gen_opc_jump_pc
Since jump_pc[1] is always npc + 4, we can infer after incrementing
that jump_pc[1] == pc + 4. Because of that, we can encode the branch
destination into a single word, and store that in npc.

Backports commit 6c42444f9a53b6af39d46008cb9f650b11e96cb9 from qemu
2018-02-17 15:23:56 -05:00
Richard Henderson
500e116581
target-mips: Add delayed branch state to insn_start
Backports commit c20d594e45bc8c4b21be1a7637cba0f279f72879 from qemu
2018-02-17 15:23:56 -05:00
Aurelien Jarno
b5f5e2dbc2
tcg/mips: pass oi to tcg_out_tlb_load
Instead of computing mem_index and s_bits in both tcg_out_qemu_ld and
tcg_out_qemu_st function and passing them to tcg_out_tlb_load, directly
pass oi to the tcg_out_tlb_load function and compute mem_index and
s_bits there.

Backports commit 81dfaf1a8f7f95259801da9732472f879023ef77 from qemu
2018-02-17 15:23:54 -05:00
Peter Crosthwaite
2b15db6e12
tcg: split tcg_op_defs to -common
tcg_op_defs (and the _max) are both needed by the TCI disassembler. For
multi-arch, tcg.c will be multiple-compiled (arch-obj) with its symbols
hidden from common code. So split the definition off to new file,
tcg-common.c which will remain a regular obj-y for use by both the TCI
disas as well as the multiple tcg.c's.

Backports commit 7d8f787d9d261d6880b69e35ed682241e3f9242f from qemu
2018-02-17 15:23:51 -05:00
Pavel Dovgalyuk
6cdaaf9b1b
softmmu: add helper function to pass through retaddr
This patch introduces several helpers to pass a return address that points
into the TB. A correct return address allows correct restoring of the guest
PC and icount. These functions should be used when helpers embedded into a
TB invoke memory operations.

Backports commit 282dffc8a4bfe8724548cabb8a26698bde0a6e18 from qemu
2018-02-17 15:23:38 -05:00
Aurelien Jarno
11cfddad05
tcg/i386: use softmmu fast path for unaligned accesses
Softmmu unaligned loads/stores currently go through the slow path
for two reasons:
  - to support unaligned access on hosts with strict alignment
  - to correctly handle accesses crossing pages

x86 is only concerned by the second reason. Unaligned accesses are
avoided by compilers, but are not uncommon. We therefore would like
to see them going through the fast path, if they don't cross pages.

For that we can use the fact that two adjacent TLB entries can't contain
the same page. Therefore accessing the TLB entry corresponding to the
first byte, but comparing its content to the page address of the last byte,
ensures that we don't cross pages. We can do this check without adding
more instructions in the TLB code (but increasing its length by one
byte) by using the LEA instruction to combine the existing move with the
size addition.

On an x86-64 host, this gives a 3% boot time improvement for a powerpc
guest and 4% for an x86-64 guest.
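
A rough standalone model of that comparison, assuming 4 KiB pages; this is
just the idea, not the actual tcg/i386 output. The tag held by the TLB entry
selected by the first byte is compared against the page of the last byte, so
any access that crosses a page fails the compare and takes the slow path:

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static bool hits_fast_path(uintptr_t tlb_tag, uintptr_t addr, size_t size)
{
    const uintptr_t page_mask = ~(uintptr_t)0xfff;  /* 4 KiB pages assumed      */
    uintptr_t last_byte = addr + size - 1;          /* the add the LEA folds in */
    return tlb_tag == (last_byte & page_mask);
}

int main(void)
{
    uintptr_t page = 5 * 0x1000;                        /* tag held by the entry */
    assert(hits_fast_path(page, page + 0xffc, 4));      /* stays within the page */
    assert(!hits_fast_path(page, page + 0xffd, 4));     /* crosses into the next */
    return 0;
}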

Backports commit 8cc580f6a0d8c0e2f590c1472cf5cd8e51761760 from qemu
2018-02-17 15:23:33 -05:00
Laurent Vivier
ea2ee48d9c
s390: fix softmmu compilation
guest_base must be used only in linux-user mode.

Backports commit 090d0bfd948343d522cd20bc634105b5cfe2483b from qemu
2018-02-17 15:23:32 -05:00
James Hogan
dba4828444
tcg/mips: Fix clobbering of qemu_ld inputs
The MIPS TCG backend implements qemu_ld with 64-bit targets using the v0
register (base) as a temporary to load the upper half of the QEMU TLB
comparator (see line 5 below); however, this happens before the input
address is used (line 8, to mask off the low bits for the TLB comparison,
and line 12, to add the host-guest offset). If the input address (addrl)
also happens to have been placed in v0 (as in the second column below), it
gets clobbered before it is used.

      addrl in t2               addrl in v0

 1    srl   a0,t2,0x7           srl   a0,v0,0x7
 2    andi  a0,a0,0x1fe0        andi  a0,a0,0x1fe0
 3    addu  a0,a0,s0            addu  a0,a0,s0
 4    lw    at,9136(a0)         lw    at,9136(a0)        set TCG_TMP0 (at)
 5    lw    v0,9140(a0)         lw    v0,9140(a0)        set base (v0)
 6    li    t9,-4093            li    t9,-4093
 7    lw    a0,9160(a0)         lw    a0,9160(a0)        set addend (a0)
 8    and   t9,t9,t2            and   t9,t9,v0           use addrl
 9    bne   at,t9,0x836d8c8     bne   at,t9,0x836d838    use TCG_TMP0
10    nop                       nop
11    bne   v0,t8,0x836d8c8     bne   v0,a1,0x836d838    use base
12    addu  v0,a0,t2            addu  v0,a0,v0           use addrl, addend
13    lw    t0,0(v0)            lw    t0,0(v0)

Fix by using TCG_TMP0 (at) as the temporary instead of v0 (base),
pushing the load on line 5 forward into the delay slot of the low
comparison (line 10). The early load of the addend on line 7 also needs
pushing even further for 64-bit targets, or it will clobber a0 before
we're done with it. The output for 32-bit targets is unaffected.

 srl   a0,v0,0x7
 andi  a0,a0,0x1fe0
 addu  a0,a0,s0
 lw    at,9136(a0)
-lw    v0,9140(a0)          load high comparator
 li    t9,-4093
-lw    a0,9160(a0)          load addend
 and   t9,t9,v0
 bne   at,t9,0x836d838
- nop
+ lw   at,9140(a0)          load high comparator
+lw    a0,9160(a0)          load addend
-bne   v0,a1,0x836d838
+bne   at,a1,0x836d838
 addu  v0,a0,v0
 lw    t0,0(v0)

Backports commit 33fca8589cf2aa7bf91564e6a8f26b3ba0910541 from qemu
2018-02-17 15:23:24 -05:00
Aurelien Jarno
45927edecf
tcg/mips: fix add2
The add2 code in the tcg_out_addsub2 function doesn't take into account
the case where rl == al == bl. In that case we can't compute the carry
after the addition. As it corresponds to a multiplication by 2, the
carry bit is the bit 31.

While this is a corner case, it prevents x86-64 guests from booting on a
MIPS host.
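
A standalone check of that observation: when rl, al and bl are the same
register, the usual trick of comparing the sum against an input is unusable
because the input has already been overwritten, but since al + al is a
doubling, the carry out is exactly the original bit 31.

#include <assert.h>
#include <stdint.h>

int main(void)
{
    for (uint64_t x = 0; x <= UINT32_MAX; x += 0x10001) {
        uint32_t al = (uint32_t)x;
        uint32_t carry = al >> 31;              /* carry is just bit 31 */
        uint64_t wide = (uint64_t)al + al;      /* reference 33-bit sum */
        assert(carry == (uint32_t)(wide >> 32));
    }
    return 0;
}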

Backports commit c99d69694af4ed15b33e3f7c2e3ef6972c14358d from qemu
2018-02-17 15:23:23 -05:00
Aurelien Jarno
4e68b4167d
tcg/s390x: Mask TCGMemOp appropriately for indexing
Commit 2b7ec66f fixed TCGMemOp masking following the MO_AMASK addition,
but two cases were forgotten in the TCG S390 backend.

Backports commit 3c8691f568f49bf623dcb2850464d4156d95e61b from qemu
2018-02-17 15:23:23 -05:00
Aurelien Jarno
096d1a975d
tcg/mips: Mask TCGMemOp appropriately for indexing
Commit 2b7ec66f fixed TCGMemOp masking following the MO_AMASK addition,
but two cases were forgotten in the TCG MIPS backend.

Backports commit 4214a8cb7c15ec43d4b2a43ebf248b273a0f4d45 from qemu
2018-02-17 15:23:23 -05:00
Aurelien Jarno
8396601082
tcg/mips: fix TLB loading for BE host with 32-bit guests
For a 32-bit guest, we load a 32-bit address from the TLB, so there is no
need to compensate for the low or high part. This fixes 32-bit guests on
big-endian hosts.

Backports commit e72c4fb81db52be881c9356f1c60e0a7817d2d32 from qemu
2018-02-17 15:23:23 -05:00
Aurelien Jarno
ba73fd9162
tcg/s390: fix branch target change during code retranslation
Make sure not to modify the branch target. This ensures that the
branch target is not corrupted during partial retranslation.

Backports commit cd3b29b745b0ff393b2d37317837bc726b8dacc8 from qemu
2018-02-17 15:23:17 -05:00
Peter Crosthwaite
a591219ad6
cpu-defs: Move CPU_TEMP_BUF_NLONGS to tcg
The usages of this define are pure TCG and there is no architecture
specific variation of the value. Localise it to the TCG engine to
remove another architecture agnostic piece from cpu-defs.h.

This follows on from a28177820a868eafda8fab007561cc19f41941f4 where
temp_buf was moved out of the CPU_COMMON obsoleting the need for
the super early definition.

Backports commit 6e0b07306d1793e8402dd218d2e38a7377b5fc27 from qemu
2018-02-17 15:23:15 -05:00