Commit Graph

533 Commits

Richard Henderson
3f9502dc8b
tcg: Allow extra data to be attached to insn_start
With an eye toward having this data replace the gen_opc_* arrays
that each target collects in order to enable restore_state_from_tb.

Backports commit 9aef40ed1f6e2bd794bbb3ba8c8b773e506334c9 from qemu
2018-02-11 13:03:51 -05:00
Richard Henderson
dd1ec408e5
target-*: Increment num_insns immediately after tcg_gen_insn_start
This does tidy the icount test common to all targets.

Backports commit 959082fc4a93a016a6b697e1e0c2b373d8a3a373 from qemu
2018-02-11 12:46:30 -05:00
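
[Editor's note] A minimal sketch of the per-instruction pattern the surrounding insn_start entries converge on, assuming the usual qemu translator-loop names (dc, tb, num_insns, max_insns); an illustration, not the exact code of any one target:

    tcg_gen_insn_start(dc->pc);   /* emitted unconditionally, first */
    num_insns++;                  /* incremented immediately after */

    if (num_insns == max_insns && (tb->cflags & CF_LAST_IO)) {
        gen_io_start();           /* now lands after the insn_start marker */
    }
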
Richard Henderson
a64d0ff657
target-*: Unconditionally emit tcg_gen_insn_start
While we're at it, emit the opcode adjacent to where we currently
record data for search_pc. This puts gen_io_start et al on the
"correct" side of the marker.

Backports commit 667b8e29c5b1d8c5b4e6ad5f780ca60914eb6e96 from qemu
2018-02-11 12:41:20 -05:00
Lioncash
b3f9ff667b
tcg: Rename debug_insn_start to insn_start
With an eye toward making it mandatory.

Backports commit 765b842adec4c5a359e69ca08785553599f71496 from qemu
2018-02-11 12:34:01 -05:00
Richard Henderson
77b03e0973
target-i386: Make check_hw_breakpoints static
The function is now only used from within a single file.

Backports commit dd941cdcfec536aad6a310a153778142ed9f3e92 from qemu
2018-02-11 12:28:08 -05:00
Richard Henderson
10e0920fa0
target-i386: Move breakpoint related functions to new file
Backports commit ba4b5c65a98ea91dc3b13e42dd9404808c999dda from qemu
2018-02-11 12:25:24 -05:00
Lioncash
fb2fe4580f
optimize: Add missing extrh/extrl case
2018-02-11 02:57:55 -05:00
Lioncash
3791fc69fd
target-arm: Use new revbit functions
Backports commit 42fedbca8f5b54324ed89be3484d4a3dc9946387 from qemu
2018-02-11 02:57:55 -05:00
Richard Henderson
a5d6a31d69
host-utils: Add revbit functions
Backports commit 652a4b7e736f432a6809d1d2b52d169ab0b9aa3b from qemu.
2018-02-11 02:57:55 -05:00
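
[Editor's note] For reference, the 32-bit variant added here is, modulo details, the classic swap-network reversal (bswap32 is qemu's byte-swap helper; treat the exact form as an assumption):

    static inline uint32_t revbit32(uint32_t x)
    {
        /* Put each byte in its mirrored position. */
        x = bswap32(x);
        /* Swap the nibbles within each byte. */
        x = ((x & 0xf0f0f0f0u) >> 4)
          | ((x & 0x0f0f0f0fu) << 4);
        /* Swap the bits within each nibble. */
        x = ((x & 0x88888888u) >> 3)
          | ((x & 0x44444444u) >> 1)
          | ((x & 0x22222222u) << 1)
          | ((x & 0x11111111u) << 3);
        return x;
    }
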
Richard Henderson
eb5ed2a844
target-arm: Use tcg_gen_extrh_i64_i32
This usually eliminates an operation in the translator by combining
a shift with an extract.

In the case of gen_set_NZ64, we don't need a boolean value for cpu_ZF,
merely a non-zero value. Given that we can extract both halves of a
64-bit input in one call, this simplifies the code.

Backports commit 7cb36e18b2f1c1f971ebdc2121de22a8c2e94fd6 from qemu
2018-02-11 02:57:54 -05:00
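
[Editor's note] The resulting gen_set_NZ64 is roughly the following sketch (cpu_NF/cpu_ZF are the aarch64 flag globals; treat the exact body as an assumption):

    static void gen_set_NZ64(TCGv_i64 result)
    {
        /* One call extracts both 32-bit halves of the 64-bit result. */
        tcg_gen_extr_i64_i32(cpu_ZF, cpu_NF, result);
        /* ZF only needs to be non-zero iff the result is non-zero. */
        tcg_gen_or_i32(cpu_ZF, cpu_ZF, cpu_NF);
    }
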
Richard Henderson
b94da3fc13
target-arm: Recognize ROR
Backports commit 8fb0ad8e16ab3d03433244a1a03e1df757342ad8 from qemu
2018-02-11 02:57:33 -05:00
Richard Henderson
3173269986
target-arm: Eliminate unnecessary zero-extend in disas_bitfield
For !SF, this initial ext32u can't be optimized away by the
current TCG code generator. (It would require backward bit
liveness propagation.)

Backports commit d3a77b42decd0cbfa62a5526e67d1d6d380c83a9 from qemu
2018-02-11 01:35:58 -05:00
Richard Henderson
c637a97270
target-arm: Recognize UXTB, UXTH, LSR, LSL
These are all special case aliases of UBFM.

Backports commit 9924e85829fe21b5f38a5d267c9aea44c5d478ac from qemu
2018-02-11 01:34:11 -05:00
Richard Henderson
d9e4e70636
target-arm: Recognize SXTB, SXTH, SXTW, ASR
These are all special case aliases of SBFM.

Backports commit ef60151bee9a95e3a5cc98b345a19ed7eb435ddb from qemu
2018-02-11 01:31:54 -05:00
Richard Henderson
5ee72ff9f5
target-arm: Implement fcsel with movcond
Backports commit 6e061029d74455d83f6fa070ac33de7a356cf60d from qemu
2018-02-11 01:29:14 -05:00
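
[Editor's note] A hedged sketch of the movcond form (c holds the decoded condition, t_true/t_false the two source values; names assumed from the surrounding translator code):

    TCGv_i64 t_zero = tcg_const_i64(0);
    /* t_true = cond(c.value, 0) ? t_true : t_false, with no branch */
    tcg_gen_movcond_i64(c.cond, t_true, c.value, t_zero, t_true, t_false);
    tcg_temp_free_i64(t_zero);
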
Richard Henderson
53bd2b1d5c
target-arm: Implement ccmp branchless
This can allow much of a ccmp to be elided when particular
flags are subsequently dead.

Backports commit 7dd03d773e0dafae9271318fc8d6b2b14de74403 from qemu
2018-02-11 01:25:51 -05:00
Richard Henderson
2c71ddefb1
target-arm: Use setcond and movcond for csel
Backports commit 259cb68491ab36427e7e5d820fe543d53b006ec6 from qemu
2018-02-10 23:57:11 -05:00
Richard Henderson
70dd48b855
target-arm: Handle always condition codes within arm_test_cc
Handling this with TCG_COND_ALWAYS will allow these unlikely
cases to be handled without special cases in the rest of the
translator. The TCG optimizer ought to be able to reduce
these ALWAYS conditions completely.

Backports commit 9305eac09e61d857c9cc11e20db754dfc25a82db from qemu
2018-02-10 23:48:10 -05:00
Lioncash
94f1227f7a
target-arm: Introduce DisasCompare
Split arm_gen_test_cc into 3 functions, so that it can be reused
for non-branch TCG comparisons.

Backports commit 6c2c63d3a02c79e9035ca0370cc549d0f938a4dd from qemu
2018-02-10 23:45:47 -05:00
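
[Editor's note] The structure at the heart of the split looks roughly like this (field names per the backported patch, hedged):

    typedef struct DisasCompare {
        TCGCond cond;       /* TCG condition to apply */
        TCGv_i32 value;     /* value to test against zero */
        bool value_global;  /* whether value is a global that must not be freed */
    } DisasCompare;
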
Richard Henderson
352f93a119
tcg/aarch64: Fix tcg_out_qemu_{ld,st} for guest_base == 0
In ffc6372851d8631a9f9fa56ec613b3244dc635b9, we moved the guest
base from the address index register to the address base register.
Except that 31 in the base slot is SP, not XZR, so we need to be
more intelligent about which reg gets placed in which slot.

Backports commit 352bcb0a2b816ff9ab9d75d0f2384650d9e9ab19 from qemu
2018-02-10 23:33:24 -05:00
Richard Henderson
7d57c2e4ce
tcg/aarch64: Use softmmu fast path for unaligned accesses
Backports commit 9ee14902bf107e37fb2c8119fa7bca424396237c from qemu
2018-02-10 23:25:34 -05:00
Lioncash
f8388a6c03
header_gen: Fix mips platform
2018-02-10 23:21:41 -05:00
Richard Henderson
aaf89ed84d
tcg/s390: Use softmmu fast path for unaligned accesses
Backports commit a5e39810b9088b5d20fac8e0293f281e1c8b608f from qemu
2018-02-10 23:14:14 -05:00
Richard Henderson
a3aaf5a864
tcg: Remove tcg_gen_trunc_i64_i32
Replacing it with tcg_gen_extrl_i64_i32.

Backports commit ecc7b3aa71f5fdcf9ee87e74ca811d988282641d from qemu
2018-02-10 23:11:02 -05:00
Richard Henderson
58e939b91f
tcg: Split trunc_shr_i32 opcode into extr[lh]_i64_i32
Rather than allow arbitrary shift+trunc, only concern ourselves
with low and high parts. This is all that was being used anyway.

Backports commit 609ad70562793937257c89d07bf7c1370b9fc9aa from qemu
2018-02-10 23:00:45 -05:00
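
[Editor's note] A hedged sketch of the resulting pair; lo, hi and val64 are illustrative names:

    tcg_gen_extrl_i64_i32(lo, val64);   /* low 32 bits (the old plain trunc) */
    tcg_gen_extrh_i64_i32(hi, val64);   /* high 32 bits (shift 32 + trunc) */
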
Aurelien Jarno
a05256b206
tcg: update README about size changing ops
Backports commit 870ad1547ac53bc79c21d86cf453b3b20cc660a2 from qemu
2018-02-10 22:49:36 -05:00
Aurelien Jarno
4bd3d5005e
tcg/optimize: add optimizations for ext_i32_i64 and extu_i32_i64 ops
They behave the same as ext32s_i64 and ext32u_i64 from the constant
folding and zero propagation point of view, except that they can't
be replaced by a mov, so we don't compute the affected value.

Backports commit 8bcb5c8f34f9215d4f88f388c7ff14c9bd5cecd3 from qemu
2018-02-10 22:47:26 -05:00
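
[Editor's note] The propagation these ops need can be expressed as this self-contained sketch of the known-bits computation (an assumption about the shape of the code, not the literal patch):

    #include <stdint.h>
    #include <stdbool.h>

    /* Known-bits mask after extending a 32-bit value to 64 bits. */
    static uint64_t fold_ext_mask(uint64_t src_mask, bool is_signed)
    {
        return is_signed ? (uint64_t)(int64_t)(int32_t)src_mask
                         : (uint64_t)(uint32_t)src_mask;
    }
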
Aurelien Jarno
f279c93768
tcg: implement real ext_i32_i64 and extu_i32_i64 ops
Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a
32-bit value is always converted to a 64-bit value and not propagated
through the register allocator or the optimizer.

Backports commit 4f2331e5b67af8172419eb1c8db510b497b30a7b from qemu
2018-02-10 22:45:13 -05:00
Aurelien Jarno
80223e7ad5
tcg: rename trunc_shr_i32 into trunc_shr_i64_i32
The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32,
and the name in the README doesn't match the name offered to the
frontends.

Always use the long name to make it clear it is a size changing op.

Backports commit 0632e555fc4d281d69cb08d98d500d96185b041f from qemu
2018-02-10 22:29:30 -05:00
Aurelien Jarno
5f0920ad0f
tcg/optimize: allow constant to have copies
Now that copies and constants are tracked separately, we can allow
constants to have copies, deferring the choice between using a register
or a constant to the register allocation pass. This prevents this kind
of repeated constant reloading:

-OUT: [size=338]
+OUT: [size=298]
   mov    -0x4(%r14),%ebp
   test   %ebp,%ebp
   jne    0x7ffbe9cb0ed6
   mov    $0x40002219f8,%rbp
   mov    %rbp,(%r14)
-  mov    $0x40002219f8,%rbp
   mov    $0x4000221a20,%rbx
   mov    %rbp,(%rbx)
   mov    $0x4000000000,%rbp
   mov    %rbp,(%r14)
-  mov    $0x4000000000,%rbp
   mov    $0x4000221d38,%rbx
   mov    %rbp,(%rbx)
   mov    $0x40002221a8,%rbp
   mov    %rbp,(%r14)
-  mov    $0x40002221a8,%rbp
   mov    $0x4000221d40,%rbx
   mov    %rbp,(%rbx)
   mov    $0x4000019170,%rbp
   mov    %rbp,(%r14)
-  mov    $0x4000019170,%rbp
   mov    $0x4000221d48,%rbx
   mov    %rbp,(%rbx)
   mov    $0x40000049ee,%rbp
   mov    %rbp,0x80(%r14)
   mov    %r14,%rdi
   callq  0x7ffbe99924d0
   mov    $0x4000001680,%rbp
   mov    %rbp,0x30(%r14)
   mov    0x10(%r14),%rbp
   mov    $0x4000001680,%rbp
   mov    %rbp,0x30(%r14)
   mov    0x10(%r14),%rbp
   shl    $0x20,%rbp
   mov    (%r14),%rbx
   mov    %ebx,%ebx
   mov    %rbx,(%r14)
   or     %rbx,%rbp
   mov    %rbp,0x10(%r14)
   mov    %rbp,0x90(%r14)
   mov    0x60(%r14),%rbx
   mov    %rbx,0x38(%r14)
   mov    0x28(%r14),%rbx
   mov    $0x4000220e60,%r12
   mov    %rbx,(%r12)
   mov    $0x40002219c8,%rbx
   mov    %rbp,(%rbx)
   mov    0x20(%r14),%rbp
   sub    $0x8,%rbp
   mov    $0x4000004a16,%rbx
   mov    %rbx,0x0(%rbp)
   mov    %rbp,0x20(%r14)
   mov    $0x19,%ebp
   mov    %ebp,0xa8(%r14)
   mov    $0x4000015110,%rbp
   mov    %rbp,0x80(%r14)
   xor    %eax,%eax
   jmpq   0x7ffbebcae426
   lea    -0x5f6d72a(%rip),%rax        # 0x7ffbe3d437b3
   jmpq   0x7ffbebcae426

Backports commit 299f80130401153af1a6ddb3cc011781bcd47600 from qemu
2018-02-10 22:18:03 -05:00
Aurelien Jarno
59909fe549
tcg/optimize: track const/copy status separately
Instead of using an enum which could be either a copy or a const, track
them separately. This will be used in the next patch.

Constants are tracked through a bool. Copies are tracked by initializing
a temp's next_copy and prev_copy pointers to itself, which allows the
code to be simplified a bit.

Backports commit b41059dd9deec367a4ccd296659f0bc5de2dc705 from qemu
2018-02-10 22:15:43 -05:00
Aurelien Jarno
134a7dfe82
tcg/optimize: add temp_is_const and temp_is_copy functions
Add two accessor functions, temp_is_const and temp_is_copy, to make the
code more readable and future changes easier.

Backports commit d9c769c60948815ee03b2684b1c1c68ee4375149 from qemu
2018-02-10 22:07:02 -05:00
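
[Editor's note] At this point the per-temp state is still an enum, so the accessors are thin wrappers; a hedged sketch:

    static bool temp_is_const(TCGArg arg)
    {
        return temps[arg].state == TCG_TEMP_CONST;
    }

    static bool temp_is_copy(TCGArg arg)
    {
        return temps[arg].state == TCG_TEMP_COPY;
    }
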
Aurelien Jarno
b450b79622
tcg/optimize: optimize temps tracking
The tcg_temp_info structure uses 24 bytes per temp. Now that we emulate
vector registers on most guests, it's not uncommon to have more than 100
used temps. This means we have to initialize more than 2kB at least twice
per TB, and often more when there are a few goto_tb.

Instead, use a TCGTempSet bit array to track which temps are in use in
the current basic block. This means there are only around 16 bytes to
initialize.

This improves the boot time of a MIPS guest on an x86-64 host by around
7% and moves tcg_optimize off the top of the profiler list.

Backports commit 1208d7dd5fddc1fbd98de800d17429b4e5578848 from qemu
2018-02-10 21:51:46 -05:00
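
[Editor's note] A hedged sketch of the lazy initialization this enables (TCGTempSet, bitmap_zero, test_bit and set_bit are existing qemu primitives; the helper bodies are assumptions):

    static TCGTempSet temps_used;

    static void reset_all_temps(int nb_temps)
    {
        /* Clear only the bit array (~16 bytes), not the whole temps[]. */
        bitmap_zero(temps_used.l, nb_temps);
    }

    static void init_temp_info(TCGArg temp)
    {
        /* Initialize a temp's tracking state only on first use in the BB. */
        if (!test_bit(temp, temps_used.l)) {
            temps[temp].state = TCG_TEMP_UNDEF;
            temps[temp].mask = -1;
            set_bit(temp, temps_used.l);
        }
    }
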
Aurelien Jarno
5f67ab74e7
tcg/optimize: fix constant signedness
By convention, on a 64-bit host TCG internally stores 32-bit constants
as sign-extended. This is not the case in the optimizer when a 32-bit
constant is folded.

This doesn't seem to have consequences beyond suboptimal code
generation. For instance, the x86 backend assumes sign-extended constants,
and in some rare cases uses a 32-bit unsigned immediate 0xffffffff
instead of an 8-bit signed immediate 0xff for the constant -1. This is
with a ppc guest:

before
------

 ---- 0x9f29cc
 movi_i32 tmp1,$0xffffffff
 movi_i32 tmp2,$0x0
 add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
 add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
 mov_i32 r10,tmp0

0x7fd8c7dfe90c:  xor    %ebp,%ebp
0x7fd8c7dfe90e:  mov    %ebp,%r11d
0x7fd8c7dfe911:  mov    0x18(%r14),%r9d
0x7fd8c7dfe915:  add    %r9d,%r10d
0x7fd8c7dfe918:  adc    %ebp,%r11d
0x7fd8c7dfe91b:  add    $0xffffffff,%r10d
0x7fd8c7dfe922:  adc    %ebp,%r11d
0x7fd8c7dfe925:  mov    %r11d,0x134(%r14)
0x7fd8c7dfe92c:  mov    %r10d,0x28(%r14)

after
-----

 ---- 0x9f29cc
 movi_i32 tmp1,$0xffffffffffffffff
 movi_i32 tmp2,$0x0
 add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
 add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
 mov_i32 r10,tmp0

0x7f37010d490c:  xor    %ebp,%ebp
0x7f37010d490e:  mov    %ebp,%r11d
0x7f37010d4911:  mov    0x18(%r14),%r9d
0x7f37010d4915:  add    %r9d,%r10d
0x7f37010d4918:  adc    %ebp,%r11d
0x7f37010d491b:  add    $0xffffffffffffffff,%r10d
0x7f37010d491f:  adc    %ebp,%r11d
0x7f37010d4922:  mov    %r11d,0x134(%r14)
0x7f37010d4929:  mov    %r10d,0x28(%r14)

Backports commit 29f3ff8d6cbc28f79933aeaa25805408d0984a8f from qemu
2018-02-10 21:40:20 -05:00
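
[Editor's note] The fix itself is roughly a one-liner in do_constant_folding (a hedged sketch, not the literal patch):

    static TCGArg do_constant_folding(TCGOpcode op, TCGArg x, TCGArg y)
    {
        TCGArg res = do_constant_folding_2(op, x, y);
        if (op_bits(op) == 32) {
            res = (int32_t)res;   /* was: res &= 0xffffffff; */
        }
        return res;
    }
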
Aurelien Jarno
e273acf87a
tcg/optimize: fix tcg_opt_gen_movi
Due to a copy-and-paste error, the new op value is tested against mov_i32
instead of movi_i32, so the test is always false. Fix that.

Backports commit 961521261a3d600b0695b2e6d2b0f490076f7e90 from qemu
2018-02-10 21:38:09 -05:00
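
[Editor's note] A hedged sketch of the affected fragment inside tcg_opt_gen_movi (the surrounding mask bookkeeping is assumed):

    mask = val;
    if (TCG_TARGET_REG_BITS > 32 && new_op == INDEX_op_movi_i32) {
        /* The buggy version compared against INDEX_op_mov_i32, so this
           "high bits are now garbage" marking never happened. */
        mask |= ~0xffffffffull;
    }
    temps[dst].mask = mask;
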
Aurelien Jarno
42dd2addbe
tcg/optimize: rename tcg_constant_folding
The tcg_constant_folding function ends up doing all the optimizations
(which is a good thing, since it avoids looping over all ops multiple
times), so make that clear by renaming it tcg_optimize.

Backports commit 36e60ef6ac5d8a262d0fbeedfdb2b588514cb1ea from qemu
2018-02-10 21:36:34 -05:00
Aurelien Jarno
7b0055d742
tcg/optimize: fold constant test in tcg_opt_gen_mov
Most of the calls to tcg_opt_gen_mov are preceded by a test to check if
the source temp is a constant. Fold that into the tcg_opt_gen_mov
function.

Backports commit 97a79eb70dd35a24fda87d86196afba5e6f21c5d from qemu
2018-02-10 21:34:00 -05:00
Aurelien Jarno
517fac57c3
tcg/optimize: fold temp copies test in tcg_opt_gen_mov
Each call to tcg_opt_gen_mov is preceded by a test to check if the
source and destination temps are copies. Fold that into the
tcg_opt_gen_mov function.

Backports commit 5365718a9afeeabde3784d82a542f8ad909b18cf from qemu
2018-02-10 21:27:06 -05:00
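
[Editor's note] Taken together with the previous entry, callers can now invoke tcg_opt_gen_mov unconditionally; a hedged sketch of the folded-in checks (helper names per the surrounding optimizer code, bodies assumed):

    static void tcg_opt_gen_mov(TCGContext *s, TCGOp *op, TCGArg *args,
                                TCGArg dst, TCGArg src)
    {
        if (temps_are_copies(dst, src)) {
            tcg_op_remove(s, op);            /* already a copy: drop the op */
            return;
        }
        if (temps[src].state == TCG_TEMP_CONST) {
            tcg_opt_gen_movi(s, op, args, dst, temps[src].val);
            return;
        }
        /* ... otherwise emit the mov and update copy tracking ... */
    }
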
Aurelien Jarno
d21f474c39
tcg/optimize: remove opc argument from tcg_opt_gen_mov
We can get the opcode from the TCGOp pointer. It needs to be
dereferenced, but that is already done a few lines below to write
the new value.

Backports commit 8d6a91602ea824ef4435ea38fd475387eecc098c from qemu
2018-02-10 21:23:34 -05:00
Aurelien Jarno
0fd0afad13
tcg/optimize: remove opc argument from tcg_opt_gen_movi
We can get the opcode from the TCGOp pointer. It needs to be
dereferenced, but that is already done a few lines below to write
the new value.

Backports commit ebd27391b00cdafc81e0541a940686137b3b48df from qemu
2018-02-10 21:21:13 -05:00
Richard Henderson
dafc44c0a5
target-mips: Use CPU_LOG_INT for logging related to interrupts
There are now no unconditional uses of qemu_log in the subdirectory.

Backports commit c85570163bdf1ba29cb52a63f22ff1c48f1b9398 from qemu
2018-02-10 21:12:41 -05:00
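
[Editor's note] The conversion is mechanical; a hedged example of what a converted call site might look like (the message text here is invented for illustration):

    /* before: qemu_log(...), emitted unconditionally */
    qemu_log_mask(CPU_LOG_INT, "interrupt: PC " TARGET_FMT_lx "\n",
                  env->active_tc.PC);
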
Richard Henderson
6f66fb4bd5
target-mips: Copy restrictions from ext/ins to dext/dins
The check in dins is required to avoid triggering an assertion
in tcg_gen_deposit_tl. The check in dext is just for completeness.
Fold the other D cases in via fallthrough.

Backports commit b7f26e523914b982a1c1bfa8295f77ff9787c33c from qemu
2018-02-10 21:09:26 -05:00
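
[Editor's note] A hedged sketch of the ins/dins shape being guarded (lsb/msb and the fail label are assumptions based on the usual MIPS translate code):

    if (lsb > msb) {
        goto fail;   /* reserved encoding; the deposit below would assert */
    }
    /* Insert bits msb..lsb of t1 into t0 at position lsb. */
    tcg_gen_deposit_tl(t0, t0, t1, lsb, msb - lsb + 1);
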
Richard Henderson
f5e38ea71e
tcg/aarch64: use 32-bit offset for 32-bit softmmu emulation
Similar to the corresponding fix for user-mode, except this instance
occurs on the softmmu path. Again, the tlb addend must be
the base register, while the guest address is the index.

Backports commit 80adb8fcad4778376a11d394a9e01516819e2327 from qemu
2018-02-10 20:59:13 -05:00
Paolo Bonzini
cfc9356a8e
tcg/aarch64: use 32-bit offset for 32-bit user-mode emulation
Thanks to the previous patch, it is now easy for tcg_out_qemu_ld and
tcg_out_qemu_st to use a 32-bit zero extended offset.  However, the
guest base register x28 must be the base and addr_reg must be the
index.

Backports commit ffc6372851d8631a9f9fa56ec613b3244dc635b9 from qemu
2018-02-10 20:55:51 -05:00
Paolo Bonzini
85bac3c96d
tcg/aarch64: add ext argument to tcg_out_insn_3310
The new argument lets you pick uxtw or uxtx mode for the offset
register.  For now, all callers pass TCG_TYPE_I64 so that uxtx
is generated.  The bits for uxtx are removed from I3312_TO_I3310.

Backports commit 6c0f0c0f124718650a8d682ba275044fc02f6fe2 from qemu
2018-02-10 20:51:37 -05:00
Richard Henderson
95e666c547
tcg/i386: Extend addresses for 32-bit guests
Removing the ??? comment explaining why it (mostly) worked.

Backports commit ee8ba9e4d8458b8bba5455a7ae704620c4f2ef4b from qemu
2018-02-10 20:42:33 -05:00
Richard Henderson
17c1f027c1
tcg: Handle MO_AMASK in tcg_dump_ops
Backports commit 59c4b7e8dfab0cdc41434fedbf2686222f541e57 from qemu
2018-02-10 20:32:52 -05:00
Richard Henderson
c5a2a50c06
tcg: Mask TCGMemOp appropriately for indexing
The addition of MO_AMASK means that places that used inverted masks
need to be changed to use positive masks, and places that failed to
mask the intended bits need updating.

Backports commit 2b7ec66f025263a5331f37d5ad78a625496fd7bd from qemu
2018-02-10 20:29:36 -05:00
Richard Henderson
336833c11e
tcg: Add MO_ALIGN, MO_UNALN
These modifiers control, on a per-memory-op basis, whether
unaligned memory accesses are allowed. The default setting
reflects the target's definition of ALIGNED_ONLY.

Backports commit dfb36305626636e2e07e0c5acd3a002a5419399e from qemu
2018-02-10 20:18:53 -05:00
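
[Editor's note] A hedged usage sketch: the modifier is simply OR'd into the memop when emitting a guest access (val, addr and mem_idx are illustrative names):

    /* Little-endian 32-bit load with an explicit alignment check,
       regardless of the target's ALIGNED_ONLY default. */
    tcg_gen_qemu_ld_i32(val, addr, mem_idx, MO_TEUL | MO_ALIGN);
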
Richard Henderson
ac713c7034
tcg: Push merged memop+mmu_idx parameter to softmmu routines
The extra information is not yet used but it is now available.
This requires minor changes through all of the tcg backends.

Backports commit 3972ef6f830d65e9bacbd31257abedc055fd6dc8 from qemu
2018-02-10 20:03:22 -05:00
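
[Editor's note] A hedged sketch of the merged parameter (the packing helpers come with this patch; the exact bit layout is an implementation detail):

    TCGMemOpIdx oi = make_memop_idx(memop, mmu_idx);  /* pack for the helper */

    /* Inside a softmmu helper, the two halves are recovered with: */
    TCGMemOp op = get_memop(oi);
    unsigned idx = get_mmuidx(oi);
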