Commit Graph

55 Commits

Author SHA1 Message Date
Richard Henderson
d1da0b8f6d
tcg/aarch64: Add vector operations
Backports commit 14e4c1e2355473ccb2939afc69ac8f25de103b92 from qemu
2018-03-07 08:07:58 -05:00
Richard Henderson
47ed20fdd4
tcg/aarch64: Fully convert tcg_target_op_def
Backports commit 1897cc2eb8be2d8be23380b45a2d3c1a2808723f from qemu
2018-03-04 23:46:38 -05:00
Richard Henderson
fc8b4316a9
tcg: Remove tcg_regset_set32
It's not even clear what the interface REG and VAL32 were supposed to mean.
All uses had REG = 0 and VAL32 was the bitset assigned to the destination.
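
A minimal, self-contained sketch of the simplification (types reduced for illustration; the real TCGRegSet and macro lived in tcg/tcg.h):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t TCGRegSet;

    /* old interface: REG was always 0 in practice, so the shift never did anything */
    #define tcg_regset_set32(d, reg, val32) ((d) |= (val32) << (reg))

    int main(void)
    {
        TCGRegSet old_way = 0, new_way;

        tcg_regset_set32(old_way, 0, 0xffffu);  /* before: through the macro */
        new_way = 0xffffu;                      /* after: a plain assignment */

        printf("%s\n", old_way == new_way ? "equivalent" : "different");
        return 0;
    }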

Backports commit f46934df662182097dce07d57ec00f37e4d2abf1 from qemu
2018-03-04 23:42:59 -05:00
Richard Henderson
49d09d6888
tcg: Remove tcg_regset_clear
Backports commit ccb1bb66ea2a42e773bfa04178d8b383ff86d4d8 from qemu
2018-03-04 23:24:45 -05:00
Richard Henderson
0c3781e7eb
tcg/aarch64: Use constant pool for movi
Backports commit 55129955e92ec164ee2d778f20070dc214109bc6 from qemu
2018-03-04 22:46:50 -05:00
Richard Henderson
f96514a99c
tcg: Rearrange ldst label tracking
Dispense with TCGBackendData, as it has never been used for more than
holding a single pointer. Use a define in the cpu/tcg-target.h to
signal requirement for TCGLabelQemuLdst, so that we can drop the no-op
tcg-be-null.h stubs. Rename tcg-be-ldst.h to tcg-ldst.inc.c.
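
A hedged sketch of the resulting opt-in pattern (macro name recalled from the QEMU commit and treated as an assumption here):

    /* tcg/aarch64/tcg-target.h: the backend opts in only when it needs ldst labels */
    #ifdef CONFIG_SOFTMMU
    #define TCG_TARGET_NEED_LDST_LABELS
    #endif

    /* tcg/tcg.c: pull in the shared code instead of a per-backend stub header */
    #ifdef TCG_TARGET_NEED_LDST_LABELS
    #include "tcg-ldst.inc.c"
    #endif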

Backports commit 659ef5cbb893872d25e9d95191cc23b16546c8a1 from qemu
2018-03-04 22:13:13 -05:00
Richard Henderson
31b8b67cd3
tcg: Move USE_DIRECT_JUMP discriminator to tcg/cpu/tcg-target.h
Replace the USE_DIRECT_JUMP ifdef with a TCG_TARGET_HAS_direct_jump
boolean test. Replace the tb_set_jmp_target1 ifdef with an unconditional
function tb_target_set_jmp_target.

While we're touching all backends, add a parameter for tb->tc_ptr;
we're going to need it shortly for some backends.

Move tb_set_jmp_target and tb_add_jump from exec-all.h to cpu-exec.c.
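
A sketch of the resulting hook, with the tb->tc_ptr parameter mentioned above (prototype reconstructed from memory of the QEMU commit; treat the exact signature as an assumption):

    #include <stdint.h>

    /* per-target replacement for the old tb_set_jmp_target1 ifdef */
    void tb_target_set_jmp_target(uintptr_t tc_ptr, uintptr_t jmp_addr,
                                  uintptr_t addr);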

Backports commit a85833933628384d74ec412024d55cf012640287 from qemu
2018-03-04 21:52:35 -05:00
Pranith Kumar
862bbef07d
tcg: Add tcg target default memory ordering
Backports commit 71650df7b0ee0600308810a267a123b971b3d533 from qemu
2018-03-04 13:22:41 -05:00
Pranith Kumar
57f8eec080
tcg/aarch64: Enable indirect jump path using LDR (literal)
This patch enables the indirect jump path using an LDR (literal)
instruction. It will be interesting to test and see which performs
better among the two paths.

Backports commit 2acee8b2b5e6bba2935bb6ce5be92d0f0f9799cb from qemu
2018-03-03 22:03:39 -05:00
Pranith Kumar
5e9e39cafd
tcg/aarch64: Use ADRP+ADD to compute target address
We use ADRP+ADD to compute the target address for goto_tb. This patch
introduces the NOP instruction which is used to align the above
instruction pair so that we can use one atomic instruction to patch
the destination offsets.
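
A toy C model of why the pair is kept 8-byte aligned (the encodings below are placeholders, not real AArch64 opcodes): two adjacent 4-byte instructions that start on an 8-byte boundary can be rewritten with one 64-bit atomic store.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* an 8-byte-aligned slot standing in for the ADRP+ADD pair */
        uint32_t code[2] __attribute__((aligned(8))) = { 0, 0 };
        uint32_t adrp = 0x11111111u, add = 0x22222222u;   /* placeholder encodings */

        /* patch both instructions of the pair with a single atomic store */
        uint64_t pair = (uint64_t)adrp | ((uint64_t)add << 32);
        __atomic_store_n((uint64_t *)(void *)code, pair, __ATOMIC_RELAXED);

        printf("%08x %08x\n", code[0], code[1]);   /* little-endian host assumed */
        return 0;
    }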

Backports commit b68686bd4bfeb70040b4099df993dfa0b4f37b03 from qemu
2018-03-03 22:01:38 -05:00
Pranith Kumar
0998ba8259
tcg/aarch64: Introduce and use long branch to register
We can use a branch to register instruction for exit_tb for offsets
greater than 128MB.

Backports commit 23b7aa1d2af04ba57cc94f74d9f0ab25dce72fa0 from qemu
2018-03-03 21:59:58 -05:00
Richard Henderson
9a85cb0a26
tcg/aarch64: Use ADR in tcg_out_movi
The new placement of the TB means that we can use one insn
to load the return value for exit_tb returning the TB pointer.

Backports commit cc74d332ff9a78684374847375ef63fc4bd10436 from qemu
2018-03-03 17:09:42 -05:00
Richard Henderson
81f1aae572
tcg/aarch64: Implement goto_ptr
Measurements:

SPECint06 (test set), x86_64-linux-user. Host: APM 64-bit ARMv8 (Atlas/A57) @ 2.4 GHz

[ASCII bar chart omitted: per-benchmark speedup of the +goto-ptr build over the
baseline for astar, bzip2, gcc, gobmk, h264ref, hmmer, libquantum, mcf, omnetpp,
perlbench, sjeng and xalancbmk; speedups range from just above 1x to roughly
1.45x, with the harmonic mean (hmean) between 1.15x and 1.2x.]
png: http://imgur.com/en9HE8L

Backports commit b19f0c2e7d344d4d62daf554951acdb6c94a34b0 from qemu
2018-03-03 14:13:09 -05:00
Emilio G. Cota
8f4f15e5f5
tcg: Introduce goto_ptr opcode and tcg_gen_lookup_and_goto_ptr
Instead of exporting goto_ptr directly to TCG frontends, export
tcg_gen_lookup_and_goto_ptr(), which calls goto_ptr with the pointer
returned by the lookup_tb_ptr() helper. This is the only use case
we have for goto_ptr and lookup_tb_ptr, so having this function is
very convenient. Furthermore, it trivially allows us to avoid calling
the lookup helper if goto_ptr is not implemented by the backend.
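
A toy model of the decision this enables (names with a _model suffix are illustrative, not the QEMU API):

    #include <stdio.h>

    #define TCG_TARGET_HAS_goto_ptr_model 1   /* 0 models a backend without goto_ptr */

    static void lookup_and_goto_ptr_model(void)
    {
        if (TCG_TARGET_HAS_goto_ptr_model) {
            /* emit: call the lookup_tb_ptr() helper, then goto_ptr on its result */
            printf("lookup_tb_ptr + goto_ptr\n");
        } else {
            /* no backend support, so the helper call is skipped entirely */
            printf("exit_tb\n");
        }
    }

    int main(void) { lookup_and_goto_ptr_model(); return 0; }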

Backports commit cedbcb01529cb6cf9a2289cdbebbc63f6149fc18 from qemu
2018-03-02 21:05:18 -05:00
Pranith Kumar
ee609fa59f
aarch64: Change ext type to TCGType to fix warnings
To fix the following warnings:

In file included from /users/pranith/qemu/tcg/tcg.c:255:
/users/pranith/qemu/tcg/aarch64/tcg-target.inc.c:879:24: warning: implicit conversion from enumeration type 'TCGMemOp' (aka 'enum TCGMemOp') to different enumeration type 'TCGType' (aka 'enum TCGType')
[-Wenum-conversion]
tcg_out_cmp(s, ext, a, b, b_const);
~~~~~~~~~~~ ^~~
/users/pranith/qemu/tcg/aarch64/tcg-target.inc.c:893:36: warning: implicit conversion from enumeration type 'TCGMemOp' (aka 'enum TCGMemOp') to different enumeration type 'TCGType' (aka 'enum TCGType')
[-Wenum-conversion]
tcg_out_insn(s, 3201, CBZ, ext, a, offset);
~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
/users/pranith/qemu/tcg/aarch64/tcg-target.inc.c:389:65: note: expanded from macro 'tcg_out_insn'
glue(tcg_out_insn_,FMT)(S, glue(glue(glue(I,FMT),_),OP), ## __VA_ARGS__)
^
/users/pranith/qemu/tcg/aarch64/tcg-target.inc.c:895:37: warning: implicit conversion from enumeration type 'TCGMemOp' (aka 'enum TCGMemOp') to different enumeration type 'TCGType' (aka 'enum TCGType')
[-Wenum-conversion]
tcg_out_insn(s, 3201, CBNZ, ext, a, offset);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
/users/pranith/qemu/tcg/aarch64/tcg-target.inc.c:389:65: note: expanded from macro 'tcg_out_insn'
glue(tcg_out_insn_,FMT)(S, glue(glue(glue(I,FMT),_),OP), ## __VA_ARGS__)
^
/users/pranith/qemu/tcg/aarch64/tcg-target.inc.c:1610:27: warning: implicit conversion from enumeration type 'TCGType' (aka 'enum TCGType') to different enumeration type 'TCGMemOp' (aka 'enum TCGMemOp')
[-Wenum-conversion]
tcg_out_brcond(s, ext, a2, a0, a1, const_args[1], arg_label(args[3]));
~~~~~~~~~~~~~~ ^~~

Backports commit dc1eccd661ada3b746ca4438e444993c36a0f04f from qemu
2018-03-02 10:48:56 -05:00
Richard Henderson
5f6e7bbdbd
tcg: Add opcode for ctpop
The number of actual invocations of ctpop itself does not warrant
an opcode, but it is very helpful for POWER7 to use in generating
an expansion for ctz.
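
For reference, the classic expansion a ctpop-capable host can use for ctz (a worked sketch, independent of the actual POWER7 code generation):

    #include <stdint.h>
    #include <stdio.h>

    /* ctz(x) = ctpop((x & -x) - 1): isolate the lowest set bit, subtract one,
     * and count the bits of the resulting low mask. */
    static unsigned ctz32_via_ctpop(uint32_t x)
    {
        return (unsigned)__builtin_popcount((x & -x) - 1);
    }

    int main(void)
    {
        printf("%u %u %u\n", ctz32_via_ctpop(0x8), ctz32_via_ctpop(0x1),
               ctz32_via_ctpop(0));   /* prints: 3 0 32 */
        return 0;
    }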

Backports commit a768e4e99247911f00c5c0267c12d4e207d5f6cc from qemu
2018-03-01 18:26:41 -05:00
Richard Henderson
2b87ddda35
tcg/aarch64: Handle ctz and clz opcodes
Backports commit 53c76c19904983d2c81e4f5e77027c241918a479 from qemu
2018-03-01 16:19:34 -05:00
Richard Henderson
2cf34e1b55
tcg: Add clz and ctz opcodes
Backports commit 0e28d0063bbd9e59a981ea2d20f82f30c5d956a8 from qemu
2018-03-01 16:04:11 -05:00
Richard Henderson
3f38611159
tcg: Pass the opcode width to target_parse_constraint
This will let us choose how to interpret a given constraint
depending on whether the opcode is 32- or 64-bit. Which will
let us share more constraint combinations between opcodes.

At the same time, change the interface to return the advanced
pointer instead of passing it in/out by reference.

Backports commit 069ea736b50b75fdec99c9b8cc603b97bd98419e from qemu
2018-03-01 15:45:40 -05:00
Richard Henderson
b8c93597b4
tcg: Transition flat op_defs array to a target callback
This will allow the target to tailor the constraints to the
auto-detected ISA extensions.

Backports commit f69d277ece43c42c7ab0144c2ff05ba740f6706b from qemu
2018-03-01 15:40:11 -05:00
Richard Henderson
fbea4130fc
tcg/aarch64: Implement field extraction opcodes
Backports commit e2179f94a17bf0933df29ce1b4f6bc93cbe7dbd3 from qemu
2018-03-01 13:30:55 -05:00
Richard Henderson
8e0585dcb1
tcg: Add field extraction primitives
Adds tcg_gen_extract_* and tcg_gen_sextract_* for extraction of
fixed position bitfields, much like we already have for deposit.
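
In plain C the two flavours compute the following (a model of the semantics only; the real tcg_gen_extract_*/tcg_gen_sextract_* emit TCG ops):

    #include <stdint.h>
    #include <stdio.h>

    /* zero-extended field: len bits starting at bit ofs */
    static uint32_t extract32_model(uint32_t v, unsigned ofs, unsigned len)
    {
        return (v >> ofs) & (~0u >> (32 - len));
    }

    /* sign-extended field: shift the field to the top, arithmetic-shift back down */
    static int32_t sextract32_model(uint32_t v, unsigned ofs, unsigned len)
    {
        return (int32_t)(v << (32 - len - ofs)) >> (32 - len);
    }

    int main(void)
    {
        uint32_t v = 0x00000f80;
        printf("%u %d\n", extract32_model(v, 7, 5), sextract32_model(v, 7, 5));  /* 31 -1 */
        return 0;
    }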

Backports commit 7ec8bab3deae643b1ce579c2d65a244f30708330 from qemu
2018-03-01 13:21:30 -05:00
Richard Henderson
6820964e2f
tcg/aarch64: Fix tcg_out_movi
There were some patterns, like 0x0000_ffff_ffff_00ff, for which we
would select to begin a multi-insn sequence with MOVN, but would
fail to set the 0x0000 lane back from 0xffff.
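
A small model of the lane bookkeeping involved (a sketch of the MOVZ/MOVN choice, not the backend code): every 16-bit lane that still differs from the start value needs a MOVK, and the buggy sequence forgot that the 0x0000 lane differs from the all-ones MOVN start value.

    #include <stdint.h>
    #include <stdio.h>

    static int insns_needed(uint64_t v, int use_movn)
    {
        uint16_t base = use_movn ? 0xffff : 0x0000;
        int diff = 0;
        for (int lane = 0; lane < 4; lane++) {
            if ((uint16_t)(v >> (lane * 16)) != base) {
                diff++;
            }
        }
        /* the MOVZ/MOVN itself fills one differing lane; the rest are MOVKs */
        return diff ? diff : 1;
    }

    int main(void)
    {
        uint64_t v = 0x0000ffffffff00ffULL;   /* the pattern from the message above */
        printf("MOVZ path: %d insns, MOVN path: %d insns\n",
               insns_needed(v, 0), insns_needed(v, 1));   /* 3 vs 2 */
        return 0;
    }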

Backports commit 50b468d42107a2c646b1c566ed17d9ec362c51c4 from qemu
2018-03-01 09:15:34 -05:00
Richard Henderson
a03666f2f2
tcg/aarch64: Fix addsub2 for 0+C
When al == xzr, we cannot use addi/subi because that encodes xsp.
Force a zero into the temp register for that (rare) case.

Backports commit 028fbea47713f909d6ea761a457779a82b276247 from qemu
2018-03-01 09:13:54 -05:00
Pranith Kumar
907060b865
tcg/aarch64: Add support for fence
Backports commit c7a59c2a92592e556b9361437c9c4229917bd1e3 from qemu
2018-02-26 03:11:03 -05:00
Richard Henderson
91f5cf0417
tcg: Support arbitrary size + alignment
Previously we allowed fully unaligned operations, but not operations
that are aligned but with less alignment than the operation size.

In addition, arm32, ia64, mips, and sparc had been omitted from the
previous overalignment patch, which would have led to that alignment
being enforced.

Backports commit 85aa80813dd9f5c1f581c743e45678a3bee220f8 from qemu
2018-02-26 02:47:26 -05:00
Markus Armbruster
25ec9ab016
tcg: Clean up tcg-target.h header guards
These use guard symbols like TCG_TARGET_$target.
scripts/clean-header-guards.pl doesn't like them because they don't
match their file name (they should, to make guard collisions less
likely).

Clean them up: use guard symbol $target_TCG_TARGET_H for
tcg/$target/tcg-target.h.

Backports commit 14e54f8ecfe9c5e17348f456781344737ed10b3b from qemu
2018-02-25 04:15:08 -05:00
Sergey Sorokin
e4d123caa9
tcg: Improve the alignment check infrastructure
Some architectures (e.g. ARMv8) need the address to be aligned
to a size greater than the size of the memory access.
The current low-cost alignment check implementation in QEMU is enough
to support such a check, but we need to support specifying
an alignment size.
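
A plain-C model of the check this makes possible (illustrative only; QEMU encodes the alignment in the memop bits): the required alignment is now independent of, and may exceed, the access size.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool aligned_enough(uint64_t addr, unsigned align_bits)
    {
        return (addr & ((1ULL << align_bits) - 1)) == 0;
    }

    int main(void)
    {
        /* a 4-byte access that the guest requires to be 8-byte aligned */
        printf("%d %d\n", aligned_enough(0x1000, 3), aligned_enough(0x1004, 3));  /* 1 0 */
        return 0;
    }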

Backports commit 1f00b27f17518a1bcb4cedca49eaec96a4d560bd from qemu
2018-02-25 02:23:28 -05:00
Richard Henderson
23586e2674
tcg: Optimize spills of constants
While we can store constants via constraints on INDEX_op_st_i32 et al,
we weren't able to spill constants to backing store.

Add a new backend interface, tcg_out_sti, which may store the constant
(and is allowed to fail). Rearrange the temp_* helpers so that we only
attempt to directly store a constant when the temp is becoming dead/free.

Backports commit 59d7c14eeff8d2ad7f61aed86ce5a176113bc153 from qemu
2018-02-25 01:45:29 -05:00
Sergey Fedorov
e60c24cecf
tcg: Clean up direct block chaining data fields
Briefly describe in a comment how direct block chaining is done. It
should help in understanding the following data fields.

Rename some fields in TranslationBlock and TCGContext structures to
better reflect their purpose (dropping excessive 'tb_' prefix in
TranslationBlock but keeping it in TCGContext):
tb_next_offset => jmp_reset_offset
tb_jmp_offset => jmp_insn_offset
tb_next => jmp_target_addr
jmp_next => jmp_list_next
jmp_first => jmp_list_first

Avoid using a magic constant as an invalid offset which is used to
indicate that there's no n-th jump generated.

Backports commit f309101c26b59641fc1aa8fb2a98a5441cdaea03 from qemu
2018-02-23 21:28:19 -05:00
Sergey Fedorov
a45f8cb49d
tcg/aarch64: Make direct jump patching thread-safe
Ensure direct jump patching in AArch64 is atomic by using
atomic_read()/atomic_set() for code patching.

Backports commit 9e269112953be4d670cb0d25042bd6546fcf3e45 from qemu
2018-02-23 21:28:18 -05:00
Aurelien Jarno
6060ab6596
tcg: check for CONFIG_DEBUG_TCG instead of NDEBUG
Check for CONFIG_DEBUG_TCG instead of NDEBUG, drop now useless code.

Backports commit 8d8fdbae010aa75a23f0307172e81034125aba6e from qemu
2018-02-23 13:55:21 -05:00
Aurelien Jarno
355ed7cd08
tcg: use tcg_debug_assert instead of assert (fix performance regression)
The TCG code is quite performance sensitive, but at the same time can
also be quite tricky. That is why it has asserts that can be enabled with
the --enable-debug-tcg configure option.

This used to work the following way:

| #include "config.h"
|
| ...
|
| #if !defined(CONFIG_DEBUG_TCG) && !defined(NDEBUG)
| /* define it to suppress various consistency checks (faster) */
| #define NDEBUG
| #endif
|
| ...
|
| #include <assert.h>

Since commit 757e725b (tcg: Clean up includes) "config.h" has been
replaced by "qemu/osdep.h" which itself includes <assert.h>. As a
consequence the assertions are always enabled, even when using
--disable-debug-tcg, causing a performance regression, especially on
targets with many registers. For instance on qemu-system-ppc the
speed difference is about 15%.

tcg_debug_assert is controlled directly by CONFIG_DEBUG_TCG and is already
used in some places. This patch replaces all the calls to assert with
calls to tcg_debug_assert.
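
A hedged sketch of the macro's two flavours (recalled from the tcg/tcg.h of this era; the exact release-build fallback is an assumption):

    #include <assert.h>

    #ifdef CONFIG_DEBUG_TCG
    #define tcg_debug_assert(X) do { assert(X); } while (0)
    #else
    /* release builds: no check, only an optimizer hint that X holds */
    #define tcg_debug_assert(X) \
        do { if (!(X)) { __builtin_unreachable(); } } while (0)
    #endif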

Backports commit eabb7b91b36b202b4dac2df2d59d698e3aff197a from qemu
2018-02-23 13:52:13 -05:00
Peter Maydell
764c2d09e5
tcg: Remove unnecessary osdep.h includes from tcg-target.inc.c
Commit 757e725b58c57d added a number of #include "qemu/osdep.h"
files to the tcg-target.c files (as they were named at the time).
These are unnecessary because these files are not standalone C
files, and the tcg/tcg.c file which includes them will have
already included osdep.h on their behalf. Remove the unneeded
include directives.

Backports commit c3b7f66800fbf9f47fddbcf2e2cd30ea932e0aae from qemu
2018-02-20 20:41:00 -05:00
Peter Maydell
7784a25470
tcg: Rename tcg-target.c to tcg-target.inc.c
Rename the per-architecture tcg-target.c files to tcg-target.inc.c.
This makes it clearer that they are not intended to be standalone
C files, but are instead #included into another source file.

Backports commit ce151109813e2770fd3cee2f37bfa2cdd01a12b9 from qemu
2018-02-20 20:39:57 -05:00
Peter Maydell
4ca19f2cd6
tcg: Clean up includes
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.

This commit was created with scripts/clean-includes.

Backports commit 757e725b58c57d3ebb66a31fd2210df977a12154 from qemu
2018-02-19 01:04:30 -05:00
Paolo Bonzini
b34c233c2f
tcg: add TCG_TARGET_TLB_DISPLACEMENT_BITS
This will be used to size the TLB when more than 8 MMU modes are
used by the target. Limitations come from the limited size of
the immediate fields (which sometimes, as in the case of AArch64,
extend to instructions that shift the immediate).

Backports commit 006f8638c62bca2b0caf609485f47fa5e14d8a3c from qemu
2018-02-13 08:28:29 -05:00
Richard Henderson
352f93a119
tcg/aarch64: Fix tcg_out_qemu_{ld, st} for guest_base == 0
In ffc6372851d8631a9f9fa56ec613b3244dc635b9, we swapped the guest
base to the address base register from the address index register.
Except that 31 in the base slot is SP not XZR, so we need to be
more intelligent about which reg gets placed in which slot.

Backports commit 352bcb0a2b816ff9ab9d75d0f2384650d9e9ab19 from qemu
2018-02-10 23:33:24 -05:00
Richard Henderson
7d57c2e4ce
tcg/aarch64: Use softmmu fast path for unaligned accesses
Backports commit 9ee14902bf107e37fb2c8119fa7bca424396237c from qemu
2018-02-10 23:25:34 -05:00
Richard Henderson
58e939b91f
tcg: Split trunc_shr_i32 opcode into extr[lh]_i64_i32
Rather than allow arbitrary shift+trunc, only concern ourselves
with low and high parts. This is all that was being used anyway.
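
In plain C the two remaining ops amount to the following (a semantic sketch, not the TCG emitters):

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t extrl_i64_i32_model(uint64_t x) { return (uint32_t)x; }         /* low half  */
    static uint32_t extrh_i64_i32_model(uint64_t x) { return (uint32_t)(x >> 32); } /* high half */

    int main(void)
    {
        uint64_t v = 0x1122334455667788ULL;
        printf("%08x %08x\n", extrh_i64_i32_model(v), extrl_i64_i32_model(v));
        return 0;
    }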

Backports commit 609ad70562793937257c89d07bf7c1370b9fc9aa from qemu
2018-02-10 23:00:45 -05:00
Aurelien Jarno
f279c93768
tcg: implement real ext_i32_i64 and extu_i32_i64 ops
Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a
32-bit value is always converted to a 64-bit value and not propagated
through the register allocator or the optimizer.

Backports commit 4f2331e5b67af8172419eb1c8db510b497b30a7b from qemu
2018-02-10 22:45:13 -05:00
Aurelien Jarno
80223e7ad5
tcg: rename trunc_shr_i32 into trunc_shr_i64_i32
The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32,
and the name in the README doesn't match the name offered to the
frontends.

Always use the long name to make it clear it is a size changing op.

Backports commit 0632e555fc4d281d69cb08d98d500d96185b041f from qemu
2018-02-10 22:29:30 -05:00
Richard Henderson
f5e38ea71e
tcg/aarch64: use 32-bit offset for 32-bit softmmu emulation
Similar to the same fix for user-mode, except this instance
occurs on the softmmu path. Again, the tlb addend must be
the base register, while the guest address is the index.

Backports commit 80adb8fcad4778376a11d394a9e01516819e2327 from qemu
2018-02-10 20:59:13 -05:00
Paolo Bonzini
cfc9356a8e
tcg/aarch64: use 32-bit offset for 32-bit user-mode emulation
Thanks to the previous patch, it is now easy for tcg_out_qemu_ld and
tcg_out_qemu_st to use a 32-bit zero extended offset.  However, the
guest base register x28 must be the base and addr_reg must be the
index.

Backports commit ffc6372851d8631a9f9fa56ec613b3244dc635b9 from qemu
2018-02-10 20:55:51 -05:00
Paolo Bonzini
85bac3c96d
tcg/aarch64: add ext argument to tcg_out_insn_3310
The new argument lets you pick uxtw or uxtx mode for the offset
register.  For now, all callers pass TCG_TYPE_I64 so that uxtx
is generated.  The bits for uxtx are removed from I3312_TO_I3310.

Backports commit 6c0f0c0f124718650a8d682ba275044fc02f6fe2 from qemu
2018-02-10 20:51:37 -05:00
Richard Henderson
c5a2a50c06
tcg: Mask TCGMemOp appropriately for indexing
The addition of MO_AMASK means that places that used inverted masks
need to be changed to use positive masks, and places that failed to
mask the intended bits need updating.

Backports commit 2b7ec66f025263a5331f37d5ad78a625496fd7bd from qemu
2018-02-10 20:29:36 -05:00
Richard Henderson
ac713c7034
tcg: Push merged memop+mmu_idx parameter to softmmu routines
The extra information is not yet used but it is now available.
This requires minor changes through all of the tcg backends.

Backports commit 3972ef6f830d65e9bacbd31257abedc055fd6dc8 from qemu
2018-02-10 20:03:22 -05:00
Richard Henderson
6234d07489
tcg: Merge memop and mmu_idx parameters to qemu_ld/st
At the tcg opcode level, not at the tcg-op.h generator level.
This requires minor changes through all of the tcg backends,
but none of the cpu translators.
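
A self-contained model of the merged parameter (field layout recalled from QEMU's TCGMemOpIdx and treated as an assumption: memop in the upper bits, mmu_idx in the low four):

    #include <stdio.h>

    typedef unsigned memop_idx_model;

    static memop_idx_model make_memop_idx_model(unsigned op, unsigned idx)
    {
        return (op << 4) | idx;
    }

    static unsigned get_memop_model(memop_idx_model oi)  { return oi >> 4; }
    static unsigned get_mmuidx_model(memop_idx_model oi) { return oi & 15; }

    int main(void)
    {
        memop_idx_model oi = make_memop_idx_model(3, 1);
        printf("memop=%u mmu_idx=%u\n", get_memop_model(oi), get_mmuidx_model(oi));
        return 0;
    }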

Backports commit 59227d5d45bb3c31dc2118011691c35b3c00879c from qemu
2018-02-10 19:01:49 -05:00
Richard Henderson
00b0a50f47
tcg: Change generator-side labels to a pointer
This is less about improved type checking than enabling a
subsequent change to the representation of labels.

Backports commit bec1631100323fac0900aea71043d5c4e22fc2fa from qemu
2018-02-09 14:40:59 -05:00
Nguyen Anh Quynh
3eb51116b9
arm64: fix the access to tcg_op_defs[] in arm64 backend (issue #387)
2016-01-22 11:35:01 +08:00