Commit Graph

56 Commits

Author SHA1 Message Date
Sergey Fedorov
c530eb06a9
tcg: Extract removing of jumps to TB from tb_phys_invalidate()
Move the code for removing jumps to a TB out of tb_phys_invalidate() to
a separate static inline function tb_jmp_unlink(). This simplifies
tb_phys_invalidate() and improves code structure.

Backports commit 89bba496322d4cf996d42cdd4bb0912231656c3d from qemu
2018-02-23 21:36:29 -05:00
Sergey Fedorov
0d2e91518b
tcg: Rename tb_jmp_remove() to tb_remove_from_jmp_list()
tb_jmp_remove() was only used to remove the TB from a list of all TBs
jumping to the same TB, which is the n-th jump destination of the given TB.
Put a comment briefly describing the function behavior and rename it to
better reflect its purpose.

Backports commit 133626783aa5a1bf86332fa3e6f7b8efe005f924 from qemu
2018-02-23 21:34:01 -05:00
Sergey Fedorov
e93f68a755
tcg: Init TB's direct jumps before making it visible
Initialize the TB's direct jump list data fields and reset the jumps before
tb_link_page() puts it into the physical hash table and the physical
page list, so the TB is completely initialized before it becomes visible.

This is pure rearrangement of code to a more suitable place, though it
could be a preparation for relaxing the locking scheme in future.

Backports commit 901bc3deb43bf37c85e43955905d003be7ae5fa5 from qemu
2018-02-23 21:31:36 -05:00
Sergey Fedorov
87f2bb42d4
tcg: Rearrange tb_link_page() to avoid forward declaration
Backports commit e90d96b158665a684ab89b4f002838034b5fafc8 from qemu
2018-02-23 21:28:20 -05:00
Sergey Fedorov
fbc0a1105f
tcg: Use uintptr_t type for jmp_list_{next|first} fields of TB
These fields do not contain pure pointers to a TranslationBlock
structure. So uintptr_t is the most appropriate type for them.
Also put some asserts to ensure that the two least significant bits of
the pointer are always zero before assigning it to jmp_list_first.

Backports commit c37e6d7e3589ecb96914faa21025ad7ba6654aea from qemu
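
A minimal sketch of the tagged-pointer idea described above, assuming a
word-aligned TranslationBlock whose two low pointer bits are free to carry a
jump-slot index or an end-of-list tag (the field layout and tag value here are
assumptions for illustration, not copied from the commit):

#include <assert.h>
#include <stdint.h>

typedef struct TranslationBlock {
    uintptr_t jmp_list_next[2];  /* per-slot link to the next TB in the list */
    uintptr_t jmp_list_first;    /* head of the "TBs jumping into this TB" list */
} TranslationBlock;

static void tb_jmp_list_head_init(TranslationBlock *tb)
{
    /* The structure is at least word-aligned, so bits 0-1 must be zero... */
    assert(((uintptr_t)tb & 3) == 0);
    /* ...which leaves room for a tag; here 2 marks "no jumps linked yet". */
    tb->jmp_list_first = (uintptr_t)tb | 2;
}
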
2018-02-23 21:28:19 -05:00
Sergey Fedorov
e60c24cecf
tcg: Clean up direct block chaining data fields
Briefly describe in a comment how direct block chaining is done. It
should help in understanding the data fields that follow.

Rename some fields in TranslationBlock and TCGContext structures to
better reflect their purpose (dropping excessive 'tb_' prefix in
TranslationBlock but keeping it in TCGContext):
tb_next_offset => jmp_reset_offset
tb_jmp_offset => jmp_insn_offset
tb_next => jmp_target_addr
jmp_next => jmp_list_next
jmp_first => jmp_list_first

Avoid using a magic constant as an invalid offset which is used to
indicate that there's no n-th jump generated.

Backports commit f309101c26b59641fc1aa8fb2a98a5441cdaea03 from qemu
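
Putting the rename table above into code form, a hedged sketch of how the
renamed direct-jump fields might be laid out (array sizes, types and the
sentinel name are assumptions for illustration, not the verbatim QEMU
definitions):

#include <stdint.h>

/* Sentinel replacing the old magic constant: "no n-th jump was generated". */
#define TB_JMP_OFFSET_INVALID 0xffff

typedef struct TranslationBlockJumps {
    uint16_t  jmp_reset_offset[2];  /* was tb_next_offset */
    uint16_t  jmp_insn_offset[2];   /* was tb_jmp_offset (direct jumps) */
    uintptr_t jmp_target_addr[2];   /* was tb_next (indirect jumps) */
    uintptr_t jmp_list_next[2];     /* was jmp_next */
    uintptr_t jmp_list_first;       /* was jmp_first */
} TranslationBlockJumps;
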
2018-02-23 21:28:19 -05:00
Richard Henderson
bb0b055a99
translate-all: Adjust 256mb testing for mips64
Make sure we preserve the high 32-bits when masking for mips64.

Backports commit 7ba6a512ae439c98c0c1f0f4348c079d90f9dd9d from qemu
2018-02-23 21:28:19 -05:00
Emilio G. Cota
de17843702
translate-all: add missing munmap of the code_gen guard page for MIPS
Backports commit 8bdf4997823126a39bd4c99e4b2283b02cc7865f from qemu
2018-02-23 21:28:19 -05:00
Emilio G. Cota
9a2b02b241
translate-all: remove redundant setting of tcg_ctx.code_gen_buffer_size
The setting of tcg_ctx.code_gen_buffer_size is done by the only caller of
size_code_gen_buffer(), which is code_gen_alloc():

$ git grep size_code_gen_buffer
translate-all.c:static inline size_t size_code_gen_buffer(size_t tb_size)
translate-all.c: tcg_ctx.code_gen_buffer_size = size_code_gen_buffer(tb_size);

Backports commit 835154b6e2200460f04719d0028716a37c178368 from qemu
2018-02-23 21:28:19 -05:00
Emilio G. Cota
170f6e0b3b
tb: consistently use uint32_t for tb->flags
We are inconsistent with the type of tb->flags: usage varies loosely
between int and uint64_t. Settle to uint32_t everywhere, which is
superior to both: at least one target (aarch64) uses the most significant
bit in the u32, and uint64_t is wasteful.

Compile-tested for all targets.

Backports commit 89fee74a0f066dfd73830a7b5fa137e87888c870 from qemu
2018-02-23 21:28:11 -05:00
Emilio G. Cota
4cacbf212f
translate-all: add missing fold of tb_ctx into tcg_ctx
Since 5e5f07e08 "TCG: Move translation block variables
to new context inside tcg_ctx: tb_ctx" on Feb 1 2013, compilation
of usermode + TB_DEBUG_CHECK has been broken. Fix it.

Backports commit 7e6bd36d61129feb7f667cb09ffec1b7b54b971c from qemu
2018-02-23 13:35:42 -05:00
Alex Bennée
3da7d9d9ae
qemu-log: dfilter-ise exec, out_asm, op and opt_op
This ensures the code generation debug code will honour -dfilter if set.
For the "exec" tracing I've added a new inline macro for efficiency's
sake.

Backports commit d977e1c2dbc9e63454b2000f91954d02543bf43b from qemu
2018-02-22 10:06:19 -05:00
Alex Bennée
bc5d7c5e1d
tcg: pass down TranslationBlock to tcg_code_gen
My later debugging patches need access to the origin PC which is held in
the TranslationBlock structure. Pass down the whole structure as it also
holds the information about the code start point.

Backports commit 5bd2ec3d7b47b2252745882795d79aef36380fb7 from qemu
2018-02-22 09:28:06 -05:00
Peter Maydell
c9bf91049c
all: Clean up includes
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.

This commit was created with scripts/clean-includes.

Backports commit d38ea87ac54af64ef611de434d07c12dc0399216 from qemu
2018-02-19 01:34:28 -05:00
Peter Maydell
293266a9d8
exec: Clean up includes
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.

This commit was created with scripts/clean-includes.

Backports commit 7b31bbc2e68605ab2f10dc609dd54cf4c7b5f49a from qemu
2018-02-19 00:49:55 -05:00
Peter Maydell
e07cd2542c
exec.c: Drop TARGET_HAS_ICE define and checks
The TARGET_HAS_ICE #define is intended to indicate whether a target-*
guest CPU implementation supports the breakpoint handling. However,
all our guest CPUs have that support (the only two which do not
define TARGET_HAS_ICE are unicore32 and openrisc, and in both those
cases the bp support is present and the lack of the #define is just
a bug). So remove the #define entirely: all new guest CPU support
should include breakpoint handling as part of the basic implementation.

Backports commit ec53b45bcd1f74f7a4c31331fa6d50b402cd6d26 from qemu
2018-02-18 18:17:14 -05:00
Lioncash
c8be425439
translate-all: ensure host page mask is always extended with 1's
Anthony reported that >4GB guests on Xen with 32bit QEMU broke after
commit 4ed023c ("Round up RAMBlock sizes to host page sizes", 2015-11-05).

In that patch sizes are masked against qemu_host_page_size/mask which
are uintptr_t, and thus 32bit on a 32bit QEMU, even though the ram space
might be bigger than 4GB on Xen.

Since ram_addr_t is not available on user-mode emulation targets, ensure
that we get a sign extension when masking away the low bits of the address.
Replace the ~10 year old scary comment that the type of these variables
is probably wrong with another equally scary comment. The new comment,
however, does not have "???" in it, which is arguably an improvement.

For completeness use the alignment macros in linux-user and bsd-user
instead of manually doing an &. linux-user and bsd-user are not affected
by the Xen issue, however.

Backports commit 0c2d70c448b7853a91cfa63659aa3cc6630fb9be from qemu
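
A standalone illustration of the masking pitfall this commit describes, with
made-up values (this is not the QEMU code; the real fix changes the type of
the host page mask so the widening conversion sign-extends):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t addr  = UINT64_C(0x123456789000); /* e.g. an offset above 4GB */
    uint32_t umask = ~UINT32_C(0xfff);  /* unsigned 32-bit mask: zero-extends */
    int32_t  smask = ~INT32_C(0xfff);   /* signed mask: sign-extends when widened */

    /* High bits are dropped: prints 56789000. */
    printf("%" PRIx64 "\n", addr & umask);
    /* High bits are preserved: prints 123456789000. */
    printf("%" PRIx64 "\n", addr & smask);
    return 0;
}
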
2018-02-17 19:17:19 -05:00
Richard Henderson
3c3dee3747
tcg/ppc: Revise goto_tb implementation
Restrict the size of code_gen_buffer to 2GB on ppc64, which
lets us assert that everything is reachable with addis+addi
from tb_ret_addr. This lets us use a max of 4 insns for goto_tb
instead of 7.

Emit the indirect branch portion of goto_tb up front, which
means we only have to update two insns to update any link.
With a 64-bit store, we can update the link atomically, which
may be required in future.

Backports commit 5bfd75a35c11dd3aa61c73d0d2cd88137c31519c from qemu
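
The "update two insns with one 64-bit store" part can be sketched in portable
C as follows (conceptual only; the actual backend emits ppc-specific
instruction encodings, and the endianness handling here is an assumption):

#include <stdint.h>

/* Replace a pair of adjacent 32-bit instructions with a single aligned
 * 64-bit store, so concurrent executors never observe a half-updated jump. */
static void patch_insn_pair(uint64_t *slot, uint32_t first, uint32_t second)
{
    uint64_t pair;
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    pair = ((uint64_t)first << 32) | second;
#else
    pair = ((uint64_t)second << 32) | first;
#endif
    __atomic_store_n(slot, pair, __ATOMIC_RELAXED);
}
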
2018-02-17 15:24:03 -05:00
Richard Henderson
bdf667fd4e
tcg: Check for overflow via highwater mark
We currently pre-compute a worst-case code size for any TB, which
works out to be 122kB. Since the average TB size is near 1kB, this
wastes quite a lot of storage.

Instead, check for overflow in between generating code for each opcode.
The overhead of the check isn't measurable and wastage is minimized.

Backports commit b125f9dc7bd68cd4c57189db4da83b0620b28a72 from qemu
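
A hedged sketch of the high-water-mark check (the struct and function names
are illustrative, not QEMU's):

#include <stdbool.h>
#include <stdint.h>

typedef struct CodeGenState {
    uint8_t *code_ptr;        /* next byte to be emitted */
    uint8_t *code_highwater;  /* buffer end minus a one-opcode safety margin */
} CodeGenState;

/* Called between opcodes; on overflow the caller abandons the TB, flushes
 * the code buffer and retries, so no per-TB worst case must be reserved. */
static bool code_gen_overflowed(const CodeGenState *s)
{
    return s->code_ptr > s->code_highwater;
}
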
2018-02-17 15:24:00 -05:00
Richard Henderson
71221baf40
tcg: Allocate a guard page after code_gen_buffer
This will catch any overflow of the buffer.

Add a native win32 alternative for alloc_code_gen_buffer;
remove the malloc alternative.

Backports commit f293709c6af7a65a9bcec09cdba7a60183657a3e from qemu
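
A minimal POSIX sketch of the guard-page idea (not QEMU's
alloc_code_gen_buffer, which also has the win32 variant mentioned above): map
one extra page past the buffer and make it inaccessible, so any write that
runs off the end faults immediately.

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

static void *alloc_buffer_with_guard(size_t size, size_t page_size)
{
    uint8_t *buf = mmap(NULL, size + page_size,
                        PROT_READ | PROT_WRITE | PROT_EXEC,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        return NULL;
    }
    /* The trailing page stays reserved but unusable: overruns trap here. */
    if (mprotect(buf + size, page_size, PROT_NONE) != 0) {
        munmap(buf, size + page_size);
        return NULL;
    }
    return buf;
}
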
2018-02-17 15:24:00 -05:00
Richard Henderson
19a3c7e03f
tcg: Emit prologue to the beginning of code_gen_buffer
By putting the prologue at the end, we risk overwriting the prologue
should our estimate of the maximum TB size prove too small. Given the
two different placements of the call to tcg_prologue_init,
move the high water mark computation into tcg_prologue_init.

Backports commit 8163b74938d8b7d12e70597c4553dd0dc49443d5 from qemu
2018-02-17 15:24:00 -05:00
Richard Henderson
66de6cc37c
tcg: Save insn data and use it in cpu_restore_state_from_tb
We can now restore state without retranslation.

Backports commit fca8a500d519a56abeaedf8073167a61d3c6b9c4 from qemu
2018-02-17 15:23:59 -05:00
Richard Henderson
a7cf761caf
tcg: Merge cpu_gen_code into tb_gen_code
As tb_gen_code is its only caller, this tidies things a bit.

Backports commit fec88f64bda27846add83e924c8f4def9d94e068 from qemu
2018-02-17 15:23:59 -05:00
Paolo Bonzini
cab4c979f0
cpu-exec: add a new CF_USE_ICOUNT cflag
Backports commit 0266359e57987d6be53fbcb885f2dd39c1dae940 from qemu
2018-02-17 15:23:58 -05:00
Pavel Dovgalyuk
ac46898b3c
cpu-exec: invalidate nocache translation if they are interrupted
In this case, QEMU might longjmp out of cpu-exec.c and miss the final
cleanup in cpu_exec_nocache.  Do this manually through a new compile
flag.

Backports commit d8a499f17ee5f05407874f29f69f0e3e3198a853 from qemu
2018-02-17 15:23:58 -05:00
Richard Henderson
1cbd175736
tcg: Pass data argument to restore_state_to_opc
The gen_opc_* arrays are already redundant with the data stored in
the insn_start arguments. Transition restore_state_to_opc to use
data from the latter.

Backports commit bad729e272387de7dbfa3ec4319036552fc6c107 from qemu
2018-02-17 15:23:58 -05:00
Peter Crosthwaite
7a004907c7
translate-all: Move tcg_handle_interrupt() to -common
Move this function to common code; it has no arch-specific dependencies.
This prepares support for multi-arch, where the translate-all interface
needs to be virtualised. One less thing to virtualise.

Backports commit 9b68a7754a892d8deb7696cfe609fe2ec3c6034a from qemu
2018-02-17 15:23:51 -05:00
Paolo Bonzini
195a86283f
exec: make mmap_lock/mmap_unlock globally available
There is some iffy lock hierarchy going on in translate-all.c. To
fix it, we need to take the mmap_lock in cpu-exec.c. Make the
functions globally available.

Backports commit 8fd19e6cfd5b6cdf028c6ac2ff4157ed831ea3a6 from qemu
2018-02-17 15:23:49 -05:00
Peter Crosthwaite
8200453545
translate-all: Change tb_flush() env argument to cpu
All of the core-code usages of this API have the cpu pointer handy so
pass it in. There are only 3 architecture specific usages (2 of which
are commented out) which can just use ENV_GET_CPU() locally to get the
cpu pointer. This reduces core code usage of the CPU env, which brings
us closer to common-obj'ing these core files.

Backports commit bbd77c180d7ff1b04a7661bb878939b2e1d23798 from qemu
2018-02-17 15:23:18 -05:00
Peter Crosthwaite
09d23c6604
include/exec: Move tb hash functions out
This is one of very few things in exec-all with a genuine CPU
architecture dependency. Move these hashing helpers to a new
header to trim exec-all.h down to a near architecture-agnostic
header.

The defs are only used by cpu-exec and translate-all, which are both
arch-obj's, so the new tb-hash.h has no core code usage.

Backports commit e1b89321bafea9fb33d87852fc91fee579d17dfe from qemu
2018-02-17 15:23:15 -05:00
Aurelien Jarno
20c2ed80a2
translate-all: fix watchpoints if retranslation not possible
The tb_check_watchpoint function currently assumes that all memory
access is done either directly through the TCG code or through a
helper which knows its return address. This is obviously wrong as the
helpers use cpu_ldxx/stxx_data functions to access the memory.

Instead of aborting in that case, don't try to retranslate the code, but
assume that the CPU state (and especially the program counter) has been
saved before calling the helper. Then invalidate the TB based on this
address.

Backports commit 8d302e76755b8157373073d7107e31b0b13f80c1 from qemu
2018-02-17 15:22:43 -05:00
Paolo Bonzini
c333585a4d
translate-all: make less of tb_invalidate_phys_page_range depend on is_cpu_write_access
is_cpu_write_access is only set if tb_invalidate_phys_page_range is called
from tb_invalidate_phys_page_fast, and hence from notdirty_mem_write.
However:

- the code bitmap can be built directly in tb_invalidate_phys_page_fast
(unconditionally, since is_cpu_write_access would always be passed as 1);

- the virtual address is not needed to mark the page as "not containing
code" (dirty code bitmap = 1), so we can also remove that use of
is_cpu_write_access. For calls of tb_invalidate_phys_page_range
that do not come from notdirty_mem_write, the next call to
notdirty_mem_write will notice that the page does not contain code
anymore, and will fix up the TLB entry.

The parameter needs to remain in order to guard accesses to cpu->mem_io_pc.

Backports commit fc377bcf617a48233a99a9fe0a26247c38b5cb76 from qemu
2018-02-13 09:18:49 -05:00
Paolo Bonzini
f578c89e8b
cputlb: remove useless arguments to tlb_unprotect_code_phys, rename
These days modification of the TLB is done in notdirty_mem_write,
so the virtual address and env pointer are unnecessary.

The new name of the function, tlb_unprotect_code, is consistent with
tlb_protect_code.

Backports commit 9564f52da7eb061326956ed9a468935e3352512d from qemu
2018-02-13 09:07:41 -05:00
Paolo Bonzini
b5c9645a0f
translate-all: remove unnecessary argument to tb_invalidate_phys_range
The is_cpu_write_access argument is always 0, remove it.

Backports commit 358653391b0c0beaa0e3f9e28304e1918cd223b3 from qemu
2018-02-13 09:04:51 -05:00
Emilio G. Cota
df41e9ffd3
target-i386: remove superfluous TARGET_HAS_SMC macro
Backports commit 9c04146ad4696b20c440bfbb4a6ab27ea254e7ca from qemu
2018-02-12 16:41:55 -05:00
Maciej W. Rozycki
4d9107be8a
target-mips: Correct MIPS16/microMIPS branch size calculation
Correct MIPS16/microMIPS branch size calculation in PC adjustment
needed:

- to set the value of CP0.ErrorEPC at the entry to the reset exception,

- for the purpose of branch reexecution in the context of device I/O.

Follow the approach taken in `exception_resume_pc' for ordinary, Debug
and NMI exceptions.

MIPS16 and microMIPS branches can be 2 or 4 bytes in size, and that has
to be reflected in the calculation. Original MIPS ISA branches, which is
where this code originates from, are always 4 bytes long, just like all
other original MIPS ISA instructions.

Backports commit c3577479815f5bcf9d38993967bca2115af245d8 from qemu
2018-02-11 16:09:33 -05:00
xorstream
df41c49e2d Fixed warning about {} initialisers. 2017-01-21 11:41:11 +11:00
xorstream
fac6a66860 platform.h move #3 2017-01-21 00:13:21 +11:00
xorstream
1aeaf5c40d This code should now build the x86_x64-softmmu part 2. 2017-01-19 22:50:28 +11:00
Chris Eagle
fccbcfd4c2 revert to use of g_free to make future qemu integrations easier (#695)
* revert to use of g_free to make future qemu integrations easier

* bracing
2016-12-21 22:28:36 +08:00
Chris Eagle
e46545f722 remove glib dependency by provide compatible replacements 2016-12-18 14:56:58 -08:00
Ryan Hileman
cb615fdba7 remove uc->cpus 2016-09-23 07:38:21 -07:00
Andrew Dutcher
97b10da133 Undo the disaster that was the patch to unicorn github issue #266 and fix it correctly. makes normal self-modifying code work. 2016-08-09 19:35:20 -07:00
Hoang-Vu Dang
b9a10152f1 memleak: code_gen_buffer using g_free for non-linux 2016-07-11 10:13:13 -05:00
Chris Eagle
3add48feb5 Merge branch 'master' into smaller_nothreads 2016-03-25 19:47:52 -07:00
Ryan Hileman
977863401e static -> dynamic code buffer, and shrink 32M->8M 2016-03-25 18:28:03 -07:00
Chris Eagle
9467254fc0 strip out per cpu thread code 2016-03-25 17:24:28 -07:00
Ryan Hileman
0886ae8ede rework code/block tracing 2016-01-22 18:42:27 -08:00
Ryan Hileman
93052f6566 refactor to allow multiple hooks for one type 2016-01-22 18:41:43 -08:00
Nguyen Anh Quynh
e0cb02569e remove unused tcg_register_jit() and related code 2016-01-05 16:02:34 +07:00