Commit Graph

25 Commits

Author SHA1 Message Date
Paolo Bonzini
9d64a89acf
tcg: comment on which functions have to be called with tb_lock held
softmmu requires more functions to be thread-safe, because translation
blocks can be invalidated from e.g. notdirty callbacks. Probably the
same holds for user-mode emulation; it's just that no one has ever
tried to produce a coherent locking scheme there.

This patch will guide the introduction of more tb_lock and tb_unlock
calls for system emulation.

Note that after this patch some (most) of the mentioned functions are
still called outside tb_lock/tb_unlock. The next patch will rectify this.

Backports commit 7d7500d99895f888f97397ef32bb536bb0df3b74 from qemu
2018-02-28 10:26:28 -05:00
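As a rough illustration of the locking convention this commit documents, here
is a minimal sketch. tb_lock()/tb_unlock() are the helper names from the
commit; the standalone pthread mutex and the do_invalidate_tb() /
notdirty_invalidate() functions are simplified stand-ins, not Unicorn/QEMU
code.

    /* Minimal sketch of the tb_lock convention: callees that touch the shared
     * translation-block tables document that the lock is held, and callers
     * such as notdirty write callbacks take it around the call. */
    #include <pthread.h>

    static pthread_mutex_t tb_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void tb_lock(void)   { pthread_mutex_lock(&tb_mutex); }
    static void tb_unlock(void) { pthread_mutex_unlock(&tb_mutex); }

    /* Must be called with tb_lock held: mutates shared TB data structures. */
    static void do_invalidate_tb(void *tb)
    {
        (void)tb;  /* ... unlink tb from the page tables and jump caches ... */
    }

    /* Caller-side pattern, e.g. from a notdirty write callback. */
    static void notdirty_invalidate(void *tb)
    {
        tb_lock();
        do_invalidate_tb(tb);
        tb_unlock();
    }
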
Alex Bennée
33589eb75f
cpus: pass CPUState to run_on_cpu helpers
CPUState is a fairly common pointer to pass to these helpers. This means
if you need other arguments for the async_run_on_cpu case you end up
having to do a g_malloc to stuff additional data into the routine. For
the current users this isn't a massive deal but for MTTCG this gets
cumbersome when the only other parameter is often an address.

This adds the typedef run_on_cpu_func for helper functions which has an
explicit CPUState * passed as the first parameter. All the users of
run_on_cpu and async_run_on_cpu have had their helpers updated to use
CPUState where available.

Backports commit e0eeb4a21a3ca4b296220ce4449d8acef9de9049 from qemu
2018-02-26 04:54:55 -05:00
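A hedged sketch of the signature change described in the entry above: the
typedef shape follows the commit text (an explicit CPUState * first
parameter); CPUState is left opaque here and do_set_pc() is a made-up example
helper, not one of the converted QEMU users.

    #include <stdint.h>

    typedef struct CPUState CPUState;

    /* Helper signature used by run_on_cpu()/async_run_on_cpu() after the
     * change. */
    typedef void (*run_on_cpu_func)(CPUState *cpu, void *data);

    /* With the CPU passed explicitly, a single extra argument (often just an
     * address) fits straight into 'data' instead of a g_malloc'ed struct. */
    static void do_set_pc(CPUState *cpu, void *data)
    {
        uint64_t addr = (uint64_t)(uintptr_t)data;
        (void)cpu;
        (void)addr;  /* ... operate on cpu at addr ... */
    }
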
Sergey Sorokin
d1e4ac0451
Fix confusing argument names in some common functions
There are functions tlb_fill(), cpu_unaligned_access() and
do_unaligned_access() that are called with access type and mmu index
arguments. But these arguments are named 'is_write' and 'is_user' in
their declarations. The patches fix the argument names to avoid
confusion.

Backports commit b35399bb4e9968296a12303b00f9f2066470e987 from qemu
2018-02-25 03:58:27 -05:00
Paolo Bonzini
9485b7c2e1
cpu: move exec-all.h inclusion out of cpu.h
exec-all.h contains TCG-specific definitions. It is not needed outside
TCG-specific files such as translate.c, exec.c or *helper.c.

One generic function had snuck into include/exec/exec-all.h; move it to
include/qom/cpu.h.

Backports commit 63c915526d6a54a95919ebece83fa9ca631b2508 from qemu
2018-02-24 02:39:08 -05:00
Sergey Fedorov
1a768018c2
tcg: Remove needless CPUState::current_tb
This field was used for telling cpu_interrupt() to unlink a chain of TBs
being executed when it worked that way. Now that cpu_interrupt() no
longer does this, the field is no longer needed.

Backports commit 3213525f8ab48742db09dab18cb9ae6f36a6c921 from qemu
2018-02-23 23:45:42 -05:00
Sergey Fedorov
ba9a237586
tcg: Rework tb_invalidated_flag
'tb_invalidated_flag' was meant to catch two events:
* some TB has been invalidated by tb_phys_invalidate();
* the whole translation buffer has been flushed by tb_flush().

Then it was checked:
* in cpu_exec() to ensure that the last executed TB can be safely
linked to directly call the next one;
* in cpu_exec_nocache() to decide if the original TB should be provided
for further possible invalidation along with the temporarily
generated TB.

It is always safe to patch an invalidated TB since it is not going to be
used anyway. It is also safe to call tb_phys_invalidate() for an already
invalidated TB. Thus, setting this flag in tb_phys_invalidate() is
simply unnecessary. Moreover, it can prevent perfectly proper linking of
TBs when some arbitrary TB has been invalidated. So just don't touch it
in tb_phys_invalidate().

If this flag is only used to catch whether tb_flush() has been called
then rename it to 'tb_flushed'. Declare it as 'bool' and stick to using
only 'true' and 'false' to set its value. Also, instead of setting it in
tb_gen_code(), just after tb_flush() has been called, do it right inside
of tb_flush().

In cpu_exec(), this flag is used to track if tb_flush() has been called
and has made 'next_tb' (a reference to the last executed TB) invalid for
linking it to directly call the next TB. tb_flush() can be called during
the CPU execution loop from tb_gen_code(), during TB execution, or by
another thread while 'tb_lock' is released. Catch translation buffer
flushes reliably by resetting this flag once before the first TB lookup
and each time we find it set before trying to add a direct jump. Don't
touch it in tb_find_physical().

Each vCPU has its own execution loop in multithreaded mode and thus
should have its own copy of the flag to be able to reset it with its own
'next_tb' without affecting any other vCPU execution thread. So make
this flag per-vCPU and move it to CPUState.

In cpu_exec_nocache(), we only need to check if tb_flush() has been
called from tb_gen_code() called by cpu_exec_nocache() itself. To do
this reliably, preserve the old value of the flag, reset it before
calling tb_gen_code(), check afterwards, and combine the saved value
back into the flag.

This patch is based on the patch "tcg: move tb_invalidated_flag to
CPUState" from Paolo Bonzini <pbonzini@redhat.com>.

Backports commit 6f789be56d3f38e9214dafcfab3bf9be7191f370 from qemu
2018-02-23 23:34:51 -05:00
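The save/reset/check/combine pattern for cpu_exec_nocache() described above
can be pictured with the following sketch; the structures, the tb_gen_code()
stub, and the function name are simplified stand-ins for the real QEMU code.

    #include <stdbool.h>
    #include <stddef.h>

    struct TranslationBlock;
    struct CPUStateSketch { bool tb_flushed; };

    /* Stand-in: the real tb_gen_code() may call tb_flush() when the code
     * buffer is full, which would set cpu->tb_flushed = true. */
    static struct TranslationBlock *tb_gen_code(struct CPUStateSketch *cpu)
    {
        (void)cpu;
        return NULL;
    }

    static void cpu_exec_nocache_sketch(struct CPUStateSketch *cpu)
    {
        /* Preserve the old value and reset the flag, so a flush triggered by
         * this particular tb_gen_code() call can be detected in isolation. */
        bool old_tb_flushed = cpu->tb_flushed;
        cpu->tb_flushed = false;

        struct TranslationBlock *tb = tb_gen_code(cpu);

        /* If the flag is still false, no flush happened here and the original
         * TB may safely be handed back for later invalidation. */
        bool flushed_here = cpu->tb_flushed;

        /* Combine the saved value back into the flag. */
        cpu->tb_flushed |= old_tb_flushed;

        (void)tb;
        (void)flushed_here;
    }
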
Stefan Weil
baa477d324
Remove unneeded include statements for setjmp.h
As soon as setjmp.h is included from qemu/osdep.h, those old include
statements are no longer needed.

Also add setjmp.h to the list in scripts/clean-includes.

Backports commit 8ff98f1ed2f50cd05c3c5027c7efdf69859ec664 from qemu
2018-02-21 22:57:32 -05:00
Lluís Vilanova
d111e2df2d
typedefs: Add CPUState
Backports commit b23197f9cf2f221a6cc6272d36852f4f70cf9c1b from qemu
2018-02-21 01:55:22 -05:00
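Per the commit title, the addition amounts to a forward typedef (shown here as
a sketch) so headers can refer to CPUState * without pulling in the full
qom/cpu.h definition:

    typedef struct CPUState CPUState;
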
Sergey Fedorov
6a3038db7c
cpu: Add callback to check architectural watchpoint match
When a QEMU watchpoint matches, that is not necessarily an architectural
watchpoint match yet. If it is a stop-before-access watchpoint, then it
is hardly possible to ignore it after a TCG exception has been thrown.

A special callback is introduced to check for an architectural
watchpoint match before raising a TCG exception.

Backports commit 568496c0c0f1863a4bc18539962cd8d81baa4e30 from qemu
2018-02-20 11:43:56 -05:00
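A sketch of the kind of hook described above, assuming a simplified class
structure; the debug_check_watchpoint field name and the helper below are
illustrative stand-ins rather than the exact QEMU declarations.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct CPUState CPUState;
    typedef struct CPUWatchpoint CPUWatchpoint;

    typedef struct CPUClassSketch {
        /* Return true if the matched QEMU watchpoint is architecturally
         * visible for this CPU; targets that don't care leave it NULL. */
        bool (*debug_check_watchpoint)(CPUState *cpu, CPUWatchpoint *wp);
    } CPUClassSketch;

    /* Core-side check: only raise the TCG debug exception when the class has
     * no hook or the hook confirms the match. */
    static bool watchpoint_is_architectural(const CPUClassSketch *cc,
                                            CPUState *cpu, CPUWatchpoint *wp)
    {
        return cc->debug_check_watchpoint == NULL
            || cc->debug_check_watchpoint(cpu, wp);
    }
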
Peter Crosthwaite
ce997e1caf
qom/cpu: Add MemoryRegion property
Add a MemoryRegion property, which if set is used to construct
the CPU's initial (default) AddressSpace.

Backports commit 6731d864f80938e404dc3e5eb7f6b76b891e3e43 from qemu
2018-02-18 21:54:50 -05:00
Peter Maydell
8edd6ffdfd
cputlb.c: Use correct address space when looking up MemoryRegionSection
When looking up the MemoryRegionSection for the new TLB entry in
tlb_set_page_with_attrs(), use cpu_asidx_from_attrs() to determine
the correct address space index for the lookup, and pass it into
address_space_translate_for_iotlb().

Backports commit d7898cda81b6efa6b2d7a749882695cdcf280eaa from qemu
2018-02-17 23:15:22 -05:00
Peter Maydell
d23831f4dd
cpu: Add new asidx_from_attrs() method
Add a new method to CPUClass which the memory system core can
use to obtain the correct address space index to use for a memory
access with a given set of transaction attributes, together
with the wrapper function cpu_asidx_from_attrs() which implements
the default behaviour ("always use asidx 0") for CPU classes
which don't provide the method.

Backports commit d7f25a9e6a6b2c69a0be6033903b7d6087bcf47d from qemu
2018-02-17 22:45:32 -05:00
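A hedged sketch of the hook-plus-wrapper shape the entry above describes. The
default ("always use asidx 0") follows the commit text; MemTxAttrs is reduced
to a one-bit stand-in and the class is passed in explicitly instead of being
fetched with QEMU's CPU_GET_CLASS().

    typedef struct CPUState CPUState;
    typedef struct { unsigned secure : 1; } MemTxAttrs;   /* stand-in attrs */

    typedef struct CPUClassSketch {
        int (*asidx_from_attrs)(CPUState *cpu, MemTxAttrs attrs);
    } CPUClassSketch;

    /* Wrapper with the default behaviour from the commit message: classes
     * that don't provide the method always get address-space index 0. */
    static int cpu_asidx_from_attrs_sketch(const CPUClassSketch *cc,
                                           CPUState *cpu, MemTxAttrs attrs)
    {
        if (cc->asidx_from_attrs) {
            return cc->asidx_from_attrs(cpu, attrs);
        }
        return 0;
    }

A class with, say, secure and non-secure address spaces could implement the
hook to return an index based on attrs.secure; that is the kind of per-access
selection the cputlb.c entry above relies on.
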
Lioncash
1cc4b92c67
cpu: Add new get_phys_page_attrs_debug() method
Add a new optional method get_phys_page_attrs_debug() to CPUClass.
This is like the existing get_phys_page_debug(), but also returns
the memory transaction attributes to use for the access.
This will be necessary for CPUs which have multiple address
spaces and use the attributes to select the correct address
space.

We provide a wrapper function cpu_get_phys_page_attrs_debug()
which falls back to the existing get_phys_page_debug(), so we
don't need to change every target CPU.

Backports commit 1dc6fb1f5cc5cea5ba01010a19c6acefd0ae4b73 from qemu
2018-02-17 22:43:42 -05:00
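A sketch of the fallback wrapper described above, with QEMU's vaddr, hwaddr
and MemTxAttrs types reduced to stand-ins; the function and field names mirror
the commit text but are not the exact declarations.

    #include <stdint.h>

    typedef struct CPUState CPUState;
    typedef struct { unsigned unspecified : 1; } MemTxAttrs;  /* stand-in */
    typedef uint64_t vaddr_sketch;    /* stand-in for QEMU's vaddr  */
    typedef uint64_t hwaddr_sketch;   /* stand-in for QEMU's hwaddr */

    typedef struct CPUClassSketch {
        hwaddr_sketch (*get_phys_page_debug)(CPUState *cpu, vaddr_sketch addr);
        hwaddr_sketch (*get_phys_page_attrs_debug)(CPUState *cpu,
                                                   vaddr_sketch addr,
                                                   MemTxAttrs *attrs);
    } CPUClassSketch;

    static hwaddr_sketch
    cpu_get_phys_page_attrs_debug_sketch(const CPUClassSketch *cc,
                                         CPUState *cpu, vaddr_sketch addr,
                                         MemTxAttrs *attrs)
    {
        if (cc->get_phys_page_attrs_debug) {
            return cc->get_phys_page_attrs_debug(cpu, addr, attrs);
        }
        /* Fall back to the pre-existing hook and report unspecified attrs. */
        attrs->unspecified = 1;
        return cc->get_phys_page_debug(cpu, addr);
    }
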
Peter Maydell
51369b67cd
exec.c: Allow target CPUs to define multiple AddressSpaces
Allow multiple calls to cpu_address_space_init(); each
call adds an entry to the cpu->ases array at the specified
index. It is up to the target-specific CPU code to actually use
these extra address spaces.

Since this multiple AddressSpace support won't work with
KVM, add an assertion to avoid confusing failures.

Backports commit 12ebc9a76dd7702aef0a3618717a826c19c34ef4 from qemu
2018-02-17 22:35:13 -05:00
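A minimal sketch of the behaviour this entry describes: each call fills one
slot of a per-CPU address-space array at the caller-chosen index. The array
bound, struct layout and assertion are simplified stand-ins, not the QEMU
definitions.

    #include <assert.h>

    #define MAX_ASES 8   /* illustrative bound, not a QEMU constant */

    typedef struct AddressSpace AddressSpace;

    typedef struct CPUStateSketch {
        AddressSpace *ases[MAX_ASES];
        int num_ases;
    } CPUStateSketch;

    static void cpu_address_space_init_sketch(CPUStateSketch *cpu,
                                              AddressSpace *as, int asidx)
    {
        assert(asidx >= 0 && asidx < MAX_ASES);
        /* With KVM only a single address space per vCPU is supported, so a
         * real implementation also asserts that extra indices are only used
         * under TCG. */
        cpu->ases[asidx] = as;
        if (asidx >= cpu->num_ases) {
            cpu->num_ases = asidx + 1;
        }
    }
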
Peter Maydell
f1b237236c
exec.c: Don't set cpu->as until cpu_address_space_init
Rather than setting cpu->as unconditionally in cpu_exec_init
(and then having target-i386 override this later), don't set
it until the first call to cpu_address_space_init.

This requires us to initialise the address space for
both TCG and KVM (KVM doesn't need the AS listener but
it does require cpu->as to be set).

For target CPUs which don't set up any address spaces (currently
everything except i386), add the default address_space_memory
in qemu_init_vcpu().

Backports commit 56943e8cc14b7eeeab67d1942fa5d8bcafe3e53f from qemu
2018-02-17 22:24:36 -05:00
Richard Henderson
3ec0adcc07
target-*: Introduce and use cpu_breakpoint_test
Reduce the boilerplate required for each target. At the same time,
move the test for breakpoint after calling tcg_gen_insn_start.

Note that arm and aarch64 do not use cpu_breakpoint_test, but still
move the inline test down after tcg_gen_insn_start.

Backports commit b933066ae03d924a92b2616b4a24e7d91cd5b841 from qemu
2018-02-17 15:24:10 -05:00
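The shared helper this entry describes can be pictured as below; the list here
is a plain singly linked list rather than QEMU's QTAILQ, and the names are
illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct BreakpointSketch {
        uint64_t pc;
        int flags;
        struct BreakpointSketch *next;
    } BreakpointSketch;

    /* Return true if any breakpoint matches the current PC and flag mask. */
    static bool cpu_breakpoint_test_sketch(const BreakpointSketch *head,
                                           uint64_t pc, int mask)
    {
        for (const BreakpointSketch *bp = head; bp != NULL; bp = bp->next) {
            if (bp->pc == pc && (bp->flags & mask)) {
                return true;
            }
        }
        return false;
    }

Per the commit text, per-target translators call such a helper once per guest
instruction, after tcg_gen_insn_start(), instead of open-coding the loop.
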
Lioncash
da8e73b887
qom/cpu: Add throttle_thread_scheduled member
Extracts the member out of commit 2adcc85d407c1ab985f5abed808c78dbb84f4773
2018-02-17 15:23:55 -05:00
Lioncash
5e2862b29d
cpu: Add crash_occurred flag into CPUState
The CPUState::crash_occurred field marks that a guest crash has
occurred. This value is added to the cpu common migration subsection.

Backports commit bac05aa9a77af1ca7972c8dc07560f4daa7c2dfc from qemu
2018-02-17 15:23:51 -05:00
Peter Crosthwaite
6279dfc113
cpu: Add wrapper for the set_pc() hook
Add a wrapper around the CPUClass::set_pc() hook.

Backports commit 2991b8904730d663f12ad42e35798ecc22fe151c from qemu
2018-02-17 15:23:19 -05:00
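The wrapper added by this entry is tiny; here is a sketch under the assumption
of a simplified class structure (the real code fetches the class with
CPU_GET_CLASS()):

    #include <stdint.h>

    typedef struct CPUState CPUState;

    typedef struct CPUClassSketch {
        void (*set_pc)(CPUState *cpu, uint64_t addr);
    } CPUClassSketch;

    /* Set the guest program counter through the per-class hook. */
    static void cpu_set_pc_sketch(const CPUClassSketch *cc,
                                  CPUState *cpu, uint64_t addr)
    {
        cc->set_pc(cpu, addr);
    }
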
Paolo Bonzini
a46accd252
exec: make iotlb RCU-friendly
After the previous patch, TLBs will be flushed on every change to
the memory mapping. This patch augments that with synchronization
of the MemoryRegionSections referred to in the iotlb array.

With this change, it is guaranteed that iotlb_to_region will access
the correct memory map, even once the TLB is accessed outside the BQL.

Backports commit 9d82b5a792236db31a75b9db5c93af69ac07c7c5 from qemu
2018-02-12 15:20:39 -05:00
xorstream
1aeaf5c40d This code should now build the x86_x64-softmmu part 2. 2017-01-19 22:50:28 +11:00
Nguyen Anh Quynh
52cb0ba78e cleanup more synchronization code 2017-01-09 14:05:39 +08:00
Chris Eagle
9467254fc0 strip out per cpu thread code 2016-03-25 17:24:28 -07:00
Nguyen Anh Quynh
2f297bdd3a handle some errors properly to avoid exit() during initialization. This fixes issue #237 2015-11-12 01:43:41 +08:00
Nguyen Anh Quynh
344d016104 import 2015-08-21 15:04:50 +08:00