Commit Graph

77 Commits

Author SHA1 Message Date
Richard Henderson
a276496ebc
tcg: Adjust CODE_GEN_AVG_BLOCK_SIZE
At present, the "average" guestimate of TB size is way too small, leading
to many unused entries in the pre-allocated TB array. For a guest with 1GB
ram, we're currently allocating 256MB for the array.

Survey arm, alpha, aarch64, ppc, sparc, i686, x86_64 guests running on
x86_64 and ppc64 hosts and select a new average. The size of the array
drops to 81MB with no more flushing than before.

Backports commit 126d89e8cdfa3be15d51f76906eaccbcd0023f98 from qemu
2018-02-17 15:24:01 -05:00
Richard Henderson
bdf667fd4e
tcg: Check for overflow via highwater mark
We currently pre-compute a worst-case code size for any TB, which
works out to be 122kB. Since the average TB size is near 1kB, this
wastes quite a lot of storage.

Instead, check for overflow in between generating code for each opcode.
The overhead of the check isn't measurable and wastage is minimized.

Backports commit b125f9dc7bd68cd4c57189db4da83b0620b28a72 from qemu
2018-02-17 15:24:00 -05:00
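
As an illustration of the approach the commit above describes, here is a minimal standalone C sketch, not QEMU's actual code (the buffer, sizes, and emit_op are invented), of checking a code buffer against a highwater mark after each emitted op instead of reserving worst-case space up front.

```c
/* Standalone sketch (invented names, not QEMU's code): check the code
 * buffer against a "highwater" mark after emitting each opcode. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define BUF_SIZE    4096
#define MAX_OP_SIZE   64              /* assumed worst case for one opcode */

static uint8_t buf[BUF_SIZE];
static uint8_t *code_ptr = buf;
/* Leave room for one worst-case opcode at the end of the buffer. */
static uint8_t *const highwater = buf + BUF_SIZE - MAX_OP_SIZE;

/* Emit one (fake) opcode; return -1 once the buffer has filled up. */
static int emit_op(const uint8_t *bytes, size_t len)
{
    memcpy(code_ptr, bytes, len);
    code_ptr += len;
    /* The cheap per-op check: once we cross the highwater mark,
     * stop and let the caller flush/restart the buffer. */
    return (code_ptr > highwater) ? -1 : 0;
}

int main(void)
{
    uint8_t nop[4] = {0x90, 0x90, 0x90, 0x90};
    int ops = 0;
    while (emit_op(nop, sizeof(nop)) == 0) {
        ops++;
    }
    printf("emitted %d ops before hitting the highwater mark\n", ops);
    return 0;
}
```
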
Richard Henderson
a5ac288135
tcg: Remove gen_intermediate_code_pc
It is no longer used, so tidy up everything reached by it.
This includes the gen_opc_* arrays, the search_pc parameter
and the inline gen_intermediate_code_internal functions.

Backports commit 4e5e1215156662b2b153255c49d4640d82c5568b from qemu
2018-02-17 15:23:59 -05:00
Richard Henderson
66de6cc37c
tcg: Save insn data and use it in cpu_restore_state_from_tb
We can now restore state without retranslation.

Backports commit fca8a500d519a56abeaedf8073167a61d3c6b9c4 from qemu
2018-02-17 15:23:59 -05:00
Paolo Bonzini
cab4c979f0
cpu-exec: add a new CF_USE_ICOUNT cflag
Backports commit 0266359e57987d6be53fbcb885f2dd39c1dae940 from qemu
2018-02-17 15:23:58 -05:00
Pavel Dovgalyuk
ac46898b3c
cpu-exec: invalidate nocache translation if they are interrupted
In this case, QEMU might longjmp out of cpu-exec.c and miss the final
cleanup in cpu_exec_nocache.  Do this manually through a new compile
flag.

Backports commit d8a499f17ee5f05407874f29f69f0e3e3198a853 from qemu
2018-02-17 15:23:58 -05:00
Richard Henderson
1cbd175736
tcg: Pass data argument to restore_state_to_opc
The gen_opc_* arrays are already redundant with the data stored in
the insn_start arguments. Transition restore_state_to_opc to use
data from the latter.

Backports commit bad729e272387de7dbfa3ec4319036552fc6c107 from qemu
2018-02-17 15:23:58 -05:00
Peter Crosthwaite
afb48e9fc5
cputlb: Change tlb_set_dirty() arg to cpu
Change tlb_set_dirty() to accept a CPU instead of an env pointer. This
allows for removal of another CPUArchState usage from prototypes that
need to be QOMified.

Backports commit bcae01e468d961ad9afaf4148329147e4be209ab from qemu
2018-02-17 15:23:52 -05:00
Paolo Bonzini
195a86283f
exec: make mmap_lock/mmap_unlock globally available
There is some iffy lock hierarchy going on in translate-all.c. To
fix it, we need to take the mmap_lock in cpu-exec.c. Make the
functions globally available.

Backports commit 8fd19e6cfd5b6cdf028c6ac2ff4157ed831ea3a6 from qemu
2018-02-17 15:23:49 -05:00
Pavel Dovgalyuk
4a05c9ee28
cpu-exec: introduce loop exit with restore function
This patch introduces a loop exit function, which also
restores the guest CPU state according to the value of the host
program counter.

Backports commit 1c3c8af1fb40a481c07749e0448644d9b7700415 from qemu
2018-02-17 15:23:38 -05:00
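
A rough standalone sketch of the idea, using plain setjmp/longjmp; the names loop_exit_restore and saved_host_pc are invented for illustration and do not reflect QEMU's actual implementation.

```c
/* Standalone sketch (assumed, simplified): leave an execution loop via
 * longjmp while recording the "host pc" needed to restore guest state. */
#include <setjmp.h>
#include <stdio.h>
#include <stdint.h>

static jmp_buf exit_buf;
static uintptr_t saved_host_pc;

/* Analogous in spirit to a loop-exit-with-restore: remember where we
 * were, then unwind back to the main loop. */
static void loop_exit_restore(uintptr_t host_pc)
{
    saved_host_pc = host_pc;
    longjmp(exit_buf, 1);
}

static void run_block(int i)
{
    if (i == 3) {
        /* Pretend a fault happened inside generated code. */
        loop_exit_restore(0x1000 + i);
    }
}

int main(void)
{
    if (setjmp(exit_buf)) {
        /* Here real code would map saved_host_pc back to a guest pc. */
        printf("exited, restoring state from host pc 0x%lx\n",
               (unsigned long)saved_host_pc);
        return 0;
    }
    for (int i = 0; ; i++) {
        run_block(i);
    }
}
```
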
Pavel Dovgalyuk
28f154129b
softmmu: remove now unused functions
Now that the cpu_ld/st_* functions directly call helper_ret_ld/st, we can
drop the old helper_ld/st functions.

Backports commit b8611499b940b1b4db67aa985e3a844437bcbf00 from qemu
2018-02-17 15:23:38 -05:00
Pavel Dovgalyuk
6cdaaf9b1b
softmmu: add helper function to pass through retaddr
This patch introduces several helpers to pass the return address,
which points into the TB. A correct return address allows correct
restoring of the guest PC and icount. These functions should be used when
helpers embedded in a TB invoke memory operations.

Backports commit 282dffc8a4bfe8724548cabb8a26698bde0a6e18 from qemu
2018-02-17 15:23:38 -05:00
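
A hypothetical standalone sketch of the pattern: a helper takes an explicit host return address and threads it down to the memory-access layer. All names are invented; only __builtin_return_address is a real GCC/Clang builtin.

```c
/* Standalone sketch: thread an explicit host return address (pointing back
 * into the code that called the helper) down to the memory-access layer,
 * so a fault can be attributed to the right spot. */
#include <stdint.h>
#include <stdio.h>

static void do_memory_op_sketch(uint64_t addr, uintptr_t retaddr)
{
    /* A real implementation would use retaddr to find the TB and restore
     * the guest PC/icount before raising an exception. */
    printf("access at 0x%llx, caller retaddr 0x%lx\n",
           (unsigned long long)addr, (unsigned long)retaddr);
}

/* The "_ra"-style helper: the caller passes its own return address. */
static void helper_load_ra_sketch(uint64_t addr, uintptr_t retaddr)
{
    do_memory_op_sketch(addr, retaddr);
}

int main(void)
{
    helper_load_ra_sketch(0x1000, (uintptr_t)__builtin_return_address(0));
    return 0;
}
```
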
Benjamin Herrenschmidt
1722be3e73
tlb: Add ifetch argument to cpu_mmu_index()
This is set to true when the index is for an instruction fetch
translation.

The core get_page_addr_code() sets it, as do the SOFTMMU_CODE_ACCESS
accessors.

All targets ignore it for now, and all other callers pass "false".

This will allow targets that wish to split the mmu index between
instruction and data accesses to do so. A subsequent patch will
do just that for PowerPC.

Backports commit 97ed5ccdee95f0b98bedc601ff979e368583472c from qemu
2018-02-17 15:23:37 -05:00
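
A minimal standalone sketch of what such a split might look like; the enum values and cpu_mmu_index_sketch are made up and not QEMU's actual definitions.

```c
/* Standalone sketch (assumed names): an mmu-index lookup that takes an
 * "ifetch" flag so instruction and data accesses can use different indexes. */
#include <stdbool.h>
#include <stdio.h>

enum { MMU_DATA_IDX = 0, MMU_INST_IDX = 1 };

/* Most targets would ignore 'ifetch' and return a single index; a target
 * that splits instruction/data translation can branch on it. */
static int cpu_mmu_index_sketch(bool ifetch)
{
    return ifetch ? MMU_INST_IDX : MMU_DATA_IDX;
}

int main(void)
{
    printf("data access -> index %d\n", cpu_mmu_index_sketch(false));
    printf("insn fetch  -> index %d\n", cpu_mmu_index_sketch(true));
    return 0;
}
```
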
Lioncash
f81894dddb
exec: Add semihosting stubs
2018-02-17 15:23:33 -05:00
Peter Maydell
6e94bda144
cputlb: Add functions for flushing TLB for a single MMU index
Guest CPU TLB maintenance operations may be sufficiently
specialized to only need to flush TLB entries corresponding
to a particular MMU index. Implement cputlb functions for
this, to avoid the inefficiency of flushing TLB entries
which we don't need to.

Backports commit d7a74a9d4a68e27b3a8ceda17bb95cb0a23d8e4d from qemu
2018-02-17 15:23:31 -05:00
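
A standalone sketch of flushing a single MMU index in a two-dimensional TLB, with invented types and sizes; it only illustrates the selectivity described above.

```c
/* Standalone sketch (assumed layout): flush only the entries of one
 * MMU index instead of the whole TLB. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define NB_MMU_MODES 4
#define TLB_SIZE     256

typedef struct {
    uint64_t addr;      /* tag; all-ones means "invalid" */
} TLBEntrySketch;

static TLBEntrySketch tlb[NB_MMU_MODES][TLB_SIZE];

/* Invalidate every entry of a single MMU index, leaving the others alone. */
static void tlb_flush_by_mmuidx_sketch(int mmu_idx)
{
    memset(tlb[mmu_idx], 0xff, sizeof(tlb[mmu_idx]));
}

int main(void)
{
    tlb[1][42].addr = 0x4000;            /* pretend index 1 has a mapping */
    tlb[2][42].addr = 0x8000;            /* and so does index 2 */
    tlb_flush_by_mmuidx_sketch(1);       /* only index 1 is flushed */
    printf("idx1 entry now 0x%llx, idx2 entry still 0x%llx\n",
           (unsigned long long)tlb[1][42].addr,
           (unsigned long long)tlb[2][42].addr);
    return 0;
}
```
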
Peter Crosthwaite
590c3dbb76
cpu_defs: Simplify CPUTLB padding logic
There was a complicated subtractive arithmetic for determining the
padding on the CPUTLBEntry structure. Simplify this with a union.

Backports commit b4a4b8d0e0767c85946fd8fc404643bf5766351a from qemu
2018-02-17 15:23:27 -05:00
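
A standalone C11 sketch of the union trick: the pad array fixes the overall size without any subtractive arithmetic. The field names and ENTRY_SIZE are assumptions, not QEMU's real CPUTLBEntry.

```c
/* Standalone sketch: pad a structure to a fixed power-of-two size with a
 * union, instead of computing the pad bytes by subtraction. */
#include <stdint.h>
#include <stdio.h>

#define ENTRY_SIZE_BITS 5                      /* 32-byte entries, assumed */
#define ENTRY_SIZE      (1 << ENTRY_SIZE_BITS)

typedef union {
    struct {
        uint64_t addr_read;
        uint64_t addr_write;
        uint64_t addr_code;
        uintptr_t addend;
    };
    /* The union arm forces the overall size; no manual arithmetic on the
     * field sizes is needed, and adding a field cannot silently overflow. */
    uint8_t pad[ENTRY_SIZE];
} TLBEntryPadSketch;

_Static_assert(sizeof(TLBEntryPadSketch) == ENTRY_SIZE,
               "entry must stay exactly one padded slot");

int main(void)
{
    printf("sizeof(TLBEntryPadSketch) = %zu\n", sizeof(TLBEntryPadSketch));
    return 0;
}
```
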
Peter Crosthwaite
9e23308b66
cpu: Change cpu_exec_init() arg to cpu, not env
The callers of this function (most of them in target-foo/cpu.c) all
have the cpu pointer handy. Just pass it to avoid an ENV_GET_CPU() from
core code (in exec.c).

Backports commit 4bad9e392e788a218967167a38ce2ae7a32a6231 from qemu
2018-02-17 15:23:18 -05:00
Peter Crosthwaite
8200453545
translate-all: Change tb_flush() env argument to cpu
All of the core-code usages of this API have the cpu pointer handy, so
pass it in. There are only 3 architecture-specific usages (2 of which
are commented out) which can just use ENV_GET_CPU() locally to get the
cpu pointer. This reduces core code usage of the CPU env, which brings
us closer to common-obj'ing these core files.

Backports commit bbd77c180d7ff1b04a7661bb878939b2e1d23798 from qemu
2018-02-17 15:23:18 -05:00
Peter Crosthwaite
13b919f5c8
cpu-all: complete real host page size API
Currently the "host" page size alignment API is really aligning to both
host and target page sizes. There is the qemu_real_page_size which can
be used for the actual host page size but it's missing a mask and ALIGN
macro as provided for qemu_page_size. Complete the API. This allows
system level code that cares about the host page size to use a
consistent alignment interface without having to un-needingly align to
the target page size. This also reduces system level code dependency
on the cpu specific TARGET_PAGE_SIZE.

Backports commit 4e51361d79289aee2985dfed472f8d87bd53a8df from qemu
2018-02-17 15:23:16 -05:00
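
A standalone sketch of host-page mask/ALIGN helpers in the spirit of the commit, built on POSIX sysconf; the macro names here are merely modeled on the description above, not taken from the actual patch.

```c
/* Standalone sketch (assumed macro names): a host-page-size mask and ALIGN
 * helper analogous to the target-page versions. */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

#define REAL_HOST_PAGE_SIZE   ((uintptr_t)sysconf(_SC_PAGESIZE))
#define REAL_HOST_PAGE_MASK   (~(REAL_HOST_PAGE_SIZE - 1))
#define REAL_HOST_PAGE_ALIGN(addr) \
    (((addr) + REAL_HOST_PAGE_SIZE - 1) & REAL_HOST_PAGE_MASK)

int main(void)
{
    uintptr_t addr = 0x12345;
    printf("host page size: %lu\n", (unsigned long)REAL_HOST_PAGE_SIZE);
    printf("0x%lx aligns up to 0x%lx\n",
           (unsigned long)addr, (unsigned long)REAL_HOST_PAGE_ALIGN(addr));
    return 0;
}
```
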
Peter Maydell
2f3f2ae092
Stop including qemu-common.h in memory.h
Including qemu-common.h from other header files is generally a bad
idea, because it means it's very easy to end up with a circular
dependency. For instance, if we wanted to include memory.h from
qom/cpu.h we'd end up with this loop:
memory.h -> qemu-common.h -> cpu.h -> cpu-qom.h -> qom/cpu.h -> memory.h

Remove the include from memory.h. This requires us to fix up a few
other files which were inadvertently getting declarations indirectly
through memory.h.

The biggest change is splitting the fprintf_function typedef out
into its own header so other headers can get at it without having
to include qemu-common.h.

Backports commit fba0a593b2809ecdda68650952cf3d3332ac1990 from qemu
2018-02-17 15:23:16 -05:00
Jan Kiszka
b93c24ba31
memory: Add global-locking property to memory regions
This introduces the memory region property "global_locking". It is true
by default. By setting it to false, a device model can request BQL-free
dispatching of region accesses to its r/w handlers. The actual BQL
break-up will be provided in a separate patch.

Backports commit 196ea13104f802c508e57180b2a0d2b3418989a3 from qemu
2018-02-17 15:23:16 -05:00
Peter Crosthwaite
82a22d8f3a
cpu-defs: Move out TB_JMP defines
These are not architecture-specific in any way, so move them out of
cpu-defs.h. tb-hash.h is an appropriate place, as it is a leading user
and they relate strongly to TB hashing and caching.

Backports commit 41da4bd6420afd1209c408974920f63ff9c658e1 from qemu
2018-02-17 15:23:15 -05:00
Peter Crosthwaite
09d23c6604
include/exec: Move tb hash functions out
This is one of very few things in exec-all with a genuine CPU
architecture dependency. Move these hashing helpers to a new
header to trim exec-all.h down to a near architecture-agnostic
header.

The defs are only used by cpu-exec and translate-all, which are both
arch-obj's, so the new tb-hash.h has no core code usage.

Backports commit e1b89321bafea9fb33d87852fc91fee579d17dfe from qemu
2018-02-17 15:23:15 -05:00
Peter Crosthwaite
860e4184df
include/exec: Move standard exceptions to cpu-all.h
These exception indices are generic and don't have any reliance on the
per-arch cpu.h defs. Move them to cpu-all.h so they can be used by core
code that does not have access to cpu-defs.h.

Backports commit 9e0dc48c9f05505b53cb28f860456a0648e56ddf from qemu
2018-02-17 15:23:15 -05:00
Peter Crosthwaite
a591219ad6
cpu-defs: Move CPU_TEMP_BUF_NLONGS to tcg
The usages of this define are pure TCG and there is no architecture-specific
variation of the value. Localise it to the TCG engine to remove another
architecture-agnostic piece from cpu-defs.h.

This follows on from a28177820a868eafda8fab007561cc19f41941f4, where
temp_buf was moved out of CPU_COMMON, obsoleting the need for the
super-early definition.

Backports commit 6e0b07306d1793e8402dd218d2e38a7377b5fc27 from qemu
2018-02-17 15:23:15 -05:00
Aurelien Jarno
93df793d4d
softmmu: provide tlb_vaddr_to_host function for user mode
To avoid too many #ifdefs in target code, provide a tlb_vaddr_to_host for
both user and softmmu modes. In the first case the function always
succeeds and just calls the g2h function.

Backports commit 2e83c496261c799b0fe6b8e18ac80cdc0a5c97ce from qemu
2018-02-17 15:22:43 -05:00
Paolo Bonzini
dc80b0893f
target-i386: introduce cpu_get_mem_attrs
Backports commit f794aa4a2fd772a3ec413c4e478cc23857cfee98 from qemu
2018-02-13 11:33:39 -05:00
Stefan Hajnoczi
fc7b95d06a
memory: replace cpu_physical_memory_reset_dirty() with test-and-clear
The cpu_physical_memory_reset_dirty() function is sometimes used
together with cpu_physical_memory_get_dirty(). This is not atomic since
two separate accesses to the dirty memory bitmap are made.

Turn cpu_physical_memory_reset_dirty() and
cpu_physical_memory_clear_dirty_range_type() into the atomic
cpu_physical_memory_test_and_clear_dirty().

Backports commit 03eebc9e3246b9b3f5925aa41f7dfd7c1e467875 from qemu
2018-02-13 11:25:45 -05:00
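
A standalone C11 sketch of an atomic test-and-clear over a small dirty bitmap, showing why the combined operation cannot race with concurrent setters; the bitmap layout and names are invented.

```c
/* Standalone sketch: an atomic test-and-clear over a dirty bitmap, so the
 * "was it dirty?" check and the clear cannot race with concurrent setters. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define BITS_PER_WORD 64

static _Atomic unsigned long long dirty_bitmap[4];

/* Atomically clear one page's dirty bit and report whether it was set. */
static bool test_and_clear_dirty_sketch(unsigned page)
{
    unsigned long long mask = 1ULL << (page % BITS_PER_WORD);
    unsigned long long old =
        atomic_fetch_and(&dirty_bitmap[page / BITS_PER_WORD], ~mask);
    return (old & mask) != 0;
}

int main(void)
{
    atomic_fetch_or(&dirty_bitmap[0], 1ULL << 5);      /* mark page 5 dirty */
    printf("first clear:  %d\n", test_and_clear_dirty_sketch(5)); /* 1 */
    printf("second clear: %d\n", test_and_clear_dirty_sketch(5)); /* 0 */
    return 0;
}
```
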
Stefan Hajnoczi
18ccd4b5be
memory: use atomic ops for setting dirty memory bits
Use set_bit_atomic() and bitmap_set_atomic() so that multiple threads
can dirty memory without race conditions.

Backports commit d114875b9a1c21162f69a12d72f69a22e7bab376 from qemu
2018-02-13 11:07:48 -05:00
Paolo Bonzini
6d509f7333
exec: only check relevant bitmaps for cleanliness
Most of the time, not all bitmaps have to be marked as dirty;
do not do anything if the interesting ones are already dirty.
Previously, any clean bitmap would have caused all the bitmaps to be
marked dirty.

In fact, unless running TCG most of the time bitmap operations need
not be done at all, because memory_region_is_logging returns zero.
In this case, skip the call to cpu_physical_memory_range_includes_clean
altogether as well.

With this patch, cpu_physical_memory_set_dirty_range is called
unconditionally, so there no longer needs to be a separate call to
xen_modified_memory.

Backports commit e87f7778b64d4a6a78e16c288c7fdc6c15317d5f from qemu
2018-02-13 11:03:26 -05:00
Paolo Bonzini
6bbfcf65e8
memory: do not touch code dirty bitmap unless TCG is enabled
cpu_physical_memory_set_dirty_lebitmap unconditionally syncs the
DIRTY_MEMORY_CODE bitmap. This however is unused unless TCG is
enabled.

Backports commit 9460dee4b2258e3990906fb34099481c8334c267 from qemu
2018-02-13 10:48:14 -05:00
Paolo Bonzini
1b1f82cef7
exec: invert return value of cpu_physical_memory_get_clean, rename
While it is obvious that cpu_physical_memory_get_dirty returns true even if
a single page is dirty, the same is not true for cpu_physical_memory_get_clean;
one would expect that it returns true only if all the pages are clean, but
it actually looks for even one clean page. (By contrast, the caller of that
function, cpu_physical_memory_range_includes_clean, has a good name).

To clarify, rename the function to cpu_physical_memory_all_dirty and return
true if _all_ the pages are dirty. This is the opposite of the previous
meaning, because "all are 1" is the same as "not (any is 0)", so we have to
modify cpu_physical_memory_range_includes_clean as well.

Backports commit 72b47e79cef36ed6ffc718f10e21001d7ec2a66f from qemu
2018-02-13 09:54:12 -05:00
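
A tiny standalone sketch of the renamed predicate and its caller, just to make the "all are 1" == "not (any is 0)" inversion concrete; the names are illustrative only.

```c
/* Standalone sketch: the renamed predicate returns true only when every
 * page in the range is dirty. */
#include <stdbool.h>
#include <stdio.h>

static bool all_dirty_sketch(const bool *dirty, int n)
{
    for (int i = 0; i < n; i++) {
        if (!dirty[i]) {
            return false;          /* one clean page is enough to say no */
        }
    }
    return true;
}

/* The caller keeps its old meaning by negating the new predicate. */
static bool range_includes_clean_sketch(const bool *dirty, int n)
{
    return !all_dirty_sketch(dirty, n);
}

int main(void)
{
    bool pages[4] = { true, true, false, true };
    printf("all dirty: %d, includes clean: %d\n",
           all_dirty_sketch(pages, 4), range_includes_clean_sketch(pages, 4));
    return 0;
}
```
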
Paolo Bonzini
f578c89e8b
cputlb: remove useless arguments to tlb_unprotect_code_phys, rename
These days modification of the TLB is done in notdirty_mem_write,
so the virtual address and env pointer are unnecessary.

The new name of the function, tlb_unprotect_code, is consistent with
tlb_protect_code.

Backports commit 9564f52da7eb061326956ed9a468935e3352512d from qemu
2018-02-13 09:07:41 -05:00
Lioncash
72c8e4d264
exec: move functions to translate-all.h
Remove them from the sundry exec-all.h header, since they are only used by
the TCG runtime in exec.c and user-exec.c.

Backports commit 1652b974766401743879d78f796f44b8929b0787 from qemu
2018-02-13 09:01:45 -05:00
Paolo Bonzini
c82ea2b20b
memory: track DIRTY_MEMORY_CODE in mr->dirty_log_mask
DIRTY_MEMORY_CODE is only needed for TCG. By adding it directly to
mr->dirty_log_mask, we avoid testing for TCG everywhere a region is
checked for the enabled/disabled state of dirty logging.

Backports commit 677e7805cf95f3b2bca8baf0888d1ebed7f0c606 from qemu
2018-02-13 08:55:42 -05:00
Paolo Bonzini
e3d1cef8fb
memory: prepare for multiple bits in the dirty log mask
When the dirty log mask also covers bits other than DIRTY_MEMORY_VGA,
some listeners may be interested in the overall zero/non-zero value of
the dirty log mask; others may be interested in the value of single bits.

For this reason, always call log_start/log_stop if bits have respectively
appeared or disappeared, and pass the old and new values of the dirty log
mask so that listeners can distinguish the kinds of change.

For example, KVM checks if dirty logging used to be completely disabled
(in log_start) or is now completely disabled (in log_stop). On the
other hand, Xen has to check manually if DIRTY_MEMORY_VGA changed,
since that is the only bit it cares about.

Backports commit b2dfd71c4843a762f2befe702adb249cf55baf66 from qemu
2018-02-13 08:52:23 -05:00
Paolo Bonzini
1551573acc
memory: differentiate memory_region_is_logging and memory_region_get_dirty_log_mask
For now memory regions only track DIRTY_MEMORY_VGA individually, but
this will change soon. To support this, split memory_region_is_logging
in two functions: one that returns a given bit from dirty_log_mask,
and one that returns the entire mask. memory_region_is_logging gets an
extra parameter so that the compiler flags misuse.

While VGA-specific users (including the Xen listener!) will want to keep
checking that bit, KVM and vhost check for "any bit except migration"
(because migration is handled via the global start/stop listener
callbacks).

Backports commit 2d1a35bef0ed96b3f23535e459c552414ccdbafd from qemu
2018-02-13 08:41:44 -05:00
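
A standalone sketch of the split API described above: one helper returns the whole mask, the other tests one client bit. The struct and enum here are invented stand-ins, not QEMU's MemoryRegion.

```c
/* Standalone sketch (assumed types): one helper returns the whole dirty-log
 * mask, the other tests a single client bit, so callers must say which
 * client they care about. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum { DIRTY_MEMORY_VGA = 0, DIRTY_MEMORY_CODE = 1, DIRTY_MEMORY_MIGRATION = 2 };

typedef struct {
    uint8_t dirty_log_mask;
} MemoryRegionSketch;

static uint8_t get_dirty_log_mask(const MemoryRegionSketch *mr)
{
    return mr->dirty_log_mask;
}

static bool is_logging(const MemoryRegionSketch *mr, unsigned client)
{
    return (get_dirty_log_mask(mr) & (1u << client)) != 0;
}

int main(void)
{
    MemoryRegionSketch mr = { .dirty_log_mask = 1u << DIRTY_MEMORY_VGA };
    printf("logging VGA: %d, logging CODE: %d, mask: 0x%x\n",
           is_logging(&mr, DIRTY_MEMORY_VGA),
           is_logging(&mr, DIRTY_MEMORY_CODE),
           get_dirty_log_mask(&mr));
    return 0;
}
```
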
Paolo Bonzini
96e7e32972
softmmu: support up to 12 MMU modes
At 8k per TLB (for 64-bit host or target), 8 or more modes
make the TLBs bigger than 64k, and some RISC TCG backends do
not like that. On the affected hosts, cut the TLB size in
half---there is still a measurable speedup on PPC with the
next patch.

Backports commit 1de29aef17a7d70dbc04a7fe51e18942e3ebe313 from qemu
2018-02-13 08:34:52 -05:00
Peter Maydell
e1a7c13fb4
target-arm: Add user-mode transaction attribute
Add a transaction attribute indicating that a memory access is being
done from user-mode (unprivileged). This corresponds to an equivalent
signal in ARM AMBA buses.

Backports commit 0995bf8cd91b81ec9c1078e37b808794080dc5c0 from qemu
2018-02-12 20:41:58 -05:00
Peter Maydell
6c8b7e0fed
target-arm: Honour NS bits in page tables
Honour the NS bit in ARM page tables:
* when adding entries to the TLB, include the Secure/NonSecure
transaction attribute
* set the NS bit in the PAR when doing ATS operations

Note that we don't yet correctly use the NSTable bit to
cause the page table walk itself to use the right attributes.

Backports commit 8bf5b6a9c1911d2c8473385fc0cebfaaeef42dbc from qemu
2018-02-12 20:36:35 -05:00
Peter Maydell
df0fac6b6a
exec.c: Add new address_space_ld*/st* functions
Add new address_space_ld*/st* functions which allow transaction
attributes and error reporting for basic load and stores. These
are named to be in line with the address_space_read/write/rw
buffer operations.

The existing ld/st*_phys functions are now wrappers around
the new functions.

Backports commit 500131154d677930fce35ec3a6f0b5a26bcd2973 from qemu
2018-02-12 19:22:47 -05:00
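
A hypothetical standalone sketch of the wrapper pattern described above: a load that reports a transaction result, with an old-style helper that passes unspecified attributes and ignores the result. Names and types are invented.

```c
/* Standalone sketch: a load function that reports a transaction result,
 * with the old-style helper kept as a thin wrapper. */
#include <stdint.h>
#include <stdio.h>

typedef enum { TX_OK = 0, TX_DECODE_ERROR = 1 } TxResultSketch;
typedef struct { int unspecified; } MemAttrsSketch;

static uint8_t fake_ram[16] = { 0x78, 0x56, 0x34, 0x12 };

static uint32_t address_space_ldl_sketch(uint64_t addr, MemAttrsSketch attrs,
                                         TxResultSketch *result)
{
    (void)attrs;
    if (addr + 4 > sizeof(fake_ram)) {
        if (result) *result = TX_DECODE_ERROR;
        return 0;
    }
    if (result) *result = TX_OK;
    return (uint32_t)fake_ram[addr] | (uint32_t)fake_ram[addr + 1] << 8 |
           (uint32_t)fake_ram[addr + 2] << 16 | (uint32_t)fake_ram[addr + 3] << 24;
}

/* Old-style wrapper: unspecified attributes, error ignored. */
static uint32_t ldl_phys_sketch(uint64_t addr)
{
    MemAttrsSketch unspecified = {0};
    return address_space_ldl_sketch(addr, unspecified, NULL);
}

int main(void)
{
    printf("ldl_phys_sketch(0) = 0x%x\n", (unsigned)ldl_phys_sketch(0));
    return 0;
}
```
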
Peter Maydell
b94c89e559
exec.c: Make address_space_rw take transaction attributes
Make address_space_rw take transaction attributes, rather
than always using the 'unspecified' attributes.

Backports commit 5c9eb0286c819c1836220a32f2e1a7b5004ac79a from qemu
2018-02-12 19:04:09 -05:00
Peter Maydell
933e3bd8d1
Add MemTxAttrs to the IOTLB
Add a MemTxAttrs field to the IOTLB, and allow target-specific
code to set it via a new tlb_set_page_with_attrs() function;
pass the attributes through to the device when making IO accesses.

Backports commit fadc1cbe85c6b032d5842ec0d19d209f50fcb375 from qemu
2018-02-12 18:38:38 -05:00
Peter Maydell
2aecce835b
Make CPU iotlb a structure rather than a plain hwaddr
Make the CPU iotlb a structure rather than a plain hwaddr;
this will allow us to add transaction attributes to it.

Backports commit e469b22ffda40188954fafaf6e3308f58d50f8f8 from qemu
2018-02-12 18:34:05 -05:00
Peter Maydell
825e74410f
memory: Replace io_mem_read/write with memory_region_dispatch_read/write
Rather than retaining io_mem_read/write as simple wrappers around
the memory_region_dispatch_read/write functions, make the latter
public and change all the callers to use them, since we need to
touch all the callsites anyway to add MemTxAttrs and MemTxResult
support. Delete io_mem_read and io_mem_write entirely.

(All the callers currently pass MEMTXATTRS_UNSPECIFIED
and convert the return value back to bool or ignore it.)

Backports commit 3b6434953934e6d4a776ed426d8c6d6badee176f from qemu
2018-02-12 17:26:52 -05:00
Peter Maydell
b2962f4613
memory: Define API for MemoryRegionOps to take attrs and return status
Define an API so that devices can register MemoryRegionOps whose read
and write callback functions are passed an arbitrary pointer to some
transaction attributes and can return a success-or-failure status code.
This will allow us to model devices which:
* behave differently for ARM Secure/NonSecure memory accesses
* behave differently for privileged/unprivileged accesses
* may return a transaction failure (causing a guest exception)
for erroneous accesses

This patch defines the new API and plumbs the attributes parameter through
to the memory.c public level functions io_mem_read() and io_mem_write(),
where it is currently dummied out.

The success/failure response indication is also propagated out to
io_mem_read() and io_mem_write(), which retain the old-style
boolean true-for-error return.

Backports commit cc05c43ad942165ecc6ffd39e41991bee43af044 from qemu
2018-02-12 17:17:27 -05:00
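
A standalone sketch of callbacks that take attributes and return a status code, with a device that rejects non-secure reads as in the first bullet above; every type and name is an invented stand-in rather than the real API.

```c
/* Standalone sketch (assumed names/types): read/write callbacks that take a
 * transaction-attributes argument and return a success/failure code. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { bool secure; bool user; } MemAttrsSketch;
typedef enum { TX_OK = 0, TX_ERROR = 1 } TxResultSketch;

typedef struct {
    TxResultSketch (*read)(void *opaque, uint64_t addr, uint64_t *data,
                           unsigned size, MemAttrsSketch attrs);
    TxResultSketch (*write)(void *opaque, uint64_t addr, uint64_t data,
                            unsigned size, MemAttrsSketch attrs);
} RegionOpsSketch;

/* A device that refuses non-secure reads, mimicking the "behave differently
 * for Secure/NonSecure accesses" case from the commit message. */
static TxResultSketch dev_read(void *opaque, uint64_t addr, uint64_t *data,
                               unsigned size, MemAttrsSketch attrs)
{
    (void)opaque; (void)addr; (void)size;
    if (!attrs.secure) {
        return TX_ERROR;
    }
    *data = 0xcafe;
    return TX_OK;
}

int main(void)
{
    RegionOpsSketch ops = { .read = dev_read };
    uint64_t val = 0;
    MemAttrsSketch ns = { .secure = false }, s = { .secure = true };
    printf("non-secure read: %d\n", ops.read(NULL, 0, &val, 4, ns));
    printf("secure read:     %d (val 0x%llx)\n",
           ops.read(NULL, 0, &val, 4, s), (unsigned long long)val);
    return 0;
}
```
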
Paolo Bonzini
a46accd252
exec: make iotlb RCU-friendly
After the previous patch, TLBs will be flushed on every change to
the memory mapping. This patch augments that with synchronization
of the MemoryRegionSections referred to in the iotlb array.

With this change, it is guaranteed that iotlb_to_region will access
the correct memory map, even once the TLB will be accessed outside
the BQL.

Backports commit 9d82b5a792236db31a75b9db5c93af69ac07c7c5 from qemu
2018-02-12 15:20:39 -05:00
Paolo Bonzini
3fbda890df
exec: introduce cpu_reload_memory_map
This for now is a simple TLB flush. This can change later for two
reasons:

1) an AddressSpaceDispatch will be cached in the CPUState object

2) it will not be possible to do tlb_flush once the TCG-generated code
runs outside the BQL.

Backports commit 76e5c76f2e2e0d20bab2cd5c7a87452f711654fb from qemu
2018-02-12 15:09:49 -05:00
Peter Maydell
9d02c52b8a
cpu_ldst.h: Allow NB_MMU_MODES to be 7
Support guest CPUs which need 7 MMU index values.
Add a comment about what would be required to raise the limit
further (trivial for 8, TCG backend rework for 9 or more).

Backports commit 8f3ae2ae2d02727f6d56610c09d7535e43650dd4 from qemu
2018-02-12 11:21:19 -05:00
Richard Henderson
232632e76c
tcg: Change translator-side labels to a pointer
This is improved type checking for the translators -- it's no longer
possible to accidentally swap arguments to the branch functions.

Note that the code generating backends still manipulate labels as int.

With notable exceptions, the scope of the change is just a few lines
for each target, so it's not worth building extra machinery to do this
change in per-target increments.

Backports commit 42a268c241183877192c376d03bd9b6d527407c7 from qemu
2018-02-09 14:17:56 -05:00
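
A small standalone sketch of why a distinct pointer type catches swapped arguments at compile time where a plain int would not; the names are invented.

```c
/* Standalone sketch: a distinct pointer type for labels makes the compiler
 * reject a swapped argument that a plain 'int' would accept silently. */
#include <stdio.h>

typedef struct LabelSketch {
    int id;
} LabelSketch;

/* Takes a register number and a label; with int labels, swapping the two
 * arguments would compile without complaint. With a pointer type it is a
 * compile-time error. */
static void gen_brcond_sketch(int reg, LabelSketch *label)
{
    printf("branch on reg %d to label %d\n", reg, label->id);
}

int main(void)
{
    LabelSketch l = { .id = 7 };
    gen_brcond_sketch(3, &l);
    /* gen_brcond_sketch(&l, 3);   <- now rejected by the compiler */
    return 0;
}
```
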