Commit Graph

96 Commits

Author SHA1 Message Date
Fam Zheng
2c1a72635d
memory: Implement memory_region_get_ram_addr with mr->ram_block
Backports commit 7ebb2745acbb8d910eab07dc5f0aa01a4457703c from qemu
2018-02-21 08:53:08 -05:00
Paolo Bonzini
aa5be4d6ca
target-arm: implement setend
Since this is not a high-performance path, just use a helper to
flip the E bit and force a lookup in the hash table, because the
flags have changed.

Backports commit 9886ecdf31165de2d4b8bccc1a220bd6ac8bc192 from qemu
2018-02-21 02:39:13 -05:00
Peter Maydell
6ae2357be6
target-arm: Give CPSR setting on 32-bit exception return its own helper
The rules for setting the CPSR on a 32-bit exception return are
subtly different from those for setting the CPSR via an instruction
like MSR or CPS. (In particular, in Hyp mode changing the mode bits
is not valid via MSR or CPS.) Split the exception-return case into
its own helper for setting CPSR, so we can eventually handle them
differently in the helper function.

Backports commit 235ea1f5c89abf30e452539b973b0dbe43d3fe2b from qemu
2018-02-20 22:08:35 -05:00
Yongbok Kim
46284f1a41
target-mips: implement R6 multi-threading
MIPS Release 6 provides multi-threading features which replace the
pre-R6 MT Module. CP0.Config3.MT is always 0 in R6; instead there is a new
CP0.Config5.VP (Virtual Processor) bit which indicates the presence of
multi-threading support, including the CP0.GlobalNumber register and the
DVP/EVP instructions.

Backports commit 01bc435b44b8802cc4697faa07d908684afbce4e from qemu
2018-02-20 22:02:40 -05:00
Sergey Fedorov
dfb78118ff
target-arm: Implement checking of fired watchpoint
ARM stops before an access to a location covered by a watchpoint. Also, a
QEMU watchpoint firing is not necessarily an architectural watchpoint match.
Unfortunately, it is hardly possible to ignore a fired watchpoint in the
debug exception handler. So move the watchpoint check from the debug
exception handler to the dedicated watchpoint-checking callback.

Backports commit 3826121d9298cde1d29ead05910e1f40125ee9b0 from qemu
2018-02-20 11:50:29 -05:00
Peter Maydell
d3eb5fb710
target-arm: Implement cpu_get_phys_page_attrs_debug
Implement cpu_get_phys_page_attrs_debug instead of cpu_get_phys_page_debug.

Backports commit 0faea0c7e6b729c64035b3591b184eeeeef6f1d4 from qemu
2018-02-18 22:15:50 -05:00
Peter Crosthwaite
b82e711a65
memory: Add address_space_init_shareable()
This will either create a new AS or return a pointer to an
already existing equivalent one, if we have already created
an AS for the specified root memory region.

The motivation is to reuse address spaces as much as possible.
It's going to be quite common that bus masters out in device land
have pointers to the same memory region for their mastering yet
each will need to create its own address space. Let the memory
API implement sharing for them.

Aside from the perf optimisations, this should reduce the amount
of redundant output on info mtree as well.

The returned value will be malloced, but the malloc will be
automatically freed when the AS runs out of refs.

Backports commit f0c02d15b57da6f5463e3768aa0cfeedccf4b8f4 from qemu
2018-02-18 00:18:21 -05:00
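
A minimal sketch of how a bus master might use the new call (illustrative only; the device name and the root region come from the caller, and the signature is assumed to match this backport):

```c
#include "exec/memory.h"

/* Sketch: obtain an AddressSpace for a shared root MemoryRegion.
 * If an equivalent AS already exists it is returned and its refcount
 * bumped; otherwise a new one is allocated and freed automatically
 * once the last reference is dropped. */
static AddressSpace *dma_master_as(MemoryRegion *root)
{
    return address_space_init_shareable(root, "dma-master");
}
```
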
Peter Maydell
1dfba71bef
exec.c: Add cpu_get_address_space()
Add a function to return the AddressSpace for a CPU based on
its numerical index. (Callers outside exec.c don't have access
to the CPUAddressSpace struct so can't just fish it out of the
CPUState struct directly.)

Backports commit 651a5bc03705102de519ebf079a40ecc1da991db from qemu
2018-02-17 23:22:23 -05:00
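
A sketch of how code outside exec.c might combine this with the related cpu_asidx_from_attrs() helper (the pairing is an assumption for illustration, not part of this commit):

```c
#include "qom/cpu.h"
#include "exec/memory.h"

/* Sketch: map transaction attributes to the CPU's address space index,
 * then fetch the matching AddressSpace by that numerical index. */
static AddressSpace *as_for_attrs(CPUState *cpu, MemTxAttrs attrs)
{
    int asidx = cpu_asidx_from_attrs(cpu, attrs);
    return cpu_get_address_space(cpu, asidx);
}
```
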
Peter Maydell
8edd6ffdfd
cputlb.c: Use correct address space when looking up MemoryRegionSection
When looking up the MemoryRegionSection for the new TLB entry in
tlb_set_page_with_attrs(), use cpu_asidx_from_attrs() to determine
the correct address space index for the lookup, and pass it into
address_space_translate_for_iotlb().

Backports commit d7898cda81b6efa6b2d7a749882695cdcf280eaa from qemu
2018-02-17 23:15:22 -05:00
Peter Maydell
f1b237236c
exec.c: Don't set cpu->as until cpu_address_space_init
Rather than setting cpu->as unconditionally in cpu_exec_init
(and then having target-i386 override this later), don't set
it until the first call to cpu_address_space_init.

This requires us to initialise the address space for
both TCG and KVM (KVM doesn't need the AS listener but
it does require cpu->as to be set).

For target CPUs which don't set up any address spaces (currently
everything except i386), add the default address_space_memory
in qemu_init_vcpu().

Backports commit 56943e8cc14b7eeeab67d1942fa5d8bcafe3e53f from qemu
2018-02-17 22:24:36 -05:00
Alvise Rigo
1e3e75fa44
target-arm: Use the right MMU index in arm_regime_using_lpae_format
arm_regime_using_lpae_format checks whether the LPAE extension is used
for stage 1 translation regimes. MMU indexes not exclusively of a stage 1
regime won't work with this method.

In case of ARMMMUIdx_S12NSE0 or ARMMMUIdx_S12NSE1, offset these values
by ARMMMUIdx_S1NSE0 to get the right index indicating a stage 1
translation regime.

Also rename the function to arm_s1_regime_using_lpae_format and update
the comments to reflect the change.

Backports commit deb2db996cbb9470b39ae1e383791ef34c4eb3c2 from qemu
2018-02-17 20:56:32 -05:00
Paolo Bonzini
1650af8c8b
memory: try to inline constant-length reads
memcpy can take a large amount of time for small reads and writes.
Handle the common case of reading s/g descriptors from memory (there
is no corresponding "write" case that is as common, because writes
often use address_space_st* functions) by inlining the relevant
parts of address_space_read into the caller.

Backports commit 3cc8f884996584630734a90c9b3c535af81e3c92 from qemu
2018-02-17 20:44:39 -05:00
Paolo Bonzini
712c300639
memory: inline a few small accessors
These are used in the address_space_* fast paths.

Backports commit 1619d1fe737d2af068aefe134386a69b76164794 from qemu
2018-02-17 20:35:28 -05:00
Paolo Bonzini
9a78c61145
memory: extract first iteration of address_space_read and address_space_write
We want to inline the case where there is only one iteration, because
then the compiler can also inline the memcpy. As a start, extract
everything after the first address_space_translate call.

Backports commit a203ac702e0720135fac8b1f2061d119814c1798 from qemu
2018-02-17 20:31:21 -05:00
Eduardo Habkost
26791ea61b
exec: Eliminate qemu_ram_free_from_ptr()
Replace qemu_ram_free_from_ptr() with qemu_ram_free().

The only difference between qemu_ram_free_from_ptr() and
qemu_ram_free() is that g_free_rcu() is used instead of
call_rcu(reclaim_ramblock). We can safely replace it because:

* RAM blocks allocated by qemu_ram_alloc_from_ptr() always have
RAM_PREALLOC set;
* reclaim_ramblock(block) will do nothing except g_free(block)
if RAM_PREALLOC is set in block->flags.

Backports commit a29ac16632aec6065c72985b9f7eeb1ca6fbef4a from qemu
2018-02-17 19:37:45 -05:00
Andrew Baumann
e1701b069f
target-arm: raise exception on misaligned LDREX operands
QEMU does not generally perform alignment checks. However, the ARM ARM
requires implementation of alignment exceptions for a number of cases
including LDREX, and Windows-on-ARM relies on this.

This change adds plumbing to enable alignment checks on loads using
MO_ALIGN, a do_unaligned_access hook to raise the exception (data
abort), and uses the new aligned loads in LDREX (for all but
single-byte loads).

Backports commit 30901475b91ef1f46304404ab4bfe89097f61b96 from qemu
2018-02-17 19:29:29 -05:00
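
A generic TCG sketch of requesting an alignment-checked load (not the actual LDREX change; addr, s and get_mem_index() are assumed to come from the surrounding target-arm translator):

```c
/* Sketch, inside a translate-time function: an aligned 32-bit load.
 * If the guest address is not 4-byte aligned, the softmmu path calls
 * the CPU's do_unaligned_access hook, which can raise a data abort. */
TCGv_i32 val = tcg_temp_new_i32();
tcg_gen_qemu_ld_i32(val, addr, get_mem_index(s), MO_ALIGN | MO_TEUL);
```
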
Dr. David Alan Gilbert
60975685ce
qemu_ram_block_by_name
Add a function to find a RAMBlock by name; use it in two
of the places that already open code that loop; we've
got another use later in postcopy.

Backports commit e3dd74934f2d2c8c67083995928ff68e8c1d0030 from qemu
2018-02-17 18:01:16 -05:00
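
A short sketch of the lookup (header location and block name are illustrative):

```c
#include "exec/ram_addr.h"   /* assumed location of the RAMBlock helpers */

/* Sketch: resolve a RAMBlock by its idstr and fail cleanly if the
 * name is unknown, as migration code must. */
static RAMBlock *find_block_or_null(const char *idstr)
{
    RAMBlock *block = qemu_ram_block_by_name(idstr);
    if (!block) {
        /* e.g. source and destination disagree on the RAM layout */
        return NULL;
    }
    return block;
}
```
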
Dr. David Alan Gilbert
cc088f84b5
qemu_ram_block_from_host
Postcopy sends RAMBlock names and offsets over the wire (since it can't
rely on the order of ramaddr being the same), and it starts out with
HVA fault addresses from the kernel.

qemu_ram_block_from_host translates an HVA into a RAMBlock, an offset
in the RAMBlock and the global ram_addr_t value.

Rewrite qemu_ram_addr_from_host to use qemu_ram_block_from_host.

Provide qemu_ram_get_idstr since it's the actual name text sent on the
wire.

Backports commit 422148d3e56c3c9a07c0cf36c1e0a0b76f09c357 from qemu
2018-02-17 17:54:03 -05:00
Peter Maydell
e1a4e4208f
pc: resizeable ROM blocks
This makes ROM blocks resizeable. This infrastructure is required for other
functionality we have queued.

Backports commit aaf03019175949eda5087329448b8a0033b89479 from qemu
2018-02-17 17:18:38 -05:00
Michael S. Tsirkin
dce38dd8eb
memory: add memory_region_set_size
Add an API to change a memory region's size.
It will be used internally for RAM resize.

Backports commit e7af4c67300b3f9382e96f7a6741a5992116b2d2 from qemu
2018-02-17 16:02:26 -05:00
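
A tiny sketch of the new API (the 2 MiB figure is just an example):

```c
#include "exec/memory.h"

/* Sketch: change a memory region's size after creation, as the RAM
 * resize code will do internally. */
static void resize_region_to_2mb(MemoryRegion *mr)
{
    memory_region_set_size(mr, 2 * 1024 * 1024);
}
```
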
Yongbok Kim
4a9cd8ec0b
target-mips: add PC, XNP reg numbers to RDHWR
Add Performance Counter (4) and XNP (5) register numbers to RDHWR.
Add check_hwrena() to simplify access control checks.
Add RDHWR support to microMIPS R6.

Backports commit b00c72180c36510bf9b124e190bd520e3b7e1358 from qemu
2018-02-17 15:24:13 -05:00
Sergey Fedorov
e4e0c75f0f
target-arm: Fix CPU breakpoint handling
A QEMU breakpoint match is not necessarily an architectural breakpoint
match. If an exception is generated unconditionally during translation,
it is hardly possible to ignore it in the debug exception handler.

Generate a call to a helper to check CPU breakpoints and raise an
exception only if any breakpoint matches architecturally.

Backports commit 5d98bf8f38c17a348ab6e8af196088cd4953acd0 from qemu
2018-02-17 15:24:02 -05:00
Richard Henderson
532877a366
tcg: Remove tcg_gen_code_search_pc
It's no longer used, so tidy up everything reached by it.

Backports commit 04fe64000162c45d8974da9ca4d266f8d0e67eb7 from qemu
2018-02-17 15:24:00 -05:00
Richard Henderson
a5ac288135
tcg: Remove gen_intermediate_code_pc
It is no longer used, so tidy up everything reached by it.
This includes the gen_opc_* arrays, the search_pc parameter
and the inline gen_intermediate_code_internal functions.

Backports commit 4e5e1215156662b2b153255c49d4640d82c5568b from qemu
2018-02-17 15:23:59 -05:00
Lioncash
f8d54a8f3c
Drop unused crypto source files
2018-02-17 15:23:57 -05:00
Richard Henderson
5637099383
target-*: Drop cpu_gen_code define
This symbol no longer exists.

Backports commit dc03246cc377268db63abc8c5663ef571aec2eea from qemu
2018-02-17 15:23:57 -05:00
Pavel Dovgaluk
011861cd0e
target-mips: improve exception handling
This patch improves exception handling in MIPS.
Instructions generate several types of exceptions.
When an exception is generated, it breaks the execution of the current
translation block. The exception-handling implementation does not
correctly restore icount for the instruction which caused the exception.
In most cases icount will be decreased by a value equal to the size of the
TB. This patch passes a pointer to the translation block internals to the
exception handler, which allows the icount value to be restored correctly.

Backports commit 9c708c7f9fbb813a3fac02f2728e51e62f2f5ffc from qemu
2018-02-17 15:23:53 -05:00
Pavel Dovgalyuk
4a05c9ee28
cpu-exec: introduce loop exit with restore function
This patch introduces a loop exit function, which also
restores the guest CPU state according to the value of the host
program counter.

Backports commit 1c3c8af1fb40a481c07749e0448644d9b7700415 from qemu
2018-02-17 15:23:38 -05:00
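
A hedged sketch of the intended use from a target helper (names are placeholders; retaddr would be GETPC() captured in the outermost helper):

```c
#include "exec/exec-all.h"

/* Sketch: leave the CPU loop while restoring guest state (including
 * icount) to the instruction that raised the exception, using the
 * host return address to locate it. This call does not return. */
static void do_raise_exception(CPUState *cs, uintptr_t retaddr)
{
    /* ... set cs->exception_index and any exception payload ... */
    cpu_loop_exit_restore(cs, retaddr);
}
```
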
Markus Armbruster
97ad660361
error: On abort, report where the error was created
This is particularly useful when we abort in error_propagate(),
because there the stack backtrace doesn't lead to where the error was
created. Looks like this:

Unexpected error in parse_block_error_action() at .../qemu/blockdev.c:322:
qemu-system-x86_64: -drive if=none,werror=foo: 'foo' invalid write error action
Aborted (core dumped)

Note: to get this example output, I monkey-patched drive_new() to pass
&error_abort to blockdev_init().

To keep the error handling boilerplate from growing even more, all
error_setFOO() become macros expanding into error_setFOO_internal()
with additional __FILE__, __LINE__, __func__ arguments. Not exactly
pretty, but it works.

The macro trickery breaks down when you take the address of an
error_setFOO(). Fortunately, we do that in just one place: qemu-ga's
Windows VSS provider and requester DLL wants to call
error_setg_win32() through a function pointer "to avoid linking glib
to the DLL". Use error_setg_win32_internal() there. The use of the
function pointer is already wrapped in a macro, so the churn isn't
bad.

Code size increases by some 35KiB for me (0.7%). Tolerable. Could be
less if we passed relative rather than absolute source file names to
the compiler, or forwent reporting __func__.

Backports commit 1e9b65bb1bad51735cab6c861c29b592dccabf0e from qemu
2018-02-17 15:23:37 -05:00
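
A simplified sketch of the macro pattern described above (not the exact QEMU macro body):

```c
#include "qapi/error.h"

/* Sketch: the call site's file, line and function are captured by the
 * macro and forwarded to the _internal variant, so error_abort can
 * report where the Error was created. */
#define my_error_setg(errp, fmt, ...)                                 \
    error_setg_internal((errp), __FILE__, __LINE__, __func__,         \
                        (fmt), ## __VA_ARGS__)
```
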
Peter Crosthwaite
a249923d4d
qom: Add recursive version of object_child_for_each
Useful for iterating through an entire QOM subtree.

Backports commit d714b8de7747f20fe42e5716d1d44f91e2b891f4 from qemu
2018-02-17 15:23:35 -05:00
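
For reference, the function this commit adds is object_child_foreach_recursive(); a hedged usage sketch:

```c
#include "qom/object.h"

/* Sketch: count every object below 'root' in the QOM tree.
 * Returning 0 from the callback keeps the walk going; a non-zero
 * return value would stop it early. */
static int count_cb(Object *child, void *opaque)
{
    (*(int *)opaque)++;
    return 0;
}

static int qom_subtree_size(Object *root)
{
    int n = 0;
    object_child_foreach_recursive(root, count_cb, &n);
    return n;
}
```
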
Peter Maydell
1b88e0e8c8
target-arm: Wire up HLT 0xf000 as the A64 semihosting instruction
For the A64 instruction set, the semihosting call instruction
is 'HLT 0xf000'. Wire this up to call do_arm_semihosting()
if semihosting is enabled.

Backports commit 8012c84ff92a36d05dfe61af9b24dd01a7ea25e4 from qemu
2018-02-17 15:23:34 -05:00
Lioncash
f81894dddb
exec: Add semihosting stubs
2018-02-17 15:23:33 -05:00
Peter Maydell
6e94bda144
cputlb: Add functions for flushing TLB for a single MMU index
Guest CPU TLB maintenance operations may be sufficiently
specialized to only need to flush TLB entries corresponding
to a particular MMU index. Implement cputlb functions for
this, to avoid the inefficiency of flushing TLB entries
which we don't need to.

Backports commit d7a74a9d4a68e27b3a8ceda17bb95cb0a23d8e4d from qemu
2018-02-17 15:23:31 -05:00
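
A sketch assuming the variadic, negative-terminated form of the new call (later QEMU versions switched to a bitmap argument); the MMU index values are placeholders:

```c
#include "exec/exec-all.h"

enum { FAKE_MMU_IDX_USER = 0, FAKE_MMU_IDX_KERNEL = 1 };  /* placeholders */

/* Sketch: flush only the TLB entries of two MMU indexes instead of
 * the whole TLB; the argument list ends with a negative terminator. */
static void flush_two_mmu_indexes(CPUState *cs)
{
    tlb_flush_by_mmuidx(cs, FAKE_MMU_IDX_USER, FAKE_MMU_IDX_KERNEL, -1);
}
```
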
Lioncash
cef0353be4
qemu-common: Add missing string util functions
2018-02-17 15:23:28 -05:00
Peter Maydell
6c24603b23
target-arm: Add the AArch64 view of the Secure physical timer
On CPUs with EL3, there are two physical timers, one for Secure and one
for Non-secure. Implement this extra timer and the AArch64 registers
which access it.

Backports commit b4d3978c2fdf944e428a46d2850dbd950b6fbe78 from qemu
2018-02-17 15:23:26 -05:00
Lioncash
d706680ad6
target-arm: Add the Hypervisor timer
Backports commit b0e66d95e4f587b5818d2760668301ee0871ba5e from qemu
2018-02-17 15:23:25 -05:00
Daniel P. Berrange
5019f39c15
crypto: introduce new module for computing hash digests
Introduce a new crypto/ directory that will (eventually) contain
all the cryptographic related code. This initially defines a
wrapper for initializing gnutls and for computing hashes with
gnutls. The former ensures that gnutls is guaranteed to be
initialized exactly once in QEMU regardless of CLI args. The
block quorum code currently fails to initialize gnutls so it
only works by luck, if VNC server TLS is not requested. The
hash APIs avoid the need to litter the rest of the code with
preprocessor checks and simplify callers by allocating the
correct amount of memory for the requested hash.

Backports commit ddbb0d09661f5fce21b335ba9aea8202d189b98e from qemu
2018-02-17 15:23:17 -05:00
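
A hedged sketch of the hash API this introduces (prototype and enum name assumed from the QEMU crypto module of that era):

```c
#include "crypto/hash.h"
#include "qapi/error.h"

/* Sketch: hash a buffer; the API allocates a hex digest string of the
 * right length for the requested algorithm. The caller frees it. */
static char *sha256_hex(const char *buf, size_t len, Error **errp)
{
    char *digest = NULL;

    if (qcrypto_hash_digest(QCRYPTO_HASH_ALG_SHA256, buf, len,
                            &digest, errp) < 0) {
        return NULL;   /* errp describes the failure */
    }
    return digest;
}
```
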
Jan Kiszka
b93c24ba31
memory: Add global-locking property to memory regions
This introduces the memory region property "global_locking". It is true
by default. By setting it to false, a device model can request BQL-free
dispatching of region accesses to its r/w handlers. The actual BQL
break-up will be provided in a separate patch.

Backports commit 196ea13104f802c508e57180b2a0d2b3418989a3 from qemu
2018-02-17 15:23:16 -05:00
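
A minimal sketch of how a device model might request BQL-free dispatch (the region comes from the device's own init code):

```c
#include "exec/memory.h"

/* Sketch: mark an MMIO region as not requiring the global lock, so its
 * read/write handlers are dispatched without the BQL and must provide
 * their own synchronization. */
static void make_region_lockless(MemoryRegion *mr)
{
    memory_region_clear_global_locking(mr);
}
```
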
Peter Maydell
8840d8370d
target-arm: Split DISAS_YIELD from DISAS_WFE
Currently we use DISAS_WFE for both WFE and YIELD instructions.
This is functionally correct because at the moment both of them
are implemented as "yield this CPU back to the top level loop so
another CPU has a chance to run". However it's rather confusing
that YIELD ends up calling HELPER(wfe), and if we ever want to
implement real behaviour for WFE and SEV it's likely to trip us up.

Split out the yield codepath to use DISAS_YIELD and a new
HELPER(yield) function, and have HELPER(wfe) call HELPER(yield).

Backports commit 049e24a191c212d9468db84169197887f2c91586 from qemu
2018-02-17 15:23:14 -05:00
Leon Alrae
57f57a9de4
target-mips: add ERETNC instruction and Config5.LLB bit
ERETNC is identical to ERET except that an ERETNC will not clear the LLbit
that is set by execution of an LL instruction, and thus when placed between
an LL and SC sequence, will never cause the SC to fail.

Presence of ERETNC is denoted by the Config5.LLB bit.

Backports commit ce9782f40ac16660ea9437bfaa2c9c34d5ed8110 from qemu
2018-02-13 13:33:37 -05:00
Yongbok Kim
922d30c448
target-mips: Misaligned memory accesses for MSA
MIPS SIMD Architecture vector loads and stores require misalignment support.
An MSA memory access should work as an atomic operation, so a vector store
access has to check the validity of all addresses when it spans two pages.

Separate helper functions are used for each data format, since the format is
known at translation time.
Use mmu_idx from cpu_mmu_index() instead of calculating it from hflags.
Remove the save_cpu_state() call in translation, because cpu_restore_state()
can be used on a fault as GETRA() is passed.

Backports commit adc370a48fd26b92188fa4848dfb088578b1936c from qemu
2018-02-13 13:27:31 -05:00
Lioncash
25a58231fc
target-i386: Use correct memory attributes for memory accesses
These include page table walks, SVM accesses and SMM state save accesses.

The bulk of the patch is obtained with

sed -i 's/\(\<[a-z_]*_phys\(_notdirty\)\?\>(cs\)->as,/x86_\1,/'

Backports commit b216aa6c0fcbaa8ff4128969c14594896a5485a4 from qemu
2018-02-13 11:54:12 -05:00
Stefan Hajnoczi
fc7b95d06a
memory: replace cpu_physical_memory_reset_dirty() with test-and-clear
The cpu_physical_memory_reset_dirty() function is sometimes used
together with cpu_physical_memory_get_dirty(). This is not atomic since
two separate accesses to the dirty memory bitmap are made.

Turn cpu_physical_memory_reset_dirty() and
cpu_physical_memory_clear_dirty_range_type() into the atomic
cpu_physical_memory_test_and_clear_dirty().

Backports commit 03eebc9e3246b9b3f5925aa41f7dfd7c1e467875 from qemu
2018-02-13 11:25:45 -05:00
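
A sketch of the atomic replacement for the old get-then-reset pair (the client choice is illustrative):

```c
#include "exec/memory.h"
#include "exec/ram_addr.h"

/* Sketch: atomically test and clear the VGA dirty bits for a range.
 * Returns true if any page in the range was dirty. */
static bool vga_range_was_dirty(ram_addr_t start, ram_addr_t length)
{
    return cpu_physical_memory_test_and_clear_dirty(start, length,
                                                    DIRTY_MEMORY_VGA);
}
```
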
Paolo Bonzini
1b1f82cef7
exec: invert return value of cpu_physical_memory_get_clean, rename
While it is obvious that cpu_physical_memory_get_dirty returns true even if
a single page is dirty, the same is not true for cpu_physical_memory_get_clean;
one would expect that it returns true only if all the pages are clean, but
it actually looks for even one clean page. (By contrast, the caller of that
function, cpu_physical_memory_range_includes_clean, has a good name.)

To clarify, rename the function to cpu_physical_memory_all_dirty and return
true if _all_ the pages are dirty. This is the opposite of the previous
meaning, because "all are 1" is the same as "not (any is 0)", so we have to
modify cpu_physical_memory_range_includes_clean as well.

Backports commit 72b47e79cef36ed6ffc718f10e21001d7ec2a66f from qemu
2018-02-13 09:54:12 -05:00
Paolo Bonzini
f578c89e8b
cputlb: remove useless arguments to tlb_unprotect_code_phys, rename
These days modification of the TLB is done in notdirty_mem_write,
so the virtual address and env pointer are unnecessary.

The new name of the function, tlb_unprotect_code, is consistent with
tlb_protect_code.

Backports commit 9564f52da7eb061326956ed9a468935e3352512d from qemu
2018-02-13 09:07:41 -05:00
Paolo Bonzini
1551573acc
memory: differentiate memory_region_is_logging and memory_region_get_dirty_log_mask
For now memory regions only track DIRTY_MEMORY_VGA individually, but
this will change soon. To support this, split memory_region_is_logging
in two functions: one that returns a given bit from dirty_log_mask,
and one that returns the entire mask. memory_region_is_logging gets an
extra parameter so that the compiler flags misuse.

While VGA-specific users (including the Xen listener!) will want to keep
checking that bit, KVM and vhost check for "any bit except migration"
(because migration is handled via the global start/stop listener
callbacks).

Backports commit 2d1a35bef0ed96b3f23535e459c552414ccdbafd from qemu
2018-02-13 08:41:44 -05:00
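
A sketch of the two call styles after the split (the clients chosen are just examples):

```c
#include "exec/memory.h"

/* Sketch: VGA-style users keep asking about one specific client bit. */
static bool logs_vga(MemoryRegion *mr)
{
    return memory_region_is_logging(mr, DIRTY_MEMORY_VGA);
}

/* Sketch: KVM/vhost-style users fetch the whole mask and ignore the
 * migration bit, which is handled by global listener callbacks. */
static bool logs_anything_but_migration(MemoryRegion *mr)
{
    uint8_t mask = memory_region_get_dirty_log_mask(mr);
    return (mask & ~(1 << DIRTY_MEMORY_MIGRATION)) != 0;
}
```
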
Greg Bellows
5ad81f095a
target-arm: Update interrupt handling to use target EL
Updated the interrupt handling to utilize and report through the target EL
exception field. This includes consolidating and cleaning up code where
needed. Target EL is now calculated once in arm_cpu_exec_interrupt() and
do_interrupt was updated to use the target_el exception field. The
necessary code from arm_excp_target_el() was merged in where needed and the
function removed.

Backports commit 012a906b19e99b126403ff4a257617dab9b34163 from qemu
2018-02-12 22:42:37 -05:00
Peter Maydell
171bf0fc3e
target-arm: Move setting of exception info into tlb_fill
Move the code which sets exception information out of
arm_cpu_handle_mmu_fault and into tlb_fill. tlb_fill
is the only caller which wants to raise_exception()
so it makes more sense for it to handle the whole of
the exception setup.

As part of this cleanup, move the user-mode-only
implementation function for the handle_mmu_fault CPU
method into cpu.c so we don't need to make it globally
visible, and rename the softmmu-only utility function
arm_cpu_handle_mmu_fault to arm_tlb_fill so it's clear
that it's not the same thing.

Backports commit 8c6084bf10fe721929ca94cf16acd6687e61d3ec from qemu
2018-02-12 22:28:34 -05:00
Peter Maydell
df0fac6b6a
exec.c: Add new address_space_ld*/st* functions
Add new address_space_ld*/st* functions which allow transaction
attributes and error reporting for basic load and stores. These
are named to be in line with the address_space_read/write/rw
buffer operations.

The existing ld/st*_phys functions are now wrappers around
the new functions.

Backports commit 500131154d677930fce35ec3a6f0b5a26bcd2973 from qemu
2018-02-12 19:22:47 -05:00
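
A sketch of one of the new loads with attributes and error reporting (the attrs value is illustrative):

```c
#include "exec/memory.h"

/* Sketch: a 32-bit load that carries transaction attributes and makes
 * the bus result visible instead of silently ignoring it. */
static uint32_t load_word_checked(AddressSpace *as, hwaddr addr, bool *ok)
{
    MemTxResult res;
    uint32_t val = address_space_ldl(as, addr, MEMTXATTRS_UNSPECIFIED, &res);

    *ok = (res == MEMTX_OK);
    return val;
}
```
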
Peter Maydell
933e3bd8d1
Add MemTxAttrs to the IOTLB
Add a MemTxAttrs field to the IOTLB, and allow target-specific
code to set it via a new tlb_set_page_with_attrs() function;
pass the attributes through to the device when making IO accesses.

Backports commit fadc1cbe85c6b032d5842ec0d19d209f50fcb375 from qemu
2018-02-12 18:38:38 -05:00
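
A sketch of how a target's tlb_fill path might hand attributes to the TLB (all parameter values are placeholders produced by the page-table walk):

```c
#include "exec/exec-all.h"

/* Sketch: install a TLB entry that carries MemTxAttrs (for example a
 * secure/non-secure marking) so IO accesses through the IOTLB see the
 * same attributes the translation produced. */
static void install_tlb_entry(CPUState *cs, target_ulong vaddr, hwaddr paddr,
                              MemTxAttrs attrs, int prot, int mmu_idx)
{
    tlb_set_page_with_attrs(cs, vaddr, paddr, attrs, prot, mmu_idx,
                            TARGET_PAGE_SIZE);
}
```
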