Commit Graph

154 Commits

Author SHA1 Message Date
Beata Michalska
0716794d86 Memory: Enable writeback for given memory region
Add an option to trigger memory writeback to sync a given memory region
with the corresponding backing store, in case one is available.
This extends the support for persistent memory, allowing syncing on-demand.

Backports commit 61c490e25e081af39ff40556f6c1229b8b011585 from qemu
2020-01-14 07:44:24 -05:00
Richard Henderson
07f30382c0 cputlb: Handle watchpoints via TLB_WATCHPOINT
The raising of exceptions from check_watchpoint, buried inside
of the I/O subsystem, is fundamentally broken. We do not have
the helper return address with which we can unwind guest state.

Replace PHYS_SECTION_WATCH and io_mem_watch with TLB_WATCHPOINT.
Move the call to cpu_check_watchpoint into the cputlb helpers
where we do have the helper return address.

This allows watchpoints on RAM to bypass the full i/o access path.

Backports commit 50b107c5d617eaf93301cef20221312e7a986701 from qemu
2020-01-14 06:58:33 -05:00
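A minimal sketch of the mechanism described above; the flag value and struct layout are illustrative stand-ins, not QEMU's exact definitions:

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_WATCHPOINT (1u << 3)     /* illustrative bit position */

    /* Stand-in for a CPUTLBEntry: the low bits of the address field
       carry TLB_* flags, so one mask test catches them all. */
    typedef struct {
        uintptr_t addr_read;
    } TLBEntrySketch;

    static bool must_take_slow_path(const TLBEntrySketch *e)
    {
        /* A set flag bit makes the fast-path address comparison fail,
           routing the access into the cputlb helper, which does have
           a return address and can raise the watchpoint exception
           while correctly unwinding guest state. */
        return (e->addr_read & TLB_WATCHPOINT) != 0;
    }
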
Tony Nguyen
103d6f51c8 memory: Single byte swap along the I/O path
Now that MemOp has been pushed down into the memory API, and
callers are encoding endianness, we can collapse byte swaps
along the I/O path into the accelerator and target independent
adjust_endianness.

Collapsing byte swaps along the I/O path enables additional endian
inversion logic, e.g. SPARC64 Invert Endian TTE bit, with redundant
byte swaps cancelling out.

Backports commit 9bf825bf3df4ebae3af51566c8088e3f1249a910 from qemu
2020-01-07 19:12:04 -05:00
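Why redundant swaps cancel, as a self-contained check (bswap32 here is a local helper, not QEMU's):

    #include <assert.h>
    #include <stdint.h>

    static uint32_t bswap32(uint32_t v)
    {
        return (v << 24) | ((v & 0xff00u) << 8) |
               ((v >> 8) & 0xff00u) | (v >> 24);
    }

    int main(void)
    {
        /* Two byte swaps on one path are a no-op; with endianness
           encoded in MemOp, adjust_endianness can emit zero or one
           swap instead of letting pairs like this survive. */
        uint32_t tte = 0x12345678u;
        assert(bswap32(bswap32(tte)) == tte);
        return 0;
    }
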
Tony Nguyen
da98d0da4e memory: Access MemoryRegion with endianness
Preparation for collapsing the two byte swaps adjust_endianness and
handle_bswap into the former.

Call memory_region_dispatch_{read|write} with endianness encoded into
the "MemOp op" operand.

This patch does not change any behaviour as
memory_region_dispatch_{read|write} is yet to handle the endianness.

Once it does handle endianness, callers with byte swaps can collapse
them into adjust_endianness.

Backports commit d5d680cacc66ef7e3c02c81dc8f3a34eabce6dfe from qemu
2020-01-07 18:54:11 -05:00
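A sketch of what "endianness encoded into MemOp" means; the bit layout below is illustrative, not QEMU's exact one:

    typedef enum {
        MO_8  = 0,
        MO_16 = 1,
        MO_32 = 2,
        MO_64 = 3,            /* access size = 1 << (op & MO_SIZE_MASK) */
        MO_BE = 1 << 2,       /* set: big-endian; clear: little-endian  */
    } MemOpSketch;

    #define MO_SIZE_MASK 0x3

    static unsigned memop_size(MemOpSketch op)  { return 1u << (op & MO_SIZE_MASK); }
    static int      memop_is_be(MemOpSketch op) { return (op & MO_BE) != 0; }

    /* A caller now encodes e.g. a 32-bit big-endian access as
       (MO_32 | MO_BE) in the "MemOp op" operand of
       memory_region_dispatch_{read|write}. */
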
Tony Nguyen
ab64c53bd0 exec: Access MemoryRegion with MemOp
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".

Convert interfaces by using no-op size_memop.

After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be converted into a "MemOp op".

As size_memop is a no-op, this patch does not change any behaviour.

Backports commit 3d9e7c3e7bf11962e1100d077e46f93f780b7310 from qemu
2020-01-07 18:25:19 -05:00
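The transitional no-op reads roughly like this (a sketch; the final size_memop encodes the size into the MemOp for real):

    typedef unsigned MemOp;   /* placeholder until MemOp carries a real encoding */

    static inline MemOp size_memop(unsigned size)
    {
        /* No-op while interfaces are converted: callers can already
           write size_memop(4) where they used to pass a raw
           "unsigned size", with no behaviour change. */
        return size;
    }
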
Wei Yang
813ec29d3c
exec.c: add a check between constants to see whether we could skip
The maximum level is defined as P_L2_LEVELS and skip is defined with 6
bits, which means if P_L2_LEVELS < (1 << 6), skip never exceeds the
boundary.

Since this check is between two constants, the compiler can use it to
optimize the code for the given configuration.

Backports commit 526ca2360ea1cd947f74c8c6c38b91b9d6fcfdb5 from qemu
2019-11-28 02:55:42 -05:00
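The shape of the check, sketched with an illustrative P_L2_LEVELS; since both operands are compile-time constants, the compiler deletes the branch on configurations where it cannot fire:

    #define P_L2_LEVELS 6          /* illustrative tree depth */

    static unsigned grow_skip(unsigned skip)   /* skip lives in a 6-bit field */
    {
        /* When P_L2_LEVELS < (1 << 6), skip can never overflow the
           field, and this guard is constant-folded away entirely. */
        if (P_L2_LEVELS >= (1 << 6) && skip + 1 >= (1u << 6)) {
            return skip;           /* would not fit; do not grow */
        }
        return skip + 1;
    }
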
Wei Yang
623632f3ac
exec.c: correct the maximum skip value during compact
skip is defined with 6 bits. So the maximum value should be (1 << 6).

Backports commit 26ca2075babd7775e246b9eb7da75d6de77eb658 from qemu
2019-11-28 02:55:31 -05:00
Wei Yang
2e55ddd339
exec.c: subpage->sub_section is already initialized to 0
In subpage_init(), we will set subpage->sub_section to
PHYS_SECTION_UNASSIGNED by subpage_register. Since
PHYS_SECTION_UNASSIGNED is defined to be 0, and we allocate subpage with
g_malloc0, this means subpage->sub_section is already initialized to 0.

This patch removes the redundant setup for a new subpage and also fixes
the code style.

Backports commit b797ab1a15ba8d2b2fc4ec3e1f24d755f6855d05 from qemu
2019-11-28 02:55:23 -05:00
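The whole argument in two lines; g_malloc0 is the real glib call, the struct is a simplified stand-in:

    #include <glib.h>
    #include <stdint.h>

    #define PHYS_SECTION_UNASSIGNED 0

    typedef struct {
        uint16_t sub_section[512];     /* simplified subpage payload */
    } SubpageSketch;

    static SubpageSketch *subpage_alloc(void)
    {
        /* g_malloc0 returns zeroed memory and PHYS_SECTION_UNASSIGNED
           is 0, so every sub_section[] entry already reads as
           "unassigned"; the removed loop re-wrote the same zeros. */
        return g_malloc0(sizeof(SubpageSketch));
    }
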
Wei Yang
ef3cf096e7
exec.c: get nodes_nb_alloc with one MAX calculation
The purpose of these two MAX here is to get the maximum of these three
variables:

A: map->nodes_nb + nodes
B: map->nodes_nb_alloc
C: alloc_hint

We can write it like MAX(A, B, C). Since the if condition says A > B,
this means MAX(A, B, C) = MAX(A, C).

This patch just simplifies the calculation a bit.

Backports commit c95cfd040078db8017f74fd3a4d6f798385d960c from qemu
2019-11-28 02:55:16 -05:00
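The simplified calculation, sketched with stand-in types (the real structure is PhysPageMap in exec.c):

    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    typedef struct {
        unsigned nodes_nb;         /* nodes currently in use   */
        unsigned nodes_nb_alloc;   /* nodes currently reserved */
    } MapSketch;

    static void nodes_reserve(MapSketch *map, unsigned nodes, unsigned alloc_hint)
    {
        /* Inside the branch we know A > B, so MAX(A, B, C) == MAX(A, C)
           and a single MAX replaces the two chained ones. */
        if (map->nodes_nb + nodes > map->nodes_nb_alloc) {
            map->nodes_nb_alloc = MAX(map->nodes_nb + nodes, alloc_hint);
            /* ... grow the node array to nodes_nb_alloc entries ... */
        }
    }
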
Wei Yang
4274315278
exec.c: replace hwaddr with uint64_t for better understanding
The argument *nb* of phys_page_set() and phys_page_set_level() stands
for the number of pages to set, not a hardware address.

It is more appropriate to use uint64_t instead of hwaddr for its
type.

Backports commit 56b15076805a29673c1a90ea9c3ebef25bfcc912 from qemu
2019-11-28 02:55:08 -05:00
Wei Yang
20994eca53
exec.c: refactor function flatview_add_to_dispatch()
flatview_add_to_dispatch() registers pages based on the layout of
*section*, which may look like this:

|s|PPPPPPP|s|

where s stands for subpage and P for page.

The procedure of this function could be described as:

- register first subpage
- register page
- register last subpage

This means the procedure could be simplified into these three steps
instead of a loop iteration.

This patch refactors the function into three corresponding steps and
adds comments to clarify them.

Backports commit 494d199727ba248c96326b4e1c97f86eb11a5ec7 from qemu
2019-03-11 17:00:46 -04:00
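The three steps, sketched with illustrative helpers and a fixed page size (the real code operates on MemoryRegionSections and the dispatch table):

    #include <stdint.h>

    #define PAGE_SZ 4096u

    static void register_subpage_sk(uint64_t s, uint64_t e) { (void)s; (void)e; }
    static void register_page_sk(uint64_t s, uint64_t e)    { (void)s; (void)e; }

    /* |s|PPPPPPP|s| : head subpage, whole pages, tail subpage. */
    static void add_to_dispatch_sk(uint64_t start, uint64_t end)
    {
        uint64_t first_full = (start + PAGE_SZ - 1) & ~(uint64_t)(PAGE_SZ - 1);
        uint64_t last_full  = end & ~(uint64_t)(PAGE_SZ - 1);

        if (start < first_full && start < end) {          /* 1. head subpage */
            register_subpage_sk(start, first_full < end ? first_full : end);
        }
        if (first_full < last_full) {                     /* 2. full pages   */
            register_page_sk(first_full, last_full);
        }
        if (last_full < end && last_full >= first_full) { /* 3. tail subpage */
            register_subpage_sk(last_full, end);
        }
    }
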
Lioncash
3fa54df972
exec.c: Use correct attrs in cpu_memory_rw_debug()
In the softmmu version of cpu_memory_rw_debug(), we ask the
CPU for the attributes to use for the virtual memory access,
and we correctly use those to identify the address space
index. However, we were not passing them in to the
address_space_write_rom() and address_space_rw() functions.

The effect of this was that a memory access from the gdbstub
to a device which had behaviour that was sensitive to the
memory attributes (such as some ARMv8M NVIC registers) was
incorrectly always performed as if non-secure, rather than
using the right security state for the CPU's current state.

Fixes: https://bugs.launchpad.net/qemu/+bug/1812091

Backports commit ea7a5330b79523540ba776c529b09dc8cf3fa0c5 from qemu
2019-01-29 17:05:50 -05:00
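The fix in miniature, with simplified stand-ins for the QEMU types; the point is only that the same attrs must both select the address space and travel with the access:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { unsigned secure : 1; } MemTxAttrsSk;
    typedef struct { int id; } AddressSpaceSk;

    static int asidx_from_attrs(MemTxAttrsSk a) { return a.secure ? 1 : 0; }
    static void as_rw(AddressSpaceSk *as, uint64_t addr, MemTxAttrsSk attrs,
                      void *buf, uint64_t len, bool is_write)
    { (void)as; (void)addr; (void)attrs; (void)buf; (void)len; (void)is_write; }

    static void debug_rw_sk(AddressSpaceSk *spaces, uint64_t addr,
                            void *buf, uint64_t len, bool is_write)
    {
        MemTxAttrsSk attrs = { .secure = 1 };  /* obtained from the CPU */
        int asidx = asidx_from_attrs(attrs);

        /* Bug: attrs chose the index but the access itself went out
           unspecified (so, always non-secure). Fix: forward them. */
        as_rw(&spaces[asidx], addr, attrs, buf, len, is_write);
    }
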
Lioncash
3a0ab1a64a
Partial backport of: exec.c: Handle IOMMUs in address_space_translate_for_iotlb()
We just want the parameter changes here.

Partial backport of commit 1f871c5e6b0f30644a60a81a6a7aadb3afb030ac from
qemu
2018-11-16 21:24:55 -05:00
Emilio G. Cota
dfb3954571
exec: introduce tlb_init
Paves the way for the addition of a per-TLB lock.

Backports commit 5005e2537d090bee87aca3b924dcd17920fd146a from qemu
2018-10-23 14:41:29 -04:00
Junyan He
6ead2c3d1f
memory, exec: Expose all memory block related flags.
We need to use these flags in other files, not just in exec.c.
For example, RAM_SHARED should be used when creating a RAM block from a file.
We expose them in exec/memory.h.

Backports commit b0e5de93811077254a536c23b713b49e12efb742 from qemu
2018-08-22 13:00:05 -04:00
Eric Auger
7ecf09a13d
exec: Fix MAP_RAM for cached access
When an IOMMUMemoryRegion is in front of a virtio device,
address_space_cache_init does not set cache->ptr as the memory
region is not RAM. However when the device performs an access,
we end up in glue() which performs the translation and then uses
MAP_RAM. The latter uses the unset ptr and returns a wrong value,
which leads to a SIGSEGV in address_space_lduw_internal_cached_slow,
for instance.

In slow path cache->ptr is NULL and MAP_RAM must redirect to
qemu_map_ram_ptr((mr)->ram_block, ofs).

As MAP_RAM, IS_DIRECT and INVALIDATE are the same in _cached_slow
and non-cached mode, let's remove those macros.

This fixes the use cases featuring vIOMMU (Intel and ARM SMMU)
which lead to a SIGSEGV.

Fixes: 48564041a73a (exec: reintroduce MemoryRegion caching)

Backports part of commit a99761d3c85679da380c0f597468acd3dc1b53b3 from
qemu
2018-07-03 01:11:12 -04:00
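The slow-path behaviour described above, sketched with stand-in types (names are illustrative):

    #include <stdint.h>

    typedef struct { int id; } RAMBlockSk;
    typedef struct { RAMBlockSk *ram_block; } MemoryRegionSk;
    typedef struct { void *ptr; } CacheSk;

    static void *qemu_map_ram_ptr_sk(RAMBlockSk *rb, uint64_t ofs)
    { (void)rb; (void)ofs; return 0; /* stands in for the real mapper */ }

    static void *map_ram_sk(CacheSk *cache, MemoryRegionSk *mr, uint64_t ofs)
    {
        if (!cache->ptr) {
            /* Slow path (region behind an IOMMU): cache->ptr was never
               set by address_space_cache_init, so map through the RAM
               block instead of dereferencing NULL (the SIGSEGV). */
            return qemu_map_ram_ptr_sk(mr->ram_block, ofs);
        }
        return (char *)cache->ptr + ofs;   /* fast path: direct RAM */
    }
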
Peter Maydell
0a23259560
exec.c: Use stn_p() and ldn_p() instead of explicit switches
Now we have stn_p() and ldn_p() we can use them in various
functions in exec.c that used to have their own switch-on-size code.

Backports commit 6d3ede5410e05c5f6221dab1daf99164fd6bf879 from qemu
2018-06-15 12:20:59 -04:00
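Host-endian miniatures of the two helpers, to show the shape of the API; QEMU's ldn_p/stn_p additionally handle target endianness (assume a little-endian host for these stand-ins):

    #include <stdint.h>
    #include <string.h>

    static uint64_t ldn_p_sk(const void *p, int size)   /* size: 1, 2, 4 or 8 */
    {
        uint64_t v = 0;
        memcpy(&v, p, size);   /* replaces: switch (size) { case 1: ldub_p ... } */
        return v;
    }

    static void stn_p_sk(void *p, int size, uint64_t v)
    {
        memcpy(p, &v, size);
    }
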
Peter Maydell
cb879422e9
exec.c: Don't accidentally sign-extend 4-byte loads in subpage_read()
In subpage_read() we perform a load of the data into a local buffer
which we then access using ldub_p(), lduw_p(), ldl_p() or ldq_p()
depending on its size, storing the result into the uint64_t *data.
Since ldl_p() returns an 'int', this means that for the 4-byte
case we will sign-extend the data, whereas for 1 and 2 byte
reads we zero-extend it.

This ought not to matter since the caller will likely ignore values in
the high bytes of the data, but add a cast so that we're consistent.

Backports commit 22672c6075a16d1998e37686f02ed4bd2fb30f78 from qemu
2018-06-15 12:18:40 -04:00
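The inconsistency and the cast, sketched; ldl_p_sk models ldl_p's signed return type:

    #include <stdint.h>
    #include <string.h>

    static int32_t ldl_p_sk(const void *p)
    {
        int32_t v;
        memcpy(&v, p, 4);
        return v;
    }

    static void subpage_read_sk(const void *buf, uint64_t *data)
    {
        /* Without the cast, 0x80000000 becomes 0xffffffff80000000;
           the cast zero-extends, matching the 1- and 2-byte cases. */
        *data = (uint32_t)ldl_p_sk(buf);
    }
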
Peter Maydell
7a6ae26346
cputlb: Pass cpu_transaction_failed() the correct physaddr
The API for cpu_transaction_failed() says that it takes the physical
address for the failed transaction. However we were actually passing
it the offset within the target MemoryRegion. We don't currently
have any target CPU implementations of this hook that require the
physical address; fix this bug so we don't get confused if we ever
do add one.

Backports commit 2d54f19401bc54b3b56d1cc44c96e4087b604b97 from qemu
2018-06-15 12:03:23 -04:00
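The reconstruction, sketched; the field names follow QEMU's MemoryRegionSection:

    #include <stdint.h>

    typedef struct {
        uint64_t offset_within_address_space;
        uint64_t offset_within_region;
    } SectionSk;

    static uint64_t full_physaddr(const SectionSk *section, uint64_t mr_offset)
    {
        /* cpu_transaction_failed() wants the physical address of the
           failed transaction, not the offset inside the MemoryRegion,
           so rebuild it from the section the region is mapped by. */
        return mr_offset + section->offset_within_address_space
                         - section->offset_within_region;
    }
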
Peter Maydell
76fd93726c
exec.c: Initialize sa_flags passed to sigaction()
Coverity points out that in the user-only version of cpu_abort() we
call sigaction() with a partially initialized struct sigaction
(CID 1005351). Correct the omission.

Backports commit 8347c18506c3f8619527d19134cb4aac071dc54a from qemu
2018-06-07 11:47:08 -04:00
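The omission and its fix, sketched as a plain POSIX snippet:

    #include <signal.h>
    #include <string.h>

    static void install_abort_handler(void (*handler)(int))
    {
        struct sigaction act;

        memset(&act, 0, sizeof(act));   /* the fix: sa_flags is 0, not junk */
        act.sa_handler = handler;
        sigemptyset(&act.sa_mask);
        sigaction(SIGABRT, &act, NULL);
    }
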
Bharata B Rao
309b85548f
cpu: Convert cpu_index into a bitmap
Currently CPUState::cpu_index is monotonically increasing and a newly
created CPU always gets the next higher index. The next available
index is calculated by counting the existing number of CPUs. This is
fine as long as we only add CPUs, but there are architectures which
are starting to support CPU removal, too. For an architecture like PowerPC
which derives its CPU identifier (device tree ID) from cpu_index, the
existing logic of generating cpu_index values causes problems.

With the currently proposed method of handling vCPU removal by parking
the vCPU fd in QEMU
(Ref: http://lists.gnu.org/archive/html/qemu-devel/2015-02/msg02604.html),
generating cpu_index this way will not work for PowerPC.

This patch changes the way cpu_index is handed out by maintaining
a bit map of the CPUs that tracks both addition and removal of CPUs.

The CPU bitmap allocation logic is part of cpu_exec_init(), which is
called by instance_init routines of various CPU targets. Newly added
cpu_exec_exit() API handles the deallocation part and this routine is
called from generic CPU instance_finalize.

Note: This new CPU enumeration is for !CONFIG_USER_ONLY only.
CONFIG_USER_ONLY continues to have the old enumeration logic.

Backports commit b7bca7333411bd19c449147e8202ae6b0e4a8e09 from qemu
2018-03-21 08:06:07 -04:00
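The idea in miniature; QEMU uses its bitmap helpers over max_cpus, this stand-in uses a plain bool array:

    #include <stdbool.h>

    #define MAX_CPUS_SK 256            /* stands in for max_cpus */

    static bool cpu_index_used[MAX_CPUS_SK];

    static int cpu_index_alloc(void)   /* called from cpu_exec_init() */
    {
        for (int i = 0; i < MAX_CPUS_SK; i++) {
            if (!cpu_index_used[i]) {
                cpu_index_used[i] = true;
                return i;   /* lowest free index; reusable after removal,
                               unlike a monotonically increasing counter */
            }
        }
        return -1;          /* every cpu_index is handed out */
    }

    static void cpu_index_free(int index)  /* called from cpu_exec_exit() */
    {
        cpu_index_used[index] = false;
    }
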
Bharata B Rao
e373c001fa
cpu: Add Error argument to cpu_exec_init()
Add an Error argument to cpu_exec_init() to let users collect the
error. This is in preparation to change the CPU enumeration logic
in cpu_exec_init(). With the new enumeration logic, cpu_exec_init()
can fail if cpu_index values corresponding to max_cpus have already
been handed out.

Since all current callers of cpu_exec_init() are from instance_init,
use the error_abort Error argument to abort in case of an error.

Backports commit 5a790cc4b942e651fec7edc597c19b637fad5a76 from qemu
2018-03-21 07:50:33 -04:00
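How the failure propagates, sketched against the bitmap allocator above; the Error machinery below is a stand-in modeled on QEMU's error_setg/error_abort convention:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int cpu_index; } CPUStateSk;
    typedef struct { const char *msg; } ErrorSk;

    static ErrorSk *error_abort_sk;    /* marker singleton, as in QEMU */

    static void error_setg_sk(ErrorSk **errp, const char *msg)
    {
        if (errp == &error_abort_sk) { /* instance_init callers land here */
            fprintf(stderr, "%s\n", msg);
            abort();
        }
        (void)msg;                     /* real code would store the error */
    }

    extern int cpu_index_alloc(void);  /* the bitmap allocator sketched above */

    static void cpu_exec_init_sk(CPUStateSk *cpu, ErrorSk **errp)
    {
        int index = cpu_index_alloc();
        if (index < 0) {
            error_setg_sk(errp, "cpu_index values for max_cpus exhausted");
            return;
        }
        cpu->cpu_index = index;
    }
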
Igor Mammedov
f8eeacb280
Use cpu_create(type) instead of cpu_init(cpu_model)
With all targets defining CPU_RESOLVING_TYPE, refactor
cpu_parse_cpu_model(type, cpu_model) to parse_cpu_model(cpu_model)
so that callers won't have to know the internal resolving CPU
type. Place it in exec.c so it can be called from both the
target-independent vl.c and *-user/main.c.

That allows us to stop abusing the cpu type from
MachineClass::default_cpu_type
as the resolver class in vl.c, which was a confusing part of
cpu_parse_cpu_model().

Also, with the new parse_cpu_model(), the last users of cpu_init()
in null-machine.c and the bsd/linux-user targets can be switched
to the cpu_create() API, and the cpu_init() API will be removed by a
follow-up patch.

With no users left, remove the MachineState::cpu_model field;
new code should use MachineState::cpu_type instead and
leave cpu_model parsing to generic code in vl.c.

Backports commit 2278b93941d42c30e2950d4b8dff4943d064e7de from qemu
2018-03-20 14:20:30 -04:00
Lioncash
a81439c7ca
exec: Drop unnecessary code for unicorn
The dirty memory code isn't strictly necessary
2018-03-12 10:11:46 -04:00
Alexey Kardashevskiy
d9bc1bcc8c
memory: Rename mem_begin/mem_commit/mem_add helpers
This renames some helpers to reflect better what they do.

This should cause no behavioural change.

Backports commit 8629d3fcb77e9775e44d9051bad0fb5187925eae from qemu
2018-03-11 21:36:50 -04:00
Alexey Kardashevskiy
489abbbd8b
memory: Cleanup after switching to FlatView
We store AddressSpaceDispatch* in FlatView anyway so there is no need
to carry it from mem_add() to register_subpage/register_multipage.

This should cause no behavioural change.

Backports commit 9950322a593ff900a860fb52938159461798a831 from qemu
2018-03-11 21:27:51 -04:00
Alexey Kardashevskiy
aa2b76b4e8
memory: Switch memory from using AddressSpace to FlatView
FlatViews will be shared between AddressSpaces, so subpage_t
and MemoryRegionSection cannot store an AS anymore; hence this change.

In particular, for:

 typedef struct subpage_t {
     MemoryRegion iomem;
-    AddressSpace *as;
+    FlatView *fv;
     hwaddr base;
     uint16_t sub_section[];
 } subpage_t;

 struct MemoryRegionSection {
     MemoryRegion *mr;
-    AddressSpace *address_space;
+    FlatView *fv;
     hwaddr offset_within_region;
     Int128 size;
     hwaddr offset_within_address_space;
     bool readonly;
 };

This should cause no behavioural change.

Backports commit 166206845f7fd75e720e6feea0bb01957c8da07f from qemu
2018-03-11 21:21:37 -04:00
Alexey Kardashevskiy
06b53d8e40
memory: Remove AddressSpace pointer from AddressSpaceDispatch
AS in ASD is only used to pass the AS from mem_begin() to register_subpage()
to store it in MemoryRegionSection; we can do this directly now.

This should cause no behavioural change.

Backports commit c7752523787dc148f5ee976162e80ab594c386a1 from qemu
2018-03-11 20:45:43 -04:00
Lioncash
1591f208c0
memory: Move AddressSpaceDispatch from AddressSpace to FlatView
As we are going to share FlatViews between AddressSpaces,
and AddressSpaceDispatch is a structure to perform quick lookups
in a FlatView, this moves ASD to FlatView.

After the previously open-coded ASD rendering, we can also remove
as->next_dispatch, as the new FlatView pointer is stored
on a stack and set to an AS atomically.

flatview_destroy() is executed under RCU instead of
address_space_dispatch_free() now.

This makes mem_begin/mem_commit work with ASD and mem_add with FV,
as later on mem_add will take FV as an argument anyway.
This should cause no behavioural change.

Backports commit 66a6df1dc6d5b28cc3e65db0d71683fbdddc6b62 from qemu
2018-03-11 20:40:24 -04:00
Peter Xu
cd93d4eb52
cpu: suffix cpu address spaces with cpu index
Rename the CPU address spaces so that their names are unique when
there is more than one.

Backports commit 87a621d857be1b2b3dd1d0847ca311a863dbcb53 from qemu
2018-03-05 14:41:25 -05:00
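A sketch assuming the cpu-memory naming scheme; g_strdup_printf is the real glib call:

    #include <glib.h>

    static char *cpu_as_name(int cpu_index)
    {
        /* Embedding cpu_index keeps per-CPU address space names unique. */
        return g_strdup_printf("cpu-memory-%d", cpu_index);  /* caller g_free()s */
    }
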
Peter Xu
1bb34aadf9
cpu: refactor cpu_address_space_init()
Normally we create an address space for a CPU and pass that address
space into the function. Let's just do it inside, to unify address space
creation. It'll simplify my next patch, which renames those address spaces.

Backports commit 80ceb07a83375e3a0091591f96bd47bce2f640ce from qemu
2018-03-05 14:39:25 -05:00
Richard Henderson
28061c2e59
qom: Introduce CPUClass.tcg_initialize
Move target cpu tcg initialization to common code,
called from cpu_exec_realizefn.

Backports commit 55c3ceef61fcf06fc98ddc752b7cce788ce7680b from qemu
2018-03-05 09:49:26 -05:00
Alexey Kardashevskiy
e723b8dd49
memory: Open code FlatView rendering
We are going to share FlatViews between AddressSpaces, and per-AS
memory listeners won't suit the purpose anymore, so open code
the dispatch tree rendering.

Since there is a good chance that dispatch_listener was the only
listener, this avoids address_space_update_topology_pass() if there are
no registered listeners; this should improve starting time.

This should cause no behavioural change.

Backports commit 1b04a1580917d9e41fd37ca62cbff9b4bf061e96 from qemu
2018-03-04 02:06:48 -05:00
Alexey Kardashevskiy
f74fcb194f
exec: Explicitly export target AS from address_space_translate_internal
This adds an AS** parameter to address_space_do_translate()
to make it easier for the next patch to share FlatViews.

This should cause no behavioural change.

Backports commit 6424975ce912061ac9e4a375237b0c89d83d93e3 from qemu
2018-03-04 01:56:13 -05:00
Anthony PERARD
567bc68803
exec: Add lock parameter to qemu_ram_ptr_length
Commit 04bf2526ce87f21b32c9acba1c5518708c243ad0 (exec: use
qemu_ram_ptr_length to access guest ram) started using qemu_ram_ptr_length
instead of qemu_map_ram_ptr, but when used with Xen, the behavior of
the two functions differs. They both call xen_map_cache, but one with
"lock", meaning the mapping of guest memory is never released
implicitly, and the other without, meaning the mapping can be
released later, when needed.

In the context of address_space_{read,write}_continue, the pointer to those
mappings should not be locked because it is used immediately and never
used again.

The lock parameter makes it explicit in which context qemu_ram_ptr_length
is called.

Backports commit f5aa69bdc3418773f26747ca282c291519626ece from qemu
2018-03-04 01:23:14 -05:00
Pranith Kumar
d0a70720a3
Revert "exec.c: Fix breakpoint invalidation race"
Now that we have proper locking after MTTCG patches have landed, we
can revert the commit. This reverts commit

a9353fe897ca2687e5b3385ed39e3db3927a90e0.

Backports commit 406bc339b0505fcfc2ffcbca1f05a3756e338a65 from qemu
2018-03-03 22:14:35 -05:00
Yang Zhong
d70c141675
tcg: move page_size_init() function
translate-all.c will be disabled if TCG is disabled in the build,
so the page_size_init() function and related variables are moved
to the exec.c file.

Backports commit a0be0c585f5dcc4d50a37f6a20d3d625c5ef3a2c from qemu
2018-03-03 21:30:08 -05:00
Peter Xu
fb8d3e2f6a
exec: simplify phys_page_find() params
It really only plays with the dispatchers, so the parameter list does
not need that complexity. This helps readability, at least.

Backports commit 003a0cf2cd1828a1141a874428571267b117f765 from qemu
2018-03-03 14:28:25 -05:00
Peter Xu
fce1b469e5
memory: tune last param of iommu_ops.translate()
This patch converts the old "is_write" bool into IOMMUAccessFlags. The
difference is that "is_write" can only express read or write, but
sometimes what we really want is "none" (neither read nor write).
Replay is a good example: during replay, we should not check any RW
permission bits since it is not an actual IO at all.

Backports commit bf55b7afce53718ef96f4e6616da62c0ccac37dd from qemu
2018-03-02 18:59:12 -05:00
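The flags this converts to, mirroring QEMU's IOMMUAccessFlags; IOMMU_NONE is precisely the case a plain is_write bool could not express:

    typedef enum {
        IOMMU_NONE = 0,
        IOMMU_RO   = 1,
        IOMMU_WO   = 2,
        IOMMU_RW   = 3,        /* IOMMU_RO | IOMMU_WO */
    } IOMMUAccessFlagsSk;
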
Peter Xu
5621c7e09f
exec: abstract address_space_do_translate()
This function is an abstraction helper for address_space_translate() and
address_space_get_iotlb_entry(). It does the lookup of address into
memory region section, then does proper IOMMU translation if necessary.
Refactor the two existing functions to use it.

This fixes vhost when IOMMU is disabled by guest.

Backports commit a764040cc831cfe5b8bf1c80e8341b9bf2de3ce8 from qemu
2018-03-02 18:59:12 -05:00
Paolo Bonzini
c27870520a
exec: revert MemoryRegionCache
MemoryRegionCache did not know about virtio support for IOMMUs (because the
two features were developed at the same time). Revert MemoryRegionCache
to "normal" address_space_* operations for 2.9, as it is simpler than
undoing the virtio patches.

Backports commit 90c4fe5fc517a045e7a7cf2f23472e114042ca29 from qemu
2018-03-02 14:30:41 -05:00
Dr. David Alan Gilbert
55d79cf4c0
RAMBlocks: qemu_ram_is_shared
Provide a helper to say whether a RAMBlock was created as a
shared mapping.

Backports commit 463a4ac23bcf0f0b65c850fa66f5ae6e43edd243 from qemu
2018-03-02 13:05:35 -05:00
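The helper is essentially a one-line flag test, sketched here with a stand-in type and an illustrative flag bit (RAM_SHARED is the flag later exposed by the "Expose all memory block related flags" entry above):

    #include <stdbool.h>

    #define RAM_SHARED_SK (1u << 1)    /* illustrative bit position */

    typedef struct { unsigned flags; } RAMBlockSk;

    static bool qemu_ram_is_shared_sk(const RAMBlockSk *rb)
    {
        return (rb->flags & RAM_SHARED_SK) != 0;
    }
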
Paolo Bonzini
37918ba5b0
exec: make address_space_cache_destroy idempotent
Clear cache->mr so that address_space_cache_destroy does nothing
the second time it is called.

Backports commit 91047df38dffa80222179f63fbb74c1dfefa25ed from qemu
2018-03-02 08:16:17 -05:00
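The idempotence pattern, sketched with a simplified cache struct:

    typedef struct {
        void *mr;                  /* stand-in for MemoryRegion * */
    } CacheSk;

    static void cache_destroy_sk(CacheSk *cache)
    {
        if (!cache->mr) {
            return;                /* second call: nothing left to do */
        }
        /* ... release the mapping ... */
        cache->mr = 0;             /* makes the next call a no-op */
    }
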
Ladi Prosek
babf848b82
memory: don't sign-extend 32-bit writes
ldl_p has a signed return type so assigning it to uint64_t implicitly
sign-extends the value. This results in devices with min_access_size = 8
seeing unexpected values passed to their write handlers.

Example: guest performs a 32-bit write of 0x80000000 to an mmio region
and the handler receives 0xFFFFFFFF80000000 in its value argument.

Backports commit 6da67de6803e93cbb7e93ac3497865832f8c00ea from qemu
2018-03-02 00:00:22 -05:00
Lioncash
d905278b86
Make unicorn happy with TLB execution 2018-03-01 20:13:37 -05:00
Alex Bennée
e3e57ca08e
cputlb: drop flush_global flag from tlb_flush
We have never had the concept of global TLB entries, which would avoid
the flush, so we never actually use this flag. Drop it and make clear
that tlb_flush is the sledge-hammer it has always been.

Backports commit d10eb08f5d8389c814b554d01aa2882ac58221bf from qemu
2018-03-01 19:36:04 -05:00
Jason Wang
fdca6292a1
exec: introduce address_space_get_iotlb_entry()
This patch introduces a helper to query the IOTLB entry for a
possible iova. This will be used by the later device IOTLB API to enable
the capability for a dataplane (e.g. vhost) to query the IOTLB.

Backports commit 052c8fa9983f553fdfa0d61034774070dd639c2b from qemu
2018-03-01 13:05:08 -05:00
Paolo Bonzini
81ad780e5e
exec: introduce MemoryRegionCache
Device models often have to perform multiple accesses to a single
memory region that is known in advance, but would like to use "DMA-style"
functions instead of address_space_map/unmap. This can happen
for example when the data has to undergo endianness conversion.
Introduce a new data structure to cache the result of
address_space_translate without forcing usage of a host address
like address_space_map does.

Backports commit 1f4e496e1fc2eb6c8bf377a0f9695930c380bfd3 from qemu
2018-03-01 10:50:30 -05:00
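A hedged usage sketch, compilable only inside the QEMU tree; "as" and "addr" are assumed context, and the names follow QEMU's MemoryRegionCache API:

    MemoryRegionCache cache = MEMORY_REGION_CACHE_INVALID;
    uint16_t v;

    if (address_space_cache_init(&cache, as, addr, sizeof(v), false) >= 0) {
        /* the translation happened once, above; accesses are now cheap */
        address_space_read_cached(&cache, 0, &v, sizeof(v));
        address_space_cache_destroy(&cache);
    }
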
Paolo Bonzini
03a9bbb3d3
exec: introduce address_space_extend_translation
This extracts the common part of address_space_map and
address_space_cache_init into a new function.

Backports commit 715c31ec8e12107f47ac74b464c97e813c76f898 from qemu
2018-03-01 10:06:43 -05:00
Paolo Bonzini
88ad0f4f6e
exec: introduce memory_ldst.inc.c
Templatize the address_space_* and *_phys functions, so that we can add
similar functions in the next patch that work with a lightweight,
cache-like version of address_space_map/unmap.

Backports commit 0ce265ffef87f19f4dd1ff0663e09a63d66ae408 from qemu
2018-03-01 09:59:34 -05:00
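A miniature of the templating pattern using token pasting; QEMU achieves the same effect by #including the memory_ldst.inc.c body with different macro definitions, so this macro-based stand-in only illustrates the idea:

    #include <stdint.h>

    #define glue_(a, b) a##b
    #define glue(a, b)  glue_(a, b)

    /* One shared body, expanded once per "instantiation" below. */
    #define DEFINE_LDL(SUFFIX, OBJTYPE)                              \
        static uint32_t glue(ldl_phys, SUFFIX)(OBJTYPE *obj)         \
        {                                                            \
            return obj->val;   /* common load logic, written once */ \
        }

    typedef struct { uint32_t val; } SpaceSk;   /* address_space_* flavor */
    typedef struct { uint32_t val; } CacheSk;   /* lightweight cached flavor */

    DEFINE_LDL(, SpaceSk)
    DEFINE_LDL(_cached, CacheSk)
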