Commit Graph

3048 Commits

Sergey Fedorov
eab60b7c77
cpu-exec: Clean up 'interrupt_request' reloading in cpu_handle_interrupt()
Backports commit 8b1fe3f439eaa2f0a6ee7737942bb6c405725867 from qemu
2018-02-24 00:27:05 -05:00
Sergey Fedorov
b4b7b88f69
cpu-exec: Remove unused 'x86_cpu' and 'env' from cpu_exec()
Backports commit ba048a4ae15ba0f70c6dcb12ee05db120408de78 from qemu
2018-02-24 00:16:40 -05:00
Sergey Fedorov
aefb8935a9
cpu-exec: Move TB execution stuff out of cpu_exec()
Simplify cpu_exec() by extracting TB execution code outside of
cpu_exec() into a new static inline function cpu_loop_exec_tb().

Backports commit 928de9ee14b0b63ee9f9275732ed3e1c8b5f4790 from qemu
2018-02-24 00:15:24 -05:00
Sergey Fedorov
d4ef96abf2
cpu-exec: Move interrupt handling out of cpu_exec()
Simplify cpu_exec() by extracting interrupt handling code outside of
cpu_exec() into a new static inline function cpu_handle_interrupt().

Backports commit c385e6e49763c6dd5dbbd90fadde95d986f8bd38 from qemu
2018-02-24 00:09:06 -05:00
Sergey Fedorov
c1b52a4387
cpu-exec: Move exception handling out of cpu_exec()
Simplify cpu_exec() by extracting exception handling code out of
cpu_exec() into a new static inline function cpu_handle_exception().
Also make cpu_handle_debug_exception() inline as it is used only once.

Backports commit ea284766ec6b9f1712369249566b4c372f3cec8b from qemu
2018-02-24 00:03:37 -05:00
Sergey Fedorov
fc3d135dac
cpu-exec: Move halt handling out of cpu_exec()
Simplify cpu_exec() by extracting CPU halt state handling code out of
cpu_exec() into a new static inline function cpu_handle_halt().

Backports commit 8b2d34e997371c9729a0f41e3cc624d4300bbe78 from qemu
2018-02-23 23:53:20 -05:00
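
A minimal sketch of the extraction pattern this run of commits applies, using the halt case (simplified from the description above; the real function also deals with target-specific details):

static inline bool cpu_handle_halt(CPUState *cpu)
{
    if (cpu->halted) {
        if (!cpu_has_work(cpu)) {
            return true;   /* still halted: leave the execution loop */
        }
        cpu->halted = 0;   /* pending work: wake up and keep executing */
    }
    return false;
}

/* ...and at the top of cpu_exec(): */
if (cpu_handle_halt(cpu)) {
    return EXCP_HALTED;
}
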
Lioncash
88d00a75ca
cpu-exec: move cpu_exec to the bottom of the file
Remove forward declarations
2018-02-23 23:50:28 -05:00
Sergey Fedorov
0088ca994f
cpu-exec: Remove relic orphaned comment
This comment should have been deleted by commit 0ac087f1f3ae ("removed
unused code") but somehow it is still here. There's no point in keeping it.

Backports commit c6f0d9f84c43ae973270df1a77482466558ee487 from qemu
2018-02-23 23:47:05 -05:00
Sergey Fedorov
1a768018c2
tcg: Remove needless CPUState::current_tb
This field was used to tell cpu_interrupt() to unlink a chain of TBs
being executed when it worked that way. Now cpu_interrupt() doesn't do
this anymore, so we don't need this field anymore.

Backports commit 3213525f8ab48742db09dab18cb9ae6f36a6c921 from qemu
2018-02-23 23:45:42 -05:00
Sergey Fedorov
73c75b4cf7
cpu-exec: Move TB chaining into tb_find_fast()
Move the tb_add_jump() call and surrounding code from cpu_exec() into
tb_find_fast(). That simplifies cpu_exec() a little by hiding the direct
chaining optimization details inside tb_find_fast(). It also allows
moving the tb_lock()/tb_unlock() pair into tb_find_fast(), putting it
closer to tb_find_slow(), which also manipulates the lock.

Backports commit a0522c7a55cc8ac76d82884cf8e52f76daa664cc from qemu
2018-02-23 23:38:57 -05:00
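
A hedged sketch of the resulting shape of tb_find_fast(); the exact signature and lookup details are assumptions based on the message above:

static TranslationBlock *tb_find_fast(CPUState *cpu,
                                      TranslationBlock **last_tb, int tb_exit)
{
    CPUArchState *env = cpu->env_ptr;
    TranslationBlock *tb;
    target_ulong cs_base, pc;
    uint32_t flags;

    cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
    tb = cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
    if (unlikely(!tb || tb->pc != pc || tb->cs_base != cs_base ||
                 tb->flags != flags)) {
        tb = tb_find_slow(cpu, pc, cs_base, flags);
    }
    if (*last_tb) {
        tb_lock();                           /* same lock tb_find_slow takes */
        tb_add_jump(*last_tb, tb_exit, tb);  /* direct chaining, hidden here */
        tb_unlock();
    }
    return tb;
}
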
Sergey Fedorov
ba9a237586
tcg: Rework tb_invalidated_flag
'tb_invalidated_flag' was meant to catch two events:
* some TB has been invalidated by tb_phys_invalidate();
* the whole translation buffer has been flushed by tb_flush().

Then it was checked:
* in cpu_exec() to ensure that the last executed TB can be safely
linked to directly call the next one;
* in cpu_exec_nocache() to decide if the original TB should be provided
for further possible invalidation along with the temporarily
generated TB.

It is always safe to patch an invalidated TB since it is not going to be
used anyway. It is also safe to call tb_phys_invalidate() for an already
invalidated TB. Thus, setting this flag in tb_phys_invalidate() is
simply unnecessary. Moreover, it can prevent proper linking of TBs when
an arbitrary TB has been invalidated. So just don't touch it in
tb_phys_invalidate().

If this flag is only used to catch whether tb_flush() has been called
then rename it to 'tb_flushed'. Declare it as 'bool' and stick to using
only 'true' and 'false' to set its value. Also, instead of setting it in
tb_gen_code(), just after tb_flush() has been called, do it right inside
of tb_flush().

In cpu_exec(), this flag is used to track whether tb_flush() has been
called and has made 'next_tb' (a reference to the last executed TB)
invalid for linking it to directly call the next TB. tb_flush() can be
called during the CPU execution loop from tb_gen_code(), during TB
execution, or by another thread while 'tb_lock' is released. Catch
translation buffer flushes reliably by resetting this flag once before
the first TB lookup and each time we find it set before trying to add a
direct jump. Don't touch it in tb_find_physical().

Each vCPU has its own execution loop in multithreaded mode and thus
should have its own copy of the flag so that it can reset it with its
own 'next_tb' without affecting any other vCPU's execution thread. So
make this flag per-vCPU and move it to CPUState.

In cpu_exec_nocache(), we only need to check whether tb_flush() has been
called from tb_gen_code() called by cpu_exec_nocache() itself. To do
this reliably, preserve the old value of the flag, reset it before
calling tb_gen_code(), check it afterwards, and combine the saved value
back into the flag.

This patch is based on the patch "tcg: move tb_invalidated_flag to
CPUState" from Paolo Bonzini <pbonzini@redhat.com>.

Backports commit 6f789be56d3f38e9214dafcfab3bf9be7191f370 from qemu
2018-02-23 23:34:51 -05:00
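
The save/reset/check/combine sequence in cpu_exec_nocache() could look like this (a sketch following the description; surrounding code omitted):

bool old_tb_flushed = cpu->tb_flushed;   /* preserve the old value */

cpu->tb_flushed = false;                 /* reset before tb_gen_code() */
tb = tb_gen_code(cpu, orig_tb->pc, orig_tb->cs_base, orig_tb->flags,
                 max_cycles | CF_NOCACHE);
/* Only hand orig_tb over for invalidation if tb_flush() did not run. */
tb->orig_tb = cpu->tb_flushed ? NULL : orig_tb;
cpu->tb_flushed |= old_tb_flushed;       /* combine the saved value back */
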
Sergey Fedorov
c9700af2bd
tcg: Clean up from 'next_tb'
The value returned from tcg_qemu_tb_exec() is the value passed to the
corresponding tcg_gen_exit_tb() at translation time of the last TB
attempted to execute. It is a little confusing to store it in a variable
named 'next_tb'. In fact, it is a combination of a 4-byte aligned
pointer and additional information in its two least significant bits.
Break it down right away into two variables named 'last_tb' and
'tb_exit', which are a pointer to the last TB attempted to execute and
the TB exit reason, respectively. This simplifies the code and improves
its
readability.

Correct a misleading documentation comment for tcg_qemu_tb_exec() and
fix logging in cpu_tb_exec(). Also rename a misleading 'next_tb' in
another couple of places.

Backports commit 819af24b9c1e95e6576f1cefd32f4d6bf56dfa56 from qemu
2018-02-23 23:29:04 -05:00
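
Since the returned value is a 4-byte-aligned TB pointer with the exit reason in its two least significant bits, the split is a mask and an extract (sketch; TB_EXIT_MASK is the existing two-bit mask):

uintptr_t ret = tcg_qemu_tb_exec(env, tb->tc_ptr);
TranslationBlock *last_tb = (TranslationBlock *)(ret & ~TB_EXIT_MASK);
int tb_exit = ret & TB_EXIT_MASK;   /* exit reason from the two low bits */
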
Paolo Bonzini
66faf3b5df
tcg: code_bitmap and code_write_count are not used by user-mode emulation
Backports commit 6fad459c91e8a1dedbb6681d3f57ede5222a225c from qemu
2018-02-23 23:17:37 -05:00
Sergey Fedorov
ffdc9d6323
tcg: Allow goto_tb to any target PC in user mode
In user mode, there is only static address translation, TBs are always
invalidated properly, and direct jumps are reset when the mapping
changes. Thus the destination address is always valid for direct jumps,
and there's no need to restrict it to the pages the TB resides in.

Backports commit 90aa39a1cc4837360889f0e033ca25cc82100308 from qemu
2018-02-23 23:12:14 -05:00
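
In a per-target translator this typically turns the page-locality check into a softmmu-only condition, roughly (hypothetical rendering for one target):

static inline bool use_goto_tb(DisasContext *s, target_ulong dest)
{
#ifndef CONFIG_USER_ONLY
    /* Direct jumps must stay within the page(s) the TB was built from. */
    return (s->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
#else
    /* User mode: static translation, TBs always invalidated properly. */
    return true;
#endif
}
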
Sergey Fedorov
73c59faad5
tcg: Clean up direct block chaining safety checks
We don't take care of direct jumps when the address mapping changes.
Thus we must be sure to generate direct jumps so that they always stay
valid even if the address mapping changes. Luckily, we only allow a TB
to execute if it was generated from the pages that match the current
mapping.

Document tcg_gen_goto_tb() declaration and note the reason for
destination PC limitations.

Some targets with variable-length instructions allow a TB to straddle a
page boundary. However, we make sure that both of a TB's pages match the
current address mapping when looking up TBs. So it is safe to do direct
jumps into both pages. Correct the checks for some of those targets.

Given that, we can safely patch a TB which spans two pages. Remove the
unnecessary check in cpu_exec() and allow such TBs to be patched.

Backports commit 5b053a4a28278bca606eeff7d1c0730df1b047e9 from qemu
2018-02-23 22:26:00 -05:00
Sergey Fedorov
39d262f0d2
tcg: Clean up tb_jmp_unlink()
Unify the code of this function with tb_jmp_remove_from_list(). Making
these functions similar improves their readability. Also this could be a
step towards making this function thread-safe.

Backports commit f9c5b66f487a04d3747dc6997b1503f9258df945 from qemu
2018-02-23 21:40:07 -05:00
Lioncash
68272af618
translate-all: Remove unused variable in size_code_gen_buffer
Also eliminates the unused parameter
2018-02-23 21:38:34 -05:00
Sergey Fedorov
c530eb06a9
tcg: Extract removing of jumps to TB from tb_phys_invalidate()
Move the code for removing jumps to a TB out of tb_phys_invalidate() to
a separate static inline function tb_jmp_unlink(). This simplifies
tb_phys_invalidate() and improves code structure.

Backports commit 89bba496322d4cf996d42cdd4bb0912231656c3d from qemu
2018-02-23 21:36:29 -05:00
Sergey Fedorov
0d2e91518b
tcg: Rename tb_jmp_remove() to tb_remove_from_jmp_list()
tb_jmp_remove() was only used to remove the TB from a list of all TBs
jumping to the same TB, which is the n-th jump destination of the given
TB. Put a comment briefly describing the function's behavior and rename
it to better reflect its purpose.

Backports commit 133626783aa5a1bf86332fa3e6f7b8efe005f924 from qemu
2018-02-23 21:34:01 -05:00
Sergey Fedorov
d60af028c5
tcg: Clarify thread safety check in tb_add_jump()
The check is to make sure that another thread hasn't already done the
same while we were outside of tb_lock. Mention this in a comment.

Backports commit 9962c478b153a18fe88a6509fe58cd178aff8abc from qemu
2018-02-23 21:32:47 -05:00
Sergey Fedorov
e93f68a755
tcg: Init TB's direct jumps before making it visible
Initialize the TB's direct jump list data fields and reset the jumps
before tb_link_page() puts it into the physical hash table and the
physical page list, so the TB is completely initialized before it
becomes visible.

This is pure rearrangement of code to a more suitable place, though it
could be a preparation for relaxing the locking scheme in future.

Backports commit 901bc3deb43bf37c85e43955905d003be7ae5fa5 from qemu
2018-02-23 21:31:36 -05:00
Sergey Fedorov
87f2bb42d4
tcg: Rearrange tb_link_page() to avoid forward declaration
Backports commit e90d96b158665a684ab89b4f002838034b5fafc8 from qemu
2018-02-23 21:28:20 -05:00
Sergey Fedorov
fbc0a1105f
tcg: Use uintptr_t type for jmp_list_{next|first} fields of TB
These fields do not contain pure pointers to a TranslationBlock
structure, so uintptr_t is the most appropriate type for them. Also add
some asserts to ensure that the two least significant bits of the
pointer are always zero before assigning it to jmp_list_first.

Backports commit c37e6d7e3589ecb96914faa21025ad7ba6654aea from qemu
2018-02-23 21:28:19 -05:00
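
A sketch of how such a tagged field is produced and consumed (mask values per the description; the helper name is hypothetical):

/* The two low bits select the jump slot (0 or 1) inside the TB the entry
 * points to; the assert below keeps the tag bits usable. */
static TranslationBlock *tb_from_jmp_entry(uintptr_t entry, int *slot)
{
    *slot = entry & 3;
    return (TranslationBlock *)(entry & ~(uintptr_t)3);
}

assert(((uintptr_t)tb & 3) == 0);        /* TBs are at least 4-byte aligned */
tb->jmp_list_first = (uintptr_t)tb | 2;  /* 2 can then tag the list head */
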
Sergey Fedorov
e60c24cecf
tcg: Clean up direct block chaining data fields
Briefly describe in a comment how direct block chaining is done. It
should help in understanding the following data fields.

Rename some fields in TranslationBlock and TCGContext structures to
better reflect their purpose (dropping excessive 'tb_' prefix in
TranslationBlock but keeping it in TCGContext):
tb_next_offset => jmp_reset_offset
tb_jmp_offset => jmp_insn_offset
tb_next => jmp_target_addr
jmp_next => jmp_list_next
jmp_first => jmp_list_first

Avoid using a magic constant as an invalid offset which is used to
indicate that there's no n-th jump generated.

Backports commit f309101c26b59641fc1aa8fb2a98a5441cdaea03 from qemu
2018-02-23 21:28:19 -05:00
Richard Henderson
bb0b055a99
translate-all: Adjust 256mb testing for mips64
Make sure we preserve the high 32-bits when masking for mips64.

Backports commit 7ba6a512ae439c98c0c1f0f4348c079d90f9dd9d from qemu
2018-02-23 21:28:19 -05:00
Emilio G. Cota
de17843702
translate-all: add missing munmap of the code_gen guard page for MIPS
Backports commit 8bdf4997823126a39bd4c99e4b2283b02cc7865f from qemu
2018-02-23 21:28:19 -05:00
Emilio G. Cota
9a2b02b241
translate-all: remove redundant setting of tcg_ctx.code_gen_buffer_size
The setting of tcg_ctx.code_gen_buffer_size is done by the only caller of
size_code_gen_buffer(), which is code_gen_alloc():

$ git grep size_code_gen_buffer
translate-all.c:static inline size_t size_code_gen_buffer(size_t tb_size)
translate-all.c: tcg_ctx.code_gen_buffer_size = size_code_gen_buffer(tb_size);

Backports commit 835154b6e2200460f04719d0028716a37c178368 from qemu
2018-02-23 21:28:19 -05:00
Sergey Fedorov
c5b234ed1f
tcg: Note requirement on atomic direct jump patching
Backports commit 10b4f4855537dd421e193a7d0416513116370558 from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov
87c3382dc8
tcg/mips: Make direct jump patching thread-safe
Ensure direct jump patching in MIPS is atomic by using
atomic_read()/atomic_set() for code patching.

Backports commit c82460a560176ef69c2f0662bd280612e274db96 from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov
7538001da9
tcg/sparc: Make direct jump patching thread-safe
Ensure direct jump patching in SPARC is atomic by using
atomic_read()/atomic_set() for code patching.

Backports commit 84f79fb7c6e857edc807e4a251338243ce0cbac3 from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov
a45f8cb49d
tcg/aarch64: Make direct jump patching thread-safe
Ensure direct jump patching in AArch64 is atomic by using
atomic_read()/atomic_set() for code patching.

Backports commit 9e269112953be4d670cb0d25042bd6546fcf3e45 from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov
52e2972300
tcg/arm: Make direct jump patching thread-safe
Ensure direct jump patching in ARM is atomic by using
atomic_read()/atomic_set() for code patching.

Backports commit 7d14e0e2d661479985197203589c38840e1066df from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov
57359fbe6c
tcg/s390: Make direct jump patching thread-safe
Ensure direct jump patching in s390 is atomic by:
* naturally aligning the location of the direct jump address;
* using atomic_read()/atomic_set() for code patching.

Backports commit ed3d51ecd7fe248d3959e469d53890ac9ffe0cd2 from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov
5eb2d6618f
tcg/i386: Make direct jump patching thread-safe
Ensure direct jump patching in i386 is atomic by:
* naturally aligning the location of the direct jump address;
* using atomic_read()/atomic_set() for code patching.

Backports commit 0d07abf05e98903c7faf204a9a90f7d45b7554dc from qemu
2018-02-23 21:28:17 -05:00
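
For i386 the patching boils down to an atomic store of the 4-byte relative branch displacement (a sketch close to the description; the natural alignment comes from the paired change):

static inline void tb_set_jmp_target1(uintptr_t jmp_addr, uintptr_t addr)
{
    /* The displacement is naturally aligned, so a plain atomic store is
     * enough; x86 needs no explicit icache flush. */
    atomic_set((int32_t *)jmp_addr, addr - (jmp_addr + 4));
}
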
Lioncash
fffa27d269
osdep: MSVC-compatible alignment macros
2018-02-23 21:28:17 -05:00
Sergey Fedorov
3456f0879e
include/qemu/osdep.h: Add macros for pointer alignment
These macros provide a convenient way to n-byte align pointers up and
down and check if a pointer is n-byte aligned.

Backports commit 6b587d3cda48e7ba26de8d30bf0d8a7063970715 from qemu
2018-02-23 21:28:17 -05:00
Sergey Fedorov
47eac70cb9
include/qemu/osdep.h: Add a macro to check for alignment
Backports commit 18a60a76147569ca9e11b0607e50ce4012fe1aaa from qemu
2018-02-23 21:28:17 -05:00
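
The macros added by these two osdep.h commits are along these lines (value forms first, then the pointer variants that round through uintptr_t):

#define QEMU_ALIGN_DOWN(n, m)  ((n) / (m) * (m))
#define QEMU_ALIGN_UP(n, m)    QEMU_ALIGN_DOWN((n) + (m) - 1, (m))
#define QEMU_IS_ALIGNED(n, m)  (((n) % (m)) == 0)

#define QEMU_ALIGN_PTR_DOWN(p, n) \
    ((typeof(p))QEMU_ALIGN_DOWN((uintptr_t)(p), (n)))
#define QEMU_ALIGN_PTR_UP(p, n) \
    ((typeof(p))QEMU_ALIGN_UP((uintptr_t)(p), (n)))
#define QEMU_PTR_IS_ALIGNED(p, n) QEMU_IS_ALIGNED((uintptr_t)(p), (n))
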
Emilio G. Cota
170f6e0b3b
tb: consistently use uint32_t for tb->flags
We are inconsistent with the type of tb->flags: usage varies loosely
between int and uint64_t. Settle on uint32_t everywhere, which is
superior to both: at least one target (aarch64) uses the most
significant bit in the u32, and uint64_t is wasteful.

Compile-tested for all targets.

Backports commit 89fee74a0f066dfd73830a7b5fa137e87888c870 from qemu
2018-02-23 21:28:11 -05:00
Peter Maydell
fe2000aa32
target-arm: Avoid unnecessary TLB flush on TCR_EL2, TCR_EL3 writes
The TCR_EL2 and TCR_EL3 regdefs were incorrectly using the
vmsa_tcr_el1_write function for writes. Since these registers don't
have the A1 bit that TCR_EL1 does, we don't need to do a tlb_flush()
when they are written. Remove the unnecessary .writefn and also the
harmless but unneeded .raw_writefn and .resetfn definitions.

Backports commit 6459b94c26dd666badb3547fef1456992a08e60b from qemu
2018-02-23 20:09:12 -05:00
Edgar E. Iglesias
eb79db28d5
target-arm/translate-a64.c: Unify some of the ldst_reg decoding
The various load/store variants under disas_ldst_reg can all reuse the
same decoding for opc, size, rt and is_vector.

This patch unifies the decoding in preparation for generating
instruction syndromes for data aborts.
This will allow us to reduce the number of places to hook in updates
to the load/store state needed to generate the insn syndromes.

No functional change.

Backports commit cd694521ca061a5d0436d5df4ec8c17c8f4dfcdb from qemu
2018-02-23 20:06:31 -05:00
Edgar E. Iglesias
602e9e34b9
target-arm/translate-a64.c: Use extract32 in disas_ldst_reg_imm9
Use extract32 instead of open coding the bit masking when decoding
is_signed and is_extended. This streamlines the decoding with some
of the other ldst variants.

No functional change.

Backports commit 026a19c3128678d4fe301fc36e8ffacdc9ecccb8 from qemu
2018-02-23 20:04:11 -05:00
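
extract32(value, start, length) returns 'length' bits of 'value' starting at bit 'start', so the decode reads roughly (the exact bit positions here are illustrative):

bool is_signed   = extract32(opc, 1, 1);
bool is_extended = !is_signed && extract32(opc, 0, 1);
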
Peter Maydell
56e9d7c09e
target-arm: Split data abort syndrome generator
Split the data abort syndrome generator into two versions:
One with a valid Instruction Specific Syndrome (ISS) and another without.

The following new flags are supported by the syndrome generator
with ISS:
* isv - Instruction syndrome valid
* sas - Syndrome access size
* sse - Syndrome sign extend
* srt - Syndrome register transfer
* sf - Sixty-Four bit register width
* ar - Acquire/Release

These flags are not yet used, so this patch has no functional change
except that we will now correctly set the IL bit in data abort
syndromes without ISS information.

Backports commit 094d028a7968236cd2b7f7b96394f7a3b8ad97c8 from qemu
2018-02-23 20:03:04 -05:00
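
A sketch of the ISS-carrying variant, packing the listed flags at their architectural bit positions (ISV bit 24, SAS bits 23:22, SSE bit 21, SRT bits 20:16, SF bit 15, AR bit 14; 'fsc' is the fault status code):

static inline uint32_t syn_data_abort_with_iss(int same_el, int sas, int sse,
                                               int srt, int sf, int ar,
                                               int fsc, bool is_16bit)
{
    return (0x24u | same_el) << 26      /* EC: data abort (0x25 if same EL) */
         | (is_16bit ? 0 : 1u << 25)    /* IL: set for 32-bit instructions */
         | 1u << 24                     /* ISV: the fields below are valid */
         | sas << 22 | sse << 21 | srt << 16 | sf << 15 | ar << 14
         | fsc;
}
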
Edgar E. Iglesias
bfc74c4da2
gen-icount: Use tcg_set_insn_param
Use tcg_set_insn_param() instead of directly accessing internal
tcg data structures to update an insn param.

Backports commit 25caa94c4a26daaab1e65c6d887e2972aeb5749e from qemu
2018-02-23 20:01:17 -05:00
Edgar E. Iglesias
a30a478538
tcg: Add tcg_set_insn_param
Add tcg_set_insn_param as a mechanism to modify an insn
parameter after emitting the insn. This is useful for icount
and also for embedding fault information for a specific insn.

Backports commit 1d41478fd428e01f057d3248292e4cdcdb048523 from qemu
2018-02-23 19:58:49 -05:00
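
At the time of this backport the helper is a thin index into the op and param buffers, roughly (buffer field names are assumptions from that era's tcg.h; the icount use is from the commit above):

static inline void tcg_set_insn_param(int op_idx, int arg, TCGArg v)
{
    tcg_ctx.gen_opparam_buf[tcg_ctx.gen_op_buf[op_idx].args + arg] = v;
}

/* gen-icount style use: remember the insn, patch its parameter later. */
int icount_insn_idx = tcg_op_buf_count() - 1;
/* ... once the real instruction count is known ... */
tcg_set_insn_param(icount_insn_idx, 1, num_insns);
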
Sergey Sorokin
98a6d44c54
target-arm: Fix descriptor address masking in ARM address translation
There is a bug in the ARM address translation regime with the
long-descriptor format. When reading a descriptor, its address is formed
from an index which is part of the input address, and on the first
iteration this index is incorrectly masked with the 'grainsize' mask;
according to the pseudocode it can be wider. On iterations other than
the first, the descriptor address is formed from the previous level's
descriptor by masking with the 'descaddrmask' value. This always clears
just the 12 lower bits, but according to the pseudocode it must clear
the 'grainsize' lower bits instead.
The patch fixes both cases.

Backports commit dddb5223413c5425ae6eaeb3b967627efc9675f7 from qemu
2018-02-23 19:56:56 -05:00
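
In code terms, the two fixes amount to deriving both masks consistently instead of hard-coding widths (a hypothetical rendering; variable names are illustrative):

/* First lookup: the index comes from the input address and may be wider
 * than the grain size, so don't clamp it to 'grainsize'. */
uint64_t indexmask = (1ULL << (inputsize - (stride * (4 - level)))) - 1;
descaddr = base_addr & ~indexmask;

/* Later lookups: clear the low 'grainsize' bits of the descriptor,
 * not a fixed 12. */
uint64_t descaddrmask = ((1ULL << 48) - 1) & ~((1ULL << grainsize) - 1);
descaddr = descriptor & descaddrmask;
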
Sergey Sorokin
00e751f18e
target-arm: Fix stage 2 permission fault in AArch32 state
As described in AArch32.CheckS2Permission, an instruction fetch fails if
the XN bit is set or there is no read permission for the address.

Backports commit dfda68377e20943f474505e75238cb96bc6874bf from qemu
2018-02-23 19:55:11 -05:00
Eric Blake
2f42c2c195
qapi: Change visit_type_FOO() to no longer return partial objects
Returning a partial object on error is an invitation for a careless
caller to leak memory. We already fixed things in an earlier
patch to guarantee NULL if visit_start fails ("qapi: Guarantee
NULL obj on input visitor callback error"), but that does not
help the case where visit_start succeeds but some other failure
happens before visit_end, such that we leak a partially constructed
object outside visit_type_FOO(). As no one outside the testsuite
was actually relying on these semantics, it is cleaner to just
document and guarantee that ALL pointer-based visit_type_FOO()
functions always leave a safe value in *obj during an input visitor
(either the new object on success, or NULL if an error is
encountered), so callers can now unconditionally use
qapi_free_FOO() to clean up regardless of whether an error occurred.

The decision is done by adding visit_is_input(), then updating the
generated code to check if additional cleanup is needed based on
the type of visitor in use.

Note that we still leave *obj unchanged after a scalar-based
visit_type_FOO(); I did not feel like auditing all uses of
visit_type_Enum() to see if the callers would tolerate a specific
sentinel value (not to mention having to decide whether it would
be better to use 0 or ENUM__MAX as that sentinel).

Backports commit 68ab47e4b4ecc1c4649362b8cc1e49794d1a6537 from qemu
2018-02-23 19:53:17 -05:00
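
The generated visit_type_FOO() then gains a cleanup tail along these lines (a sketch with FOO as a placeholder type; the visit_check_struct() call comes from the end-struct split listed further down):

void visit_type_FOO(Visitor *v, const char *name, FOO **obj, Error **errp)
{
    Error *err = NULL;

    visit_start_struct(v, name, (void **)obj, sizeof(FOO), &err);
    if (err) {
        goto out;
    }
    visit_type_FOO_members(v, *obj, &err);
    if (err) {
        goto out_obj;
    }
    visit_check_struct(v, &err);
out_obj:
    visit_end_struct(v);
out:
    if (err && visit_is_input(v)) {
        /* Undo partial construction: callers always see NULL on error. */
        qapi_free_FOO(*obj);
        *obj = NULL;
    }
    error_propagate(errp, err);
}
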
Eric Blake
0d52542da2
qapi: Simplify semantics of visit_next_list()
The semantics of the list visit are somewhat baroque, with the
following pseudocode when FooList is used:

start()
for (prev = head; cur = next(prev); prev = &cur) {
visit(&cur->value)
}

Note that these semantics (advance before visit) require that the
first call to next() return the list head, while all other calls
return the next element of the list; that is, every visitor
implementation is required to track extra state to decide whether
to return the input as-is, or to advance. It also requires an
argument of 'GenericList **' to next(), solely because the first
iteration might need to modify the caller's GenericList head, so
that all other calls have to do a layer of dereferencing.

Thankfully, we only have two uses of list visits in the entire
code base: one in spapr_drc (which completely avoids
visit_next_list(), feeding in integers from a different source
than uint8List), and one in qapi-visit.py. That is, all other
list visitors are generated in qapi-visit.c, and share the same
paradigm based on a qapi FooList type, so we can refactor how
lists are laid out with minimal churn among clients.

We can greatly simplify things by hoisting the special case
into the start() routine, and flipping the order in the loop
to visit before advance:

start(head)
for (tail = *head; tail; tail = next(tail)) {
visit(&tail->value)
}

With the simpler semantics, visitors have less state to track,
the argument to next() is reduced to 'GenericList *', and it
also becomes obvious whether an input visitor is allocating a
FooList during visit_start_list() (rather than the old way of
not knowing if an allocation happened until the first
visit_next_list()). As a minor drawback, we now allocate in
two functions instead of one, and have to pass the size to
both functions (unless we were to tweak the input visitors to
cache the size passed to start_list for reuse during next_list, but
that defeats the goal of less visitor state).

The signature of visit_start_list() is chosen to match
visit_start_struct(), with the new parameters after 'name'.

The spapr_drc case is a virtual visit, done by passing NULL for
list, similarly to how NULL is passed to visit_start_struct()
when a qapi type is not used in those visits. It was easy to
provide these semantics for qmp-output and dealloc visitors,
and a bit harder for qmp-input (several prerequisite patches
refactored things to make this patch straightforward). But it
turned out that the string and opts visitors munge enough other
state during visit_next_list() to make it easier to just
document and require a GenericList visit for now; an assertion
will remind us to adjust things if we need the semantics in the
future.

Several pre-requisite cleanup patches made the reshuffling of
the various visitors easier; particularly the qmp input visitor.

Backports commit d9f62dde1303286b24ac8ce88be27e2b9b9c5f46 from qemu
2018-02-23 19:50:26 -05:00
Lioncash
ed72ba0f8b
qapi: Fix string input visitor handling of invalid list
As shown in the previous commit, the string input visitor was
treating bogus input as an empty list rather than an error.
Fix parse_str() to set errp, then fix the callers to exit early if
an error was reported.

Meanwhile, fix the testsuite to use the generated
qapi_free_int16List() instead of rolling our own, and to
validate the fixed behavior, while at the same time documenting
one more change that we'd like to make in a later patch (a
failed visit_start_list should guarantee a NULL pointer,
regardless of what things were on input).

Backports commit 74f24cb6306d065045d0e2215a7d10533fa59c57 from qemu
2018-02-23 19:25:26 -05:00
Eric Blake
6084be1882
qapi: Split visit_end_struct() into pieces
As mentioned in previous patches, we want to call visit_end_struct()
functions unconditionally, so that visitors can release resources
tied up since the matching visit_start_struct() without also having
to worry about error priority if more than one error occurs.

Even though error_propagate() can be safely used to ignore a second
error during cleanup caused by a first error, it is simpler if the
cleanup cannot set an error. So, split out the error checking
portion (basically, input visitors checking for unvisited keys) into
a new function visit_check_struct(), which can be safely skipped if
any earlier errors are encountered, and leave the cleanup portion
(which never fails, but must be called unconditionally if
visit_start_struct() succeeded) in visit_end_struct().

Generated code in qapi-visit.c has diffs resembling:

|@@ -59,10 +59,12 @@ void visit_type_ACPIOSTInfo(Visitor *v,
| goto out_obj;
| }
| visit_type_ACPIOSTInfo_members(v, obj, &err);
|- error_propagate(errp, err);
|- err = NULL;
|+ if (err) {
|+ goto out_obj;
|+ }
|+ visit_check_struct(v, &err);
| out_obj:
|- visit_end_struct(v, &err);
|+ visit_end_struct(v);
| out:

and in qapi-event.c:

|@@ -47,7 +47,10 @@ void qapi_event_send_acpi_device_ost(ACP
| goto out;
| }
| visit_type_q_obj_ACPI_DEVICE_OST_arg_members(v, &param, &err);
|- visit_end_struct(v, err ? NULL : &err);
|+ if (!err) {
|+ visit_check_struct(v, &err);
|+ }
|+ visit_end_struct(v);
| if (err) {
| goto out;

Backports commit 15c2f669e3fb2bc97f7b42d1871f595c0ac24af8 from qemu
2018-02-23 19:13:47 -05:00