Change RGBACluster to be a class that persists once per block.
When we switch shapes and operate on them, we only need to change
which points in the block are accessed. We don't need to do this
very often, so we just update the mask whenever it changes. This brings
us back closer to our original performance, but we're still not where
we were before the refactoring.
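Roughly, the idea looks like this; the names below are hypothetical,
not the actual RGBACluster interface:

    #include <cstdint>
    #include <cstring>

    class RGBACluster {
     public:
      // Copy the block's pixels exactly once, when the cluster is built.
      explicit RGBACluster(const uint32_t pixels[16]) {
        std::memcpy(m_Pixels, pixels, sizeof(m_Pixels));
      }

      // Cheap shape switch: only the mask changes, never the pixel data.
      void SetShapeMask(uint16_t mask) { m_PointMask = mask; }

      // Operations visit only the pixels selected by the current mask.
      template <typename Fn>
      void ForEachPoint(Fn fn) const {
        for (int i = 0; i < 16; ++i)
          if (m_PointMask & (1 << i)) fn(m_Pixels[i]);
      }

     private:
      uint32_t m_Pixels[16];     // all pixels of the 4x4 block
      uint16_t m_PointMask = 0;  // one bit per pixel in the shape
    };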
We suffered another performance hit, this time because we were copying
around a lot of data based on which partition we choose. We can mitigate
this by copying the data that we need only once and then using
getters/setters that selectively pull from an array based on our
shape index.
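A sketch of that accessor pattern, reworking the cluster from the note
above (the table and member names are assumptions for illustration):

    #include <cstdint>

    // Hypothetical table: kShapePointIndices[shape][point] gives the
    // pixel's position within the block for that shape.
    extern const uint8_t kShapePointIndices[64][16];

    struct RGBAVector { float r, g, b, a; };

    class RGBACluster {
     public:
      void SetShapeIndex(uint32_t shapeIdx) { m_ShapeIdx = shapeIdx; }

      // Selectively pull the ptIdx-th point of the current shape out of
      // the single block-wide array that was copied once.
      const RGBAVector &GetPoint(uint32_t ptIdx) const {
        return m_DataPoints[kShapePointIndices[m_ShapeIdx][ptIdx]];
      }

      void SetPoint(uint32_t ptIdx, const RGBAVector &p) {
        m_DataPoints[kShapePointIndices[m_ShapeIdx][ptIdx]] = p;
      }

     private:
      RGBAVector m_DataPoints[16];  // copied once per block
      uint32_t m_ShapeIdx = 0;
    };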
Changed the RGBAEndpoints to use the vector/matrix classes in
FasTCBase. This caused a ~20ms performance hit on an 8-core machine,
likely because the compiler has difficulty optimizing away some
procedure-call overhead. Upon profiling, the biggest bottleneck
is still by far the QuantizedError function, so any further
optimization should be focused on that.
In order to better facilitate the change from block stream order to non-block stream order,
a lot of changes were introduced to the way that we feed texture data to the compressors. This
data is embodied in the CompressionJob struct. We have made it so that the compression job
holds both the in and out pointers for our uncompressed and compressed data. Furthermore,
we have made sure that the struct also contains the format that it's compressing for, so that if
a threading program would like to chop up a compression job into smaller chunks based on the
format, it doesn't need to know the format explicitly; it just needs to know certain properties
of the format.
Moreover, the user can now define the start and end pixels between which we would like to
compress. We can compress subsets of data by changing the in and out pointers along with the
width and height values. The compressors read data linearly until they reach the end pixel,
based on the width of the given texture.
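To make that concrete, here is a rough sketch of such a struct; the field
names are assumptions and not necessarily the real FasTC declarations:

    #include <cstdint>

    enum ECompressionFormat { eCompressionFormat_BPTC /* , ... */ };

    struct CompressionJob {
      ECompressionFormat format;  // lets schedulers split the job by
                                  // block properties without special-
                                  // casing each format
      const uint8_t *inBuf;       // uncompressed pixel data
      uint8_t *outBuf;            // destination for compressed blocks
      uint32_t width, height;     // dimensions of the texture
      uint32_t xStart, yStart;    // first pixel to compress
      uint32_t xEnd, yEnd;        // pixel at which compression stops
    };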
In the previous commit, we simply accommodated for alpha errors when compressing single-color partitions. In fact, the issue was a bit more grievous: we weren't computing the proper error term at all! This fixes that function so that we compute the error metric by *squaring* the error in each channel and returning the sum as a measurement of the acceptability of using a single-color compression for that partition.
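A sketch of that corrected error term, assuming floating-point channel
values (the function name is hypothetical):

    // Sum of squared per-channel differences: squaring penalizes any
    // channel that strays far from the single-color representative.
    static float SingleColorError(const float actual[4],
                                  const float decoded[4]) {
      float err = 0.0f;
      for (int ch = 0; ch < 4; ++ch) {
        const float d = actual[ch] - decoded[ch];
        err += d * d;
      }
      return err;
    }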
When the compressor recognizes that a shape is a single color, it determines
an optimal encoding for that color. However, only the error of a single
pixel was returned as the error for the overall shape. This caused problems
with modes that do not support alpha and shapes that do have alpha.
When we detect that a partition has a single color in each subset, we can generate an almost exact representation of this value for most compression modes. However, when we were doing this subset matching, we were ignoring the error introduced by modes that have completely opaque representations against data that has transparent pixels. This bug fix includes that error in our "best fit" calculations and makes everything work out for the better.
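A hedged sketch of that alpha-aware error term (names hypothetical):

    // An opaque-only mode always decodes alpha to fully opaque (255),
    // so a transparent input pixel contributes its full alpha deviation
    // to the "best fit" score for that mode.
    static float SubsetMatchError(const float actual[4],
                                  const float decoded[3],
                                  bool modeSupportsAlpha) {
      float err = 0.0f;
      for (int ch = 0; ch < 3; ++ch) {
        const float d = actual[ch] - decoded[ch];
        err += d * d;
      }
      // Zero here for alpha-capable modes, whose alpha error is
      // accounted for in the full error path.
      const float a = modeSupportsAlpha ? 0.0f : actual[3] - 255.0f;
      err += a * a;
      return err;
    }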
With the old code, unlucky preemption of our threads could cause a
compression to be skipped. I'm not exactly sure why, but in some very
unfortunate circumstances that caused deadlock (or livelock). This new
algorithm should work regardless of how many threads execute at once and
should also prevent textures in the compression job list from being
skipped. The algorithm seems to be an improvement on low-core-count
machines (around 4 cores), but it is slower on high-core-count machines
(40 cores or more)...
In general, we want to use this algorithm only with self-contained compression
lists. As such, we've added all of the proper synchronization primitives to
the list object itself. That way, different threads that are working on the
same list will be able to communicate. Ideally, this should reduce the
number of user-space context switches. Whether or not this is
faster than the other synchronization algorithms that we've tried remains
to be seen...
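A minimal sketch of such a self-contained list, assuming hypothetical
names rather than the real FasTC types:

    #include <atomic>
    #include <cstdint>
    #include <mutex>
    #include <vector>

    struct CompressionJob { /* fields as sketched earlier */ };

    class CompressionJobList {
     public:
      // The list owns its own synchronization state, so any threads
      // handed the same list can coordinate through it directly.
      void AddJob(const CompressionJob &job) {
        std::lock_guard<std::mutex> lock(m_Mutex);
        m_Jobs.push_back(job);
      }

      // Atomically claim the next unstarted job; returns false once
      // every job in the list has been handed out.
      bool GetNextJob(CompressionJob *out) {
        const uint32_t idx = m_NextJob.fetch_add(1);
        std::lock_guard<std::mutex> lock(m_Mutex);
        if (idx >= m_Jobs.size()) return false;
        *out = m_Jobs[idx];
        return true;
      }

     private:
      std::vector<CompressionJob> m_Jobs;
      std::atomic<uint32_t> m_NextJob {0};
      std::mutex m_Mutex;
    };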
This is a first pass at what I believe to be a not-too-terrible
implementation of a cooperative thread-based compressor. The idea is
simple: if a compressor is invoked with the same parameters on multiple
threads, then the threads cooperate via an atomic counter to compress the
texture, and each thread keeps taking work until the texture is finished.
If a caller invokes a compression routine with different parameters, then
it will help the current compression finish before starting on its own. In this
way, we can split the textures up among the threads and guarantee that we maximize the
resource usage between them. I.e., this schedule becomes more efficient:
    Thread 1:      Thread 2:      ...   Thread N:
      tex0           texN                 tex((N-1)N)
      tex1           texN+1               tex((N-1)N+1)
      ...            ...                  ...
      texN-1         tex2N-1              tex(N^2-1)
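A rough sketch of that cooperative loop, assuming a hypothetical shared
state and block-compression callback (none of these names are the actual
FasTC API):

    #include <atomic>
    #include <cstdint>
    #include <thread>

    struct CooperativeState {
      std::atomic<uint32_t> nextBlock {0};   // next block to claim
      std::atomic<uint32_t> blocksDone {0};  // blocks finished so far
      uint32_t totalBlocks = 0;
    };

    // Every thread that arrives with the same parameters runs this loop
    // against the same shared state, so the texture's blocks are split
    // among however many threads happen to show up.
    void CompressCooperatively(CooperativeState &state,
                               void (*compressBlock)(uint32_t)) {
      for (;;) {
        const uint32_t idx = state.nextBlock.fetch_add(1);
        if (idx >= state.totalBlocks)
          break;  // all blocks have been claimed by some thread
        compressBlock(idx);
        state.blocksDone.fetch_add(1);
      }
      // Wait for stragglers that claimed the final blocks.
      while (state.blocksDone.load() < state.totalBlocks)
        std::this_thread::yield();
    }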
I have not tested this for bugs, so I'm still not completely convinced that it is deadlock-free,
although it should be...