There is only one y1 and one y2 axis. Therefore it
doesn't make sense to define them in the
AggregateHandlers, because that would produce multiple
definitions for these axes.
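A minimal sketch of where the definitions live now; the Gnuplot
commands are real, but the method name, writer, and labels are
illustrative:

    // Axis definitions are emitted once by the central script builder,
    // not by each AggregateHandler.
    void writeAxisDefinitions(java.io.PrintWriter gp) {
        gp.println("set ylabel 'duration [ms]'");  // the single y1 axis
        gp.println("set y2label 'count'");         // the single y2 axis
        gp.println("set y2tics");                  // show tics on y2
    }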
The old method of creating brighter/darker colors produced
shades that did not look like the same color, e.g. a darker
yellow looked more like brown.
I am now trying to use hand-crafted shades.
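A sketch of the difference, assuming java.awt.Color; the hand-crafted
shade values below are made up for illustration:

    import java.awt.Color;

    // Old: derived shades drift in hue. Color.darker() scales each RGB
    // component by 0.7, so Color.YELLOW.darker() gives (178, 178, 0),
    // which reads as olive/brown rather than dark yellow.
    static final Color DERIVED = Color.YELLOW.darker();

    // New: hand-crafted shades that stay recognizably 'yellow'.
    static final Color[] YELLOW_SHADES = {
        new Color(255, 255, 0),     // base
        new Color(255, 240, 102),   // lighter, still yellow
        new Color(230, 200, 0),     // darker, still yellow
    };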
The drop-down for plot types should only contain plot types that can
be combined. The reason is that we can only draw images with two
x/y-axes. Therefore a combination of types that would need three or
more axes is not supported.
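A sketch of the filter, under the assumption that each plot type
declares which y-axes it needs (all names are illustrative):

    import java.util.*;
    import java.util.stream.*;

    // Offer only plot types whose required axes, combined with the
    // axes already used by the current selection, still fit on the
    // two available y-axes.
    static List<String> combinableTypes(Map<String, Set<String>> axesByType,
                                        Set<String> axesInUse) {
        return axesByType.entrySet().stream()
                .filter(e -> {
                    Set<String> union = new HashSet<>(axesInUse);
                    union.addAll(e.getValue());
                    return union.size() <= 2;   // only y1 and y2 exist
                })
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }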
Classpaths in Eclipse are different from classpaths in Gradle because
Gradle's 'implementation' configuration does not expose its
dependencies transitively; dependencies that consumers need to see
must be declared with 'api' instead.
In the previous changeset the code that determined
which axes the plots used was implemented as a
side effect of getting the Gnuplot definition of
an axis.
Changed that to an explicit update call with simpler
logic.
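Roughly, the new shape looks like this (Plot, plots, and usedAxes are
illustrative names, not the actual code):

    // Axis usage is now recorded by an explicit call when a plot is
    // added, instead of as a side effect of generating its Gnuplot
    // definition.
    void addPlot(Plot plot) {
        plots.add(plot);
        updateUsedAxes(plot);   // explicit and easy to follow
    }

    private void updateUsedAxes(Plot plot) {
        usedAxes.addAll(plot.axes());
    }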
Gnuplot does not handle long x-axis tick labels very well.
It should know how wide the labels are and adapt
the tick increment accordingly, but it doesn't.
Fixed by explicitly defining the increment for x-axis
labels.
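A sketch of the sizing, assuming a time x-axis where Gnuplot's
standard 'set xtics <increment>' command takes the increment in
seconds; the width estimation is our own logic:

    // Choose a tick increment so that labels of the estimated width
    // cannot overlap, then emit the 'set xtics <incr>' command.
    static String xticsCommand(long xRangeSeconds, int plotWidthPx,
                               int estimatedLabelWidthPx) {
        int maxTicks = Math.max(1, plotWidthPx / estimatedLabelWidthPx);
        long increment = Math.max(1, xRangeSeconds / maxTicks);
        return "set xtics " + increment;
    }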
We generate CSV files with comma as the separator.
When we write times with milliseconds we use
floating-point numbers. Depending on the locale
those floating-point numbers may be written with
a comma instead of a point. If that happens the
plots are messed up.
Fixed by enforcing the locale when formatting floats.
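The fix boils down to formatting with a fixed locale, e.g.:

    import java.util.Locale;

    // Locale.ROOT always uses '.' as the decimal separator, so a CSV
    // field can never be split in two by a locale that uses ','.
    static String formatSeconds(long millis) {
        return String.format(Locale.ROOT, "%.3f", millis / 1000.0);
    }

formatSeconds(1234) yields "1.234" on every machine, whereas the
locale-dependent String.format("%.3f", 1.234) yields "1,234" under
e.g. a German default locale.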
This makes it easier to use the mouse wheel
to zoom in. Without it you could zoom into
a region that had no data and then had to
use the date picker to change the date.
Rendering plots with millions of values is expensive. Before this fix
we wrote all values into CSV files. The CSV files were then read by
Gnuplot, which did the rendering. But an image with n×m pixels can
only show n·m different values. In most realistic scenarios many
values will be drawn onto the same pixel. So we were wasting time by
first generating the CSV for far too many values and then by parsing
that CSV again.
Fixed by using a sparse 2D array to de-duplicate values before
they get written to the CSV. The additional time spent de-duplicating
is often smaller than the time saved when writing the CSV, so the
total CSV writing is about as fast as before (sometimes a little
faster, sometimes a little slower). But the time Gnuplot needs for
rendering is drastically reduced. The factor depends on the data, of
course; we have seen a factor of 50 for realistic examples, turning
a 15s job into a 300ms one.
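A sketch of the idea; the real change uses a sparse 2D array, but a
dense boolean[][] shows the principle, and the field names (widthPx,
tMin, ...) are illustrative:

    // Map each sample to the pixel it would be drawn on and keep only
    // the first sample per pixel; later samples would land on the
    // same pixel anyway.
    final boolean[][] covered = new boolean[widthPx][heightPx];

    boolean shouldWrite(long time, long value) {
        int px = (int) ((time - tMin) * (widthPx - 1) / (tMax - tMin));
        int py = (int) ((value - vMin) * (heightPx - 1) / (vMax - vMin));
        if (covered[px][py]) {
            return false;   // this pixel is already covered
        }
        covered[px][py] = true;
        return true;        // first sample on this pixel: write it
    }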
Before: To compute the cumulative distribution we added every duration
to a LongList. This requires O(n) memory, where n is the number of
values.
Now: We store each duration together with its number of occurrences
in a LongLongHashMap. This has the potential to reduce the memory
requirements when durations occur multiple times, and there are a lot
of durations of 0, 1, or 2 milliseconds. In the worst case every
duration is different; then the memory usage doubles with this
solution.
Future: We are currently storing durations with millisecond precision.
We don't have to do that: we cannot draw 100 million different values
on the y-axis of an image that is only 1000px tall.
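A sketch of the counting approach with Eclipse Collections; the field
and method names are illustrative, the LongLongHashMap API is real:

    import java.io.PrintWriter;
    import java.util.Locale;
    import org.eclipse.collections.impl.map.mutable.primitive.LongLongHashMap;

    final LongLongHashMap countByDuration = new LongLongHashMap();

    void add(long durationMillis) {
        // One map entry per distinct duration instead of one list
        // element per value.
        countByDuration.addToValue(durationMillis, 1L);
    }

    void writeCumulativeDistribution(PrintWriter out) {
        long total = countByDuration.values().sum();
        long running = 0;
        // Walk the distinct durations in ascending order and
        // accumulate their counts.
        for (long duration : countByDuration.keySet().toSortedArray()) {
            running += countByDuration.get(duration);
            out.printf(Locale.ROOT, "%d,%.6f%n", duration,
                       (double) running / total);
        }
    }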