d895eba47c969c98a55c05633ffa756917e7f8a5
Rendering plots with millions of values is expensive. Before this fix we wrote all values into CSV files, which Gnuplot then read and rendered. But an image with n×m pixels can show at most n·m distinct points, so in most realistic scenarios many values end up drawn onto the same pixel. We were wasting time first by generating a CSV with far too many values and then by parsing that CSV again. Fixed by using a sparse 2D array to de-duplicate values per pixel before they get written to the CSV. The additional time spent de-duplicating is usually offset by the time saved writing the CSV, so CSV generation is about as fast as before (sometimes a little faster, sometimes a little slower). But the time Gnuplot needs for rendering drops drastically. The factor depends on the data, of course; we have seen a factor of 50 for realistic examples, turning a 15 s job into 300 ms.
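A minimal sketch of the de-duplication idea, not the project's actual code: the class name `PixelDedupCsvWriter`, the method signature, and the plot-bounds parameters are all assumptions. It maps each data point to its target pixel, keeps a sparse occupancy set keyed per pixel, and writes only the first value that lands on each pixel.

```java
import java.io.PrintWriter;
import java.util.HashSet;
import java.util.Set;

/** Hypothetical illustration of per-pixel de-duplication before
 *  writing the CSV that Gnuplot renders. */
public final class PixelDedupCsvWriter {

    public static void writeCsv(double[] xs, double[] ys,
                                double xMin, double xMax,
                                double yMin, double yMax,
                                int widthPx, int heightPx,
                                PrintWriter out) {
        // Sparse 2D occupancy: one long key per occupied pixel,
        // instead of a dense widthPx x heightPx array.
        Set<Long> seenPixels = new HashSet<>();
        for (int i = 0; i < xs.length; i++) {
            // Map the data coordinates onto the pixel grid
            // (assumes xMax > xMin and yMax > yMin).
            int px = (int) ((xs[i] - xMin) / (xMax - xMin) * (widthPx - 1));
            int py = (int) ((ys[i] - yMin) / (yMax - yMin) * (heightPx - 1));
            long key = ((long) px << 32) | (py & 0xFFFFFFFFL);
            // Only the first value hitting a pixel is written; later
            // values would be drawn onto the same pixel anyway.
            if (seenPixels.add(key)) {
                out.println(xs[i] + "," + ys[i]);
            }
        }
    }
}
```

With this scheme the CSV can never contain more rows than the image has pixels, which is what bounds Gnuplot's parsing and rendering work.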