The last memory-usage improvement introduced a performance
regression: ingestion performance dropped by 50%-80%, because
the Tags were created inefficiently for every inserted entry.
For each entry we executed two calls to System.nanoTime().
The resulting numbers are not very reliable, and calling nanoTime
that often (160k-300k times per second) is quite expensive.
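As an illustration only (the real entry and tag classes differ, and the actual fix may look different), a hedged sketch of why the per-entry timing hurts: instead of bracketing every insert with two System.nanoTime() calls, the time can be measured once per batch.

```java
import java.util.List;

// Illustrative only: String stands in for the real entry type.
class IngestTimingSketch {

    // Before: two System.nanoTime() calls for every single entry,
    // i.e. 160k-300k calls per second at our ingestion rate.
    static void ingestPerEntry(List<String> entries) {
        for (String entry : entries) {
            long start = System.nanoTime();
            insert(entry);
            long elapsed = System.nanoTime() - start; // not very reliable at this granularity
        }
    }

    // One possible alternative: measure once around the whole batch.
    static void ingestPerBatch(List<String> entries) {
        long start = System.nanoTime();
        for (String entry : entries) {
            insert(entry);
        }
        long avgNanosPerEntry = (System.nanoTime() - start) / Math.max(1, entries.size());
    }

    private static void insert(String entry) {
        // placeholder for tag creation and insertion
    }
}
```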
Now we can find the writer much faster, because we no longer have to
execute a query for documents that match the tags. We can simply look
the documents up in the map.
Speedup: 2-4 ms -> 0.002-0.01 ms
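A minimal sketch of the lookup, assuming a hypothetical registry keyed by the tag map; the real writer and tag types differ:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: writers are kept in a map keyed by their tags,
// so looking one up no longer requires querying the document store.
class WriterLookupSketch<W> {

    private final Map<Map<String, String>, W> writersByTags = new ConcurrentHashMap<>();

    void register(Map<String, String> tags, W writer) {
        writersByTags.put(tags, writer);
    }

    // Plain map lookup instead of a query for matching documents.
    W findWriter(Map<String, String> tags) {
        return writersByTags.get(tags);
    }
}
```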
Plotting can take a long time and use a lot of resources.
Multiple plot requests can cause the machine to run out of memory (OOM).
We now allow plots for up to 500k files again. The limit mainly
prevents unintentionally plotting everything.
Added buttons to shift left/right, so that you do not have to
edit the date offset.
I decided to replace icons.css with font-awesome.min.css,
because I am too lazy to add the CSS for the icons every
time I use a new icon.
1. Log the exception in PdbFileIterator with a logger instead
of just printing it to stderr.
2. Increase the log level for exceptions when inserting entries.
3. Log the exception when creating an entry fails in TcpIngestor.
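A small sketch of the first change, assuming an SLF4J logger; the class and method names here are illustrative, not the actual PdbFileIterator code:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class PdbFileIteratorSketch {

    private static final Logger LOGGER = LoggerFactory.getLogger(PdbFileIteratorSketch.class);

    Object next() {
        try {
            return parseNextEntry();
        } catch (Exception e) {
            // Before: e.printStackTrace() went straight to stderr,
            // bypassing log levels and log configuration.
            LOGGER.warn("Could not read the next entry", e);
            return null;
        }
    }

    private Object parseNextEntry() {
        return new Object(); // placeholder for the real parsing code
    }
}
```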
We are using ANTLR listeners to find out where in the
query the cursor is. Then we generate a list of keys/values
that might fit at that position. With that information we
can generate new queries and sort them by the number
of results they yield.
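A hedged sketch of the listener approach; QueryLexer, QueryParser, QueryBaseListener and the keyValue rule are stand-ins for the generated classes of the real grammar, which is not shown here:

```java
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTreeWalker;

// Hypothetical sketch: walk the parse tree and remember which rule
// contains the cursor, so proposals can be generated for that position.
class CursorLocator extends QueryBaseListener {

    private final int cursorOffset;
    private String ruleAtCursor;

    CursorLocator(int cursorOffset) {
        this.cursorOffset = cursorOffset;
    }

    @Override
    public void enterKeyValue(QueryParser.KeyValueContext ctx) {
        int start = ctx.getStart().getStartIndex();
        int stop = ctx.getStop() == null ? start : ctx.getStop().getStopIndex();
        if (cursorOffset >= start && cursorOffset <= stop) {
            ruleAtCursor = "keyValue"; // the cursor sits inside this key/value pair
        }
    }

    static String locate(String query, int cursorOffset) {
        QueryLexer lexer = new QueryLexer(CharStreams.fromString(query));
        QueryParser parser = new QueryParser(new CommonTokenStream(lexer));
        CursorLocator listener = new CursorLocator(cursorOffset);
        ParseTreeWalker.DEFAULT.walk(listener, parser.query());
        return listener.ruleAtCursor;
    }
}
```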
This is done in preparation for the proposal API.
In order to compute proposals we need to consume the
API of the DataStore, but the code does not need to
be in the DataStore.
Extracting the API allows us to separate these concerns.
LuDB has a few disadvantages.
1. Most notably disk space: H2 wastes a lot of valuable disk space.
For my test data set with 44 million entries it is 14 MB
(sometimes a lot more, depending on H2's internal cleanup). With
data-store it is 15 KB.
Overall I could reduce the disk space from 231 MB to 200 MB (13.4 %
in this example). That is an average of 4.6 bytes per entry.
2. Speed:
a) Liquibase is slow: the first time it takes approx. three seconds.
b) Query and insertion: with data-store we can insert entries
up to 1.6 times faster.
Data-store uses a few tricks to save disk space:
1. We encode the tags into the file names.
2. To keep them short we translate the key and value of each tag into
shorter numbers, for example "foo" -> 12 and "bar" -> 47, so the
tag "foo"/"bar" becomes 12/47.
We then translate these numbers into a numeral system of base 62
(a-zA-Z0-9), so they can be used in file names and are shorter.
That way we only have to store the mapping from string to int
(see the encoding sketch after this list).
3. We store that mapping in a simple tab-separated file.
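A minimal sketch of the base-62 encoding described in point 2; the actual data-store implementation may order the alphabet or handle edge cases differently:

```java
// Encodes the small integer assigned to a tag key or value into a short,
// file-name-safe base-62 string using the alphabet a-zA-Z0-9.
class Base62 {

    private static final String ALPHABET =
            "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

    static String encode(long id) {
        if (id == 0) {
            return String.valueOf(ALPHABET.charAt(0));
        }
        StringBuilder sb = new StringBuilder();
        while (id > 0) {
            sb.append(ALPHABET.charAt((int) (id % 62)));
            id /= 62;
        }
        return sb.reverse().toString();
    }
}
```

With this particular alphabet, 12 encodes to "m" and 47 to "V", so the "foo"/"bar" tag from the example above would appear as "m"/"V" in the file name.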