a01c8b3907
fix flaky test and improve error handling
...
just ignore invalid entries
2017-03-18 10:14:41 +01:00
513c256352
update third-party libraries
2017-03-17 16:23:21 +01:00
3456177291
add date range filter
2017-03-17 11:17:57 +01:00
5aee6f5e4d
use label '<none>' for values that have no value for the groupBy field
2017-02-12 18:56:37 +01:00
562dadb692
group plots by field
2017-02-12 09:59:14 +01:00
b238849d65
use text input for filtering, again
2017-02-12 09:32:46 +01:00
0c9195011a
use log4j in pdb-ui
2017-02-05 11:20:00 +01:00
3722ba02b1
add slf4j logging via log4j 2
2017-02-05 09:53:25 +01:00
175a866c90
update third-party libraries
2017-02-05 08:54:49 +01:00
4f77515bbd
test for keywords db performance
2017-01-07 09:10:42 +01:00
c283568757
group plots by a single field
2016-12-30 18:45:01 +01:00
62437f384f
minor unimportant changes
2016-12-30 13:16:30 +01:00
58bb64c80a
save 12ms when checking if the cached writer can be used
2016-12-29 19:33:45 +01:00
f520f18e13
leverage the cached pdbwriters
...
This increased throughput from 500 to 4000 entries per second.
2016-12-29 19:24:16 +01:00
de241ceb6d
finalize refactoring
2016-12-29 18:27:15 +01:00
68ac1dd631
reuse pdb writers
2016-12-28 08:39:20 +01:00
db0b3d6d24
new file format
...
Store values in sequences of variable length. Instead of using 8 bytes
per entry, we now use between 2 and 20 bytes, while still being able
to store every non-negative long value.
2016-12-27 10:24:56 +01:00
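
The entry above gives the size range (2-20 bytes per entry) but not the exact encoding. Below is a minimal sketch of one possible scheme for storing arbitrary non-negative longs in a variable number of bytes, a standard base-128 varint; this is an assumption for illustration, not the repo's actual format, which may pack more than one field per entry (that would explain the 2-20 byte range).

    import java.io.ByteArrayOutputStream;

    public class VarLong {
        // Encode a non-negative long as a base-128 varint: 7 value bits per byte,
        // continuation bit set on every byte except the last.
        public static byte[] encode(long value) {
            if (value < 0) {
                throw new IllegalArgumentException("only non-negative values are supported");
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            while ((value & ~0x7FL) != 0) {
                out.write((int) ((value & 0x7F) | 0x80));
                value >>>= 7;
            }
            out.write((int) value);
            return out.toByteArray();
        }

        // Decode a varint that starts at 'offset' in 'buf'.
        public static long decode(byte[] buf, int offset) {
            long result = 0;
            int shift = 0;
            while (true) {
                byte b = buf[offset++];
                result |= (long) (b & 0x7F) << shift;
                if ((b & 0x80) == 0) {
                    return result;
                }
                shift += 7;
            }
        }
    }
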
c5f0e8514c
remove debug output
2016-12-23 19:28:11 +01:00
580733d267
only store the tag-specific base folder in the database
...
Before that, we added each file (one per day/tag combination) to the
db.
2016-12-23 19:12:30 +01:00
6969c8ce46
all storage files for the same tags use the same storage folder
...
- added an additional data folder as the first level
2016-12-23 16:35:00 +01:00
85eaee940e
change directory structure
...
- the tags come first, then the date,
e.g. "mykey=myvalue_<uuid>/2016/01/01/<uuid>"
- We do this so that we don't have to tag each file,
  but only the root folder. This should speed up searches.
2016-12-23 15:07:08 +01:00
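
The entries above describe the on-disk layout: the tag folder first, then year/month/day, then a per-file uuid. A rough sketch of how such a path could be composed; the buildPath helper, its parameters, and the freshly generated root uuid are assumptions for illustration, only the "<tags>_<uuid>/2016/01/01/<uuid>" shape comes from the commit message.

    import java.nio.file.Path;
    import java.time.LocalDate;
    import java.util.Map;
    import java.util.TreeMap;
    import java.util.UUID;
    import java.util.stream.Collectors;

    public class StoragePaths {
        // Compose "<key=value,...>_<uuid>/yyyy/MM/dd/<uuid>": tags first, so only
        // the root folder has to be tagged; date folders are nested below it.
        // In the real system the root uuid would presumably be stable per tag
        // combination rather than regenerated on every call.
        public static Path buildPath(Path dataRoot, Map<String, String> tags, LocalDate day) {
            String tagFolder = new TreeMap<>(tags).entrySet().stream()
                    .map(e -> e.getKey() + "=" + e.getValue())
                    .collect(Collectors.joining(","))
                    + "_" + UUID.randomUUID();
            return dataRoot
                    .resolve(tagFolder)
                    .resolve(String.format("%04d", day.getYear()))
                    .resolve(String.format("%02d", day.getMonthValue()))
                    .resolve(String.format("%02d", day.getDayOfMonth()))
                    .resolve(UUID.randomUUID().toString());
        }
    }
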
5efab12063
test which verifies the dates in each file are monotonically increasing
2016-12-23 13:04:05 +01:00
470f3c730d
add UT for testing multiple files for different days
2016-12-23 12:48:26 +01:00
95e34831d3
simple auto-completion for the search box
2016-12-23 10:32:51 +01:00
d1e39513f3
create web application
2016-12-21 17:48:36 +01:00
35054b00b8
check which starts faster: json, ludb, or mapdb
2016-12-17 10:54:54 +01:00
d4c694dea3
group results by a single field
2016-12-14 19:36:38 +01:00
b25060a5d2
add a first, minimal result object
2016-12-14 17:59:04 +01:00
fa4921fcc9
use custom csv writer for performance
2016-12-13 18:41:19 +01:00
876520eb4c
do not create a new ObjectMapper per entry
...
also read values with a MappingIterator.
This made reading 20-30 times faster.
We can now read and index 100k-500k entries per second.
The variance might be due to LuDB slowness.
2016-12-12 18:45:02 +01:00
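
The speedup in the entry above comes from two Jackson changes: building the ObjectMapper once instead of per entry, and pulling values out of the stream with a MappingIterator. A minimal sketch of that pattern; reading entries as generic maps from an InputStream is an assumption, the real entry type is not shown in the log.

    import com.fasterxml.jackson.databind.MappingIterator;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.fasterxml.jackson.databind.ObjectReader;

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Map;

    public class StreamingJsonReader {
        // Build the mapper once; constructing it per entry is expensive.
        private static final ObjectMapper MAPPER = new ObjectMapper();
        private static final ObjectReader READER = MAPPER.readerFor(Map.class);

        // Iterate over a stream of concatenated JSON objects with a single
        // MappingIterator instead of one readValue() call per entry.
        public static void readAll(InputStream in) throws IOException {
            try (MappingIterator<Map<String, Object>> it = READER.readValues(in)) {
                while (it.hasNextValue()) {
                    Map<String, Object> entry = it.nextValue();
                    // ... index the entry ...
                }
            }
        }
    }
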
89fbaf2d06
TcpIngestor that receives a stream of json objects and stores them
2016-12-11 18:40:44 +01:00
e936df6f7e
render plot with a single data series
2016-12-10 18:50:29 +01:00
81b39c5675
small enhancements
2016-12-10 15:36:06 +01:00
4376f8f783
log4j does not guarantee monotonically increasing date values
2016-12-10 15:35:29 +01:00
34ee64fff1
insert entries for different tags in one stream
2016-12-10 14:10:41 +01:00
a409c4c5d0
use java.time for time
2016-12-10 08:16:55 +01:00
256b278428
initial commit
2016-12-04 10:41:37 +01:00