Woke up this morning to find our dev server in a crash loop; the logs indicate that nedb hit a hard limit on how much data it can load.
Running parseInt on 0x3fffffe7 gives 1,073,741,799, which, assuming we are dealing with mostly ASCII characters (one byte each), is about a gigabyte.
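A quick sanity check of that arithmetic in Node (the GiB figure is approximate, since 2^30 bytes is 1,073,741,824):

```javascript
// 0x3fffffe7 is the maximum string length V8 allows on this Node
// version, which fs.readFile hits when nedb loads the whole datastore
// file into a single string.
const limit = parseInt('0x3fffffe7', 16);
console.log(limit); // → 1073741799

// 25 bytes shy of exactly 1 GiB (2^30 bytes).
console.log(2 ** 30 - limit); // → 25
```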
/var/app/current/node_modules/nedb/lib/datastore.js:77
if (err) { throw err; }
^
Error: Cannot create a string longer than 0x3fffffe7 characters
at stringSlice (buffer.js:594:43)
at Buffer.toString (buffer.js:667:10)
at FSReqWrap.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:48:23)
The update to our dev server was pushed before I finished up yesterday; AWS monitoring suggests it was fine until someone made a large request in the morning. I was also collecting all the data with a 15-minute expiry. Maybe enforcing a maximum number of records would prevent this sort of thing from happening?
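A minimal sketch of the max-records idea (all names and the cap value here are hypothetical): nedb has no built-in capped collections, so the policy would have to live in a wrapper that evicts the oldest record before each insert once the cap is reached. Illustrated here with a plain in-memory store rather than the nedb API:

```javascript
// Hypothetical capped store: keeps at most `max` records,
// evicting the oldest on insert. The same eviction policy could
// wrap nedb's insert, keyed on a createdAt timestamp.
function makeCappedStore(max) {
  const records = [];
  return {
    insert(doc) {
      records.push(doc);
      if (records.length > max) records.shift(); // evict oldest
    },
    count() {
      return records.length;
    },
    oldest() {
      return records[0];
    },
  };
}

// Tiny cap for illustration: after 20 inserts, only the last 5 remain.
const store = makeCappedStore(5);
for (let i = 0; i < 20; i++) store.insert({ id: i });
console.log(store.count()); // → 5
console.log(store.oldest().id); // → 15
```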

I've disabled tracing on the server for now as we don't have much use for it until the extension displays event data.