I am using MariaDB 10.11 on Linux.
I have the following table that is supposed to get about 40 inserts per second and about 40 deletes per second.
It contains trade data and is used to generate stock ticker data.
CREATE TABLE `trades` (
`instrument` smallint(5) unsigned NOT NULL DEFAULT 0,
`duration` int(10) unsigned NOT NULL DEFAULT 0,
`create_ts` int(10) unsigned NOT NULL DEFAULT 0,
`amount` decimal(64,30) NOT NULL DEFAULT 0.000000000000000000000000000000,
`price` decimal(64,30) NOT NULL DEFAULT 0.000000000000000000000000000000,
KEY `duration` (`duration`,`create_ts`),
KEY `instrument_2` (`instrument`,`duration`,`price`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
The duration field is in seconds, and represents the ticker duration (for example, 31557600 for a year, 60 for a minute, 3600 for an hour). So each new trade generates as many inserts in this table as there are different durations.
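For illustration, a single trade produces one row per configured duration, along these lines (the instrument, amount, price and timestamp values here are made up; the durations are the ones used in the DELETE below):
INSERT INTO trades (instrument, duration, create_ts, amount, price)
VALUES (1,       60, UNIX_TIMESTAMP(), 2.5, 100.5),
       (1,    86400, UNIX_TIMESTAMP(), 2.5, 100.5),
       (1,  2628288, UNIX_TIMESTAMP(), 2.5, 100.5),
       (1, 31557600, UNIX_TIMESTAMP(), 2.5, 100.5);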
Every second I run a DELETE query to purge rows whose create_ts has fallen outside a given duration's window.
This works well, except that the DELETE query noticeably slows down as the number of rows grows.
The DELETE query is:
DELETE FROM trades WHERE
(duration = 60 AND create_ts < 1718102199) OR
(duration = 86400 AND create_ts < 1718015859) OR
(duration = 2628288 AND create_ts < 1715473971) OR
(duration = 31557600 AND create_ts < 1686544659)
LIMIT 10;
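For clarity, each cutoff above is just the Unix timestamp at the time the query is built minus the corresponding duration, so the statement is roughly equivalent to the following sketch (the real query uses precomputed constants rather than UNIX_TIMESTAMP()):
DELETE FROM trades WHERE
(duration = 60 AND create_ts < UNIX_TIMESTAMP() - 60) OR
(duration = 86400 AND create_ts < UNIX_TIMESTAMP() - 86400) OR
(duration = 2628288 AND create_ts < UNIX_TIMESTAMP() - 2628288) OR
(duration = 31557600 AND create_ts < UNIX_TIMESTAMP() - 31557600)
LIMIT 10;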
With about 30 million rows in the table, that DELETE takes about 0.015 sec, which seems pretty slow; a SELECT query with the same WHERE clause takes only about 0.001 sec.
In fact, even when only one duration is involved (let's call it the SIMPLE query), the results are the same:
DELETE FROM trades WHERE duration = 31557600 AND create_ts < 1687170076 LIMIT 10;
Running time: 0.015 sec
SELECT instrument FROM trades WHERE duration = 31557600 AND create_ts < 1687170076 LIMIT 10;
Running time: 0.001 sec
Here's the EXPLAIN on the SIMPLE SELECT query:
+------+-------------+-------+-------+--------------------+----------+---------+------+--------+-----------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+-------+--------------------+----------+---------+------+--------+-----------------------+
| 1 | SIMPLE | tmp | range | duration,create_ts | duration | 8 | NULL | 502248 | Using index condition |
+------+-------------+-------+-------+--------------------+----------+---------+------+--------+-----------------------+
And here's the EXPLAIN on the SIMPLE DELETE query:
+------+-------------+-------+-------+--------------------+----------+---------+------+--------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+-------+--------------------+----------+---------+------+--------+-------------+
| 1 | SIMPLE | tmp | range | duration,create_ts | duration | 8 | NULL | 502248 | Using where |
+------+-------------+-------+-------+--------------------+----------+---------+------+--------+-------------+
How can I make the DELETE query more performant? Note that the 'rows' estimate in the EXPLAIN doesn't seem to matter much: even if I lower the create_ts cutoff so that EXPLAIN estimates only about 1,000 rows, the query still takes about 0.008 sec, which seems too high for deleting just 10 rows of simple data.
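For concreteness, a variant along these lines (the cutoff value here is hypothetical, chosen only so that EXPLAIN estimates roughly 1,000 rows) still takes about 0.008 sec:
-- hypothetical lower cutoff, for illustration only
DELETE FROM trades WHERE duration = 31557600 AND create_ts < 1686545000 LIMIT 10;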
Note: I do need the instrument_2 index in there for other purposes, so dropping it isn't an option.
Note: the figures above are for a table containing about 30 million rows. The end goal is to hold about 400 million rows in the table at any given time.
Thanks in advance for your suggestions.
PS: I do realize that LIMIT 10 doesn't quite square with my earlier statement that the table gets about 40 deletes per second; I'm just using 10 as a round number while working on this.
PPS: Running OPTIMIZE TABLE didn't yield any noticeable improvement.