I am developing an application that records the values of several probes and I am wondering about storing these values in our DBMS.
The traditional way would be to create a table like this:
CREATE TABLE `probe_temperature` (
`INSTANT` timestamp,
`TEMPERATURE` float,
`PROBE_ID` bigint,
CONSTRAINT `FK_probe_temperature_PROBE_ID` FOREIGN KEY (`PROBE_ID`) REFERENCES `probe` (`ID`)
);
Stored this way, the table will quickly grow to several billion records for around a hundred probes.
99.9% of our queries will be for a dataset linked to a single probe.
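(For context, in the single-table design I would assume a composite index so that per-probe queries can use a single index range scan instead of filtering billions of rows — the index name below is just illustrative:)

```sql
-- Hypothetical composite index for the single-table design:
-- leading on PROBE_ID lets a per-probe, time-ordered query
-- be answered with one contiguous index range scan.
CREATE INDEX `IX_probe_temperature_PROBE_INSTANT`
  ON `probe_temperature` (`PROBE_ID`, `INSTANT`);
```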
With this in mind, I am wondering whether it makes sense to create one table per probe. Something like:
CREATE TABLE `probe_temperature_xxxx` /* where xxxx is the id of the probe */(
`INSTANT` timestamp,
`TEMPERATURE` float
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
I think this would have several advantages:
- Save a few gigabytes by dropping the probe id column.
- Improve performance (is this true?)

However, this increases code complexity and maintenance effort.
I have also read articles about table partitioning, which would supposedly give the same performance benefits but not the storage savings (is that true?).
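For reference, the kind of partitioning I read about looks something like the sketch below (HASH on the probe id is just one option; RANGE on `INSTANT` is another common choice). Note that MySQL requires every unique key on a partitioned table to include the partitioning column, and partitioned InnoDB tables do not support foreign keys, so the `probe` FK from my first schema would have to go:

```sql
-- Sketch: one logical table, physically split by probe id.
-- The primary key must include PROBE_ID because it is the
-- partitioning column; the FK to `probe` is dropped since
-- MySQL partitioned tables do not support foreign keys.
CREATE TABLE `probe_temperature` (
  `INSTANT` timestamp NOT NULL,
  `TEMPERATURE` float,
  `PROBE_ID` bigint NOT NULL,
  PRIMARY KEY (`PROBE_ID`, `INSTANT`)
) ENGINE=InnoDB
PARTITION BY HASH (`PROBE_ID`)
PARTITIONS 100;
```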
Is the performance problem a real risk?
If so, is creating tables dynamically (one per probe) a good idea?