I need help optimizing a query on a large table. When I test it manually it runs fast, but in my slow query log it is logged as taking 10s+ for some reason.
SELECT q.id, q.village_id, q.to_player_id, q.to_village_id,
       q.proc_type,
       TIMESTAMPDIFF(SECOND, NOW(), q.end_date) AS remainingTimeInSeconds
FROM table q
WHERE TIMESTAMPDIFF(SECOND, NOW(), (q.end_date - INTERVAL (q.execution_time*(q.threads-1)) SECOND)) <= 0
ORDER BY q.end_date ASC
I expect to output the rows whose time has ended, meaning the time left must be 0 or less, in ASC order.
It orders by the end time itself, because when we have many attacks we must arrange them according to which attack is supposed to arrive first, then process them one by one.
TIMESTAMPDIFF(SECOND, NOW(), q.end_date), which calculates the remaining seconds, is straightforward and easy to understand. However, the search condition TIMESTAMPDIFF(SECOND, NOW(), (q.end_date - INTERVAL (q.execution_time*(q.threads-1)) SECOND)) <= 0 is quite confusing without elaboration for those of us who are not sport enthusiasts. And when you say "I expect to output the results whose time has ended, meaning time left must be 0 or less", couldn't it be done in a simpler manner than the one in the WHERE clause? Instead of TIMESTAMPDIFF(SECOND, NOW(), q.end_date), why can't you use q.end_date <= NOW() as your search condition? Without clarification, it's all guesswork. Please elaborate.

A condition of the form <calculation involving one or more columns> <= 0 has to be evaluated for every single row and can't be shortcut with an index. That means the whole table is scanned every single time. Whereas <column> <= <calculation> could use a range seek to eliminate the irrelevant rows. In your case the calculation will always use a column from the table, and so will always scan the whole table. You should create a computed column with the timestamp at which the row 'expires' (q.end_date - INTERVAL (q.execution_time*(q.threads-1)) SECOND), index it, then use WHERE x <= NOW().
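As a sketch of that advice, assuming MySQL 5.7+ and a table actually named attacks (the question uses "table" as a placeholder), a stored generated column holds the precomputed expiry timestamp and can be indexed:

```sql
-- Precompute the expiry moment once per row instead of per query.
-- The expression uses only other columns, so it is allowed as a
-- generated column (NOW() would not be).
ALTER TABLE attacks
  ADD COLUMN expires_at DATETIME
    AS (end_date - INTERVAL (execution_time * (threads - 1)) SECOND) STORED,
  ADD INDEX idx_expires_at (expires_at);

-- The WHERE clause is now sargable: the optimizer can do a range
-- seek on idx_expires_at rather than evaluating TIMESTAMPDIFF on
-- every row.
SELECT q.id, q.village_id, q.to_player_id, q.to_village_id,
       q.proc_type,
       TIMESTAMPDIFF(SECOND, NOW(), q.end_date) AS remainingTimeInSeconds
FROM attacks q
WHERE q.expires_at <= NOW()
ORDER BY q.end_date ASC;
```

On MySQL versions before 5.7 the same effect can be had with a plain DATETIME column kept in sync by the application or a trigger; either way, EXPLAIN should then show a range scan on the new index instead of a full table scan.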