In databases such as PostgreSQL, an exact count like select count(*) from table where condition requires a full scan: PostgreSQL has to read either the entire table or the entirety of an index that covers all rows in the table.
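For illustration, a hypothetical query with plan inspection (the table and condition names are made up, not from any real schema) shows where the cost comes from:

```sql
-- Hypothetical example: counting shipped orders forces PostgreSQL to
-- visit every qualifying row, either in the heap or in a covering index.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM orders WHERE status = 'shipped';
-- The plan shows a Seq Scan (or an Index Only Scan over all matching
-- entries); the runtime grows with the number of rows counted.
```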
What theoretical data structures or databases (analytics databases?) make this fast? I understand that this is not possible with a conventional index, since an index only helps to identify individual relevant rows quickly, whereas here we are computing an aggregate and are not interested in any particular row. (More generally, this raises the question of whether data structures that play the role of an index exist for aggregate functions.)
Scanning an index instead of the table helps, but it's still not as effective as other implementations. We use triggers to maintain the statistics ourselves, which works excellently and is very fast.
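A minimal sketch of that trigger approach, assuming a hypothetical orders table and a single-row counter table (the names and schema are illustrative, not the actual setup):

```sql
-- Single-row table holding the maintained count.
CREATE TABLE orders_count (n bigint NOT NULL);
INSERT INTO orders_count VALUES (0);

-- Trigger function: bump or decrement the counter on every row change.
-- This maintains a total row count; a filtered count (a WHERE condition)
-- would need matching logic here.
CREATE FUNCTION orders_count_trg() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE orders_count SET n = n + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE orders_count SET n = n - 1;
    END IF;
    RETURN NULL;  -- AFTER trigger: the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_count_maintain
AFTER INSERT OR DELETE ON orders
FOR EACH ROW EXECUTE FUNCTION orders_count_trg();

-- The exact count is now a single-row lookup instead of a scan:
SELECT n FROM orders_count;
```

Note that a single counter row serializes concurrent writers; spreading the count over several rows and summing them is a common way to reduce that contention.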