…c-session-replays
…ck-auth/stack-auth into clickhouse-sync-extra-tables
…invites' into clickhouse-sync-email-outbox-table

# Conflicts:
#   apps/backend/scripts/clickhouse-migrations.ts
#   apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts
#   apps/backend/src/lib/external-db-sync.ts
#   apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts
#   apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-utils.ts
…le' into clickhouse-sync-session-replays
📝 Walkthrough

This PR extends the external database synchronization system to track sequence IDs across ten new entity types (Team, TeamMember, direct permissions, email outbox, session replays, notification preferences, refresh tokens, OAuth accounts, and verification codes). It adds corresponding ClickHouse analytics tables with row-level isolation policies, updates CRUD handlers to record sync deletions, and includes comprehensive e2e test coverage.

Changes
Sequence Diagram(s)

sequenceDiagram
actor Client
participant Backend as Backend API
participant Prisma as Prisma Client
participant DeletedRow as DeletedRow Table
participant Sequencer as Sync Sequencer
participant ClickHouse as ClickHouse
participant ExtDB as External DB (Postgres)
Client->>Backend: Create/Update Entity (e.g., Team)
activate Backend
Backend->>Prisma: upsert/create with shouldUpdateSequenceId=true
Prisma->>Prisma: Execute mutation
Prisma-->>Backend: Return row
deactivate Backend
alt On Deletion
Client->>Backend: Delete Entity
activate Backend
Backend->>DeletedRow: recordExternalDbSyncDeletion(...)
DeletedRow->>DeletedRow: Insert deletion event
Backend->>Prisma: delete entity
Prisma-->>Backend: Confirm deletion
deactivate Backend
end
Sequencer->>Prisma: backfillSequenceIds (find shouldUpdateSequenceId=true rows)
Prisma-->>Sequencer: Return batch of rows
Sequencer->>Prisma: Update: set sequenceId, shouldUpdateSequenceId=false
Sequencer->>DeletedRow: Backfill sequence IDs for deleted rows
Sequencer->>Sequencer: Cascade updates (related entities)
Sequencer->>ClickHouse: POST telemetry spans
ClickHouse-->>Sequencer: Ack
Sequencer->>Sequencer: enqueueExternalDbSyncBatch(tenancy, mapping)
rect rgba(100, 150, 200, 0.5)
note over Sequencer, ExtDB: External DB Sync Batch Process
Sequencer->>ClickHouse: Fetch rows where sequence_id >= last_min AND sequence_id <= last_max
ClickHouse-->>Sequencer: Return synced rows (+ DeletedRow union)
Sequencer->>ExtDB: INSERT/UPDATE via CTE upsert or DELETE
ExtDB-->>Sequencer: Confirm upsert/delete
Sequencer->>ExtDB: UPDATE _stack_sync_metadata (advance cursor)
ExtDB-->>Sequencer: Confirm metadata update
end
Sequencer->>Sequencer: Update sync status (telemetry)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks: ❌ 3 failed checks (2 warnings, 1 inconclusive)
Greptile Summary

This PR improves ClickHouse sync speed by removing the per-invocation ensureClickhouseSchema DDL call from the sync path (see the sequence diagram below).

Key changes:
Confidence Score: 4/5
Important Files Changed
Sequence Diagram

sequenceDiagram
participant Sync as syncClickhouseMapping
participant CH as ClickHouse
Note over Sync,CH: Before this PR (per mapping, per invocation)
Sync->>CH: ensureClickhouseSchema (CREATE TABLE IF NOT EXISTS — DDL round trip)
Sync->>CH: getClickhouseLastSyncedSequenceId
loop Batch loop
Sync->>CH: pushRowsToClickhouse (INSERT)
Sync->>CH: updateClickhouseSyncMetadata
end
Note over Sync,CH: After this PR (DDL call removed)
Sync->>CH: getClickhouseLastSyncedSequenceId
loop Batch loop
Sync->>CH: pushRowsToClickhouse (INSERT)
Sync->>CH: updateClickhouseSyncMetadata
end
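The saved round trip can be sketched as a run-once guard. runDdl and readCursor are hypothetical stand-ins for the ClickHouse calls in the diagram, and note the PR itself removes the DDL from the hot path entirely (e.g. into migrations) rather than memoizing it; this sketch only illustrates the cost being avoided:

```typescript
// Run the schema DDL at most once, then let every later sync start directly
// at the cursor read. `runDdl` and `readCursor` are hypothetical stand-ins.
function makeSyncLoop(runDdl: () => void, readCursor: () => number): () => number {
  let schemaEnsured = false;
  return () => {
    if (!schemaEnsured) {
      runDdl(); // the CREATE TABLE IF NOT EXISTS round trip, paid once
      schemaEnsured = true;
    }
    return readCursor(); // the hot path: fetch the last synced sequence id
  };
}
```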
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
apps/backend/src/app/api/latest/users/crud.tsx (1)
1193-1203: ⚠️ Potential issue | 🟠 Major — Wrap the refresh-token sync write and deleteMany() together.

These tombstones are recorded before the delete and outside any transaction. If projectUserRefreshToken.deleteMany() fails, external-db-sync will emit deletions for tokens that are still present; please execute both calls inside one globalPrismaClient.$transaction(...).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/app/api/latest/users/crud.tsx` around lines 1193 - 1203, Wrap the tombstone write and the actual deletion in a single transaction so the external-db-sync deletions are only emitted if the DB delete succeeds: move the calls to recordExternalDbSyncRefreshTokenDeletionsForUser(...) and globalPrismaClient.projectUserRefreshToken.deleteMany(...) into a single globalPrismaClient.$transaction(...) and run them there (ensure you pass both operations as transactional actions so they either both commit or both roll back). Make sure to preserve the same parameters (auth.tenancy.id and params.user_id) and return/await the transaction result as appropriate.

apps/backend/src/app/api/latest/auth/sessions/current/route.tsx (1)
37-48: ⚠️ Potential issue | 🔴 Critical — Don't record the refresh-token tombstone before the delete outside a transaction.

recordExternalDbSyncDeletion() throws when it inserts 0 DeletedRow rows for ProjectUserRefreshToken, so an already-missing token now fails here before deleteMany() can translate it to KnownErrors.RefreshTokenNotFoundOrExpired(). It also commits the tombstone before the actual delete, so a later delete failure leaves external-db-sync ahead of the source DB. Please run both operations in one globalPrismaClient transaction and preserve the 0-row mapping.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/app/api/latest/auth/sessions/current/route.tsx` around lines 37 - 48, Run the delete and the tombstone insertion inside a single Prisma transaction so the tombstone cannot be committed without the source delete: wrap globalPrismaClient.projectUserRefreshToken.deleteMany(...) and the call to recordExternalDbSyncDeletion(...) in a single globalPrismaClient.$transaction(...) and ensure you pass the delete result (the deleted count) or an explicit expected-deleted-count to recordExternalDbSyncDeletion so it can insert the 0-row mapping instead of throwing; if needed, update recordExternalDbSyncDeletion to accept an expectedCount/allowZero flag and avoid throwing when expectedCount === 0 while still inserting the corresponding DeletedRow record for tenancy.id and refreshTokenId.

apps/backend/src/lib/permissions.tsx (2)
433-460: ⚠️ Potential issue | 🟠 Major — Record team-permission tombstones before deleteMany.

The project-scope branch now records external deletions, but the team-scope branch still bulk-deletes TeamMemberDirectPermission rows directly. Deleting a team permission definition will therefore leave stale team_permissions rows in external DBs.

Possible fix:

  if (options.scope === "team") {
+   const teamPermissions = await sourceOfTruthTx.teamMemberDirectPermission.findMany({
+     where: {
+       tenancyId: options.tenancy.id,
+       permissionId: options.permissionId,
+     },
+     select: { id: true },
+   });
+   for (const perm of teamPermissions) {
+     await recordExternalDbSyncDeletion(sourceOfTruthTx, {
+       tableName: "TeamMemberDirectPermission",
+       tenancyId: options.tenancy.id,
+       permissionDbId: perm.id,
+     });
+   }
    await sourceOfTruthTx.teamMemberDirectPermission.deleteMany({
      where: {
        tenancyId: options.tenancy.id,
        permissionId: options.permissionId,
      },

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/permissions.tsx` around lines 433 - 460, The team-scope branch currently calls sourceOfTruthTx.teamMemberDirectPermission.deleteMany(...) directly and thus skips recording tombstones; mirror the project-scope logic by first fetching matching teamMemberDirectPermission rows (use sourceOfTruthTx.teamMemberDirectPermission.findMany with where { tenancyId: options.tenancy.id, permissionId: options.permissionId } and select { id: true }), loop over each result and call recordExternalDbSyncDeletion(sourceOfTruthTx, { tableName: "TeamMemberDirectPermission", tenancyId: options.tenancy.id, permissionDbId: perm.id }), then call the existing teamMemberDirectPermission.deleteMany(...) to remove them.
330-348: ⚠️ Potential issue | 🟠 Major — Mark permission renames for resync.

These updateMany calls change permissionId in place but leave shouldUpdateSequenceId untouched, so synced team_permissions/project_permissions rows can keep the old permission id until some unrelated write touches them.

Possible fix:

  await sourceOfTruthTx.teamMemberDirectPermission.updateMany({
    where: {
      tenancyId: options.tenancy.id,
      permissionId: options.oldId,
    },
-   data: {
-     permissionId: newId,
-   },
+   data: withExternalDbSyncUpdate({
+     permissionId: newId,
+   }),
  });
  await sourceOfTruthTx.projectUserDirectPermission.updateMany({
    where: {
      tenancyId: options.tenancy.id,
      permissionId: options.oldId,
    },
-   data: {
-     permissionId: newId,
-   },
+   data: withExternalDbSyncUpdate({
+     permissionId: newId,
+   }),
  });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/permissions.tsx` around lines 330 - 348, The updateMany calls on sourceOfTruthTx.teamMemberDirectPermission.updateMany and sourceOfTruthTx.projectUserDirectPermission.updateMany only change permissionId and must also mark those rows for resync; update the data payload in both calls to set shouldUpdateSequenceId (e.g., to null or the sentinel value your sync uses) alongside permissionId: newId so the downstream sync will re-evaluate and propagate the renamed permission immediately.

apps/backend/src/app/api/latest/internal/external-db-sync/status/route.ts (1)
236-283: ⚠️ Potential issue | 🟠 Major — Include contact_channels in mapping_status.

Line 369 still computes contactChannelStats, but this helper only emits users and email_outboxes. As a result, sync_engine.mappings and external_databases[].mapping_status will never report backlog for contact_channels, even though the endpoint already exposes sequencer.contact_channels.

Also applies to: 368-378
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/app/api/latest/internal/external-db-sync/status/route.ts` around lines 236 - 283, buildMappingInternalStats currently only sets mappings for "users" and "email_outboxes", so "contact_channels" never appears in mapping_status; add a new mapping entry for contact_channels using the existing contactChannelStats (and corresponding deletedRowsByTable lookup for ContactChannel similar to deletedProjectUserStats) and compute internal_min_sequence_id/internal_max_sequence_id/internal_pending_count with minBigIntString, maxBigIntString and addBigIntStrings just like you do for "users", then ensure mappingInternalStats.set("contact_channels", { mapping_id: "contact_channels", internal_min_sequence_id: ..., internal_max_sequence_id: ..., internal_pending_count: ... }) so contact_channels shows up in mappings and mappingStatuses.
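The helpers named in this prompt (minBigIntString, maxBigIntString, addBigIntStrings) suggest sequence ids travel as decimal strings. A plausible sketch of such helpers, assuming that representation (these are not necessarily the repo's actual implementations):

```typescript
// Sequence ids as decimal strings (BigInt-safe across JSON boundaries).
function minBigIntString(a: string | null, b: string | null): string | null {
  if (a === null) return b;
  if (b === null) return a;
  return BigInt(a) < BigInt(b) ? a : b; // numeric, not lexicographic, compare
}

function maxBigIntString(a: string | null, b: string | null): string | null {
  if (a === null) return b;
  if (b === null) return a;
  return BigInt(a) > BigInt(b) ? a : b;
}

function addBigIntStrings(a: string, b: string): string {
  return (BigInt(a) + BigInt(b)).toString();
}
```

The BigInt round trip matters because a plain string compare would rank "9" above "10", and Number would lose precision past 2^53.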
🧹 Nitpick comments (1)
apps/backend/scripts/run-cron-jobs.ts (1)
33-33: Make the 1s loop test-only or configurable.

This now drives both internal cron endpoints at a fixed 1 Hz in every environment. That's a large steady-state traffic increase over the old jittered delay, and removing the jitter also makes replica bursts line up again. An env-controlled fast path would preserve the test-speed win without baking the hot polling cadence into normal deployments.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/scripts/run-cron-jobs.ts` at line 33, The loop currently awaiting wait(1000) forces a 1s fixed poll in all environments; change it to use a configurable interval and restore non-deterministic/backoff behavior for production. Replace the hardcoded wait(1000) in run-cron-jobs.ts with a value derived from an env var (e.g., CRON_POLL_MS or a boolean TEST_FAST_CRON) so tests can opt into 1s polling while normal runs use a larger, jittered/default interval; or compute the delay when the loop starts (use wait(getCronDelayMs())) where getCronDelayMs() returns test-configured value if process.env.TEST_FAST_CRON is set, otherwise returns a randomized/backoff delay to avoid aligned bursts. Ensure references to wait(...) and the loop calling it are updated and documented in env usage.
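The suggested env-controlled fast path could look roughly like this; CRON_POLL_MS, TEST_FAST_CRON, and getCronDelayMs are the hypothetical names from the comment above, and the default base interval is an assumption:

```typescript
// Tests opt into a fast fixed poll; everything else gets a jittered delay so
// replicas don't align their cron bursts.
function getCronDelayMs(env: Record<string, string | undefined>): number {
  if (env.TEST_FAST_CRON) return 1000; // test-only fast path
  const base = Number(env.CRON_POLL_MS ?? 30_000); // assumed 30s default base
  return base + Math.random() * base * 0.5; // up to +50% jitter
}
```

The loop would then call wait(getCronDelayMs(process.env)) instead of wait(1000).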
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/backend/prisma/migrations/20260316000000_add_team_team_member_sequence_columns/migration.sql`:
- Around line 5-12: This migration creates several indexes for Team
(Team_sequenceId_key, Team_tenancyId_sequenceId_idx,
Team_shouldUpdateSequenceId_idx) (and similarly for TeamMember at lines 18-25)
inside the Prisma transaction which will block writes on large tables; split
these index creations into one or more follow-up migrations that run OUTSIDE the
migration transaction and build the indexes CONCURRENTLY (use CREATE INDEX
CONCURRENTLY / CREATE UNIQUE INDEX CONCURRENTLY where appropriate), and add the
CONDITIONALLY_REPEAT_MIGRATION_SENTINEL pattern and any temporary marker columns
needed to perform the change safely on >1,000,000-row tables so the original
transactional migration no longer includes these plain index builds.
In
`@apps/backend/prisma/migrations/20260316000001_add_email_outbox_sequence_columns/migration.sql`:
- Around line 5-12: The three indexes on EmailOutbox
("EmailOutbox_sequenceId_key", "EmailOutbox_tenancyId_sequenceId_idx",
"EmailOutbox_shouldUpdateSequenceId_idx") must not be created inside this
transactional migration; instead remove these CREATE INDEX statements from this
migration and add separate non-transactional/concurrent migration files
following the repo's large-table pattern (use CREATE INDEX CONCURRENTLY and the
project’s concurrent/sentinel migration flow) so the index builds run outside a
transaction and as their own migration steps for the EmailOutbox table.
In
`@apps/backend/prisma/migrations/20260316000002_add_session_replay_sequence_columns/migration.sql`:
- Around line 5-12: The migration currently creates three indexes
("SessionReplay_sequenceId_key", "SessionReplay_tenancyId_sequenceId_idx",
"SessionReplay_shouldUpdateSequenceId_idx") inside the transactional migration
on table "SessionReplay"; remove these CREATE INDEX statements from this
migration and instead add one or more follow-up migrations that build the
indexes using CREATE INDEX CONCURRENTLY (or the DB-specific non-blocking
concurrent option), ensuring you follow the repository's safe-migration pattern
(use CONDITIONALLY_REPEAT_MIGRATION_SENTINEL and any temporary marking columns
needed) so the index builds run outside the transaction and can be retried
safely for large tables.
In
`@apps/backend/prisma/migrations/20260317000000_add_team_permission_invitation_sequence_columns/migration.sql`:
- Around line 5-22: The current migration creates heavy indexes inline
(TeamMemberDirectPermission_sequenceId_key,
TeamMemberDirectPermission_shouldUpdateSequenceId_idx,
TeamMemberDirectPermission_tenancyId_sequenceId_idx,
VerificationCode_sequenceId_key,
VerificationCode_shouldUpdateSequenceId_type_idx) inside a transaction which can
block large tables; instead keep only safe DDL in this migration (e.g., ALTER
TABLE "VerificationCode" ADD COLUMN "sequenceId" BIGINT and ADD COLUMN
"shouldUpdateSequenceId" BOOLEAN DEFAULT true) and move all index creation into
separate migration files that follow the large-table/sentinel pattern: create a
sentinel column if needed, run CREATE INDEX CONCURRENTLY ... for each index (use
the exact index names above), then run a small follow-up migration to set the
sentinel flags/visibility and finally drop the sentinel column; ensure the
concurrent index migrations are not wrapped in a transaction (use CREATE INDEX
CONCURRENTLY) and split them into distinct migration sql files.
In
`@apps/backend/prisma/migrations/20260317000001_add_project_permission_notification_preference_sequence_columns/migration.sql`:
- Around line 5-25: This migration adds two new columns ("sequenceId",
"shouldUpdateSequenceId") to UserNotificationPreference and then creates six
indexes (ProjectUserDirectPermission_sequenceId_key,
ProjectUserDirectPermission_shouldUpdateSequenceId_idx,
ProjectUserDirectPermission_tenancyId_sequenceId_idx,
UserNotificationPreference_sequenceId_key,
UserNotificationPreference_shouldUpdateSequenceId_idx,
UserNotificationPreference_tenancyId_sequenceId_idx) inside the same
transactional migration, which risks long-running locks on large tables; split
this up by keeping only the fast, transactional schema change (ALTER TABLE to
add the "sequenceId" and "shouldUpdateSequenceId" columns in the current
migration) and move all CREATE INDEX statements into one or more separate
non-transactional large-table migrations following the repo's
concurrent/sentinel pattern (use CREATE INDEX CONCURRENTLY in those migrations
and the repo’s sentinel workflow for safe rollouts) so the index builds run
outside the transaction for ProjectUserDirectPermission and
UserNotificationPreference.
In
`@apps/backend/prisma/migrations/20260318000000_add_sequence_id_to_refresh_tokens_and_oauth_accounts/migration.sql`:
- Around line 5-25: This migration must be split into safe, non-transactional
index-creation steps: remove the six CREATE INDEX / CREATE UNIQUE INDEX
statements from this transactional file (the lines creating
ProjectUserRefreshToken_sequenceId_key,
ProjectUserRefreshToken_shouldUpdateSequenceId_idx,
ProjectUserRefreshToken_tenancyId_sequenceId_idx,
ProjectUserOAuthAccount_sequenceId_key,
ProjectUserOAuthAccount_shouldUpdateSequenceId_idx,
ProjectUserOAuthAccount_tenancyId_sequenceId_idx) and keep only the ALTER TABLE
that adds sequenceId and shouldUpdateSequenceId; then add separate migration
files that (a) use the repo’s CONDITIONALLY_REPEAT_MIGRATION_SENTINEL pattern
and a temporary marking column if needed, (b) run outside the transaction or
explicitly create indexes CONCURRENTLY on ProjectUserRefreshToken and
ProjectUserOAuthAccount (creating the unique index on sequenceId and the two
non-unique indexes referencing shouldUpdateSequenceId and tenancyId), and (c)
ensure each index build is in its own migration to avoid Prisma transaction
timeouts.
In `@apps/backend/src/app/api/latest/auth/password/update/route.tsx`:
- Around line 82-88: Wrap the tombstone recording and the deleteMany into a
single serializable Prisma transaction so they commit or abort together: call
globalPrismaClient.$transaction([...], { isolationLevel: 'Serializable' }) and
inside it invoke recordExternalDbSyncRefreshTokenDeletionsForUser (passing
tenancyId, projectUserId and excludeRefreshToken) and then
globalPrismaClient.projectUserRefreshToken.deleteMany with the same predicate;
this ensures the recorded tombstones and the deleted set are aligned and
prevents phantom-insert windows.
In `@apps/backend/src/app/api/latest/auth/sessions/crud.tsx`:
- Around line 75-81: The tombstone insert (recordExternalDbSyncDeletion) is
currently committed before deleting the refresh token, risking inconsistent
external sync if the delete fails; wrap both operations in a single Prisma
transaction so they succeed or fail together. Modify the code to run
recordExternalDbSyncDeletion and
globalPrismaClient.projectUserRefreshToken.deleteMany inside a single
transaction (e.g., use globalPrismaClient.$transaction with an async callback or
pass both statements to $transaction) so the tombstone insert and delete are
atomic and reference the same tenancyId/refreshTokenId parameters.
In
`@apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts`:
- Around line 157-174: The UPDATE's subquery collapses changed teams to only
{projectId, branchId} (changed_teams), causing EVERY TEAM_INVITATION in that
tenancy to be marked; modify the subquery and WHERE join to preserve team-level
specificity by selecting "Team"."id" (or the appropriate team identifier used on
VerificationCode, e.g., "teamId") in changed_teams and add a matching condition
like AND "VerificationCode"."teamId" = changed_teams."id" (or the actual column
name) so only verification codes for the specific changed teams are toggled;
update the SELECT DISTINCT and the WHERE clause that references changed_teams
accordingly (symbols: "Team", "Tenancy", changed_teams, "VerificationCode").
- Around line 83-98: The cascade UPDATE currently re-queries all sequenced
ProjectUser rows per tenancy; change it to join only the exact (tenancyId,
projectUserId) pairs produced by the first CTE (the rows_to_update result) so
TeamMember rows are marked only for users updated in this batch. Locate the
globalPrismaClient.$executeRaw block that updates "TeamMember" and replace the
FROM subquery that selects from "ProjectUser" with a join to the first CTE (the
rows_to_update CTE name used earlier in this query) or propagate the updated
(tenancyId, projectUserId) pairs into a temp/table expression and join on those
pairs, keeping the existing filters (shouldUpdateSequenceId = FALSE, sequenceId
IS NOT NULL) but applying them only to the rows_to_update set.
In `@apps/backend/src/lib/external-db-sync.ts`:
- Around line 274-302: The permission tombstone INSERT currently filters only by
permission id, which can record a deletion for the wrong tenant; update the
WHERE clause in the tx.$executeRaw calls for the "TeamMemberDirectPermission"
branch (and the similar branch at lines 312-339) to include tenancy ID filtering
by adding AND "tenancyId" = ${target.tenancyId}::uuid, and ensure
target.tenancyId is asserted/validated (target.tenancyId /
target.permissionDbId) before running tx.$executeRaw so the lookup only affects
the intended tenant.
- Around line 454-485: The INSERT builds tombstones for VerificationCode but
scopes Tenancy only by projectId/branchId, so sibling tenancies can produce
extra rows; ensure the query restricts to the intended tenancy by validating and
using target.tenancyId and adding a Tenancy.id filter. Add a validation for
target.tenancyId (e.g., assertUuid or assertNonEmptyString depending on type)
and modify the tx.$executeRaw SQL (the JOIN/WHERE involving "Tenancy" and
"VerificationCode") to include AND "Tenancy"."id" = ${target.tenancyId} (or move
it into the JOIN condition) so the DeletedRow insert creates at most one row for
the specified tenancy; keep references to VerificationCode, Tenancy, DeletedRow,
and target.verificationCodeProjectId/verificationCodeBranchId/verificationCodeId
when making the change.
In `@apps/backend/src/route-handlers/verification-code-handler.tsx`:
- Around line 280-299: The external DB tombstone write
(recordExternalDbSyncDeletion) is done before and outside the verificationCode
delete, which can lead to inconsistent state if the delete fails; wrap the
conditional tombstone call and the globalPrismaClient.verificationCode.delete
together inside a single globalPrismaClient.$transaction so both operations
succeed or fail atomically (move the call to recordExternalDbSyncDeletion into
the transaction callback alongside the delete, using the same transaction
client), referencing recordExternalDbSyncDeletion,
globalPrismaClient.verificationCode.delete and globalPrismaClient.$transaction
to locate the changes.
In `@apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts`:
- Around line 699-741: Add a post-membership user-update case to the "TeamMember
CRUD sync (Postgres)" test: after creating the membership (the addMemberResponse
and waitForSyncedTeamMember(client, teamId, user.userId) steps), update the
ProjectUser/User record (e.g., change display_name and/or primary_email via
User.update or the API using niceBackendFetch), then wait for the sync (reuse
waitForSyncedTeamMember or add a new wait helper) and query
"team_member_profiles" to assert the row reflects the updated
display_name/primary_email; finally, proceed to the existing remove-member and
waitForSyncedTeamMemberDeletion checks so the test covers insert,
post-membership update, and deletion.
In `@packages/stack-shared/src/config/db-sync-mappings.ts`:
- Around line 303-320: The tombstone rows are emitting NULL into non-null
ClickHouse columns which causes the first deletion sync to fail; update the
affected table definitions so deleted-row placeholders can be written by either
making the problematic columns nullable (e.g., LowCardinality(Nullable(String))
or Nullable(String)) or giving them non-null defaults (e.g., DEFAULT '') as
appropriate. Specifically, modify the analytics_internal.contact_channels CREATE
TABLE to allow NULL/default for contact_channels.type and contact_channels.value
(and likewise adjust teams.display_name and connected_accounts.provider and
connected_accounts.provider_account_id in their respective CREATE TABLEs
referenced in the review) so tombstone rows no longer violate the schema while
preserving existing ordering/partitioning keys.
- Around line 1111-1132: The query currently scopes by "Tenancy" via
projectId/branchId which can include sibling tenancies; instead make the "Team"
join authoritative by changing the LEFT JOIN "Team" to an INNER JOIN and adding
a tenancy filter on the team (e.g. add AND "Team"."tenancyId" = $1::uuid to the
"Team" ON clause) so rows are only returned for teams that belong to the target
tenancy; apply the same change to the other query block that starts at the
second occurrence (around lines 1160-1179) to ensure both queries are scoped by
"Team"."tenancyId" rather than only by "Tenancy".
- Around line 819-826: The mapping for last_active_at_millis incorrectly
extracts the createdAt timestamp; update the CASE expression in db-sync-mappings
so that when "ProjectUser"."lastActiveAt" IS NOT NULL you EXTRACT(EPOCH FROM
"ProjectUser"."lastActiveAt") * 1000 instead of using "ProjectUser"."createdAt"
(keep the existing NULL branch and overall structure that builds the "user"
jsonb). This change targets the CASE producing "last_active_at_millis" in the
db-sync-mappings file.
- Line 667: The team_member_profiles sync currently lists ProjectUser in
sourceTables but always advances the cursor using
TeamMember.sequenceId/sequence_id, so updates to ProjectUser never move its
cursor and embedded user payloads go stale; update the team_member_profiles
fetch/advance logic (where you reference sourceTables, TeamMember, ProjectUser,
sequenceId/sequence_id) to advance the cursor based on the originating source:
when the row came from ProjectUser use ProjectUser.sequenceId/sequence_id, when
from TeamMember use TeamMember.sequenceId/sequence_id (or maintain separate
per-source cursors), and ensure the SELECT/WHERE and subsequent cursor-update
statements reference the correct sequence column so ProjectUser changes trigger
re-sync of the embedded user payload.
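The per-source-cursor alternative mentioned for team_member_profiles can be sketched like this, assuming decimal-string sequence ids; the names are illustrative, not the repo's actual API:

```typescript
// One cursor per source table, stored as decimal-string sequence ids.
type Cursors = Record<string, string>;

// Advance monotonically: a stale sequence id never moves a cursor backwards,
// and each source table's cursor moves independently of the others, so
// ProjectUser changes can trigger a re-sync even when TeamMember is idle.
function advanceCursor(cursors: Cursors, source: string, seenSequenceId: string): Cursors {
  const current = cursors[source];
  const advanced = current === undefined || BigInt(seenSequenceId) > BigInt(current);
  return advanced ? { ...cursors, [source]: seenSequenceId } : cursors;
}
```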
---
Outside diff comments:
In `@apps/backend/src/app/api/latest/auth/sessions/current/route.tsx`:
- Around line 37-48: Run the delete and the tombstone insertion inside a single
Prisma transaction so the tombstone cannot be committed without the source
delete: wrap globalPrismaClient.projectUserRefreshToken.deleteMany(...) and the
call to recordExternalDbSyncDeletion(...) in a single
globalPrismaClient.$transaction(...) and ensure you pass the delete result (the
deleted count) or an explicit expected-deleted-count to
recordExternalDbSyncDeletion so it can insert the 0-row mapping instead of
throwing; if needed, update recordExternalDbSyncDeletion to accept an
expectedCount/allowZero flag and avoid throwing when expectedCount === 0 while
still inserting the corresponding DeletedRow record for tenancy.id and
refreshTokenId.
In `@apps/backend/src/app/api/latest/internal/external-db-sync/status/route.ts`:
- Around line 236-283: buildMappingInternalStats currently only sets mappings
for "users" and "email_outboxes", so "contact_channels" never appears in
mapping_status; add a new mapping entry for contact_channels using the existing
contactChannelStats (and corresponding deletedRowsByTable lookup for
ContactChannel similar to deletedProjectUserStats) and compute
internal_min_sequence_id/internal_max_sequence_id/internal_pending_count with
minBigIntString, maxBigIntString and addBigIntStrings just like you do for
"users", then ensure mappingInternalStats.set("contact_channels", { mapping_id:
"contact_channels", internal_min_sequence_id: ..., internal_max_sequence_id:
..., internal_pending_count: ... }) so contact_channels shows up in mappings and
mappingStatuses.
In `@apps/backend/src/app/api/latest/users/crud.tsx`:
- Around line 1193-1203: Wrap the tombstone write and the actual deletion in a
single transaction so the external-db-sync deletions are only emitted if the DB
delete succeeds: move the calls to
recordExternalDbSyncRefreshTokenDeletionsForUser(...) and
globalPrismaClient.projectUserRefreshToken.deleteMany(...) into a single
globalPrismaClient.$transaction(...) and run them there (ensure you pass both
operations as transactional actions so they either both commit or both roll
back). Make sure to preserve the same parameters (auth.tenancy.id and
params.user_id) and return/await the transaction result as appropriate.
In `@apps/backend/src/lib/permissions.tsx`:
- Around line 433-460: The team-scope branch currently calls
sourceOfTruthTx.teamMemberDirectPermission.deleteMany(...) directly and thus
skips recording tombstones; mirror the project-scope logic by first fetching
matching teamMemberDirectPermission rows (use
sourceOfTruthTx.teamMemberDirectPermission.findMany with where { tenancyId:
options.tenancy.id, permissionId: options.permissionId } and select { id: true
}), loop over each result and call recordExternalDbSyncDeletion(sourceOfTruthTx,
{ tableName: "TeamMemberDirectPermission", tenancyId: options.tenancy.id,
permissionDbId: perm.id }), then call the existing
teamMemberDirectPermission.deleteMany(...) to remove them.
- Around line 330-348: The updateMany calls on
sourceOfTruthTx.teamMemberDirectPermission.updateMany and
sourceOfTruthTx.projectUserDirectPermission.updateMany only change permissionId
and must also mark those rows for resync; update the data payload in both calls
to set shouldUpdateSequenceId (e.g., to null or the sentinel value your sync
uses) alongside permissionId: newId so the downstream sync will re-evaluate and
propagate the renamed permission immediately.
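A plausible shape for the withExternalDbSyncUpdate helper referenced in the suggested fixes above (a sketch inferred from the diff; the real helper in the backend lib may differ):

```typescript
// Decorate an update payload so the sequencer re-sequences the row on its
// next pass: any write made through this helper flips the resync flag.
function withExternalDbSyncUpdate<T extends object>(data: T) {
  return { ...data, shouldUpdateSequenceId: true as const };
}
```

Funneling every mutation through one helper like this is what keeps "forgot to mark for resync" bugs (such as the permission-rename case above) from recurring.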
---
Nitpick comments:
In `@apps/backend/scripts/run-cron-jobs.ts`:
- Line 33: The loop currently awaiting wait(1000) forces a 1s fixed poll in all
environments; change it to use a configurable interval and restore
non-deterministic/backoff behavior for production. Replace the hardcoded
wait(1000) in run-cron-jobs.ts with a value derived from an env var (e.g.,
CRON_POLL_MS or a boolean TEST_FAST_CRON) so tests can opt into 1s polling while
normal runs use a larger, jittered/default interval; or compute the delay when
the loop starts (use wait(getCronDelayMs())) where getCronDelayMs() returns
test-configured value if process.env.TEST_FAST_CRON is set, otherwise returns a
randomized/backoff delay to avoid aligned bursts. Ensure references to wait(...)
and the loop calling it are updated and documented in env usage.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d3c429cf-f64b-4a0a-9826-3cec8b2c92f7
📒 Files selected for processing (29)
apps/backend/prisma/migrations/20260316000000_add_team_team_member_sequence_columns/migration.sql
apps/backend/prisma/migrations/20260316000001_add_email_outbox_sequence_columns/migration.sql
apps/backend/prisma/migrations/20260316000002_add_session_replay_sequence_columns/migration.sql
apps/backend/prisma/migrations/20260317000000_add_team_permission_invitation_sequence_columns/migration.sql
apps/backend/prisma/migrations/20260317000001_add_project_permission_notification_preference_sequence_columns/migration.sql
apps/backend/prisma/migrations/20260318000000_add_sequence_id_to_refresh_tokens_and_oauth_accounts/migration.sql
apps/backend/prisma/schema.prisma
apps/backend/scripts/clickhouse-migrations.ts
apps/backend/scripts/run-cron-jobs.ts
apps/backend/src/app/api/latest/auth/password/update/route.tsx
apps/backend/src/app/api/latest/auth/sessions/crud.tsx
apps/backend/src/app/api/latest/auth/sessions/current/route.tsx
apps/backend/src/app/api/latest/emails/notification-preference/crud.tsx
apps/backend/src/app/api/latest/emails/outbox/crud.tsx
apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts
apps/backend/src/app/api/latest/internal/external-db-sync/status/route.ts
apps/backend/src/app/api/latest/oauth-providers/crud.tsx
apps/backend/src/app/api/latest/session-replays/batch/route.tsx
apps/backend/src/app/api/latest/team-memberships/crud.tsx
apps/backend/src/app/api/latest/teams/crud.tsx
apps/backend/src/app/api/latest/users/crud.tsx
apps/backend/src/lib/email-queue-step.tsx
apps/backend/src/lib/external-db-sync.ts
apps/backend/src/lib/permissions.tsx
apps/backend/src/route-handlers/verification-code-handler.tsx
apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-utils.ts
packages/stack-shared/src/config/db-sync-mappings.ts
-- CreateIndex
CREATE UNIQUE INDEX "Team_sequenceId_key" ON "Team"("sequenceId");

-- CreateIndex
CREATE INDEX "Team_tenancyId_sequenceId_idx" ON "Team"("tenancyId", "sequenceId");

-- CreateIndex
CREATE INDEX "Team_shouldUpdateSequenceId_idx" ON "Team"("shouldUpdateSequenceId", "tenancyId");
Do not build six indexes for Team and TeamMember inside one transactional migration.
These plain index builds will run under the Prisma migration transaction, so on the repo's >1,000,000-row assumption they can block writes to both tables and easily overrun the migration timeout. Please split this into follow-up migrations and create the indexes concurrently.
As per coding guidelines, "When writing database migration files, assume that we have >1,000,000 rows in every table. This means you may have to use CONDITIONALLY_REPEAT_MIGRATION_SENTINEL to avoid running the migration. Use concurrent index builds and temporary marking columns for safe migrations."
Also applies to: 18-25
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/backend/prisma/migrations/20260316000000_add_team_team_member_sequence_columns/migration.sql`
around lines 5 - 12, This migration creates several indexes for Team
(Team_sequenceId_key, Team_tenancyId_sequenceId_idx,
Team_shouldUpdateSequenceId_idx) (and similarly for TeamMember at lines 18-25)
inside the Prisma transaction which will block writes on large tables; split
these index creations into one or more follow-up migrations that run OUTSIDE the
migration transaction and build the indexes CONCURRENTLY (use CREATE INDEX
CONCURRENTLY / CREATE UNIQUE INDEX CONCURRENTLY where appropriate), and add the
CONDITIONALLY_REPEAT_MIGRATION_SENTINEL pattern and any temporary marker columns
needed to perform the change safely on >1,000,000-row tables so the original
transactional migration no longer includes these plain index builds.
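A follow-up migration along the lines the prompt describes might look like this. This is a sketch only: the repo's CONDITIONALLY_REPEAT_MIGRATION_SENTINEL wiring is not shown, and the migration must be configured to run outside a transaction for CONCURRENTLY to be legal.

```sql
-- Sketch: follow-up migration building the Team indexes concurrently.
-- Must NOT run inside a transaction; index names match the originals.
-- If a previous attempt failed, an INVALID index may be left behind and
-- should be dropped before retrying (IF NOT EXISTS alone will not rebuild it).
CREATE UNIQUE INDEX CONCURRENTLY IF NOT EXISTS "Team_sequenceId_key"
  ON "Team" ("sequenceId");
CREATE INDEX CONCURRENTLY IF NOT EXISTS "Team_tenancyId_sequenceId_idx"
  ON "Team" ("tenancyId", "sequenceId");
CREATE INDEX CONCURRENTLY IF NOT EXISTS "Team_shouldUpdateSequenceId_idx"
  ON "Team" ("shouldUpdateSequenceId", "tenancyId");
```

The same split applies to the TeamMember indexes and to the other migrations flagged below.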
-- CreateIndex
CREATE UNIQUE INDEX "EmailOutbox_sequenceId_key" ON "EmailOutbox"("sequenceId");

-- CreateIndex
CREATE INDEX "EmailOutbox_tenancyId_sequenceId_idx" ON "EmailOutbox"("tenancyId", "sequenceId");

-- CreateIndex
CREATE INDEX "EmailOutbox_shouldUpdateSequenceId_idx" ON "EmailOutbox"("shouldUpdateSequenceId", "tenancyId");
Use the large-table migration pattern for these indexes.
These CREATE INDEX statements will run inside Prisma's migration transaction. On an EmailOutbox table sized in the millions, that can block writes and exceed the short migration timeout; please split the index builds into the repo's concurrent/sentinel migration flow instead of plain CREATE INDEX here. As per coding guidelines, **/migrations/**/*.{ts,sql} assumes >1,000,000 rows, requires concurrent index builds, and says long-running operations should be split into separate migration files.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/backend/prisma/migrations/20260316000001_add_email_outbox_sequence_columns/migration.sql`
around lines 5 - 12, The three indexes on EmailOutbox
("EmailOutbox_sequenceId_key", "EmailOutbox_tenancyId_sequenceId_idx",
"EmailOutbox_shouldUpdateSequenceId_idx") must not be created inside this
transactional migration; instead remove these CREATE INDEX statements from this
migration and add separate non-transactional/concurrent migration files
following the repo's large-table pattern (use CREATE INDEX CONCURRENTLY and the
project’s concurrent/sentinel migration flow) so the index builds run outside a
transaction and as their own migration steps for the EmailOutbox table.
-- CreateIndex
CREATE UNIQUE INDEX "SessionReplay_sequenceId_key" ON "SessionReplay"("sequenceId");

-- CreateIndex
CREATE INDEX "SessionReplay_tenancyId_sequenceId_idx" ON "SessionReplay"("tenancyId", "sequenceId");

-- CreateIndex
CREATE INDEX "SessionReplay_shouldUpdateSequenceId_idx" ON "SessionReplay"("shouldUpdateSequenceId", "tenancyId");
Build these indexes outside the transactional Prisma migration.
These three plain index builds will run inside the migration transaction, so on a large SessionReplay table they can hold write-blocking locks long enough to hit the migration timeout. Please split the index work into follow-up migrations and create them concurrently.
As per coding guidelines, "When writing database migration files, assume that we have >1,000,000 rows in every table. This means you may have to use CONDITIONALLY_REPEAT_MIGRATION_SENTINEL to avoid running the migration. Use concurrent index builds and temporary marking columns for safe migrations."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/backend/prisma/migrations/20260316000002_add_session_replay_sequence_columns/migration.sql`
around lines 5 - 12, The migration currently creates three indexes
("SessionReplay_sequenceId_key", "SessionReplay_tenancyId_sequenceId_idx",
"SessionReplay_shouldUpdateSequenceId_idx") inside the transactional migration
on table "SessionReplay"; remove these CREATE INDEX statements from this
migration and instead add one or more follow-up migrations that build the
indexes using CREATE INDEX CONCURRENTLY (or the DB-specific non-blocking
concurrent option), ensuring you follow the repository's safe-migration pattern
(use CONDITIONALLY_REPEAT_MIGRATION_SENTINEL and any temporary marking columns
needed) so the index builds run outside the transaction and can be retried
safely for large tables.
-- CreateIndex
CREATE UNIQUE INDEX "TeamMemberDirectPermission_sequenceId_key" ON "TeamMemberDirectPermission"("sequenceId");

-- CreateIndex
CREATE INDEX "TeamMemberDirectPermission_shouldUpdateSequenceId_idx" ON "TeamMemberDirectPermission"("shouldUpdateSequenceId", "tenancyId");

-- CreateIndex
CREATE INDEX "TeamMemberDirectPermission_tenancyId_sequenceId_idx" ON "TeamMemberDirectPermission"("tenancyId", "sequenceId");

-- AlterTable
ALTER TABLE "VerificationCode" ADD COLUMN "sequenceId" BIGINT,
ADD COLUMN "shouldUpdateSequenceId" BOOLEAN NOT NULL DEFAULT true;

-- CreateIndex
CREATE UNIQUE INDEX "VerificationCode_sequenceId_key" ON "VerificationCode"("sequenceId");

-- CreateIndex
CREATE INDEX "VerificationCode_shouldUpdateSequenceId_type_idx" ON "VerificationCode"("shouldUpdateSequenceId", "type");
Use the concurrent/sentinel migration flow for these index builds.
These indexes are created non-concurrently inside one Prisma migration transaction. On the table sizes this repo plans for, that can block writes or time out; please split the index work into the large-table migration pattern instead of doing it inline here. As per coding guidelines, **/migrations/**/*.{ts,sql} assumes >1,000,000 rows, requires concurrent index builds, and says long-running operations should be split into separate migration files.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/backend/prisma/migrations/20260317000000_add_team_permission_invitation_sequence_columns/migration.sql`
around lines 5 - 22, The current migration creates heavy indexes inline
(TeamMemberDirectPermission_sequenceId_key,
TeamMemberDirectPermission_shouldUpdateSequenceId_idx,
TeamMemberDirectPermission_tenancyId_sequenceId_idx,
VerificationCode_sequenceId_key,
VerificationCode_shouldUpdateSequenceId_type_idx) inside a transaction which can
block large tables; instead keep only safe DDL in this migration (e.g., ALTER
TABLE "VerificationCode" ADD COLUMN "sequenceId" BIGINT and ADD COLUMN
"shouldUpdateSequenceId" BOOLEAN DEFAULT true) and move all index creation into
separate migration files that follow the large-table/sentinel pattern: create a
sentinel column if needed, run CREATE INDEX CONCURRENTLY ... for each index (use
the exact index names above), then run a small follow-up migration to set the
sentinel flags/visibility and finally drop the sentinel column; ensure the
concurrent index migrations are not wrapped in a transaction (use CREATE INDEX
CONCURRENTLY) and split them into distinct migration sql files.
-- CreateIndex
CREATE UNIQUE INDEX "ProjectUserDirectPermission_sequenceId_key" ON "ProjectUserDirectPermission"("sequenceId");

-- CreateIndex
CREATE INDEX "ProjectUserDirectPermission_shouldUpdateSequenceId_idx" ON "ProjectUserDirectPermission"("shouldUpdateSequenceId", "tenancyId");

-- CreateIndex
CREATE INDEX "ProjectUserDirectPermission_tenancyId_sequenceId_idx" ON "ProjectUserDirectPermission"("tenancyId", "sequenceId");

-- AlterTable
ALTER TABLE "UserNotificationPreference" ADD COLUMN "sequenceId" BIGINT,
ADD COLUMN "shouldUpdateSequenceId" BOOLEAN NOT NULL DEFAULT true;

-- CreateIndex
CREATE UNIQUE INDEX "UserNotificationPreference_sequenceId_key" ON "UserNotificationPreference"("sequenceId");

-- CreateIndex
CREATE INDEX "UserNotificationPreference_shouldUpdateSequenceId_idx" ON "UserNotificationPreference"("shouldUpdateSequenceId", "tenancyId");

-- CreateIndex
CREATE INDEX "UserNotificationPreference_tenancyId_sequenceId_idx" ON "UserNotificationPreference"("tenancyId", "sequenceId");
Don't build all of these indexes inside one transactional migration.
This migration creates six indexes with plain CREATE INDEX, so all of that work runs under Prisma's migration transaction. On million-row tables, that's likely to block writes or time out; please move the index builds to the repo's concurrent/sentinel large-table migration pattern instead of doing them inline here. As per coding guidelines, **/migrations/**/*.{ts,sql} assumes >1,000,000 rows, requires concurrent index builds, and says long-running operations should be split into separate migration files.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/backend/prisma/migrations/20260317000001_add_project_permission_notification_preference_sequence_columns/migration.sql`
around lines 5 - 25, This migration adds two new columns ("sequenceId",
"shouldUpdateSequenceId") to UserNotificationPreference and then creates six
indexes (ProjectUserDirectPermission_sequenceId_key,
ProjectUserDirectPermission_shouldUpdateSequenceId_idx,
ProjectUserDirectPermission_tenancyId_sequenceId_idx,
UserNotificationPreference_sequenceId_key,
UserNotificationPreference_shouldUpdateSequenceId_idx,
UserNotificationPreference_tenancyId_sequenceId_idx) inside the same
transactional migration, which risks long-running locks on large tables; split
this up by keeping only the fast, transactional schema change (ALTER TABLE to
add the "sequenceId" and "shouldUpdateSequenceId" columns in the current
migration) and move all CREATE INDEX statements into one or more separate
non-transactional large-table migrations following the repo's
concurrent/sentinel pattern (use CREATE INDEX CONCURRENTLY in those migrations
and the repo’s sentinel workflow for safe rollouts) so the index builds run
outside the transaction for ProjectUserDirectPermission and
UserNotificationPreference.
test('TeamMember CRUD sync (Postgres)', async () => {
  const dbName = 'team_member_crud_test';
  const connectionString = await dbManager.createDatabase(dbName);

  await createProjectWithExternalDb({
    main: {
      type: 'postgres',
      connectionString,
    }
  });

  const client = dbManager.getClient(dbName);

  const user = await User.create({ primary_email: 'tm-crud@example.com' });
  const createTeamResponse = await niceBackendFetch('/api/v1/teams', {
    accessType: 'admin',
    method: 'POST',
    body: { display_name: 'TM CRUD Team' },
  });
  expect(createTeamResponse.status).toBe(201);
  const teamId = createTeamResponse.body.id;

  // Add user as team member
  const addMemberResponse = await niceBackendFetch(`/api/v1/team-memberships/${teamId}/${user.userId}`, {
    accessType: 'admin',
    method: 'POST',
    body: {},
  });
  expect(addMemberResponse.status).toBe(201);

  await waitForSyncedTeamMember(client, teamId, user.userId);

  const res1 = await client.query(`SELECT * FROM "team_member_profiles" WHERE "team_id" = $1 AND "user_id" = $2`, [teamId, user.userId]);
  expect(res1.rows.length).toBe(1);

  // Remove member
  await niceBackendFetch(`/api/v1/team-memberships/${teamId}/${user.userId}`, {
    accessType: 'admin',
    method: 'DELETE',
  });

  await waitForSyncedTeamMemberDeletion(client, teamId, user.userId);
}, TEST_TIMEOUT);
Add the post-membership user-update case here.
This test only covers member insert/delete, but the new team_member_profiles sync also depends on ProjectUser data. A user display-name or primary-email change after membership creation is the critical regression surface and currently has no E2E coverage.
As per coding guidelines: **/apps/e2e/**/*.{ts,tsx}: ALWAYS add new E2E tests when you change the API or SDK interface. Generally, err on the side of creating too many tests.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts`
around lines 699 - 741, Add a post-membership user-update case to the
"TeamMember CRUD sync (Postgres)" test: after creating the membership (the
addMemberResponse and waitForSyncedTeamMember(client, teamId, user.userId)
steps), update the ProjectUser/User record (e.g., change display_name and/or
primary_email via User.update or the API using niceBackendFetch), then wait for
the sync (reuse waitForSyncedTeamMember or add a new wait helper) and query
"team_member_profiles" to assert the row reflects the updated
display_name/primary_email; finally, proceed to the existing remove-member and
waitForSyncedTeamMemberDeletion checks so the test covers insert,
post-membership update, and deletion.
CREATE TABLE IF NOT EXISTS analytics_internal.contact_channels (
  project_id String,
  branch_id String,
  id UUID,
  user_id UUID,
  type LowCardinality(String),
  value String,
  is_primary UInt8,
  is_verified UInt8,
  used_for_auth UInt8,
  created_at DateTime64(3, 'UTC'),
  sync_sequence_id Int64,
  sync_is_deleted UInt8,
  sync_created_at DateTime64(3, 'UTC') DEFAULT now64(3)
)
ENGINE ReplacingMergeTree(sync_sequence_id)
PARTITION BY toYYYYMM(created_at)
ORDER BY (project_id, branch_id, id);
Deleted-row placeholders violate the ClickHouse schema.
These tombstone branches emit NULL for non-null ClickHouse columns (contact_channels.type/value, teams.display_name, connected_accounts.provider/provider_account_id). The first deletion sync for any of these mappings will fail before the tombstone row is written.
Also applies to: 347-360, 501-517, 543-555, 2273-2288, 2323-2335
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/stack-shared/src/config/db-sync-mappings.ts` around lines 303 - 320,
The tombstone rows are emitting NULL into non-null ClickHouse columns which
causes the first deletion sync to fail; update the affected table definitions so
deleted-row placeholders can be written by either making the problematic columns
nullable (e.g., LowCardinality(Nullable(String)) or Nullable(String)) or giving
them non-null defaults (e.g., DEFAULT '') as appropriate. Specifically, modify
the analytics_internal.contact_channels CREATE TABLE to allow NULL/default for
contact_channels.type and contact_channels.value (and likewise adjust
teams.display_name and connected_accounts.provider and
connected_accounts.provider_account_id in their respective CREATE TABLEs
referenced in the review) so tombstone rows no longer violate the schema while
preserving existing ordering/partitioning keys.
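In DDL terms, the relaxation the prompt describes could look like the following. This is a sketch against a live table; in db-sync-mappings.ts the equivalent change would go directly into the CREATE TABLE statements, and whether '' or Nullable(...) is the right sentinel for tombstone rows is an assumption here, not something the diff confirms.

```sql
-- Sketch: let tombstone rows omit values for the currently non-null columns.
-- DEFAULT '' preserves LowCardinality and the existing key types;
-- Nullable(String) is the alternative when empty string is a legal value.
ALTER TABLE analytics_internal.contact_channels
    MODIFY COLUMN type LowCardinality(String) DEFAULT '',
    MODIFY COLUMN value String DEFAULT '';
```

The same treatment would apply to teams.display_name and the connected_accounts provider columns flagged above.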
  },
},
"team_member_profiles": {
  sourceTables: { "TeamMember": "TeamMember", "ProjectUser": "ProjectUser" },
team_member_profiles ignores the ProjectUser sync cursor.
Line 667 declares ProjectUser as a source table, but both fetch queries still advance on TeamMember.sequenceId/sequence_id only. Any later user/profile change leaves the embedded user payload stale until the membership row itself changes.
Also applies to: 708-785, 787-857
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/stack-shared/src/config/db-sync-mappings.ts` at line 667, The
team_member_profiles sync currently lists ProjectUser in sourceTables but always
advances the cursor using TeamMember.sequenceId/sequence_id, so updates to
ProjectUser never move its cursor and embedded user payloads go stale; update
the team_member_profiles fetch/advance logic (where you reference sourceTables,
TeamMember, ProjectUser, sequenceId/sequence_id) to advance the cursor based on
the originating source: when the row came from ProjectUser use
ProjectUser.sequenceId/sequence_id, when from TeamMember use
TeamMember.sequenceId/sequence_id (or maintain separate per-source cursors), and
ensure the SELECT/WHERE and subsequent cursor-update statements reference the
correct sequence column so ProjectUser changes trigger re-sync of the embedded
user payload.
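One shape the fix could take is sketched below. It is hypothetical: the join columns are assumed, and it presumes both tables draw sequence ids from the same global sequence (which the diff does not confirm). The idea is to filter on either source's cursor and report the greater sequence id as the row's sync position.

```sql
-- Sketch: a change to either source row moves the profile past the cursor ($2).
SELECT
  "TeamMember"."teamId" AS "team_id",
  "ProjectUser"."projectUserId" AS "user_id",
  GREATEST(
    COALESCE("TeamMember"."sequenceId", 0),
    COALESCE("ProjectUser"."sequenceId", 0)
  ) AS "sync_sequence_id"
FROM "TeamMember"
JOIN "ProjectUser"
  ON "ProjectUser"."projectUserId" = "TeamMember"."projectUserId"
 AND "ProjectUser"."tenancyId" = "TeamMember"."tenancyId"
WHERE "TeamMember"."tenancyId" = $1::uuid
  AND ("TeamMember"."sequenceId" > $2 OR "ProjectUser"."sequenceId" > $2)
```

Per-source cursors avoid the shared-sequence assumption, at the cost of extra sync state per mapping.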
  'profile_image_url', "ProjectUser"."profileImageUrl",
  'signed_up_at_millis', EXTRACT(EPOCH FROM "ProjectUser"."createdAt") * 1000,
  'client_metadata', COALESCE("ProjectUser"."clientMetadata", '{}'::jsonb),
  'client_read_only_metadata', COALESCE("ProjectUser"."clientReadOnlyMetadata", '{}'::jsonb),
  'server_metadata', COALESCE("ProjectUser"."serverMetadata", '{}'::jsonb),
  'is_anonymous', "ProjectUser"."isAnonymous",
  'last_active_at_millis', CASE WHEN "ProjectUser"."lastActiveAt" IS NOT NULL THEN EXTRACT(EPOCH FROM "ProjectUser"."createdAt") * 1000 ELSE NULL END
) AS "user",
last_active_at_millis serializes the wrong timestamp.
Line 825 checks lastActiveAt but writes createdAt, so every synced Postgres profile stores signup time instead of last activity.
Suggested fix:

- 'last_active_at_millis', CASE WHEN "ProjectUser"."lastActiveAt" IS NOT NULL THEN EXTRACT(EPOCH FROM "ProjectUser"."createdAt") * 1000 ELSE NULL END
+ 'last_active_at_millis', CASE WHEN "ProjectUser"."lastActiveAt" IS NOT NULL THEN EXTRACT(EPOCH FROM "ProjectUser"."lastActiveAt") * 1000 ELSE NULL END
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/stack-shared/src/config/db-sync-mappings.ts` around lines 819 - 826,
The mapping for last_active_at_millis incorrectly extracts the createdAt
timestamp; update the CASE expression in db-sync-mappings so that when
"ProjectUser"."lastActiveAt" IS NOT NULL you EXTRACT(EPOCH FROM
"ProjectUser"."lastActiveAt") * 1000 instead of using "ProjectUser"."createdAt"
(keep the existing NULL branch and overall structure that builds the "user"
jsonb). This change targets the CASE producing "last_active_at_millis" in the
db-sync-mappings file.
clickhouse: `
  SELECT *
  FROM (
    SELECT
      "Tenancy"."projectId" AS "project_id",
      "Tenancy"."branchId" AS "branch_id",
      "VerificationCode"."id"::uuid AS "id",
      ("VerificationCode"."data"->>'team_id')::uuid AS "team_id",
      "Team"."displayName" AS "team_display_name",
      "VerificationCode"."method"->>'email' AS "recipient_email",
      FLOOR(EXTRACT(EPOCH FROM "VerificationCode"."expiresAt") * 1000)::bigint AS "expires_at_millis",
      "VerificationCode"."createdAt" AS "created_at",
      "VerificationCode"."sequenceId" AS "sync_sequence_id",
      "Tenancy"."id" AS "tenancyId",
      false AS "sync_is_deleted"
    FROM "VerificationCode"
    JOIN "Tenancy" ON "Tenancy"."projectId" = "VerificationCode"."projectId"
      AND "Tenancy"."branchId" = "VerificationCode"."branchId"
    LEFT JOIN "Team" ON "Team"."teamId" = ("VerificationCode"."data"->>'team_id')::uuid
      AND "Team"."tenancyId" = "Tenancy"."id"
    WHERE "Tenancy"."id" = $1::uuid
      AND "VerificationCode"."type" = 'TEAM_INVITATION'
Scope invitation fetches by the owning team tenancy.
These queries join VerificationCode to Tenancy via projectId/branchId, so a tenancy will also pick up invitation codes created for sibling tenancies on the same branch. Those rows either leak into the wrong external DB or fail here because team_display_name becomes NULL. Make the Team join authoritative for tenancy scoping.
Also applies to: 1160-1179
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/stack-shared/src/config/db-sync-mappings.ts` around lines 1111 -
1132, The query currently scopes by "Tenancy" via projectId/branchId which can
include sibling tenancies; instead make the "Team" join authoritative by
changing the LEFT JOIN "Team" to an INNER JOIN and adding a tenancy filter on
the team (e.g. add AND "Team"."tenancyId" = $1::uuid to the "Team" ON clause) so
rows are only returned for teams that belong to the target tenancy; apply the
same change to the other query block that starts at the second occurrence
(around lines 1160-1179) to ensure both queries are scoped by "Team"."tenancyId"
rather than only by "Tenancy".
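Applied to the query above, the prompt's suggestion amounts to something like the following sketch; only a minimal SELECT list is shown, with the remaining columns unchanged from the original mapping.

```sql
-- Sketch: INNER JOIN on Team, scoped to the target tenancy ($1), so
-- invitation codes for sibling tenancies on the same branch are excluded.
SELECT
  "VerificationCode"."id"::uuid AS "id",
  "Team"."displayName" AS "team_display_name"
FROM "VerificationCode"
JOIN "Tenancy" ON "Tenancy"."projectId" = "VerificationCode"."projectId"
  AND "Tenancy"."branchId" = "VerificationCode"."branchId"
JOIN "Team" ON "Team"."teamId" = ("VerificationCode"."data"->>'team_id')::uuid
  AND "Team"."tenancyId" = $1::uuid
WHERE "Tenancy"."id" = $1::uuid
  AND "VerificationCode"."type" = 'TEAM_INVITATION'
```

With the INNER JOIN, codes whose team belongs to another tenancy simply drop out instead of surfacing with a NULL team_display_name.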
Summary by CodeRabbit
New Features
Chores