rename session recording to session replay #1207

BilalG1 merged 2 commits into rename-replay-tab-id from
Conversation
CodeRabbit: Review skipped. Auto reviews are disabled on base/target branches other than the default branch.
Greptile Summary

This PR completes the rename of the "session recording" feature to "session replay" across the entire stack: database tables (`SessionRecording` → `SessionReplay`, `SessionRecordingChunk` → `SessionReplayChunk`), the Prisma schema, the batch API route, and the S3 payload field.
Confidence Score: 3/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Client SDK
    participant BatchAPI as POST /api/v1/session-replays/batch
    participant S3 as S3 Storage
    participant DB as PostgreSQL
    participant AdminAPI as Admin Internal APIs

    Client->>BatchAPI: POST events (browser_session_id, session_replay_segment_id, batch_id, events[])
    BatchAPI->>DB: Find recent SessionReplay for refresh_token (within idle timeout)
    alt No recent replay exists
        BatchAPI->>DB: Create new SessionReplay row
    else Recent replay found
        BatchAPI->>DB: Update SessionReplay (startedAt, lastEventAt)
    end
    BatchAPI->>DB: Check for duplicate (tenancyId, sessionReplayId, batchId)
    alt Duplicate found
        BatchAPI-->>Client: 200 deduped=true
    else New batch
        BatchAPI->>S3: Upload gzipped payload to session-replays/{projectId}/{branchId}/{replayId}/{batchId}.json.gz
        BatchAPI->>DB: Create SessionReplayChunk row
        BatchAPI-->>Client: 200 deduped=false, session_replay_id
    end
    Note over AdminAPI,DB: Admin viewing replays
    AdminAPI->>DB: List SessionReplays (paginated by lastEventAt DESC)
    AdminAPI->>DB: Aggregate chunk counts per session
    AdminAPI-->>AdminAPI: Return replay list with user info
    AdminAPI->>DB: List SessionReplayChunks for session (paginated by firstEventAt ASC)
    AdminAPI->>S3: Download & gunzip chunk objects (up to 10 concurrent)
    Note over AdminAPI,S3: Validates parsed.session_replay_id matches URL param
    AdminAPI-->>AdminAPI: Return decompressed events
```

Last reviewed commit: d7da1f6
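To make the batch flow above concrete, here is a minimal TypeScript sketch of the two pure pieces of that logic: the S3 object key layout and the (tenancyId, sessionReplayId, batchId) dedup check. Function and class names here are illustrative, not taken from the actual codebase.

```typescript
// Illustrative sketch only; names are hypothetical, not from the real implementation.

// S3 object key for an uploaded chunk, following the layout in the diagram:
// session-replays/{projectId}/{branchId}/{replayId}/{batchId}.json.gz
function chunkObjectKey(
  projectId: string,
  branchId: string,
  replayId: string,
  batchId: string,
): string {
  return `session-replays/${projectId}/${branchId}/${replayId}/${batchId}.json.gz`;
}

// In-memory stand-in for the duplicate check on (tenancyId, sessionReplayId, batchId).
// A repeated batch is acknowledged with deduped=true instead of writing a second chunk.
class ChunkDeduper {
  private seen = new Set<string>();

  record(tenancyId: string, sessionReplayId: string, batchId: string): { deduped: boolean } {
    const key = `${tenancyId}:${sessionReplayId}:${batchId}`;
    if (this.seen.has(key)) return { deduped: true };
    this.seen.add(key);
    return { deduped: false };
  }
}

const deduper = new ChunkDeduper();
console.log(deduper.record("t1", "r1", "b1")); // { deduped: false }
console.log(deduper.record("t1", "r1", "b1")); // { deduped: true }
```

In the real endpoint the uniqueness would be enforced by the database rather than an in-memory set, but the key shape is the same.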
```sql
ALTER TABLE "SessionRecording" RENAME TO "SessionReplay";
ALTER TABLE "SessionRecordingChunk" RENAME TO "SessionReplayChunk";
ALTER TABLE "SessionReplayChunk" RENAME COLUMN "sessionRecordingId" TO "sessionReplayId";
```
Modifying an already-committed migration

This migration file previously contained only one statement (renaming the `tabId` column), which was already committed on the base branch `rename-replay-tab-id`. This PR appends three additional DDL statements to the same migration file.

Prisma tracks which migrations have been applied by storing the SQL content hash in `_prisma_migrations`. Any environment that already ran the original single-line migration will have a hash mismatch against this updated file. When `prisma migrate deploy` is next run on such an environment, Prisma will report a "drift" error and refuse to apply, because the applied migration no longer matches the file on disk.

Since the base branch is a pre-merge feature branch (not yet applied to dev or production), the risk here is scoped to developer local environments and CI runs on the base branch. However, if any developer or CI runner has already applied the one-line migration against a real database, they will need to manually resolve the drift (e.g. `prisma migrate resolve --applied`).

The safer practice is to add a new, separate migration file for the table renames (e.g. `20260217000000_rename_session_recording_tables_to_session_replay.sql`). This keeps each migration atomic, avoids hash conflicts on any already-migrated database, and also fixes the misleading migration name (which currently only references the `tabId` rename, not the table renames).
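To see why the drift error occurs, note that the checksum Prisma stores for each applied migration is a hash of the file's full contents, so appending statements changes it. A minimal sketch (the original single-statement SQL shown here is hypothetical, and the SHA-256 choice matches what recent Prisma versions record):

```typescript
import { createHash } from "node:crypto";

// Hash of the migration file contents, as recorded per migration in _prisma_migrations.
function migrationChecksum(sql: string): string {
  return createHash("sha256").update(sql).digest("hex");
}

// Hypothetical original single-statement migration (the tabId rename).
const applied = `ALTER TABLE "SessionRecording" RENAME COLUMN "tabId" TO "replayTabId";`;

// The same file after this PR appends the table renames.
const onDisk = applied + `\nALTER TABLE "SessionRecording" RENAME TO "SessionReplay";`;

// The stored checksum no longer matches the file on disk, so the next
// `prisma migrate deploy` reports drift and refuses to apply.
console.log(migrationChecksum(applied) === migrationChecksum(onDisk)); // false
```

Putting the new statements in their own migration file keeps the applied file's checksum unchanged, so no drift is reported.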
```diff
-model SessionRecordingChunk {
+model SessionReplayChunk {
   id String @id @default(uuid()) @db.Uuid
```
Redundant @map annotation

The `@map("sessionReplayId")` decorator is redundant here because the Prisma field name (`sessionReplayId`) is identical to the mapped database column name. Prisma only uses `@map` when the field name differs from the underlying column name. It can be removed to avoid confusion:

```prisma
sessionReplayId String @db.Uuid
```
Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
Additional Comments (1)
The payload written to S3 previously used the key `session_recording_id`:

```json
{
  "session_recording_id": "<id>",
  ...
}
```

This PR renames it to `session_replay_id`, and the chunk-download path validates the parsed payload against the URL parameter:

```ts
if (parsed.session_replay_id !== sessionReplayId) {
  throw new StackAssertionError("Decoded session replay chunk session_replay_id mismatch", ...);
}
```

Any chunk written to S3 before this change carries `session_recording_id` in its payload, so `parsed.session_replay_id` is `undefined` and this check throws. Since the migration renames the database tables but does not rewrite existing S3 objects, previously recorded replays would become unreadable.

Consider making the check backwards-compatible:

```ts
// Accept both old ("session_recording_id") and new ("session_replay_id") field names
const payloadId = parsed.session_replay_id ?? parsed.session_recording_id;
if (payloadId !== sessionReplayId) {
  throw new StackAssertionError("Decoded session replay chunk id mismatch", {
    expected: sessionReplayId,
    actual: payloadId,
  });
}
```

The same fix is needed in both places where this check appears.
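As a self-contained illustration of the suggested fallback, the snippet below shows the `??` behavior in isolation; the `ChunkPayload` type is hypothetical, written only for this example:

```typescript
// Hypothetical payload shape for this example; not the real type from the codebase.
type ChunkPayload = {
  session_replay_id?: string;
  session_recording_id?: string; // legacy field name for chunks written before the rename
};

// Prefer the new field; fall back to the legacy one so old S3 objects stay readable.
function resolvePayloadId(parsed: ChunkPayload): string | undefined {
  return parsed.session_replay_id ?? parsed.session_recording_id;
}

console.log(resolvePayloadId({ session_recording_id: "old-id" })); // "old-id"
console.log(resolvePayloadId({ session_replay_id: "new-id" }));    // "new-id"
```

Because `??` only falls through on `null`/`undefined`, a payload that has both fields always resolves to the new `session_replay_id`.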
No description provided.