
rename session recording to session replay#1207

Merged
BilalG1 merged 2 commits into rename-replay-tab-id from rename-to-session-replay
Feb 17, 2026

Conversation

Collaborator

@BilalG1 BilalG1 commented Feb 17, 2026

No description provided.


vercel bot commented Feb 17, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project | Deployment | Actions | Updated (UTC)
stack-backend | Ready | Preview, Comment | Feb 17, 2026 6:55pm
stack-dashboard | Ready | Preview, Comment | Feb 17, 2026 6:55pm
stack-demo | Ready | Preview, Comment | Feb 17, 2026 6:55pm
stack-docs | Ready | Preview, Comment | Feb 17, 2026 6:55pm

Contributor

coderabbitai bot commented Feb 17, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Contributor

greptile-apps bot commented Feb 17, 2026

Greptile Summary

This PR completes the rename of the "session recording" feature to "session replay" across the entire stack: database tables (SessionRecording → SessionReplay, SessionRecordingChunk → SessionReplayChunk), API routes (/session-recordings/ → /session-replays/), Prisma models, SDK interfaces, and dashboard UI. The rename is thorough — no residual session_recording references remain in non-migration files.

Key findings:

  • S3 payload key backwards incompatibility: The batch upload route previously stored the replay identifier in S3 JSON under the key session_recording_id; it now stores it as session_replay_id. Both event-reading routes assert parsed.session_replay_id === sessionReplayId, meaning any chunk object uploaded before this PR will fail with a StackAssertionError at read time. A backwards-compatible fallback (parsed.session_replay_id ?? parsed.session_recording_id) would prevent breaking existing dev/staging data.

  • Migration modifies an already-committed file: The migration 20260216000000_rename_tab_id_to_session_replay_segment_id previously only renamed the tabId column (committed on the base feature branch). This PR appends three more ALTER TABLE statements to the same file. Any environment that already applied the one-line version will see a Prisma hash mismatch ("drift") on next migrate deploy. A separate new migration file would be safer.

  • Redundant @map annotation: sessionReplayId String @db.Uuid @map("sessionReplayId") in schema.prisma maps the field to itself — the @map decorator has no effect here and can be removed.

Confidence Score: 3/5

  • Safe to merge after addressing the S3 payload backwards-compatibility issue and the modified-migration concern.
  • The rename itself is complete and consistent across all layers. However, two issues lower confidence: (1) the S3 payload key rename from session_recording_id to session_replay_id silently breaks reading any chunk data uploaded before the PR, because both event routes throw a StackAssertionError on mismatch; (2) appending to an existing migration file will cause Prisma drift errors on any environment that already applied the original single-line migration. Both are fixable before merge with low effort.
  • apps/backend/src/app/api/latest/session-replays/batch/route.tsx, apps/backend/src/app/api/latest/internal/session-replays/[session_replay_id]/events/route.tsx, apps/backend/src/app/api/latest/internal/session-replays/[session_replay_id]/chunks/[chunk_id]/events/route.tsx, apps/backend/prisma/migrations/20260216000000_rename_tab_id_to_session_replay_segment_id/migration.sql

Important Files Changed

  • apps/backend/prisma/migrations/20260216000000_rename_tab_id_to_session_replay_segment_id/migration.sql — Extends an existing migration (which previously only renamed a column) with table renames; the migration name is now misleading, as it renames tables too, not just the tab_id column.
  • apps/backend/prisma/schema.prisma — Correctly renames the SessionRecording/SessionRecordingChunk models to SessionReplay/SessionReplayChunk with proper @@map directives; contains a minor redundant @map("sessionReplayId") on the sessionReplayId field.
  • apps/backend/src/app/api/latest/session-replays/batch/route.tsx — Renames the batch upload endpoint from session-recordings to session-replays; changes the S3 payload key from "session_recording_id" to "session_replay_id" and the S3 path prefix from "session-recordings/" to "session-replays/" — both are backwards-incompatible changes for any previously stored S3 data.
  • apps/backend/src/app/api/latest/internal/session-replays/[session_replay_id]/events/route.tsx — Updated to check for the "session_replay_id" field in S3 JSON data; this will throw a StackAssertionError for any existing S3 chunk that stored "session_recording_id" in its payload (backwards incompatibility).
  • apps/backend/src/app/api/latest/internal/session-replays/[session_replay_id]/chunks/[chunk_id]/events/route.tsx — Checks for "session_replay_id" in the S3 payload, but old uploaded chunks stored "session_recording_id"; will throw a StackAssertionError for any pre-existing S3 data.

Sequence Diagram

sequenceDiagram
    participant Client as Client SDK
    participant BatchAPI as POST /api/v1/session-replays/batch
    participant S3 as S3 Storage
    participant DB as PostgreSQL
    participant AdminAPI as Admin Internal APIs

    Client->>BatchAPI: POST events (browser_session_id, session_replay_segment_id, batch_id, events[])
    BatchAPI->>DB: Find recent SessionReplay for refresh_token (within idle timeout)
    alt No recent replay exists
        BatchAPI->>DB: Create new SessionReplay row
    else Recent replay found
        BatchAPI->>DB: Update SessionReplay (startedAt, lastEventAt)
    end
    BatchAPI->>DB: Check for duplicate (tenancyId, sessionReplayId, batchId)
    alt Duplicate found
        BatchAPI-->>Client: 200 deduped=true
    else New batch
        BatchAPI->>S3: Upload gzipped payload to session-replays/{projectId}/{branchId}/{replayId}/{batchId}.json.gz
        BatchAPI->>DB: Create SessionReplayChunk row
        BatchAPI-->>Client: 200 deduped=false, session_replay_id
    end

    Note over AdminAPI,DB: Admin viewing replays
    AdminAPI->>DB: List SessionReplays (paginated by lastEventAt DESC)
    AdminAPI->>DB: Aggregate chunk counts per session
    AdminAPI-->>AdminAPI: Return replay list with user info

    AdminAPI->>DB: List SessionReplayChunks for session (paginated by firstEventAt ASC)
    AdminAPI->>S3: Download & gunzip chunk objects (up to 10 concurrent)
    Note over AdminAPI,S3: Validates parsed.session_replay_id matches URL param
    AdminAPI-->>AdminAPI: Return decompressed events

Last reviewed commit: d7da1f6

Contributor

@greptile-apps greptile-apps bot left a comment

20 files reviewed, 3 comments


Comment on lines +2 to +5

ALTER TABLE "SessionRecording" RENAME TO "SessionReplay";
ALTER TABLE "SessionRecordingChunk" RENAME TO "SessionReplayChunk";
ALTER TABLE "SessionReplayChunk" RENAME COLUMN "sessionRecordingId" TO "sessionReplayId";
Contributor


Modifying an already-committed migration

This migration file previously contained only one statement (renaming the tabId column), which was already committed on the base branch rename-replay-tab-id. This PR appends three additional DDL statements to the same migration file.

Prisma tracks which migrations have been applied by storing the SQL content hash in _prisma_migrations. Any environment that already ran the original single-line migration will have a hash mismatch against this updated file. When prisma migrate deploy is next run on such an environment, Prisma will report a "drift" error and refuse to apply, because the applied migration no longer matches the file on disk.

Since the base branch is a pre-merge feature branch (not yet applied to dev or production), the risk here is scoped to developer local environments and CI runs on the base branch. However, if any developer or CI runner has already applied the one-line migration against a real database, they will need to manually resolve the drift (e.g. prisma migrate resolve --applied).

The safer practice is to add a new, separate migration file for the table renames (e.g. 20260217000000_rename_session_recording_tables_to_session_replay.sql). This keeps each migration atomic, avoids hash conflicts on any already-migrated database, and also fixes the misleading migration name (which currently only references the tabId rename, not the table renames).
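Under that approach, the three appended statements would move into their own migration file. A sketch (the timestamped directory name is illustrative; the statements themselves are the ones this PR appended):

```sql
-- apps/backend/prisma/migrations/20260217000000_rename_session_recording_tables_to_session_replay/migration.sql
-- (illustrative path; DDL copied from this PR's diff)
ALTER TABLE "SessionRecording" RENAME TO "SessionReplay";
ALTER TABLE "SessionRecordingChunk" RENAME TO "SessionReplayChunk";
ALTER TABLE "SessionReplayChunk" RENAME COLUMN "sessionRecordingId" TO "sessionReplayId";
```

The existing 20260216000000 migration would then keep only its original tabId rename, so its recorded checksum stays unchanged on any database that already applied it.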

- model SessionRecordingChunk {
+ model SessionReplayChunk {
    id String @id @default(uuid()) @db.Uuid
Contributor


Redundant @map annotation

The @map("sessionReplayId") decorator is redundant here because the Prisma field name (sessionReplayId) is identical to the mapped database column name. Prisma only uses @map when the field name differs from the underlying column name. This can be removed to avoid confusion.

Suggested change
sessionReplayId String @db.Uuid

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

Contributor

greptile-apps bot commented Feb 17, 2026

Additional Comments (1)

apps/backend/src/app/api/latest/session-replays/batch/route.tsx
S3 payload key renamed — breaks reading pre-existing chunk data

The payload written to S3 previously used the key session_recording_id:

{
  "session_recording_id": "<id>",
  ...
}

This PR renames it to session_replay_id. The events routes (both /events and /chunks/{id}/events) read S3 objects and then assert:

if (parsed.session_replay_id !== sessionReplayId) {
  throw new StackAssertionError("Decoded session replay chunk session_replay_id mismatch", ...);
}

Any SessionReplayChunk row whose s3Key points to an object uploaded by the old code will contain session_recording_id, not session_replay_id, causing this assertion to throw a StackAssertionError for every existing chunk.

Since session_recording_id will be undefined in the old payload, the condition undefined !== sessionReplayId is always true, so no existing session replay can be viewed after this migration is applied until all old S3 objects are either re-uploaded or the assertion is made forward-compatible.

Consider making the check backwards-compatible:

// Accept both old ("session_recording_id") and new ("session_replay_id") field names
const payloadId = parsed.session_replay_id ?? parsed.session_recording_id;
if (payloadId !== sessionReplayId) {
  throw new StackAssertionError("Decoded session replay chunk id mismatch", {
    expected: sessionReplayId,
    actual: payloadId,
  });
}

The same fix is needed in both events/route.tsx and chunks/[chunk_id]/events/route.tsx.

- expected: sessionRecordingId,
- actual: parsed.session_recording_id,
+ if (parsed.session_replay_id !== sessionReplayId) {
+   throw new StackAssertionError("Decoded session replay chunk session_replay_id mismatch", {

Session replay event retrieval throws error for old S3 data that uses session_recording_id instead of session_replay_id


@BilalG1 BilalG1 requested a review from N2D4 February 17, 2026 18:56
@BilalG1 BilalG1 merged commit 98c9acb into rename-replay-tab-id Feb 17, 2026
28 of 32 checks passed
@BilalG1 BilalG1 deleted the rename-to-session-replay branch February 17, 2026 18:59
