
session replays #1187

Merged
BilalG1 merged 34 commits into dev from analytics-replays-4
Feb 16, 2026

Conversation


@BilalG1 BilalG1 commented Feb 11, 2026

https://www.loom.com/share/3b7c9288149e4f878693281778c9d7e0

Todos (future PRs)

  • Fix pre-login recording
  • Better session search (filters, cmd-k, etc)

Summary by CodeRabbit

  • New Features

    • Analytics → Replays: session recording & multi-tab replay with timeline, speed, seek, and playback settings; dashboard UI for listing and viewing replays.
  • Admin APIs

    • Admin endpoints to list recordings, list chunks, fetch chunk events, and retrieve all events (paginated).
  • Client

    • Client-side rrweb recording with batching, deduplication, upload API and a send-batch client method.
  • Configuration

    • New STACK_S3_PRIVATE_BUCKET for private session storage.
  • Tests

    • Extensive unit and end-to-end tests for replay logic, streams, playback, and APIs.
  • Chores

    • Removed an E2E API test GitHub Actions workflow.

…ad endpoint

- Add SessionRecording and SessionRecordingChunk Prisma models
- Add migration for session_recordings_mvp
- Add seed data for session recordings
- Add S3 uploadBytes, downloadBytes, and getS3PublicUrl utilities
- Add POST /session-recordings/batch endpoint for client uploads
- Add StackAnalyticsInternal component with rrweb DOM capture and batched flush
- Accept AnalyticsOptions prop in StackProvider
- Conditionally render StackAnalyticsInternal in client provider
- Export AnalyticsOptions types from template package
- Add rrweb dependency to template, js, react, and stack packages
…ings

- Add SessionRecording and SessionRecordingChunk CRUD response types
- Add listSessionRecordings, listChunks, getEvents to admin interface
- Implement admin SDK methods for session recording access
- Add GET /internal/session-recordings endpoint (list sessions)
- Add GET /internal/session-recordings/[id]/chunks endpoint (list chunks)
- Add GET /internal/session-recordings/[id]/chunks/[id]/events endpoint (fetch from S3)
- Add E2E tests for batch upload and admin endpoints
- Add server component wrapper and main replay viewer (page + page-client)
- Add playback timing/state logic (session-replay-playback)
- Add multi-tab stream grouping utilities (session-replay-streams)
- Add unit tests for stream grouping
- Register "Replays" nav item in apps-frontend
- Disable session recording on dashboard itself
- Add rrweb dependency to dashboard package

vercel bot commented Feb 11, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project Deployment Actions Updated (UTC)
stack-backend Ready Ready Preview, Comment Feb 16, 2026 9:33pm
stack-dashboard Ready Ready Preview, Comment Feb 16, 2026 9:33pm
stack-demo Ready Ready Preview, Comment Feb 16, 2026 9:33pm
stack-docs Ready Ready Preview, Comment Feb 16, 2026 9:33pm


coderabbitai bot commented Feb 11, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough

Adds session-recording support: DB schema and seed, S3-backed chunk upload/download, backend batch and admin/internal APIs, client-side rrweb recorder and batching, dashboard replay UI and playback engine with tests, private S3 bucket config, and removal of an E2E GitHub Actions workflow.

Changes

Cohort / File(s) Summary
Database & Seeding
apps/backend/prisma/migrations/20260210120000_session_recordings_mvp/migration.sql, apps/backend/prisma/schema.prisma, apps/backend/prisma/seed.ts
Add SessionRecording and SessionRecordingChunk models, FKs, indexes, and a seeding helper for session recordings.
Backend API Routes
apps/backend/src/app/api/latest/session-recordings/batch/route.tsx, apps/backend/src/app/api/latest/internal/session-recordings/route.tsx, apps/backend/src/app/api/latest/internal/session-recordings/.../chunks/route.tsx, .../chunks/[chunk_id]/events/route.tsx, .../[session_recording_id]/events/route.tsx
New POST batch upload (gzip, S3 upload, dedupe/upsert) and admin/internal GET endpoints for paginated sessions, chunks, and events with validation, pagination, and error mapping.
S3 Utilities & Config
apps/backend/src/s3.tsx, apps/backend/.env, apps/backend/.env.development, docker/server/.env, docker/dependencies/docker.compose.yaml, docker/emulator/docker.compose.yaml
Add STACK_S3_PRIVATE_BUCKET config, update s3mock initialBuckets, add uploadBytes/downloadBytes and readBodyToBytes helpers, and gate public/private bucket usage.
Dashboard Replay UI & Playback
apps/dashboard/src/app/.../analytics/replays/page-client.tsx, apps/dashboard/src/app/.../analytics/replays/page.tsx, apps/dashboard/src/lib/session-replay-streams.ts, apps/dashboard/src/lib/session-replay-streams.test.ts, apps/dashboard/src/lib/session-replay-playback.ts, apps/dashboard/src/app/.../analytics/replays/session-replay-machine.ts, apps/dashboard/src/app/.../analytics/replays/*.test.ts
Add rrweb-based replay viewer, multi-tab stream aggregation, timeline/player controls, complex replay reducer/machine, utilities, tests, and new page/navigation item.
Client Capture & Provider Integration
packages/template/src/components/stack-analytics.tsx, packages/template/src/providers/stack-provider.tsx, packages/template/src/providers/stack-provider-client.tsx, apps/dashboard/src/app/layout.tsx, docs/src/app/layout.tsx
Introduce StackAnalytics rrweb recorder, AnalyticsOptions types, wire analytics prop through providers and layouts, and conditionally mount the recorder.
SDK Types & Interfaces / Template Impl
packages/stack-shared/src/interface/crud/session-recordings.ts, packages/stack-shared/src/interface/client-interface.ts, packages/stack-shared/src/interface/admin-interface.ts, packages/template/src/lib/stack-app/.../interfaces/*, .../implementations/*
Add admin CRUD types/methods, client sendSessionRecordingBatch API, admin list/get methods, and template app implementations exposing these APIs (note: duplicate declaration of sendSessionRecordingBatch observed).
Dependencies & Packaging
apps/dashboard/package.json, packages/js/package.json, packages/react/package.json, packages/stack/package.json, packages/template/package.json, packages/template/package-template.json, packages/template/src/index.ts, apps/dashboard/src/lib/apps-frontend.tsx
Add rrweb dependency across packages, export Analytics types, remove one convex export, and add "Replays" nav item.
E2E Tests & CI Workflow
apps/e2e/tests/backend/endpoints/api/v1/session-recordings.test.ts, .github/workflows/e2e-source-of-truth-api-tests.yaml
Add comprehensive E2E tests for session-recordings; remove GitHub Actions workflow that ran E2E tests against external source-of-truth DB.
Environment / Docker
docker/dependencies/docker.compose.yaml, docker/emulator/docker.compose.yaml, docker/server/.env
Expose and propagate STACK_S3_PRIVATE_BUCKET env var and include private bucket in s3mock initialBuckets.

Sequence Diagram(s)

sequenceDiagram
    participant Client as Web Client
    participant RRWeb as RRWeb Recorder
    participant SDK as Stack SDK
    participant Backend as Backend API
    participant S3 as S3 Storage
    participant DB as Database

    Client->>RRWeb: start recording
    RRWeb->>SDK: buffer events, build batch
    SDK->>Backend: POST /session-recordings/batch (gzipped)
    activate Backend
    Backend->>DB: find/upsert SessionRecording (idle-time reuse)
    Backend->>S3: PutObject gzipped chunk
    S3-->>Backend: s3_key
    Backend->>DB: create SessionRecordingChunk (metadata)
    Backend-->>SDK: { session_recording_id, batch_id, s3_key, deduped }
    deactivate Backend
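The upload flow in the diagram above can be sketched end to end. This is a hedged illustration: the `session_recording_id` and `events` field names match the payload validation described elsewhere in this PR, but `buildBatchBody`/`decodeBatchBody` are hypothetical helper names, not the actual source.

```typescript
import { gzipSync, gunzipSync } from "node:zlib";

// Client side (sketch): serialize the batch and gzip it before POSTing.
function buildBatchBody(sessionRecordingId: string, events: unknown[]): Buffer {
  const json = JSON.stringify({ session_recording_id: sessionRecordingId, events });
  return gzipSync(Buffer.from(json, "utf8"));
}

// Server side (sketch): gunzip, parse, and validate the expected shape.
function decodeBatchBody(body: Buffer): { session_recording_id: string, events: unknown[] } {
  const parsed: unknown = JSON.parse(gunzipSync(body).toString("utf8"));
  if (typeof parsed !== "object" || parsed === null) throw new Error("decoded chunk is not an object");
  const obj = parsed as Record<string, unknown>;
  if (typeof obj.session_recording_id !== "string") throw new Error("missing session_recording_id");
  if (!Array.isArray(obj.events)) throw new Error("events is not an array");
  return { session_recording_id: obj.session_recording_id, events: obj.events };
}
```

The real endpoint additionally dedupes by batch ID and stores the gzipped bytes in the private S3 bucket before recording chunk metadata in the database.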
sequenceDiagram
    participant Dashboard as Dashboard UI
    participant Backend as Backend API
    participant DB as Database
    participant S3 as S3 Storage
    participant Replayer as RRWeb Replayer

    Dashboard->>Backend: GET /internal/session-recordings (cursor)
    Backend->>DB: query sessions (limit+1)
    Backend-->>Dashboard: sessions + next_cursor
    Dashboard->>Backend: GET /internal/session-recordings/{id}/chunks (cursor)
    Backend->>DB: query chunks
    Backend-->>Dashboard: chunks + next_cursor
    Dashboard->>Backend: GET /internal/session-recordings/{id}/chunks/{chunkId}/events
    activate Backend
    Backend->>S3: GetObject gzipped bytes
    S3-->>Backend: gzipped bytes
    Backend->>Backend: gunzip & validate -> events
    deactivate Backend
    Backend-->>Dashboard: events array
    Dashboard->>Replayer: init with events -> render replay

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested reviewers

  • N2D4

Poem

🐰 I nibble bytes and bundle hops,

Chunks on S3 in tidy crops,
Tabs align and timelines play,
I hop through traces of the day,
A fluffy rewind — replay, hooray!

🚥 Pre-merge checks | ❌ 4
❌ Failed checks (3 warnings, 1 inconclusive)
Check name Status Explanation Resolution
Description check ⚠️ Warning The description is insufficient. It only contains a Loom link and future todos, lacking explanation of what changes were made, why they were made, and how they work. The repository template requires meaningful context. Expand the description to explain the feature implementation, key components (backend recording API, storage schema, admin endpoints, frontend UI), and provide context for review beyond a video link.
Docstring Coverage ⚠️ Warning Docstring coverage is 16.18% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
Merge Conflict Detection ⚠️ Warning ❌ Merge conflicts detected (31 files):

⚔️ apps/backend/.env (content)
⚔️ apps/backend/.env.development (content)
⚔️ apps/backend/prisma/schema.prisma (content)
⚔️ apps/backend/prisma/seed.ts (content)
⚔️ apps/backend/src/app/api/latest/auth/sessions/current/route.tsx (content)
⚔️ apps/backend/src/lib/tokens.tsx (content)
⚔️ apps/backend/src/s3.tsx (content)
⚔️ apps/dashboard/package.json (content)
⚔️ apps/dashboard/src/app/layout.tsx (content)
⚔️ apps/dashboard/src/lib/apps-frontend.tsx (content)
⚔️ apps/e2e/tests/backend/endpoints/api/v1/users.test.ts (content)
⚔️ docker/dependencies/docker.compose.yaml (content)
⚔️ docker/emulator/docker.compose.yaml (content)
⚔️ docker/server/.env (content)
⚔️ docs/src/app/layout.tsx (content)
⚔️ packages/js/package.json (content)
⚔️ packages/react/package.json (content)
⚔️ packages/stack-shared/src/interface/admin-interface.ts (content)
⚔️ packages/stack-shared/src/interface/client-interface.ts (content)
⚔️ packages/stack-shared/src/known-errors.tsx (content)
⚔️ packages/stack/package.json (content)
⚔️ packages/template/package-template.json (content)
⚔️ packages/template/package.json (content)
⚔️ packages/template/src/index.ts (content)
⚔️ packages/template/src/lib/stack-app/apps/implementations/admin-app-impl.ts (content)
⚔️ packages/template/src/lib/stack-app/apps/implementations/client-app-impl.ts (content)
⚔️ packages/template/src/lib/stack-app/apps/interfaces/admin-app.ts (content)
⚔️ packages/template/src/lib/stack-app/apps/interfaces/client-app.ts (content)
⚔️ packages/template/src/providers/stack-provider-client.tsx (content)
⚔️ packages/template/src/providers/stack-provider.tsx (content)
⚔️ pnpm-lock.yaml (content)

These conflicts must be resolved before merging into dev.
Resolve conflicts locally and push changes to this branch.
Title check ❓ Inconclusive The title 'session replays' is vague and generic, failing to convey the scope and impact of this substantial changeset that implements a complete session replay feature across backend, frontend, and client libraries. Enhance the title to be more descriptive, e.g., 'Implement session replay analytics feature with recording, storage, and playback' or 'Add session replay functionality with rrweb integration and multi-tab support'.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch analytics-replays-4

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@vercel vercel bot left a comment


Additional Suggestion:

Calling generateUuid() inside useMemo during SSG throws "generateUuid() called outside browser", breaking Next.js static builds



greptile-apps bot commented Feb 11, 2026

Greptile Overview

Greptile Summary

This PR adds a new Dashboard "Session Replays" viewer under Analytics that can play back multi-tab session recordings using rrweb. It introduces new client-side playback/state logic (global timeline, gap fast-forwarding between tabs, buffering while chunks download), stream-grouping utilities with unit tests, a nav item for Replays, and disables replay capture on the dashboard app itself via StackProvider analytics settings.

The main issue found is in the replay viewer's parallel chunk-event fetching: the worker queue uses a shared mutable index across async workers, which can skip or duplicate chunk-fetch tasks and lead to missing or duplicated replay data. There is also a paging merge that can drop recordings if list requests overlap or return out of order.
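The safe version of that worker-queue pattern claims the index synchronously, before any `await`, so two workers can never pick up the same task. A minimal sketch with illustrative names (this is not the dashboard's actual code):

```typescript
// Run `tasks` with a bounded number of concurrent workers. The key point:
// `nextIndex++` is a synchronous read-and-increment with no `await` between
// the read and the write, so each task index is claimed exactly once.
async function runWithConcurrency<T>(
  tasks: Array<() => Promise<T>>,
  concurrency: number,
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let nextIndex = 0;
  const worker = async (): Promise<void> => {
    while (nextIndex < tasks.length) {
      const i = nextIndex++; // claim the slot before awaiting anything
      const task = tasks[i];
      if (!task) return;
      results[i] = await task();
    }
  };
  await Promise.all(Array.from({ length: Math.max(1, concurrency) }, () => worker()));
  return results;
}
```

The buggy variant reads the shared index, awaits the fetch, and only then increments, which lets another worker observe the same index in the meantime.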

Confidence Score: 3/5

  • This PR is close to mergeable but has a real correctness issue in the replay chunk download concurrency that can break playback.
  • Most changes are additive and isolated (new viewer page, utilities, tests, and a safe analytics disable flag). However, the parallel chunk event fetching uses an unsafe shared index across async workers, which can result in skipped/duplicated fetches and inconsistent replay data; additionally, pagination merging can lose items under overlapping requests.
  • apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/page-client.tsx

Important Files Changed

Filename Overview
apps/dashboard/package.json Adds rrweb dependency to support dashboard replay viewer.
apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/page-client.tsx Adds replay viewer UI and playback/download logic; contains a concurrency bug in chunk-event fetching worker queue that can skip/duplicate tasks.
apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/page.tsx Adds server page wrapper that renders the client replay viewer component.
apps/dashboard/src/app/layout.tsx Disables session replay capture on the dashboard via StackProvider analytics config.
apps/dashboard/src/lib/apps-frontend.tsx Registers new 'Replays' navigation item under Analytics app.
apps/dashboard/src/lib/session-replay-playback.ts Adds pure playback-state helpers for global timeline offsets, finish actions, and seek state.
apps/dashboard/src/lib/session-replay-streams.ts Adds tab-stream grouping and timeline mapping utilities using Map to avoid prototype key issues.
apps/dashboard/src/lib/session-replay-streams.test.ts Adds unit tests covering stream grouping, sorting, limiting, and offset mapping helpers.

Sequence Diagram

sequenceDiagram
  participant U as Admin user
  participant P as Replays page (PageClient)
  participant A as Stack AdminApp (session recordings API)
  participant R as rrweb Replayer

  U->>P: Open Analytics > Replays
  P->>A: listSessionRecordings(limit, cursor)
  A-->>P: recordings[] + nextCursor
  U->>P: Select a recording
  P->>A: listSessionRecordingChunks(recordingId, paginated)
  A-->>P: chunk metadata (tabId, first/last timestamps)
  P->>P: groupChunksIntoTabStreams(chunks)
  P->>P: computeGlobalTimeline(streams)
  par CHUNK_EVENTS_CONCURRENCY workers
    P->>A: getSessionRecordingChunkEvents(recordingId, chunkId)
    A-->>P: rrweb events[]
    P->>P: coerceRrwebEvents + append to eventsByTab
    P->>R: (ensureReplayerForTab) new Replayer(events, config)
    P->>R: addEvent(event) (incremental)
  end
  U->>P: Play / Pause / Seek
  P->>R: play(localOffset) / pause(localOffset)
  R-->>P: finish event
  P->>P: choose next tab / gap fast-forward / buffer / finish


@greptile-apps greptile-apps bot left a comment


8 files reviewed, 2 comments

Edit Code Review Agent Settings | Greptile

@BilalG1 BilalG1 changed the base branch from analytics-replays-3 to dev February 12, 2026 19:33
@BilalG1 BilalG1 changed the title from "feat: dashboard replay viewer for multi-tab sessions" to "session replays" Feb 12, 2026
@BilalG1 BilalG1 removed their assignment Feb 12, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

🤖 Fix all issues with AI agents
In `apps/backend/src/app/api/latest/internal/session-recordings/[session_recording_id]/chunks/route.tsx`:
- Around line 45-62: The handler currently dereferences the optional query
object (used for rawLimit and cursorId) which can be undefined; change accesses
to safely guard query (e.g., use query?.limit and query?.cursor or coalesce
query to an empty object) before computing rawLimit/parsedLimit/limit and
cursorId so a missing query doesn't throw; update the logic around rawLimit,
parsedLimit, limit, and cursorPivot in the async handler to use the guarded
query (or throw a clear error via throwErr if you want to assert query must
exist) while keeping DEFAULT_LIMIT and MAX_LIMIT behavior the same.
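A minimal sketch of that guarded access follows. The `DEFAULT_LIMIT`/`MAX_LIMIT` names come from the comment above; the numeric values, function name, and query shape are assumptions for illustration only.

```typescript
const DEFAULT_LIMIT = 50; // assumed value for illustration
const MAX_LIMIT = 200;    // assumed value for illustration

// Safely derive limit and cursor even when the query object is undefined.
function parsePagination(query: { limit?: string, cursor?: string } | undefined) {
  const rawLimit = query?.limit; // optional chaining: no throw when query is missing
  const parsedLimit = rawLimit === undefined ? Number.NaN : Number(rawLimit);
  const limit = Number.isFinite(parsedLimit)
    ? Math.min(Math.max(Math.trunc(parsedLimit), 1), MAX_LIMIT)
    : DEFAULT_LIMIT;
  const cursorId = query?.cursor ?? null;
  return { limit, cursorId };
}
```

With this shape, a missing query falls back to the defaults instead of throwing, while the existing clamping behavior is preserved.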

In `apps/backend/src/app/api/latest/session-recordings/batch/route.tsx`:
- Around line 104-136: The recentSession lookup and subsequent upsert can merge
recordings across users because findFirst(...) only filters by refreshTokenId
and not projectUserId; update in prisma.sessionRecording.upsert(...) also
doesn't change projectUserId, so a reused recording could belong to a different
user. Fix by adding projectUserId to the where clause of the recentSession query
(the call to prisma.sessionRecording.findFirst) so it only matches recordings
for the same user, and keep projectUserId in the create payload of
prisma.sessionRecording.upsert (no change needed there) so new recordings are
correctly attributed; alternatively, if you want to allow transferring
ownership, explicitly include projectUserId in the upsert.update block of
prisma.sessionRecording.upsert to overwrite the owner.

In `apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/page-client.tsx`:
- Around line 1186-1189: The effect that calls runAsynchronously(() =>
loadChunksAndDownload(selectedRecordingId)) should only depend on
selectedRecordingId (and loadChunksAndDownload/runAsynchronously) because
selectedRecording is memoized from recordings and will get a new reference when
recordings array changes, causing unnecessary restarts; update the useEffect
dependency array to remove selectedRecording, keep [loadChunksAndDownload,
selectedRecordingId] (and runAsynchronously if it’s a stable reference), and
inside the effect use an early guard if (!selectedRecordingId) return; and, if
you need to reference selectedRecording inside the effect, compute or null-check
it there so loading isn’t restarted when recordings is appended.
- Around line 371-374: The useAdminApp hook is currently typed too narrowly,
forcing callers to cast to AdminAppWithSessionRecordings; update the useAdminApp
return type to the full StackAdminApp (the interface that already declares
listSessionRecordings, listSessionRecordingChunks, and
getSessionRecordingChunkEvents) so callers like PageClient no longer need the
cast. Locate the useAdminApp declaration, import or reference the StackAdminApp
type there, change its return signature to StackAdminApp (or a union/augmented
type that includes those three methods), and remove the "as
AdminAppWithSessionRecordings" cast in PageClient.
- Around line 1483-1490: Replace the unsafe any-casts and silent catch when
subscribing to rrweb's speedService: remove "(r as any).speedService" and
instead use the Replayer type (import Replayer and relevant machine/state types
from rrweb, e.g., SpeedService or MachineState) so you can call r.speedService
with proper typings; type the callback parameter (e.g., state:
SpeedMachineState) rather than any; before subscribing, guard that
r.speedService is defined (if (!r.speedService) return) and wrap subscribe in a
try/catch that logs or handles the error (use console.error or the existing
logger) instead of an empty catch; keep assigning the returned subscription to
speedSubRef.current as before.

In `apps/dashboard/src/app/layout.tsx`:
- Line 102: The analytics config on the StackProvider currently enables session
recording with input masking disabled which leaks secrets; update the analytics
prop on the StackProvider (the instance using stackServerApp and
translationLocale) so that recording is disabled by setting enabled: false, and
for defense-in-depth set maskAllInputs: true; ensure the prop becomes
analytics={{ replays: { maskAllInputs: true, enabled: false } }} to match the PR
description that dashboard recording should be disabled.

In `docs/src/app/layout.tsx`:
- Line 17: The analytics replay configuration currently sets maskAllInputs:
false in the StackProvider instantiation (StackProvider app={stackServerApp}
analytics={{ replays: { enabled: true, maskAllInputs: false } }}), which exposes
user-typed inputs; change this to either remove the explicit maskAllInputs
setting or set maskAllInputs: true for the public docs site so input fields are
masked in session replays, updating the StackProvider analytics.replays
configuration accordingly.

In `packages/stack-shared/src/interface/client-interface.ts`:
- Around line 247-267: Replace the manual try/catch in sendSessionRecordingBatch
with Result.fromThrowingAsync by calling Result.fromThrowingAsync(() =>
this.sendClientRequest(...)) to return a Result from the async call; then
normalize the error type to Error to satisfy the declared
Promise<Result<Response, Error>> (e.g., map or match the Result to convert
unknown to Error using e instanceof Error ? e : new Error(String(e))).
Alternatively, if you prefer not to normalize, update the method signature to
Promise<Result<Response, unknown>>; reference sendSessionRecordingBatch,
sendClientRequest and Result.fromThrowingAsync when applying the change.
🧹 Nitpick comments (15)
apps/backend/src/app/api/latest/internal/session-recordings/route.tsx (1)

81-83: Prefer ?? throwErr(...) over non-null assertion.

Line 83 uses ! on page[page.length - 1]. While hasMore guarantees page is non-empty, the coding guidelines prefer explicit error messages over non-null assertions.

Suggested fix
-    const nextCursor = hasMore ? page[page.length - 1]!.id : null;
+    const nextCursor = hasMore ? (page[page.length - 1] ?? throwErr("Expected non-empty page when hasMore is true")).id : null;

You'll need to add the import:

import { throwErr } from "@stackframe/stack-shared/dist/utils/errors";

As per coding guidelines: "Prefer ?? throwErr(...) over non-null assertions and fallback values, with good error messages explicitly stating assumptions."

apps/backend/src/app/api/latest/session-recordings/batch/route.tsx (2)

20-37: Add a comment explaining the as any cast.

Line 27 casts to any to access .timestamp on an unknown element. The subsequent type check (typeof ts !== "number") makes it safe, but per coding guidelines a comment explaining why any is necessary here would be helpful.

Suggested fix
-    const ts = (e as any).timestamp;
+    // `any` is needed because events are untyped rrweb payloads; the next line validates that ts is a finite number.
+    const ts = (e as any).timestamp; // eslint-disable-line @typescript-eslint/no-unsafe-member-access

As per coding guidelines: "Whenever you need to use any, leave a comment explaining why you're using it, why the type system fails, and how you can be certain that errors would still be flagged."


61-70: Use a typed boolean schema for deduped instead of yupMixed().

The deduped field is always true or false, but yupMixed() allows any value. This weakens response validation.

Suggested fix
 response: yupObject({
     statusCode: yupNumber().oneOf([200]).defined(),
     bodyType: yupString().oneOf(["json"]).defined(),
     body: yupObject({
       session_recording_id: yupString().defined(),
       batch_id: yupString().defined(),
       s3_key: yupString().defined(),
-      deduped: yupMixed().defined(),
+      deduped: yupMixed<boolean>().defined(),
     }).defined(),
 }),

Or better, if a yupBoolean helper is available in the schema-fields:

-      deduped: yupMixed().defined(),
+      deduped: yupBoolean().defined(),
apps/backend/prisma/seed.ts (1)

1812-1829: Non-null assertion on array access.

Line 1819 uses ! on the array element access. While the length check on line 1804 guarantees a non-empty array, this still goes against the project's preference.

Suggested fix
-    const projectUserId = userIds[Math.floor(Math.random() * userIds.length)]!;
+    const projectUserId = userIds[Math.floor(Math.random() * userIds.length)] ?? throwErr("Expected non-empty userIds array");

As per coding guidelines: "Prefer ?? throwErr(...) over non-null assertions and fallback values, with good error messages explicitly stating assumptions."

docker/server/.env (1)

35-40: Static analysis: env var ordering.

The linter suggests STACK_S3_PRIVATE_BUCKET should come before STACK_S3_REGION for alphabetical consistency. The current placement logically groups it with other S3 vars which is reasonable, but reordering would satisfy the linter.

Suggested fix
 STACK_S3_ENDPOINT=
+STACK_S3_PRIVATE_BUCKET=
 STACK_S3_REGION=
 STACK_S3_ACCESS_KEY_ID=
 STACK_S3_SECRET_ACCESS_KEY=
 STACK_S3_BUCKET=
-STACK_S3_PRIVATE_BUCKET=
packages/template/src/providers/stack-provider-client.tsx (1)

27-27: Consider wrapping analytics in an error boundary.

If StackAnalyticsInternal throws during render (e.g., rrweb fails to initialize on an unsupported browser), the error will propagate up and could break the entire app since Suspense doesn't catch render errors. An error boundary around the analytics component would prevent session recording failures from taking down the host application.

apps/backend/src/app/api/latest/internal/session-recordings/[session_recording_id]/chunks/[chunk_id]/events/route.tsx (2)

63-68: Prefer unknown over any for parsed JSON.

Per coding guidelines, any should be avoided. Since you're doing runtime type checks on parsed immediately after (lines 70-81), using unknown would be more type-safe and still work with the typeof / Array.isArray checks.

Suggested fix
-  let parsed: any;
+  let parsed: unknown;
   try {
     parsed = JSON.parse(new TextDecoder().decode(unzipped));
   } catch (e) {
     throw new StackAssertionError("Failed to decode session recording chunk JSON", { cause: e });
   }

Note: you'll need to adjust the property accesses on lines 73 and 87 to use type narrowing or a cast after the object check:

   if (typeof parsed !== "object" || parsed === null) {
     throw new StackAssertionError("Decoded session recording chunk is not an object");
   }
-  if (parsed.session_recording_id !== sessionRecordingId) {
+  const parsedObj = parsed as Record<string, unknown>;
+  if (parsedObj.session_recording_id !== sessionRecordingId) {
     throw new StackAssertionError("Decoded session recording chunk session_recording_id mismatch", {
       expected: sessionRecordingId,
-      actual: parsed.session_recording_id,
+      actual: parsedObj.session_recording_id,
     });
   }
-  if (!Array.isArray(parsed.events)) {
+  if (!Array.isArray(parsedObj.events)) {
     throw new StackAssertionError("Decoded session recording chunk events is not an array");
   }

51-60: S3 download error handling uses untyped catch — acceptable but worth a comment.

Line 54 uses catch (e: any) to access e?.$metadata?.httpStatusCode, which is an AWS SDK convention without a public typed error class. This is a valid reason for any — consider adding a brief comment explaining why, per coding guidelines.

apps/backend/src/s3.tsx (1)

73-98: Add brief comments explaining the as any casts in readBodyToBytes.

The coding guidelines ask for a comment whenever any is used. The casts on lines 78-79 and 83-85 are justified because AWS SDK response body types vary by runtime (Node.js Readable, Web ReadableStream, SdkStreamMixin), but a short comment would help future readers understand why.

packages/template/src/components/stack-analytics.tsx (2)

196-200: JSON.stringify(event) on every rrweb event is expensive for byte estimation.

rrweb FullSnapshot events can be hundreds of KB. Stringifying every event just to measure length adds significant CPU work on the hot path of DOM mutation recording. Consider a cheaper heuristic—e.g., a rough constant per event type, or stringifying only once per N events to amortize the cost.

♻️ Suggested alternative
-          approxBytes += JSON.stringify(event).length;
-          if (events.length >= MAX_EVENTS_PER_BATCH || approxBytes >= MAX_APPROX_BYTES_PER_BATCH) {
+          // Rough byte estimate without serializing every event.
+          // FullSnapshot (type 2) and FullSnapshot-derived events are large;
+          // use a coarse multiplier for others.
+          const evtType = (event as { type?: number }).type;
+          approxBytes += evtType === 2 ? 200_000 : 600;
+          if (events.length >= MAX_EVENTS_PER_BATCH || approxBytes >= MAX_APPROX_BYTES_PER_BATCH) {

215-223: visibilitychange handler fires on every tab switch, not just page unload.

The onPageHide callback also triggers when the user simply switches browser tabs (visibility → hidden → visible). This causes extra flush calls on tab-in, which is harmless but slightly wasteful. Consider checking document.visibilityState === "hidden" inside the handler to only flush when the page is being hidden.

♻️ Suggested change
       const onPageHide = () => {
+        if (document.visibilityState !== "hidden") return;
         runAsynchronously(() => flush({ keepalive: true }), { noErrorLogging: true });
       };
apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/page-client.tsx (4)

394-422: loadPage closes over the recordings state, leading to a stale reference on append.

On Line 408, cursor ? [...recordings, ...res.items] : res.items captures recordings from the closure at the time loadPage was created. If recordings changes between invocations (e.g., a React re-render updated the dep list), the callback gets recreated, but there's a subtle window: loadPage is a dep of the useEffect on Line 424-426 which will re-run loadPage(null), but that's guarded by hasFetchedInitialRef. The scroll-triggered calls at Line 435 use the current loadPage reference, so they're fine. However, using the functional updater form of setRecordings is safer and would let you remove recordings from the dependency array, stabilizing the callback.

♻️ Suggested change
-    const items = cursor ? [...recordings, ...res.items] : res.items;
-    setRecordings(items);
+    setRecordings(prev => cursor ? [...prev, ...res.items] : res.items);

Then remove recordings from the useCallback deps.


87-97: coerceRrwebEvents double-casts through unknown — consider a more descriptive comment.

Line 96's filtered as unknown as RrwebEventWithTime[] is a forced cast. The preceding validation only checks for a finite timestamp property but doesn't verify type, data, or other fields rrweb expects. This is acceptable since the data originates from rrweb recording, but a brief comment noting that the validation is intentionally minimal (trusting the source) would help.
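
A minimal sketch of what that comment plus the intentionally-thin validation could look like (the RrwebEventWithTime shape here is a stand-in, not the real rrweb type):

```typescript
// Sketch only: RrwebEventWithTime is a stand-in for the real rrweb type.
type RrwebEventWithTime = { type: number, data: unknown, timestamp: number };

function coerceRrwebEvents(raw: unknown[]): RrwebEventWithTime[] {
  const filtered = raw.filter(
    (e): e is { timestamp: number } =>
      typeof e === "object" && e !== null && "timestamp" in e
      && Number.isFinite(e.timestamp),
  );
  // Intentionally minimal validation: the events originate from rrweb's own
  // recorder and rrweb tolerates extra/missing fields at runtime, so we only
  // verify the timestamp needed for replay ordering before casting.
  return filtered as unknown as RrwebEventWithTime[];
}
```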


721-991: ensureReplayerForTab is very large (~270 lines) — consider extracting the "finish" handler.

The "finish" event handler alone (Lines 854-982) is ~130 lines with multiple branching paths (premature finish, buffer-while-downloading, tab switch, gap-fast-forward, true finish). Extracting it into a named function (e.g., handleTabFinish) would improve readability and make individual branches easier to test.


225-253: RAF-driven setCurrentTime on every animation frame may cause excessive re-renders.

The Timeline component calls setCurrentTime(getCurrentTimeMs()) on every requestAnimationFrame tick (~60fps). Each call triggers a state update and re-render of the component. Consider throttling updates to ~10fps (every ~100ms) since the timeline scrubber doesn't need sub-frame visual precision, or use a ref + manual DOM update for the progress bar position.
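
One possible shape for the suggested throttle, with setCurrentTime as a stand-in for the real Timeline state setter:

```typescript
// Hypothetical throttle for the RAF loop: forward the sampled time into React
// state at most once per `intervalMs`, instead of on every ~60fps tick.
function makeThrottledUpdater(
  setCurrentTime: (ms: number) => void,
  intervalMs = 100,
) {
  let lastUpdate = -Infinity;
  return (rafNow: number, currentTimeMs: number) => {
    if (rafNow - lastUpdate < intervalMs) return; // skip most frames
    lastUpdate = rafNow;
    setCurrentTime(currentTimeMs);
  };
}

// In the component:
//   const update = makeThrottledUpdater(setCurrentTime);
//   const tick = (now: number) => { update(now, getCurrentTimeMs()); requestAnimationFrame(tick); };
```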


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/page-client.tsx:
- Around line 419-447: The loadPage callback closes over recordings causing
stale appends; change the logic so when appending pages you call
setRecordings(prev => [...prev, ...res.items]) instead of using the recordings
variable, keep the initial-set path as setRecordings(res.items) when cursor is
null, and then remove recordings from the useCallback dependency array (keep
adminApp, loadingMore and any refs like hasFetchedInitialRef/hasAutoSelectedRef
and PAGE_SIZE as needed) so loadPage no longer relies on a potentially stale
recordings snapshot.

In apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/session-replay-machine.ts:
- Around line 689-740: The TOGGLE_PLAY_PAUSE handler incorrectly treats
"finished" like a playable state: when state.playbackMode === "finished"
pausedAtGlobalMs equals globalTotalMs so resuming immediately re-triggers
REPLAYER_FINISH; modify the Play branch in the TOGGLE_PLAY_PAUSE case to detect
when state.playbackMode === "finished" (or pausedAtGlobalMs >= globalTotalMs)
and reset the resume target (pausedAtGlobalMs) to 0 (or another safe start)
before computing buffering and calling playEffectsForAllTabs so the replay
restarts from the beginning instead of instantly finishing; update any uses of
target/localTarget computation accordingly (symbols: TOGGLE_PLAY_PAUSE,
playbackMode, pausedAtGlobalMs, globalTotalMs, playEffectsForAllTabs,
globalOffsetToLocalOffset).
- Around line 1-12: Remove the unused TabStream import from the top import list
and replace the three non-null assertions that dereference ranges[mid] and
candidates[0] with defensive nullish coalescing using the project's throwErr
helper; specifically, change occurrences that read ranges[mid]! to ranges[mid]
?? throwErr("Expected valid range at binary search position") and change the
return that uses candidates[0]! to (candidates[0] ?? throwErr("Expected at least
one candidate tab")).tabKey so the binary-search/range lookups and candidate
selection in this file use null-safe checks.
🧹 Nitpick comments (12)
apps/backend/src/app/api/latest/session-recordings/batch/route.tsx (3)

28-28: Unnecessary as any — TypeScript narrows sufficiently after the in check.

After the guards on lines 26–27 (typeof e !== "object", !("timestamp" in e)), TypeScript already narrows e to Record<"timestamp", unknown>, so you can access e.timestamp directly. Per coding guidelines, any casts require a justifying comment, but here the cast can simply be removed.

Suggested fix
-    const ts = (e as any).timestamp;
+    const ts = e.timestamp;

82-84: Body-size gate runs after the buffer is already in memory.

fullReq.bodyBuffer is populated by createSmartRouteHandler (via req.arrayBuffer()) before the handler is invoked, so a client can still force the server to allocate up to the framework's own limit before this check rejects it. This line is still useful as defense-in-depth validation, but consider also enforcing an upstream limit (e.g., reverse-proxy client_max_body_size or a streaming check) if you haven't already.
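
For the reverse-proxy side, a minimal nginx sketch (the route and the 2m/1m limits are illustrative and should be aligned with MAX_APPROX_BYTES_PER_BATCH):

```nginx
# Illustrative nginx sketch: cap request bodies before they reach the Node
# process. Align the limits with the server-side batch limit.
server {
    client_max_body_size 2m;  # nginx returns 413 for larger uploads

    location /api/latest/session-recordings/batch {
        client_max_body_size 1m;  # tighter per-route limit, if desired
        proxy_pass http://backend;  # assumes an upstream named "backend"
    }
}
```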


126-140: Consider moving the dedup check before the upsert.

The existingChunk lookup (line 143) is done after the upsert, so duplicate batches still perform an unnecessary DB write to update startedAt/lastEventAt. Since the timestamps are derived from the same events, the result is idempotent, but you can save a round-trip by checking for the existing chunk first and returning early before the upsert.
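
A sketch of the reordering, with an in-memory store standing in for Prisma (findChunk/upsertChunk are illustrative names, not the real schema):

```typescript
// Sketch of checking for an existing chunk before writing. The in-memory store
// stands in for Prisma; findChunk/upsertChunk are illustrative names.
type Chunk = { batchId: string, startedAt: number, lastEventAt: number };

function makeChunkStore() {
  const chunks = new Map<string, Chunk>();
  let writes = 0;
  return {
    findChunk: (batchId: string): Chunk | null => chunks.get(batchId) ?? null,
    upsertChunk: (chunk: Chunk): void => { writes++; chunks.set(chunk.batchId, chunk); },
    get writes() { return writes; },
  };
}

// Dedup first, then write: a duplicate batch returns early with no DB write,
// which is safe because the timestamps derive from the same events anyway.
function ingestBatch(store: ReturnType<typeof makeChunkStore>, chunk: Chunk): "created" | "duplicate" {
  if (store.findChunk(chunk.batchId) !== null) return "duplicate";
  store.upsertChunk(chunk);
  return "created";
}
```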

apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/page-client.tsx (5)

83-93: as unknown as RrwebEventWithTime[] double-cast bypasses type safety entirely.

After filtering, the objects are only validated to have a number timestamp. The RrwebEventWithTime type likely requires additional fields (type, data, etc.). This cast silently permits structurally invalid events into downstream rrweb code that expects the full shape.

Given that these events come from an untrusted external source (chunk storage), this is a reasonable tradeoff for a "best-effort coerce" function — but it should at minimum have a comment explaining why the cast is safe (e.g., "rrweb is tolerant of extra/missing fields at runtime") so future readers don't assume full structural validation has occurred.

As per coding guidelines: "Try to avoid the any type. Whenever you need to use any, leave a comment explaining why you're using it."


441-442: catch (e: any) used in multiple places — prefer typed error access.

Lines 441, 657, and 956 all use catch (e: any) to access e?.message. Per coding guidelines, any should be avoided. Use a helper or inline narrowing instead:

Suggested pattern
-    } catch (e: any) {
-      setListError(e?.message ?? "Failed to load session recordings.");
+    } catch (e: unknown) {
+      setListError(e instanceof Error ? e.message : "Failed to load session recordings.");

Apply the same pattern at lines 657 and 956.

As per coding guidelines: "Try to avoid the any type."


1132-1136: !idx would incorrectly return "Tab" if a label index were 0.

Currently safe because tabLabelIndex values start at 1 (line 891: i + 1), but the guard if (!idx) is fragile — it conflates "not in map" (undefined) with a hypothetical 0 value. Using idx === undefined is more precise and resilient to future changes.

Suggested fix
   const getTabLabel = useCallback((tabKey: TabKey) => {
     const idx = ms.tabLabelIndex.get(tabKey);
-    if (!idx) return "Tab";
+    if (idx === undefined) return "Tab";
     return `Tab ${idx}`;
   }, [ms.tabLabelIndex]);

498-520: Pervasive silent catch { // ignore } blocks throughout the file.

There are 15+ empty catch blocks swallowing errors from rrweb replayer operations (pause, play, addEvent, etc.). While individual rrweb calls can throw during teardown or invalid state, silently ignoring all errors makes debugging very difficult when something actually goes wrong.

Consider at minimum using console.debug or a structured logger at a low level so failures are visible during development. Alternatively, create a small helper like safeReplayerCall(r, fn) that catches and optionally logs, reducing the boilerplate.
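
A possible shape for that helper (hypothetical; the real replayer calls and logger may differ):

```typescript
// Hypothetical safeReplayerCall helper: one place that catches, logs at debug
// level, and returns undefined, replacing the many empty catch blocks.
function safeReplayerCall<T>(label: string, fn: () => T): T | undefined {
  try {
    return fn();
  } catch (e) {
    // Visible during development instead of silently swallowed.
    console.debug(`[session-replay] ${label} failed:`, e);
    return undefined;
  }
}

// Usage: safeReplayerCall("pause", () => replayer.pause());
```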


1304-1308: Remove unnecessary as any cast on gridRow.

React's CSSProperties["gridRow"] type accepts both string and number values. The cast is not needed — the string "1 / -1" is a valid grid row value that TypeScript already allows without any cast.

Proposed fix
-                              gridRow: isActive ? ("1 / -1" as any) : ((miniIndex ?? 0) + 1),
+                              gridRow: isActive ? "1 / -1" : ((miniIndex ?? 0) + 1),
apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/session-replay-machine.test.ts (3)

73-81: Unused helper dispatchChain.

dispatchChain is defined but never called anywhere in this test file. Remove it or add tests that use it.


340-344: Avoid as any — narrow the discriminated union instead.

Lines 343, 1133-1134, 1172-1173 all use as any to access effect properties without narrowing the ReplayEffect union. Per coding guidelines, any should be avoided and needs a comment when used. You can narrow first:

const play = playEffects[0];
expect(play.type).toBe("play_replayer");
if (play.type === "play_replayer") {
  expect(play.localOffsetMs).toBe(2000);
}

Or define a small test helper:

function expectEffect<T extends ReplayEffect["type"]>(
  effects: ReplayEffect[],
  type: T,
): Extract<ReplayEffect, { type: T }> {
  const e = effects.find(e => e.type === type);
  expect(e).toBeDefined();
  return e as Extract<ReplayEffect, { type: T }>;
}

This applies to the same pattern at lines 1133-1134 and 1172-1173.

As per coding guidelines: "Try to avoid the any type. Whenever you need to use any, leave a comment explaining why you're using it."


1372-1385: Vacuous effect-tabKey validation in the fuzz test.

The inner if (tabKeys.size > 0) block contains only comments — it never actually asserts anything. Either implement the check or remove the dead code to avoid giving a false sense of coverage.

Suggested implementation
-          if (tabKeys.size > 0) {
-            // Note: effects reference tabs from pre-transition state which
-            // might not be in post-transition state after SELECT_RECORDING.
-            // This is ok — the effect executor checks generation.
-          }
+          // Effects may reference tabs from the pre-transition state (e.g.
+          // after SELECT_RECORDING clears streams).  Skip the check in that
+          // case since the effect executor validates the generation.
apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/session-replay-machine.ts (1)

206-265: Duplicated binary search across findBestTabAtGlobalOffset and isTabInRangeAtGlobalOffset.

Lines 216-229 and 251-264 contain identical binary-search-over-chunk-ranges logic. Consider extracting a shared helper:

function isInChunkRanges(ranges: ChunkRange[], ts: number): boolean {
  let lo = 0;
  let hi = ranges.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    const r = ranges[mid] ?? throwErr("binary search OOB");
    if (ts < r.startTs) hi = mid - 1;
    else if (ts > r.endTs) lo = mid + 1;
    else return true;
  }
  return false;
}

This also addresses the non-null assertions (ranges[mid]!) on lines 220 and 255 — the helper can use ?? throwErr(...) in one place. As per coding guidelines: "Prefer ?? throwErr(...) over non-null assertions."


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In apps/backend/src/app/api/latest/internal/session-recordings/[session_recording_id]/events/route.tsx:
- Around line 139-155: The response currently returns a full chunks metadata
array and a paginated chunk_events slice; add a concise inline comment above the
return that explains this is intentional: chunks (the chunks array) contains
metadata for all chunks to let the frontend build a full chunk->tab mapping,
while chunk_events (chunkEvents) is paginated to avoid downloading all S3
objects and consumers must match events to chunks by chunk_id lookup rather than
array alignment. Place this comment immediately where the response object with
fields chunks and chunk_events is constructed in the events route.
🧹 Nitpick comments (8)
packages/stack-shared/src/interface/crud/session-recordings.ts (1)

29-44: Chunk shape is duplicated and inconsistent between the two response types.

AdminListSessionRecordingChunksResponse items include browser_session_id (line 34) but AdminGetSessionRecordingAllEventsResponse.chunks (lines 51-59) omits it. If this is intentional, consider extracting the common fields into a shared base type to reduce duplication and make the difference explicit. If unintentional, add browser_session_id to the all-events response as well.

♻️ Proposed refactor: extract shared chunk type
+type SessionRecordingChunkBase = {
+  id: string,
+  batch_id: string,
+  tab_id: string | null,
+  event_count: number,
+  byte_length: number,
+  first_event_at_millis: number,
+  last_event_at_millis: number,
+  created_at_millis: number,
+};
+
 export type AdminListSessionRecordingChunksResponse = {
-  items: Array<{
-    id: string,
-    batch_id: string,
-    tab_id: string | null,
-    browser_session_id: string | null,
-    event_count: number,
-    byte_length: number,
-    first_event_at_millis: number,
-    last_event_at_millis: number,
-    created_at_millis: number,
-  }>,
+  items: Array<SessionRecordingChunkBase & {
+    browser_session_id: string | null,
+  }>,
   pagination: {
     next_cursor: string | null,
   },
 };

Also applies to: 50-65

apps/backend/src/app/api/latest/internal/session-recordings/[session_recording_id]/events/route.tsx (3)

81-84: Invalid offset/limit values silently fall back to defaults instead of failing.

Per coding guidelines, the code should fail early and loud on invalid input. Currently, a non-numeric or negative offset silently becomes 0, and an invalid limit silently becomes chunks.length. This can mask client bugs.

Proposed fix
-    const parsedOffset = query.offset != null ? Number.parseInt(query.offset, 10) : 0;
-    const offset = Number.isFinite(parsedOffset) && parsedOffset >= 0 ? parsedOffset : 0;
-    const parsedLimit = query.limit != null ? Number.parseInt(query.limit, 10) : chunks.length;
-    const limit = Number.isFinite(parsedLimit) && parsedLimit >= 1 ? parsedLimit : chunks.length;
+    const offset = query.offset != null ? Number.parseInt(query.offset, 10) : 0;
+    if (!Number.isFinite(offset) || offset < 0) {
+      throw new StackAssertionError("Invalid offset parameter", { offset: query.offset });
+    }
+    const limit = query.limit != null ? Number.parseInt(query.limit, 10) : chunks.length;
+    if (!Number.isFinite(limit) || limit < 1) {
+      throw new StackAssertionError("Invalid limit parameter", { limit: query.limit });
+    }

As per coding guidelines, "Fail early, fail loud. Fail fast with an error instead of silently continuing."


89-89: Multiple uses of any without explanatory comments.

The coding guidelines require a comment explaining why any is used, why the type system fails, and how errors would still be caught. This applies to:

  • Line 89: any[] in chunkEvents type
  • Line 100: catch (e: any)
  • Line 109: let parsed: any
  • Line 129: parsed.events as any[]

At minimum, add a brief comment at each site (e.g., "S3 payload shape is validated at runtime below" for the parsed data, and "AWS SDK error type is untyped" for the catch).

As per coding guidelines, "Try to avoid the any type. Whenever you need to use any, leave a comment explaining why you're using it, why the type system fails, and how you can be certain that errors would still be flagged."

Also applies to: 100-100, 109-109, 129-129


90-94: Shared mutable index in concurrent workers is correct but fragile.

The nextIndex++ pattern works because JS is single-threaded and no await sits between the while condition check and the index capture. However, this is a subtle invariant — any future refactor inserting an await before nextIndex++ would introduce a race.

Consider a safer pull-from-queue approach:

Suggested alternative
-    const chunkEvents: Array<{ chunk_id: string, events: any[] }> = new Array(chunksToDownload.length);
-    let nextIndex = 0;
-
-    async function worker() {
-      while (nextIndex < chunksToDownload.length) {
-        const idx = nextIndex++;
-        const chunk = chunksToDownload[idx];
+    const chunkEvents: Array<{ chunk_id: string, events: any[] }> = new Array(chunksToDownload.length);
+    const indexQueue = chunksToDownload.map((_, i) => i);
+
+    async function worker() {
+      while (indexQueue.length > 0) {
+        const idx = indexQueue.shift()!;
+        const chunk = chunksToDownload[idx];
apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/analytics/replays/page-client.tsx (4)

82-92: Add justification comments for the as casts, especially the double-cast on line 91.

The runtime checks on lines 84–88 validate shape incrementally, so the intermediate casts are defensible. However, line 91's as unknown as RrwebEventWithTime[] is a full type-system bypass — per guidelines, every as/any should include a comment explaining why the cast is needed, why the type system fails, and how you're sure errors would still surface (e.g. "rrweb tolerates extra fields; we validated timestamp which is the only field required for replay ordering").


383-384: Use catch (e: unknown) instead of catch (e: any).

Per coding guidelines, avoid the any type. Use unknown and narrow:

Proposed fix
-    } catch (e: any) {
-      setListError(e?.message ?? "Failed to load session recordings.");
+    } catch (e: unknown) {
+      setListError(e instanceof Error ? e.message : "Failed to load session recordings.");

The same pattern appears at lines 599 and 929 — apply the same fix there.


609-733: Pervasive silent catch { // ignore } blocks mask potential failures.

There are 10+ silent catch blocks in executeEffects alone (and more elsewhere). While rrweb player calls can fail during teardown, silently swallowing all errors violates the "never silently swallow errors" guideline and makes debugging near-impossible. Consider at minimum a console.debug or structured log in development so failures are observable:

Example for one block (apply pattern throughout)
           try {
             r.play(effect.localOffsetMs);
-          } catch {
-            // ignore
+          } catch (e) {
+            console.debug("[session-replay] play failed for tab", effect.tabKey, e);
           }

799-801: Use rrweb's EventType enum instead of magic number 2 with as any.

rrweb exports EventType.FullSnapshot (value 2). Importing and using it avoids both the as any cast and the magic number:

Proposed fix
+// rrweb exports EventType (FullSnapshot === 2). If a static value import is
+// acceptable for the bundle, use it directly; otherwise keep the literal with
+// a comment:

 const hasFullSnapshot = !msRef.current.hasFullSnapshotByTab.has(tabKey)
-  // eslint-disable-next-line @typescript-eslint/no-unsafe-member-access
-  && events.some(e => (e as any).type === 2);
+  && events.some(e => "type" in e && e.type === 2); // EventType.FullSnapshot

At minimum, add a comment explaining the magic number. Based on learnings: "Do NOT use as/any/type casts to bypass the type system unless you specifically asked the user about it."


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In apps/e2e/tests/backend/endpoints/api/v1/session-recordings.test.ts:
- Around line 455-472: The test is using a chunk ID
(otherChunks.body?.items?.[0]?.id) as the pagination cursor, which may not equal
the API's cursor format; change the test to take otherCursor from
otherChunks.body?.pagination?.next_cursor (use pagination.next_cursor) and pass
that encoded value to the /chunks?cursor= call to verify cross-session cursor
rejection; if session2 currently only has one chunk (no next_cursor),
create/upload a second chunk for recording2 before fetching otherChunks so a
real pagination cursor exists and the test exercises rejection of a valid cursor
from another session when calling the endpoint via niceBackendFetch for
recording1.
🧹 Nitpick comments (2)
apps/e2e/tests/backend/endpoints/api/v1/session-recordings.test.ts (2)

27-46: Consider using .toMatchInlineSnapshot for response assertions.

The coding guidelines prefer .toMatchInlineSnapshot over other selectors where possible. Throughout this file, tests use broad range checks like toBeGreaterThanOrEqual(400) / toBeLessThan(500) and toMatchObject. Inline snapshots would capture the exact response shape and status, making regressions easier to detect. This applies to most tests in this file (e.g., lines 44–45, 73–79, 96–101, 121–122, etc.).

As per coding guidelines: "When writing tests, prefer .toMatchInlineSnapshot over other selectors, if possible."


312-315: Use encodeURIComponent() for path parameters in URL template literals.

Several URLs in this file interpolate recordingId and chunkId directly into paths without encoding (lines 312, 323, 434, 447, 456, 467, 503, 514). While UUIDs are URL-safe, the coding guidelines require consistent encoding. Some query-string usages already correctly use encodeURIComponent (e.g., line 372).

As per coding guidelines: "Use urlString`` or encodeURIComponent() instead of normal string interpolation for URLs, for consistency."
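
A minimal illustration of the pattern (buildChunksUrl is hypothetical and the path is illustrative; the real tests pass paths to niceBackendFetch):

```typescript
// Encode every interpolated segment so IDs with reserved characters cannot
// change the path or query structure.
function buildChunksUrl(recordingId: string, cursor?: string): string {
  const base = `/api/v1/session-recordings/${encodeURIComponent(recordingId)}/chunks`;
  return cursor !== undefined ? `${base}?cursor=${encodeURIComponent(cursor)}` : base;
}
```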

@github-actions github-actions bot assigned BilalG1 and unassigned N2D4 Feb 13, 2026
@BilalG1 BilalG1 merged commit c7ef526 into dev Feb 16, 2026
24 of 26 checks passed
@BilalG1 BilalG1 deleted the analytics-replays-4 branch February 16, 2026 22:15
N2D4 pushed a commit that referenced this pull request Feb 17, 2026