Caution: Review failed. The pull request is closed.
📝 Walkthrough
🚥 Pre-merge checks: ✅ 1 | ❌ 2
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Greptile Summary: This PR adds client-side analytics event tracking (…)
Confidence Score: 3/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant Browser as Browser (EventTracker)
    participant API as Backend API<br>/analytics/events/batch
    participant Prisma as Prisma (SessionReplay)
    participant CH as ClickHouse
    Browser->>Browser: Capture $page-view / $click events
    Browser->>Browser: Buffer events (max 50 / 64KB)
    Note over Browser: Every 10s tick or threshold
    Browser->>API: POST /analytics/events/batch<br>{session_replay_segment_id, events[]}
    API->>API: Check analytics enabled
    API->>API: Validate auth (user + refresh token)
    API->>Prisma: findRecentSessionReplay(tenancyId, refreshTokenId)
    Prisma-->>API: session_replay_id (or null)
    API->>CH: INSERT INTO analytics_internal.events<br>(event_type, event_at, data, project_id, ...)
    CH-->>API: OK (async insert)
    API-->>Browser: 200 { inserted: N }
```
Last reviewed commit: 39ca118
packages/template/src/lib/stack-app/apps/implementations/event-tracker.ts
Actionable comments posted: 3
🧹 Nitpick comments (1)
apps/backend/src/app/api/latest/analytics/events/batch/route.tsx (1)
86-94: ClickHouse insert uses admin client — verify this is the intended access level.
`getClickhouseAdminClient()` provides access to all branches. Per the retrieved learning, the admin auth type should only be used when cross-branch access is needed (e.g., `include_all_branches`). Since this route already scopes data by `projectId`/`branchId` in the rows, using the admin client for writes is likely fine, but confirm that no lower-privilege writer role exists for event ingestion.
Also, the `async_insert: 1` setting is good for batched writes, but note there's no `wait_for_async_insert` setting — meaning the insert may be acknowledged before it's actually committed. If event loss is acceptable (analytics data), this is fine; if not, consider `wait_for_async_insert: 1`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/app/api/latest/analytics/events/batch/route.tsx` around lines 86 - 94, The ClickHouse write is using the admin client via getClickhouseAdminClient() to insert into "analytics_internal.events" with async_insert: 1; confirm whether a lower-privilege writer client should be used for event ingestion and, if available, switch the client used for the insert to that writer (or add a comment explaining why admin access is required for cross-branch writes). Also decide and implement the desired durability: either add wait_for_async_insert: 1 to the clickhouseClient.insert settings to wait for commit, or document that async (no wait_for_async_insert) is intentional because eventual loss is acceptable. Ensure changes reference the insert call that writes rows to "analytics_internal.events" and the getClickhouseAdminClient() usage.
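The durability trade-off flagged above comes down to one settings object passed to the ClickHouse JS client. A sketch of just those settings follows; the route's actual code and `clickhouseClient` are not shown here, and the variable names are assumptions:

```typescript
// Sketch: settings for a durable async insert with the ClickHouse JS client.
// With async_insert: 1 alone, the server may acknowledge before the batch
// is flushed; wait_for_async_insert: 1 delays the ack until the flush
// completes, trading latency for durability.
const clickhouseSettings = {
  date_time_input_format: "best_effort",
  async_insert: 1,
  wait_for_async_insert: 1,
} as const;

// Hypothetical usage (the client instance and row shape are assumptions):
// await clickhouseClient.insert({
//   table: "analytics_internal.events",
//   values: rows,
//   format: "JSONEachRow",
//   clickhouse_settings: clickhouseSettings,
// });
```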
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/backend/src/app/api/latest/analytics/events/batch/route.tsx`:
- Around line 73-84: The mapping that builds rows uses a cast (data: event.data
as Record<string, unknown>) which bypasses the type system; update the event
validation and runtime checks so the cast is unnecessary: change the input
schema that validates body.events (replace yupMixed() for the event.data field
with yup.object() or yupMixed<Record<string, unknown>>() with a .test() that
ensures the value is a plain object and enforces a serialized size limit), then
remove the cast in the rows mapping and add a defensive runtime check before
insertion that ensures event.data is an object (or null) and within size bounds;
refer to the event array parsing/validation and the rows mapping code (the
body.events map and the data field in the rows object) to implement these schema
and check changes.
- Around line 27-38: The schema requires sent_at_ms and batch_id but the request
handler (route.tsx) never uses them; either remove them from the yup body schema
or wire them into the handler: pass sent_at_ms into extractEventTimesMs(...) to
apply clock-drift correction for the events and include batch_id in the
ClickHouse payload/rows (where rows are built for clickhouse insert) to enable
deduplication/storage. Locate the body schema in route.tsx and either delete the
sent_at_ms and batch_id yup entries, or update the handler logic that builds
ClickHouse rows and the call to extractEventTimesMs to accept sent_at_ms and
include batch_id on each row (and ensure any dedupe logic references batch_id if
present).
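The clock-drift correction suggested for `sent_at_ms` can be sketched as a pure function: shift each client-reported event time by the difference between the server's receive time and the client's reported send time. `correctEventTimesMs` is an illustrative name, not the actual `extractEventTimesMs` helper:

```typescript
// Sketch of clock-drift correction for client-reported event timestamps.
// If the client says it sent the batch at sentAtMs but the server received
// it at serverReceivedAtMs, the difference approximates the client's clock
// skew (plus network latency), and each event time is shifted by it.
function correctEventTimesMs(
  clientEventTimesMs: number[],
  sentAtMs: number,
  serverReceivedAtMs: number,
): number[] {
  const driftMs = serverReceivedAtMs - sentAtMs;
  return clientEventTimesMs.map((t) => t + driftMs);
}
```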
In `@packages/template/src/lib/stack-app/apps/implementations/event-tracker.ts`:
- Around line 108-120: The code uses non-null assertions when calling the saved
history functions; replace those `this._originalPushState!(...args)` and
`this._originalReplaceState!(...args)` calls with explicit guards (e.g., `if
(!this._originalPushState) throw new Error("...");
this._originalPushState(...args)` or `this._originalPushState ?? throwErr(...)`)
so the patching logic in the history monkey-patches (the assignments to
history.pushState and history.replaceState) performs a defensive null-check
before invoking the original functions; keep the subsequent call to
this._capturePageView("push") / ("replace") unchanged and include a clear error
message referencing the assumption that the original function must exist.
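The defensive-guard pattern suggested for the history monkey-patch can be shown on a minimal history-like object so it runs outside a browser; `PageViewTracker` and `HistoryLike` are illustrative stand-ins for the SDK's `EventTracker`, not its actual code:

```typescript
// Sketch of patching pushState with an explicit guard instead of a
// non-null assertion on the saved original function.
type HistoryLike = {
  pushState: (...args: unknown[]) => void;
};

class PageViewTracker {
  private _originalPushState: ((...args: unknown[]) => void) | undefined;
  pageViews = 0;

  attach(history: HistoryLike): void {
    this._originalPushState = history.pushState.bind(history);
    history.pushState = (...args: unknown[]) => {
      // Defensive guard instead of `this._originalPushState!(...)`:
      // the patch assumes attach() saved the original before it fires.
      if (!this._originalPushState) {
        throw new Error("Original pushState was not saved; attach() must run before the patched function fires.");
      }
      this._originalPushState(...args);
      this.pageViews++; // stand-in for this._capturePageView("push")
    };
  }
}
```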
---
Nitpick comments:
In `@apps/backend/src/app/api/latest/analytics/events/batch/route.tsx`:
- Around line 86-94: The ClickHouse write is using the admin client via
getClickhouseAdminClient() to insert into "analytics_internal.events" with
async_insert: 1; confirm whether a lower-privilege writer client should be used
for event ingestion and, if available, switch the client used for the insert to
that writer (or add a comment explaining why admin access is required for
cross-branch writes). Also decide and implement the desired durability: either
add wait_for_async_insert: 1 to the clickhouseClient.insert settings to wait for
commit, or document that async (no wait_for_async_insert) is intentional because
eventual loss is acceptable. Ensure changes reference the insert call that
writes rows to "analytics_internal.events" and the getClickhouseAdminClient()
usage.
packages/template/src/lib/stack-app/apps/implementations/event-tracker.ts
Actionable comments posted: 1
🧹 Nitpick comments (1)
apps/backend/src/app/api/latest/analytics/events/batch/route.tsx (1)
86-94: Consider explicitly setting `wait_for_async_insert: 1` alongside `async_insert: 1`.
`wait_for_async_insert` controls when the acknowledgement is sent back to the client — when set to `1`, the ack is sent only after the flush operation for that data is complete. The ClickHouse default is `1`, so the current `await` does correctly block until the insert is flushed. However, the official ClickHouse JS client examples always pair `async_insert: 1` with `wait_for_async_insert: 1` explicitly, resolving insert promises only when the server acknowledges a successfully written batch. Being explicit avoids surprises if the server or user profile overrides the default to `0` (fire-and-forget).
♻️ Proposed clarification

```diff
  clickhouse_settings: {
    date_time_input_format: "best_effort",
    async_insert: 1,
+   wait_for_async_insert: 1,
  },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/app/api/latest/analytics/events/batch/route.tsx` around lines 86 - 94, The ClickHouse insert call using clickhouseClient.insert should explicitly include wait_for_async_insert: 1 in the clickhouse_settings object so the promise only resolves after the server flushes the batch; update the insert invocation that writes to "analytics_internal.events" (values: rows) to add wait_for_async_insert: 1 alongside the existing date_time_input_format and async_insert settings in clickhouse_settings.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/e2e/tests/js/cookies.test.ts`:
- Line 91: The history stub created with vi.stubGlobal("history", { pushState:
noop, replaceState: noop }) is missing the length property used by production
code; update the stub initialization used in cookies.test.ts to include length:
1 so window.history.length returns a numeric value (e.g.,
vi.stubGlobal("history", { pushState: noop, replaceState: noop, length: 1 }))
ensuring components that check history.length > 1 behave correctly in tests.
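The suggested stub shape can be written as a plain object (in the test file it would be passed to `vi.stubGlobal("history", ...)`; here it is a local object so the shape can be checked without vitest):

```typescript
// Plain-object version of the suggested history stub. Production code
// that checks history.length > 1 needs length to be a number, so the
// stub includes it alongside the no-op navigation methods.
const noop = () => {};
const historyStub = {
  pushState: noop,
  replaceState: noop,
  length: 1, // required by code that reads window.history.length
};
```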
---
Duplicate comments:
In `@apps/backend/src/app/api/latest/analytics/events/batch/route.tsx`:
- Around line 29-30: The schema currently validates batch_id and sent_at_ms but
the handler never uses them; update the handler to either remove these fields
from the request schema or wire them through: pass sent_at_ms (and batch_id if
needed) into extractEventTimesMs for clock-drift correction, include sent_at_ms
and batch_id when building ClickHouse row objects (and use batch_id for any
deduplication logic), and ensure function names like extractEventTimesMs and the
ClickHouse row construction code are updated to accept and persist these values;
choose one approach (remove or wire-through) and apply consistently across the
route handler and schema.
- Line 35: Replace the loose data validator "data: yupMixed().defined()" with a
strict object schema and a serialized-size guard: use yup.object<Record<string,
unknown>>().required() (or yupObject() equivalent) for the "data" field and add
a .test('max-bytes', ...) that JSON.stringify(value).length (or
Buffer.byteLength) is below your chosen cap (e.g. 64_000 bytes); remove any
downstream "as Record<string, unknown>" casts so the type flows from the schema
into the ClickHouse insert path (reference the "data" field in the route schema
and any insertion code that previously casted the value).
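The plain-object and size checks described above can be sketched as a standalone runtime validator. The review suggests expressing this as a yup `.test()`; this sketch shows the equivalent check in plain TypeScript, and `MAX_DATA_BYTES` is an illustrative cap, not a value from the codebase:

```typescript
// Sketch: runtime check that event.data is a plain object (or null)
// and that its serialized form stays under a byte cap.
const MAX_DATA_BYTES = 64_000; // illustrative limit

function isValidEventData(value: unknown): value is Record<string, unknown> | null {
  if (value === null) return true;
  // Reject primitives, arrays, and class instances; accept plain objects.
  if (typeof value !== "object" || Array.isArray(value)) return false;
  const proto = Object.getPrototypeOf(value);
  if (proto !== Object.prototype && proto !== null) return false;
  // Enforce the serialized size limit in UTF-8 bytes.
  return new TextEncoder().encode(JSON.stringify(value)).length <= MAX_DATA_BYTES;
}
```

With a check like this in the schema, the downstream `as Record<string, unknown>` cast becomes unnecessary because the type flows from the validator.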
---
Nitpick comments:
In `@apps/backend/src/app/api/latest/analytics/events/batch/route.tsx`:
- Around line 86-94: The ClickHouse insert call using clickhouseClient.insert
should explicitly include wait_for_async_insert: 1 in the clickhouse_settings
object so the promise only resolves after the server flushes the batch; update
the insert invocation that writes to "analytics_internal.events" (values: rows)
to add wait_for_async_insert: 1 alongside the existing date_time_input_format
and async_insert settings in clickhouse_settings.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/template/src/lib/stack-app/apps/implementations/client-app-impl.ts`:
- Around line 461-474: Remove the unused projectId property from the
EventTrackerDeps type and all related usage: delete projectId from the
EventTrackerDeps type declaration and from the EventTracker
constructor/signature if it references the type, and update call sites such as
the new EventTracker instantiation in client-app-impl.ts (remove the projectId:
this.projectId entry). Ensure EventTracker implementation compiles with the
trimmed deps type and update any tests or other call sites that supply
projectId.
packages/template/src/lib/stack-app/apps/implementations/client-app-impl.ts
apps/backend/src/app/api/latest/analytics/events/batch/route.tsx
🧹 Nitpick comments (3)
apps/e2e/tests/backend/endpoints/api/v1/session-replays.test.ts (1)
49-69: Test correctly validates the new error behavior.
The updated test properly asserts the 400 status and `ANALYTICS_NOT_ENABLED` error code when analytics is disabled. Setup is appropriate: project created without enabling analytics, user signed in before request.
One minor note: the coding guidelines prefer `.toMatchInlineSnapshot` for test assertions. The other new test file (analytics-events-batch.test.ts) uses inline snapshots for its analytics-not-enabled case — consider aligning here for consistency. As per coding guidelines, "When writing tests, prefer `.toMatchInlineSnapshot` over other selectors, if possible, as configured in snapshot-serializer.ts".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/tests/backend/endpoints/api/v1/session-replays.test.ts` around lines 49 - 69, Replace the explicit status and error-code assertions in the "it(\"throws error when analytics is not enabled\", ...)" test with an inline snapshot assertion to match project conventions: remove expect(res.status).toBe(400) and expect(res.body?.code).toBe("ANALYTICS_NOT_ENABLED") and instead assert the response object (or res.body) using toMatchInlineSnapshot so the test matches the style used in analytics-events-batch.test.ts and the snapshot-serializer.ts configuration.
apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts (2)
394-409: Retry loop could benefit from a guard after exhaustion.
If all 15 attempts fail, `queryRes` remains `undefined` and the snapshot assertion on line 411 will produce a confusing "received: undefined" failure. A small guard would make debugging easier.
♻️ Suggested guard

```diff
+ if (!queryRes || queryRes.body?.result?.length !== 2) {
+   throw new Error(`Analytics query did not return expected results after 15 retries. Last response: ${JSON.stringify(queryRes?.body)}`);
+ }
+
  expect(queryRes).toMatchInlineSnapshot(`
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts` around lines 394 - 409, The retry loop that calls niceBackendFetch and populates queryRes can exit without setting queryRes (or with a non-200/insufficient-result response), which leads to a confusing snapshot failure; after the for-loop that uses wait/niceBackendFetch, add a guard that checks if queryRes is undefined or queryRes.status !== 200 or queryRes.body?.result?.length !== 2 and throw or fail with a clear message including the last queryRes.status and JSON-stringified queryRes.body (or indicate attempts exhausted and include sessionReplaySegmentId), so the test fails with actionable diagnostics instead of "received: undefined".
55-69: Consider using `toMatchInlineSnapshot` for consistency with other tests in this file.
Every other test in this file uses `.toMatchInlineSnapshot()` for assertions, but this test uses `.toBe()`. As per coding guidelines, "When writing tests, prefer `.toMatchInlineSnapshot` over other selectors, if possible, as configured in snapshot-serializer.ts".
♻️ Suggested inline snapshot

```diff
- expect(res.status).toBe(400);
- expect(res.body?.code).toBe("ANALYTICS_NOT_ENABLED");
+ expect(res).toMatchInlineSnapshot(`
+   NiceResponse {
+     "status": 400,
+     "body": {
+       "code": "ANALYTICS_NOT_ENABLED",
+       "error": "Analytics is not enabled for this project.",
+     },
+     "headers": Headers {
+       "x-stack-known-error": "ANALYTICS_NOT_ENABLED",
+       <some fields may have been hidden>,
+     },
+   }
+ `);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts` around lines 55 - 69, Replace the two assertions that use .toBe() with a single inline snapshot assertion to match the file's style: after calling uploadEventBatch (used with sessionReplaySegmentId/batchId etc.), assert the entire response (e.g., res.status and res.body) using expect(res).toMatchInlineSnapshot(...) so the test uses the same snapshot serializer convention; update the assertion that currently references res.status and res.body?.code (from the uploadEventBatch result) to a single toMatchInlineSnapshot call that captures the 400 status and ANALYTICS_NOT_ENABLED code. Ensure the change is applied in the same test containing Project.createAndSwitch and Auth.Otp.signIn so the snapshot reflects the full response shape.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@apps/e2e/tests/backend/endpoints/api/v1/analytics-events-batch.test.ts`:
- Around line 394-409: The retry loop that calls niceBackendFetch and populates
queryRes can exit without setting queryRes (or with a
non-200/insufficient-result response), which leads to a confusing snapshot
failure; after the for-loop that uses wait/niceBackendFetch, add a guard that
checks if queryRes is undefined or queryRes.status !== 200 or
queryRes.body?.result?.length !== 2 and throw or fail with a clear message
including the last queryRes.status and JSON-stringified queryRes.body (or
indicate attempts exhausted and include sessionReplaySegmentId), so the test
fails with actionable diagnostics instead of "received: undefined".
- Around line 55-69: Replace the two assertions that use .toBe() with a single
inline snapshot assertion to match the file's style: after calling
uploadEventBatch (used with sessionReplaySegmentId/batchId etc.), assert the
entire response (e.g., res.status and res.body) using
expect(res).toMatchInlineSnapshot(...) so the test uses the same snapshot
serializer convention; update the assertion that currently references res.status
and res.body?.code (from the uploadEventBatch result) to a single
toMatchInlineSnapshot call that captures the 400 status and
ANALYTICS_NOT_ENABLED code. Ensure the change is applied in the same test
containing Project.createAndSwitch and Auth.Otp.signIn so the snapshot reflects
the full response shape.
In `@apps/e2e/tests/backend/endpoints/api/v1/session-replays.test.ts`:
- Around line 49-69: Replace the explicit status and error-code assertions in
the "it(\"throws error when analytics is not enabled\", ...)" test with an
inline snapshot assertion to match project conventions: remove
expect(res.status).toBe(400) and
expect(res.body?.code).toBe("ANALYTICS_NOT_ENABLED") and instead assert the
response object (or res.body) using toMatchInlineSnapshot so the test matches
the style used in analytics-events-batch.test.ts and the snapshot-serializer.ts
configuration.
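The retry-then-guard pattern described above can be sketched as a standalone helper; `queryWithRetries` and `fetchOnce` are illustrative stand-ins for the test's loop around `niceBackendFetch`, and the retry count and expected result length come from the review comment:

```typescript
// Sketch: retry a query, then fail loudly with diagnostics instead of
// letting a later snapshot assertion see `undefined`.
type QueryRes = { status: number; body?: { result?: unknown[] } };

async function queryWithRetries(
  fetchOnce: () => Promise<QueryRes>,
  attempts = 15,
  waitMs = 0,
): Promise<QueryRes> {
  let queryRes: QueryRes | undefined;
  for (let i = 0; i < attempts; i++) {
    queryRes = await fetchOnce();
    if (queryRes.status === 200 && queryRes.body?.result?.length === 2) {
      return queryRes;
    }
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
  // Guard after exhaustion: include the last response for actionable diagnostics.
  throw new Error(
    `Analytics query did not return expected results after ${attempts} retries. ` +
    `Last response: ${JSON.stringify(queryRes?.body)}`,
  );
}
```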
Summary by CodeRabbit
New Features
Bug Fixes / UX
Tests