📝 Walkthrough

This pull request adds persistent AI conversation functionality to the Stack Companion. It introduces database tables for storing conversations and messages, backend API endpoints for conversation lifecycle management, frontend UI components for chat rendering and conversation browsing, and client-side hooks for API interactions.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User (Frontend)
    participant ChatUI as Chat UI
    participant API as Backend API
    participant DB as Database
    User->>ChatUI: Send message
    ChatUI->>API: POST /ai-conversations/[id]/messages or POST /ai-conversations with initial messages
    API->>DB: Create aiMessage or create aiConversation + aiMessages
    DB-->>API: Record(s) created with id, timestamps
    API-->>ChatUI: Streaming response (AI content)
    ChatUI->>ChatUI: Word-by-word reveal via useWordStreaming
    ChatUI->>API: PUT /ai-conversations/[id]/messages (persist assistant response)
    API->>DB: Replace aiMessage records for conversation
    DB-->>API: Success
    API-->>ChatUI: 200 OK
    ChatUI-->>User: Display persisted message history
```
```mermaid
sequenceDiagram
    participant User as User (Frontend)
    participant ListUI as Conversation List
    participant API as Backend API
    participant DB as Database
    User->>ListUI: Open Stack Companion "Ask AI"
    ListUI->>API: GET /ai-conversations?projectId=...
    API->>DB: Query aiConversation filtered by projectUserId, projectId
    DB-->>API: [{ id, title, projectId, updatedAt }, ...]
    API-->>ListUI: Conversation summaries
    ListUI-->>User: Render conversation list with titles and timestamps
    User->>ListUI: Click conversation or delete button
    ListUI->>API: DELETE /ai-conversations/[id] or GET /ai-conversations/[id]
    API->>DB: Delete aiConversation (cascades to aiMessages) or select aiConversation + aiMessages
    DB-->>API: Success or conversation detail
    API-->>ListUI: 200 OK or conversation data
    ListUI-->>User: Update list or open conversation
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Greptile Summary

This PR adds persistent AI chat history to the Stack Companion sidebar.

Key concerns found during review:
Confidence Score: 4/5

Not safe to merge as-is due to a non-atomic delete+recreate that can permanently destroy conversation messages. Two P1 issues exist: the missing transaction in the messages PUT endpoint (confirmed data-loss path) and the race condition in the frontend save logic. The rest of the changes are well-structured and the authorization model is correct.

Affected files: apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx (transaction), apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx (race condition + error handling)
Sequence Diagram

```mermaid
sequenceDiagram
    participant UI as AIChatWidgetInner
    participant API as Backend API
    participant DB as Database
    UI->>API: POST /ai/query/stream (messages)
    API-->>UI: stream response (status: streaming)
    API-->>UI: stream ends (status: ready)
    UI->>UI: useEffect detects streaming→ready
    Note over UI: isSavingRef = true
    alt First save — no existing conversation
        UI->>API: POST /ai-conversations
        API->>DB: INSERT AiConversation
        API->>DB: INSERT AiMessages (separate op, no tx)
        DB-->>API: ok
        API-->>UI: { id, title }
        Note over UI: isSavingRef = false
    else Subsequent save — conversation exists
        UI->>API: PUT /ai-conversations/:id/messages
        API->>DB: DELETE AiMessages WHERE conversationId=:id
        Note over DB: ⚠️ No transaction — if next step fails, messages gone
        API->>DB: INSERT AiMessages (createMany)
        DB-->>API: ok
        API-->>UI: 200 {}
        Note over UI: isSavingRef = false
    end
    Note over UI: ⚠️ If 2nd response completes while isSavingRef=true,
    Note over UI: that save is silently skipped
```
Reviews (1): Last reviewed commit: "initial commit"
```typescript
await globalPrismaClient.aiMessage.deleteMany({
  where: { conversationId: params.conversationId },
});

if (body.messages.length > 0) {
  await globalPrismaClient.aiMessage.createMany({
    data: body.messages.map((msg) => ({
      conversationId: params.conversationId,
      role: msg.role,
      content: msg.content as object,
```
Non-atomic delete+recreate causes permanent message loss
`deleteMany` and the subsequent `createMany` are two separate database operations. If `createMany` throws (network blip, constraint violation, etc.), the delete has already committed — every message in the conversation is permanently gone with no way to recover them.
Wrap both operations in a Prisma transaction:
```diff
-await globalPrismaClient.aiMessage.deleteMany({
-  where: { conversationId: params.conversationId },
-});
-if (body.messages.length > 0) {
-  await globalPrismaClient.aiMessage.createMany({
-    data: body.messages.map((msg) => ({
-      conversationId: params.conversationId,
-      role: msg.role,
-      content: msg.content as object,
+await globalPrismaClient.$transaction(async (tx) => {
+  await tx.aiMessage.deleteMany({
+    where: { conversationId: params.conversationId },
+  });
+  if (body.messages.length > 0) {
+    await tx.aiMessage.createMany({
+      data: body.messages.map((msg) => ({
+        conversationId: params.conversationId,
+        role: msg.role,
+        content: msg.content as object,
+      })),
+    });
+  }
+});
```
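To see why the transaction matters, here is a self-contained sketch of the failure mode using an in-memory stand-in — `MessageStore` and its snapshot-based `transaction` are assumptions made for the demo, not Prisma APIs (real Prisma transactions roll back at the database level):

```typescript
// Illustrative in-memory store mimicking deleteMany/createMany semantics.
type Message = { conversationId: string, content: string };

class MessageStore {
  messages: Message[] = [];

  deleteMany(conversationId: string): void {
    this.messages = this.messages.filter(m => m.conversationId !== conversationId);
  }

  createMany(data: Message[]): void {
    // Simulate a constraint violation on empty content.
    if (data.some(m => m.content === "")) throw new Error("constraint violation");
    this.messages.push(...data);
  }

  // Naive "transaction": snapshot state, restore it if the callback throws.
  transaction(fn: (store: MessageStore) => void): void {
    const snapshot = [...this.messages];
    try {
      fn(this);
    } catch (err) {
      this.messages = snapshot; // rollback
      throw err;
    }
  }
}

const store = new MessageStore();
store.createMany([{ conversationId: "c1", content: "hello" }]);

// Non-atomic path: the delete commits, the insert fails, the message is gone.
try {
  store.deleteMany("c1");
  store.createMany([{ conversationId: "c1", content: "" }]); // throws
} catch { /* original message already lost */ }
console.log(store.messages.length); // 0 — data loss

// Atomic path: rollback restores the prior state when the insert fails.
store.createMany([{ conversationId: "c1", content: "hello" }]);
try {
  store.transaction(s => {
    s.deleteMany("c1");
    s.createMany([{ conversationId: "c1", content: "" }]); // throws
  });
} catch { /* rolled back */ }
console.log(store.messages.length); // 1 — message preserved
```

The non-atomic path ends with zero messages even though the caller intended a replacement; the transactional path leaves the original message intact on failure.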
```tsx
// Save conversation when streaming completes
useEffect(() => {
  const prevStatus = prevStatusRef.current;
  prevStatusRef.current = status;

  const completedOk = (prevStatus === "streaming" || prevStatus === "submitted") && status === "ready";
  const completedWithError = (prevStatus === "streaming" || prevStatus === "submitted") && status === "error";

  if (
    (completedOk || completedWithError) &&
    messages.length > 0 &&
    !isSavingRef.current
  ) {
    isSavingRef.current = true;
    // On error, only save user messages (strip any partial/failed assistant turn)
    const safeMessages = completedWithError
      ? messages.filter(m => m.role === "user")
      : messages;
    if (safeMessages.length === 0) {
      isSavingRef.current = false;
      return;
    }
    const messagesToSave = safeMessages.map(m => ({
      role: m.role,
      content: m.parts,
    }));
    const firstUserMessage = messages.find(m => m.role === "user");
    const title = firstUserMessage
      ? getMessageContent(firstUserMessage).slice(0, 50) || "New conversation"
      : "New conversation";

    runAsynchronously(async () => {
      try {
        if (conversationIdRef.current) {
          await replaceConversationMessages(currentUser, conversationIdRef.current, messagesToSave);
        } else if (projectId) {
          const result = await createConversation(currentUser, {
            title,
            projectId,
            messages: messagesToSave,
          });
          conversationIdRef.current = result.id;
          onConversationCreated(result.id);
        }
      } finally {
        isSavingRef.current = false;
      }
    });
  }
}, [status, messages, currentUser, projectId, onConversationCreated]);
```
Race condition — concurrent saves silently skipped, causing data loss
`isSavingRef.current` is used to prevent concurrent saves. However, if a second AI response completes while the first save is still in-flight, the `!isSavingRef.current` guard causes that save to be silently skipped entirely. The conversation in the database will then be missing the messages from that turn until the user sends another message — and if they never do, those messages are permanently lost.
A safer approach is to queue the save or stash the latest snapshot and replay it after the in-flight save finishes:

```typescript
// Stash the latest messages when a save is already running
if (isSavingRef.current) {
  pendingMessagesRef.current = messagesToSave;
  return;
}

// After isSavingRef.current = false in the finally block:
const pending = pendingMessagesRef.current;
pendingMessagesRef.current = null;
if (pending) { /* re-trigger save with pending */ }
```

```typescript
runAsynchronously(async () => {
  try {
    const result = await listConversations(currentUser, projectId);
    setConversations(result);
  } finally {
    setLoading(false);
  }
});
}, [currentUser, projectId]);
```
Use `runAsynchronouslyWithAlert` for user-triggered async operations

Per project convention, async operations initiated from button clicks or user interactions should use `runAsynchronouslyWithAlert` instead of bare `runAsynchronously`. This ensures errors are surfaced to the user rather than silently swallowed.
This applies to multiple locations in this file:

- Line 67: initial `listConversations` load (error swallowed, only `finally` runs)
- Line 80: `handleDelete` inside `runAsynchronously`
- Line 187: `AIChatWidget`'s initial conversation load
- Line 238: `onSelectConversation={(id) => runAsynchronously(handleSelectConversation(id))}`
Rule Used: Use runAsynchronouslyWithAlert from `@stackframe... (source)
Learnt From: stack-auth/stack-auth#943
```typescript
async function getOwnedConversation(conversationId: string, userId: string) {
  const conversation = await globalPrismaClient.aiConversation.findUnique({
    where: { id: conversationId },
  });
  if (!conversation || conversation.projectUserId !== userId) {
    throw new StatusError(StatusError.NotFound, "Conversation not found");
  }
  return conversation;
}
```
Duplicated `getOwnedConversation` helper

An identical `getOwnedConversation` function is also defined in `[conversationId]/route.tsx` (lines 6–13). Future security or logic changes to the ownership check must be applied in two places. Consider extracting it to a shared utility file.
```typescript
role: yupString().defined(),
content: yupMixed().defined(),
```
`role` field accepts any arbitrary string

`yupString().defined()` places no restriction on the role value. In practice only `"user"` and `"assistant"` are meaningful.
```diff
-role: yupString().defined(),
+role: yupString().oneOf(["user", "assistant"]).defined(),
 content: yupMixed().defined(),
```
The same applies to the role field in the POST /ai-conversations handler.
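As a plain TypeScript illustration of the same constraint (separate from the yup schema — `isRole` is a hypothetical helper, not code from the PR):

```typescript
// Hypothetical type guard mirroring the oneOf(["user", "assistant"]) restriction.
type Role = "user" | "assistant";

function isRole(value: string): value is Role {
  return value === "user" || value === "assistant";
}

console.log(isRole("user"));      // true
console.log(isRole("assistant")); // true
console.log(isRole("admin"));     // false — an arbitrary role the schema should reject
```

With the unrestricted schema, a value like `"admin"` would be persisted and later surprise any rendering code that switches on the role.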
```tsx
  }
});
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
```
Suppressed exhaustive-deps hides a stale-closure bug
The `eslint-disable-next-line react-hooks/exhaustive-deps` suppresses warnings for `currentUser` and `projectId` missing from the dependency array. If either changes while the companion is open, the widget will continue showing the old project's conversations without reloading.
Pull request overview
Adds persistent “Ask AI” chat history to the Dashboard Stack Companion by introducing conversation storage in the backend, a new chat-history UI in the companion drawer, and shared rendering utilities for AI chat messages.
Changes:

- Introduces new backend persistence models (`AiConversation`, `AiMessage`) with migration and internal CRUD-like endpoints under `/api/latest/internal/ai-conversations`.
- Adds a new Stack Companion "Ask AI" sidebar item and a full chat widget with conversation list, create/update/delete flows.
- Refactors shared AI chat rendering helpers into `ai-chat-shared.tsx` and wires them into the existing `ask-ai` command UI.
Reviewed changes
Copilot reviewed 11 out of 12 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| pnpm-lock.yaml | Lockfile updates from dependency graph changes. |
| apps/dashboard/src/hooks/use-ai-conversations.ts | Client-side API wrapper for listing/creating/updating/deleting persisted conversations. |
| apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx | New Ask AI chat UI with conversation list + persistence integration. |
| apps/dashboard/src/components/stack-companion.tsx | Adds “Ask AI” entry and renders the new widget in the companion drawer. |
| apps/dashboard/src/components/design-components/list.tsx | Stops click propagation on list-item button container to avoid triggering row click. |
| apps/dashboard/src/components/commands/ask-ai.tsx | Uses shared AI chat UI helpers after refactor. |
| apps/dashboard/src/components/commands/ai-chat-shared.tsx | New shared components/utilities for AI chat message rendering and streaming effects. |
| apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx | Adds list/create endpoints for conversations. |
| apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/route.tsx | Adds get/update-title/delete endpoints for a specific conversation. |
| apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx | Adds “replace all messages” endpoint for a conversation. |
| apps/backend/prisma/schema.prisma | Adds Prisma models for persistent AI chat storage. |
| apps/backend/prisma/migrations/20260327000000_add_ai_conversations/migration.sql | Creates DB tables/indexes/foreign key for AI chat persistence. |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
```tsx
<button
  onClick={handleSubmit}
  disabled={!input.trim() || aiLoading}
  className={cn(
    "p-1 rounded transition-colors hover:transition-none",
```
Icon-only buttons should have an accessible name for screen readers. Add an aria-label (and/or title) to this send button (and similar icon-only buttons in the widget) so it’s discoverable to assistive tech.
```tsx
onClick={(e) => handleDelete(e, conv.id)}
disabled={deletingId === conv.id}
className="opacity-0 group-hover:opacity-100 p-1 text-muted-foreground/40 hover:text-red-400 transition-all shrink-0"
type="button"
```
This delete button is icon-only and currently has no accessible label. Please add aria-label/title (e.g., “Delete conversation”) so screen readers can announce its purpose.
```diff
 type="button"
+aria-label="Delete conversation"
+title="Delete conversation"
```
```typescript
await globalPrismaClient.aiMessage.deleteMany({
  where: { conversationId: params.conversationId },
});

if (body.messages.length > 0) {
  await globalPrismaClient.aiMessage.createMany({
    data: body.messages.map((msg) => ({
      conversationId: params.conversationId,
      role: msg.role,
      content: msg.content as object,
    })),
  });
}
```
Replacing messages by deleteMany then createMany is not atomic. If the request fails between these operations, the conversation can end up with no messages. Also, this path doesn’t update AiConversation.updatedAt, so the conversation won’t appear “recent” after new messages are saved. Consider wrapping the delete+insert (and a conversation update to touch updatedAt) in a single prisma.$transaction.
```diff
-await globalPrismaClient.aiMessage.deleteMany({
-  where: { conversationId: params.conversationId },
-});
-if (body.messages.length > 0) {
-  await globalPrismaClient.aiMessage.createMany({
-    data: body.messages.map((msg) => ({
-      conversationId: params.conversationId,
-      role: msg.role,
-      content: msg.content as object,
-    })),
-  });
-}
+await globalPrismaClient.$transaction(async (tx) => {
+  await tx.aiMessage.deleteMany({
+    where: { conversationId: params.conversationId },
+  });
+  if (body.messages.length > 0) {
+    await tx.aiMessage.createMany({
+      data: body.messages.map((msg) => ({
+        conversationId: params.conversationId,
+        role: msg.role,
+        content: msg.content as object,
+      })),
+    });
+  }
+  await tx.aiConversation.update({
+    where: { id: params.conversationId },
+    data: { updatedAt: new Date() },
+  });
+});
```
```prisma
updatedAt DateTime    @updatedAt
messages  AiMessage[]

@@index([projectUserId, updatedAt(sort: Desc)])
```
The main query pattern for listing conversations filters by both projectUserId and projectId and orders by updatedAt. The current index is only (projectUserId, updatedAt DESC), which can be suboptimal as conversation volume grows. Consider changing/adding an index like (projectUserId, projectId, updatedAt DESC) to match findMany({ where: { projectUserId, projectId }, orderBy: { updatedAt: "desc" } }).
```diff
-@@index([projectUserId, updatedAt(sort: Desc)])
+@@index([projectUserId, projectId, updatedAt(sort: Desc)])
```
```tsx
}, [status, messages, currentUser, projectId, onConversationCreated]);

// Word streaming for the last assistant message
const lastAssistantMessage = messages.findLast((m: UIMessage) => m.role === "assistant");
```
Array.prototype.findLast isn’t supported in some browsers/environments, and Next.js doesn’t automatically polyfill new built-ins. Since this runs on the client, consider replacing findLast with a backwards loop (or add a polyfill) to avoid runtime crashes on older browsers.
```diff
-const lastAssistantMessage = messages.findLast((m: UIMessage) => m.role === "assistant");
+const lastAssistantMessage = (() => {
+  for (let i = messages.length - 1; i >= 0; i--) {
+    if (messages[i].role === "assistant") {
+      return messages[i];
+    }
+  }
+  return undefined;
+})();
```
```tsx
};

function getBaseUrl() {
  return getPublicEnvVar("NEXT_PUBLIC_BROWSER_STACK_API_URL") ?? getPublicEnvVar("NEXT_PUBLIC_STACK_API_URL") ?? throwErr("NEXT_PUBLIC_BROWSER_STACK_API_URL is not set");
```
getBaseUrl() can fall back to NEXT_PUBLIC_STACK_API_URL, but the thrown error message always says NEXT_PUBLIC_BROWSER_STACK_API_URL is not set. This is misleading when the other env var is also missing; please update the message to reflect both accepted env vars (or the actual missing one).
```diff
-return getPublicEnvVar("NEXT_PUBLIC_BROWSER_STACK_API_URL") ?? getPublicEnvVar("NEXT_PUBLIC_STACK_API_URL") ?? throwErr("NEXT_PUBLIC_BROWSER_STACK_API_URL is not set");
+const browserUrl = getPublicEnvVar("NEXT_PUBLIC_BROWSER_STACK_API_URL");
+const stackUrl = getPublicEnvVar("NEXT_PUBLIC_STACK_API_URL");
+if (browserUrl ?? stackUrl) {
+  return browserUrl ?? stackUrl;
+}
+return throwErr("Either NEXT_PUBLIC_BROWSER_STACK_API_URL or NEXT_PUBLIC_STACK_API_URL must be set");
```
```tsx
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
```
This effect intentionally disables exhaustive-deps and runs only once, but it reads currentUser and projectId. If the user navigates to a different project (or auth state changes) while the widget stays mounted, it will keep showing the old project’s conversation. Consider including currentUser/projectId in the deps and resetting state when they change (or explicitly remount the widget on project change).
```tsx
if (
  (completedOk || completedWithError) &&
  messages.length > 0 &&
  !isSavingRef.current
) {
```
The isSavingRef guard can cause message history to be dropped: after a stream completes, saving happens while status === "ready" (so the input is enabled). If the user sends another message before the save finishes, the next completion will be ignored because isSavingRef.current is still true. Consider queuing a pending save (always save the latest messages after the current save finishes) or using a debounced/serial save loop keyed by conversationId.
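One way to implement the queued/serial save suggested above is a "latest wins" serializer. This is a hedged sketch: `makeSerialSaver` and `saveToServer` are illustrative names standing in for the widget's save path (`replaceConversationMessages` etc.), not code from the PR.

```typescript
// Serialize saves: while one is in flight, remember only the newest snapshot
// and replay it when the in-flight save finishes, so no completion is dropped.
type Snapshot = string[];

function makeSerialSaver(saveToServer: (s: Snapshot) => Promise<void>) {
  let inFlight = false;
  let pending: Snapshot | null = null;

  async function save(snapshot: Snapshot): Promise<void> {
    if (inFlight) {
      pending = snapshot; // latest wins — superseded snapshots drop, the newest never does
      return;
    }
    inFlight = true;
    try {
      await saveToServer(snapshot);
    } finally {
      inFlight = false;
      const next = pending;
      pending = null;
      if (next) await save(next); // replay the stashed snapshot
    }
  }

  return save;
}

// Usage: two more completions arrive while the first save is still in flight.
const saved: Snapshot[] = [];
const save = makeSerialSaver(async (s) => {
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulate network latency
  saved.push(s);
});

void Promise.all([save(["a"]), save(["a", "b"]), save(["a", "b", "c"])]).then(() => {
  // saved is [["a"], ["a", "b", "c"]]: the middle snapshot was superseded,
  // but the final conversation state was not silently skipped.
  console.log(saved);
});
```

Because each snapshot contains the full message list, replaying only the newest pending snapshot is sufficient; intermediate snapshots are strict prefixes of it.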
Actionable comments posted: 10
🧹 Nitpick comments (4)
apps/dashboard/src/components/commands/ai-chat-shared.tsx (2)
188-190: Type assertions on external AI SDK data.

The `invocation.input` and `invocation.output` are cast to specific shapes without validation. If the AI SDK response structure changes, this could cause silent failures or crashes when accessing `input?.query` or `result?.success`. Consider adding defensive checks or optional chaining throughout the usage sites.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard/src/components/commands/ai-chat-shared.tsx` around lines 188 - 190, The code currently force-casts invocation.input and invocation.output to specific shapes (see invocation, input, queryArg, result) which is unsafe; update usages to validate types before access by checking that invocation?.input is an object and invocation?.output is an object with expected keys (e.g., typeof input.query === "string" before assigning queryArg and checking typeof result.success === "boolean" or Array.isArray(result.result) before using them), replace direct assumptions with optional chaining and safe guards, and add a small fallback/default values when fields are missing so consumer code using queryArg, result.success, result.result, and result.rowCount won’t throw if the SDK shape changes.
18-22: Potential state update on unmounted component.

The `setTimeout` callback could fire after the component unmounts, causing a React warning. While minor for visual state, it's good practice to clean up timeouts.

Proposed fix using ref:
```diff
 export const CopyButton = memo(function CopyButton({ text, className, size = "sm" }: {
   text: string,
   className?: string,
   size?: "sm" | "xs",
 }) {
   const [copied, setCopied] = useState(false);
+  const timeoutRef = useRef<ReturnType<typeof setTimeout>>();
+
+  useEffect(() => {
+    return () => {
+      if (timeoutRef.current) clearTimeout(timeoutRef.current);
+    };
+  }, []);

   const handleCopy = useCallback(async () => {
     await navigator.clipboard.writeText(text);
     setCopied(true);
-    setTimeout(() => setCopied(false), 1500);
+    timeoutRef.current = setTimeout(() => setCopied(false), 1500);
   }, [text]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard/src/components/commands/ai-chat-shared.tsx` around lines 18 - 22, The handleCopy handler schedules a setTimeout to reset copied state which may run after the component unmounts; modify handleCopy and the component to store the timeout id in a ref (e.g., timeoutRef) and clear it in a useEffect cleanup to avoid calling setCopied on an unmounted component: when calling setTimeout in handleCopy save the returned id to timeoutRef.current, and in a useEffect return a cleanup that calls clearTimeout(timeoutRef.current) and resets timeoutRef.current to null; also clear any existing timeout before setting a new one in handleCopy to prevent overlaps.

apps/dashboard/src/hooks/use-ai-conversations.ts (1)
41-43: Error message lacks response body context.

The error message only includes the status code. For debugging, it would help to include the response body (if available) to understand why the request failed.

Proposed enhancement:

```diff
 if (!response.ok) {
-  throw new Error(`AI conversations API error: ${response.status}`);
+  const errorBody = await response.text().catch(() => "");
+  throw new Error(`AI conversations API error: ${response.status}${errorBody ? ` - ${errorBody}` : ""}`);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard/src/hooks/use-ai-conversations.ts` around lines 41 - 43, The error thrown when response.ok is false in use-ai-conversations.ts only includes the status code; update the failure path in the function that performs the fetch (e.g., the fetch call inside useAIConversations or fetchAIConversations) to read the response body (preferably via response.text() or response.json() guarded in a try/catch), include a truncated/limited portion of that body in the error message along with the status (e.g., `AI conversations API error: ${response.status} - ${bodySnippet}`), and fall back to a safe message if reading the body fails to avoid masking the original error.

apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx (1)
564-568: Complex last-assistant detection logic could be simplified.

The calculation for `isLastAssistant` is difficult to follow. Consider extracting to a named helper or adding a brief comment explaining the intent.

Suggested clarification:

```diff
+// Check if this is the last assistant message (accounting for trailing user message)
 const isLastAssistant = message.role === "assistant" &&
   index === arr.length - 1 - (arr[arr.length - 1]?.role === "user" ? 1 : 0);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx` around lines 564 - 568, The last-assistant detection logic (currently computed in the isLastAssistant const using message.role, index, arr and the end-user check) is hard to read—extract this into a small helper function (e.g., isLastAssistantAtIndex(messages, index) or computeIsLastAssistant) and use that helper in place of the inline expression; inside the helper add a one-line comment explaining the rule (treat the final assistant as not-last if the very last message is a user), then update the displayContent line to use the new helper while keeping existing dependencies like messageContent and aiLoading unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/backend/prisma/migrations/20260327000000_add_ai_conversations/migration.sql`:
- Around line 2-11: The AiConversation table is missing a foreign-key constraint
on projectId which allows orphaned conversations; update the migration SQL to
add a foreign key constraint on "projectId" referencing the primary key of the
Project table (e.g., "Project"."id") with ON DELETE CASCADE, and then update the
Prisma schema to add the corresponding relation fields (e.g., in the
AiConversation model add an explicit relation to Project and mark projectId as
the foreign key) so Prisma migrations and runtime enforce cascade deletes; apply
the same change for the other similar table(s) referenced (lines 30-31) to keep
behavior consistent.
In `@apps/backend/prisma/schema.prisma`:
- Around line 1105-1113: The AiMessage model currently relies on createdAt for
ordering which is nondeterministic for batch inserts (createMany()); add an
explicit integer position field to AiMessage (e.g., position Int) and create a
composite index on (conversationId, position) to preserve stable ordering, then
update any insertion code that uses createMany() or individual message inserts
to set position according to the input array order (ensuring position increments
per conversation) and change queries that used orderBy: { createdAt: "asc" } to
orderBy: { position: "asc" } (keep conversationId and createdAt untouched aside
from indexing).
In `@apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx`:
- Around line 46-61: The current handler calls getOwnedConversation then deletes
messages and separately calls createMany, which can leave the conversation empty
on failure and doesn't update AiConversation.updatedAt; wrap the deleteMany +
optional createMany + an update to touch the parent conversation (update
AiConversation.updatedAt for params.conversationId) inside a single Prisma
transaction (use globalPrismaClient.$transaction) so the delete/insert/update
are atomic and the parent conversation's updatedAt is updated even when messages
array is empty; keep getOwnedConversation as the authorization check before the
transaction and reference params.conversationId,
globalPrismaClient.aiMessage.deleteMany,
globalPrismaClient.aiMessage.createMany, and
globalPrismaClient.aiConversation.update (or the appropriate model name) when
implementing.
In `@apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx`:
- Around line 97-114: The code creates an aiConversation and then calls
globalPrismaClient.aiMessage.createMany separately, which can leave a dangling
conversation if message insertion fails; change this to an atomic operation by
either wrapping both calls in a Prisma transaction (use
globalPrismaClient.$transaction([ ... ]) to create the conversation and create
messages) or perform a nested create in the aiConversation.create call (include
the relation field and use messages: { create: body.messages.map(m => ({ role:
m.role, content: m.content })) } so messages are created as part of the same
write). Update the handler to use one of these approaches instead of separate
aiConversation.create and aiMessage.createMany calls.
- Around line 77-85: The request body schema's messages array uses yupString()
for role and yupMixed() for content, causing downstream casts (e.g., the
"msg.content as object" usage); update the body: yupObject(...) messages schema
so role is constrained with
yupString().oneOf(["user","assistant","system"]).defined() and content is
validated as the exact persisted message object shape (replace yupMixed() with a
yupObject(...) that matches the stored message fields), and apply the identical
tightened schema to the sibling PUT handler in
[conversationId]/messages/route.tsx so callers and handlers no longer need
runtime type casts.
In `@apps/dashboard/src/components/commands/ai-chat-shared.tsx`:
- Around line 408-412: The getToolInvocations function uses a double cast (as
unknown as ToolInvocationPart) which bypasses TypeScript safety; replace this
with a proper type guard or runtime validation: add a predicate like
isToolInvocationPart(part): part is ToolInvocationPart (checking required
properties/shape of ToolInvocationPart) and use message.parts.filter((part):
part is ToolInvocationPart => part.type.startsWith("tool-") &&
isToolInvocationPart(part)) then return the filtered array directly; reference
getToolInvocations, UIMessage, and ToolInvocationPart when adding the guard.
In `@apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx`:
- Around line 185-201: The effect that fetches the latest conversation (inside
useEffect) omits projectId from its dependency array, causing stale
conversations when the user switches projects; update the dependency list to
include projectId (and any other used values like currentUser or
runAsynchronously if not stable) so the effect re-runs on project change,
ensuring listConversations/currentUser/getConversation, setViewMode, and
setConversationKey operate against the correct project; alternatively ensure the
component remounts on project change, but the simplest fix is to add projectId
to the useEffect dependencies.
- Around line 352-368: The save block using runAsynchronously currently lets
errors from createConversation or replaceConversationMessages fail silently;
update the inner async to catch errors (surround the await calls for
replaceConversationMessages/createConversation with try/catch), call a
user-visible notifier (e.g., runAsynchronouslyWithAlert or the app's toast/alert
mechanism) inside the catch with a clear message including the error, still set
isSavingRef.current = false in finally, and keep the existing logic that sets
conversationIdRef.current and calls onConversationCreated(result.id) only on
success.
- Around line 191-195: The code in initialMessages maps conv.messages using
unsafe casts for msg.role and msg.content; update the
ConversationDetail.messages type to reflect the real API shape (role: "user" |
"assistant" and content: UIMessage["parts"]) or add runtime validation before
mapping (validate msg.role is "user"|"assistant" and that msg.content conforms
to UIMessage["parts"]) and only then construct initialMessages from
conv.messages to avoid unsafe casts in the initialMessages mapping.
In `@apps/dashboard/src/hooks/use-ai-conversations.ts`:
- Around line 67-73: The code interpolates conversationId directly into request
paths (see getConversation, updateConversationTitle,
replaceConversationMessages, deleteConversation), which risks path traversal if
an attacker supplies malicious input; fix by validating or encoding the id
before using it in the URL: either enforce a strict format (e.g., UUID regex)
and throw on invalid values, or always URL-encode conversationId with
encodeURIComponent() and use the encoded value in apiFetch; apply the same
validation/encoding pattern consistently across the listed functions.
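Both mitigation strategies named above can be sketched in a few lines of plain TypeScript; the helper names below are illustrative, not from the codebase:

```typescript
// Strategy 1: enforce a strict UUID format and throw on anything else.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function assertConversationId(id: string): string {
  if (!UUID_RE.test(id)) {
    throw new Error(`Invalid conversation id: ${JSON.stringify(id)}`);
  }
  return id;
}

// Strategy 2: always URL-encode before interpolating into the request path,
// so characters like "/" cannot change the path structure.
function conversationPath(id: string): string {
  return `/${encodeURIComponent(id)}`;
}
```

With encoding, a malicious value like `../admin` becomes the harmless path segment `/..%2Fadmin`.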
---
Nitpick comments:
In `@apps/dashboard/src/components/commands/ai-chat-shared.tsx`:
- Around line 188-190: The code currently force-casts invocation.input and
invocation.output to specific shapes (see invocation, input, queryArg, result)
which is unsafe; update usages to validate types before access by checking that
invocation?.input is an object and invocation?.output is an object with expected
keys (e.g., typeof input.query === "string" before assigning queryArg and
checking typeof result.success === "boolean" or Array.isArray(result.result)
before using them), replace direct assumptions with optional chaining and safe
guards, and add a small fallback/default values when fields are missing so
consumer code using queryArg, result.success, result.result, and result.rowCount
won’t throw if the SDK shape changes.
- Around line 18-22: The handleCopy handler schedules a setTimeout to reset
copied state which may run after the component unmounts; modify handleCopy and
the component to store the timeout id in a ref (e.g., timeoutRef) and clear it
in a useEffect cleanup to avoid calling setCopied on an unmounted component:
when calling setTimeout in handleCopy save the returned id to
timeoutRef.current, and in a useEffect return a cleanup that calls
clearTimeout(timeoutRef.current) and resets timeoutRef.current to null; also
clear any existing timeout before setting a new one in handleCopy to prevent
overlaps.
In `@apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx`:
- Around line 564-568: The last-assistant detection logic (currently computed in
the isLastAssistant const using message.role, index, arr and the end-user check)
is hard to read—extract this into a small helper function (e.g.,
isLastAssistantAtIndex(messages, index) or computeIsLastAssistant) and use that
helper in place of the inline expression; inside the helper add a one-line
comment explaining the rule (treat the final assistant as not-last if the very
last message is a user), then update the displayContent line to use the new
helper while keeping existing dependencies like messageContent and aiLoading
unchanged.
In `@apps/dashboard/src/hooks/use-ai-conversations.ts`:
- Around line 41-43: The error thrown when response.ok is false in
use-ai-conversations.ts only includes the status code; update the failure path
in the function that performs the fetch (e.g., the fetch call inside
useAIConversations or fetchAIConversations) to read the response body
(preferably via response.text() or response.json() guarded in a try/catch),
include a truncated/limited portion of that body in the error message along with
the status (e.g., `AI conversations API error: ${response.status} -
${bodySnippet}`), and fall back to a safe message if reading the body fails to
avoid masking the original error.
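The failure path described above might look like the following sketch (the function name and the 200-character limit are illustrative assumptions):

```typescript
// Build a descriptive error from a failed response, guarding the body read
// so a second failure can't mask the original one.
async function buildApiError(
  response: { status: number, text: () => Promise<string> },
): Promise<Error> {
  let bodySnippet = "<unreadable body>";
  try {
    // Truncate so huge HTML error pages don't flood logs or toasts.
    bodySnippet = (await response.text()).slice(0, 200);
  } catch {
    // Keep the safe default; the status code still identifies the failure.
  }
  return new Error(`AI conversations API error: ${response.status} - ${bodySnippet}`);
}
```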
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9b7586bc-5742-4613-b78e-b4381f9af0ef
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (11)
- apps/backend/prisma/migrations/20260327000000_add_ai_conversations/migration.sql
- apps/backend/prisma/schema.prisma
- apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx
- apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/route.tsx
- apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx
- apps/dashboard/src/components/commands/ai-chat-shared.tsx
- apps/dashboard/src/components/commands/ask-ai.tsx
- apps/dashboard/src/components/design-components/list.tsx
- apps/dashboard/src/components/stack-companion.tsx
- apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx
- apps/dashboard/src/hooks/use-ai-conversations.ts
```sql
CREATE TABLE "AiConversation" (
    "id" UUID NOT NULL DEFAULT gen_random_uuid(),
    "projectUserId" UUID NOT NULL,
    "projectId" TEXT NOT NULL,
    "title" TEXT NOT NULL,
    "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" TIMESTAMP(3) NOT NULL,

    CONSTRAINT "AiConversation_pkey" PRIMARY KEY ("id")
);
```
Project-scoped conversations should have a real project FK.
These endpoints store and query conversations by projectId, but the table does not constrain that field. If a project is deleted, its chat history stays behind as orphaned rows. Please add the FK/cascade here (and the matching Prisma relation) unless long-term orphan retention is intentional.
Suggested SQL

```diff
+ALTER TABLE "AiConversation"
+  ADD CONSTRAINT "AiConversation_projectId_fkey"
+  FOREIGN KEY ("projectId") REFERENCES "Project"("id")
+  ON DELETE CASCADE
+  ON UPDATE CASCADE;
```

Also applies to: 30-31
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/backend/prisma/migrations/20260327000000_add_ai_conversations/migration.sql`
around lines 2 - 11, The AiConversation table is missing a foreign-key
constraint on projectId which allows orphaned conversations; update the
migration SQL to add a foreign key constraint on "projectId" referencing the
primary key of the Project table (e.g., "Project"."id") with ON DELETE CASCADE,
and then update the Prisma schema to add the corresponding relation fields
(e.g., in the AiConversation model add an explicit relation to Project and mark
projectId as the foreign key) so Prisma migrations and runtime enforce cascade
deletes; apply the same change for the other similar table(s) referenced (lines
30-31) to keep behavior consistent.
```prisma
model AiMessage {
  id             String         @id @default(uuid()) @db.Uuid
  conversationId String         @db.Uuid
  role           String
  content        Json
  createdAt      DateTime       @default(now())
  conversation   AiConversation @relation(fields: [conversationId], references: [id], onDelete: Cascade)

  @@index([conversationId, createdAt(sort: Asc)])
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's find files that use AiMessage and understand the insertion/query patterns
cd /tmp/repo && fd -e ts -e js | xargs grep -l "AiMessage" | head -20

# Let's also search for createMany patterns with AiMessage
rg "createMany|AiMessage" --type ts --type js -A 3 -B 1 | head -100

# Look for the detail route and order patterns mentioned
rg "orderBy.*createdAt|createdAt.*orderBy" --type ts --type js -B 2 -A 2

cat -n apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx | sed -n '1,150p'

cat -n apps/backend/src/app/api/latest/internal/ai-conversations/\[conversationId\]/messages/route.tsx | sed -n '1,150p'

# Let's look at the detail route that orders by createdAt
cat -n apps/backend/src/app/api/latest/internal/ai-conversations/\[conversationId\]/route.tsx | sed -n '1,100p'

cat -n apps/backend/prisma/schema.prisma | sed -n '1093,1115p'

# Also check if there are any other models with similar patterns that might be using a position field
rg "@@unique.*position|position.*Int.*@default|@@index.*position" apps/backend/prisma/schema.prisma
```
Persist a stable message order instead of relying on createdAt.
Messages inserted with createMany() in a single batch will have identical or nearly-identical timestamps, making the orderBy: { createdAt: "asc" } query nondeterministic. The input array order is lost during insertion. Add an explicit position column indexed with conversationId to preserve deterministic ordering.
Suggested shape

```diff
 model AiMessage {
   id             String         @id @default(uuid()) @db.Uuid
   conversationId String         @db.Uuid
+  position       Int
   role           String
   content        Json
   createdAt      DateTime       @default(now())
   conversation   AiConversation @relation(fields: [conversationId], references: [id], onDelete: Cascade)

-  @@index([conversationId, createdAt(sort: Asc)])
+  @@unique([conversationId, position])
+  @@index([conversationId, position])
 }
```

Update insertions to include position:

```diff
-      data: body.messages.map((msg) => ({
+      data: body.messages.map((msg, index) => ({
+        position: index,
         conversationId: conversation.id,
         role: msg.role,
         content: msg.content as object,
       })),
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/prisma/schema.prisma` around lines 1105 - 1113, The AiMessage
model currently relies on createdAt for ordering which is nondeterministic for
batch inserts (createMany()); add an explicit integer position field to
AiMessage (e.g., position Int) and create a composite index on (conversationId,
position) to preserve stable ordering, then update any insertion code that uses
createMany() or individual message inserts to set position according to the
input array order (ensuring position increments per conversation) and change
queries that used orderBy: { createdAt: "asc" } to orderBy: { position: "asc" }
(keep conversationId and createdAt untouched aside from indexing).
```ts
handler: async ({ auth, params, body }) => {
  await getOwnedConversation(params.conversationId, auth.user.id);

  await globalPrismaClient.aiMessage.deleteMany({
    where: { conversationId: params.conversationId },
  });

  if (body.messages.length > 0) {
    await globalPrismaClient.aiMessage.createMany({
      data: body.messages.map((msg) => ({
        conversationId: params.conversationId,
        role: msg.role,
        content: msg.content as object,
      })),
    });
  }
```
Make the replace path atomic and touch the parent conversation.
A createMany() failure here leaves the conversation emptied after deleteMany(), and even successful replacements will not move the thread in the list because AiConversation.updatedAt never changes. Do the delete/insert/update in one transaction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/backend/src/app/api/latest/internal/ai-conversations/`[conversationId]/messages/route.tsx
around lines 46 - 61, The current handler calls getOwnedConversation then
deletes messages and separately calls createMany, which can leave the
conversation empty on failure and doesn't update AiConversation.updatedAt; wrap
the deleteMany + optional createMany + an update to touch the parent
conversation (update AiConversation.updatedAt for params.conversationId) inside
a single Prisma transaction (use globalPrismaClient.$transaction) so the
delete/insert/update are atomic and the parent conversation's updatedAt is
updated even when messages array is empty; keep getOwnedConversation as the
authorization check before the transaction and reference params.conversationId,
globalPrismaClient.aiMessage.deleteMany,
globalPrismaClient.aiMessage.createMany, and
globalPrismaClient.aiConversation.update (or the appropriate model name) when
implementing.
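The all-or-nothing shape the prompt asks for can be sketched without a database. Since Prisma isn't available here, the snippet below uses a minimal in-memory stand-in for `$transaction` (everything except the `$transaction` naming convention is hypothetical) to show why the delete and insert must commit together:

```typescript
type Msg = { conversationId: string, role: string, content: unknown };

// Minimal stand-in mimicking the interactive-transaction contract:
// work happens on a copy, and the copy is committed only if the callback succeeds.
function makeFakeDb(initial: Msg[]) {
  let committed = [...initial];
  return {
    get messages() { return committed; },
    async $transaction(fn: (tx: { messages: Msg[] }) => Promise<void>) {
      const working = [...committed]; // work on a copy...
      await fn(working ? { messages: working } : { messages: [] });
      committed = working;            // ...commit only on success
    },
  };
}

// Replace-path sketch: delete + insert as one unit. A real handler would also
// update AiConversation.updatedAt inside the same transaction.
async function replaceMessages(
  db: ReturnType<typeof makeFakeDb>,
  conversationId: string,
  next: Msg[],
): Promise<void> {
  await db.$transaction(async (tx) => {
    // deleteMany equivalent: drop this conversation's rows
    tx.messages.splice(0, tx.messages.length, ...tx.messages.filter(m => m.conversationId !== conversationId));
    // createMany equivalent: insert the replacement batch
    tx.messages.push(...next);
  });
}
```

If the callback throws after the delete, the committed state is untouched, which is exactly the property the separate `deleteMany` + `createMany` calls lack.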
```ts
body: yupObject({
  title: yupString().defined(),
  projectId: yupString().defined(),
  messages: yupArray(
    yupObject({
      role: yupString().defined(),
      content: yupMixed().defined(),
    })
  ).defined(),
```
🧩 Analysis chain
🏁 Script executed:

```shell
cd /tmp && find . -type f -name "route.tsx" | grep -i "ai-conversations" | head -5

find . -type f -path "*internal/ai-conversations*" -name "*.tsx"

find . -type f -path "*ai-conversations*" -name "*.ts*" | head -10

fd -t f "route.tsx" apps/backend/src/app/api/latest/internal --max-one-result | head -5

fd -t d "ai-conversations" apps/backend/src/

fd -t d "internal" apps/backend/src/app/api/latest/

git ls-files "apps/backend/src/app/api/latest/internal/"

cat -n apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx | sed -n '70,115p'

wc -l apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx

cat -n apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx

cat -n apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx

wc -l apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx

fd -t f "schema.prisma" apps/backend

grep -A 5 -B 5 "aiMessage\|AiMessage" $(fd -t f "schema.prisma" apps/backend)

find . -path "*/prisma/schema.prisma" -o -path "*/schema.prisma" | head -3

rg -A 5 -B 5 "yupMixed" apps/backend/src/app/api/latest/internal/ai-conversations/

grep -r "yupMixed" packages/stack-shared/dist/schema-fields.ts | head -5

cat $(find packages/stack-shared -name "*schema-fields*" -o -name "*schema*" | grep -i field | head -1)

fd -t f "schema-fields" packages/stack-shared/

grep -A 20 "model AiMessage" apps/backend/prisma/schema.prisma

rg -B 5 -A 15 "AiMessage|aiMessage" apps/backend/prisma/schema.prisma

rg -r "AiMessage\|aiMessage" --type ts --type tsx apps/backend/src | grep -i "content\|schema" | head -20

rg "messages\[" apps/backend/src/app/api/latest/internal/ai-conversations/ -A 3 -B 3

rg "aiMessage\|AiMessage" apps/backend/src --type ts --type js | head -20

find apps/backend/src -type f -name "*.ts" -o -name "*.tsx" | xargs grep -l "aiMessage" | head -5

rg "role.*user|assistant|system" apps/backend/src/app/api/latest/internal/ai-conversations/ -i

grep -n "as object\|as any\|as unknown" apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx

grep -n "as object\|as any\|as unknown" apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx
```
Tighten the request message schema to define expected role and content types.
The messages[*] schema accepts arbitrary yupMixed() for content and unvalidated strings for role, which requires downstream type casts (msg.content as object at lines 111 and 58). This violates the coding guidelines: "Do NOT use as/any/type casts or anything else to bypass the type system unless you specifically asked the user about it."
Define the exact persisted message structure here—for example, role should be constrained to known values (e.g., "user", "assistant", "system") and content should be validated as a specific type. Apply the same schema to the sibling PUT route at [conversationId]/messages/route.tsx to avoid duplication.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx` around
lines 77 - 85, The request body schema's messages array uses yupString() for
role and yupMixed() for content, causing downstream casts (e.g., the
"msg.content as object" usage); update the body: yupObject(...) messages schema
so role is constrained with
yupString().oneOf(["user","assistant","system"]).defined() and content is
validated as the exact persisted message object shape (replace yupMixed() with a
yupObject(...) that matches the stored message fields), and apply the identical
tightened schema to the sibling PUT handler in
[conversationId]/messages/route.tsx so callers and handlers no longer need
runtime type casts.
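The tightened constraint the prompt describes — `role` limited to known values, `content` required to be an object — can be expressed in plain TypeScript as a reusable guard (the review suggests yup's `oneOf` for the actual schema; this standalone sketch just mirrors the same rule):

```typescript
const ALLOWED_ROLES = ["user", "assistant", "system"] as const;
type Role = typeof ALLOWED_ROLES[number];

type PersistedMessage = { role: Role, content: object };

// Runtime guard mirroring the tightened request schema: known role values,
// object-shaped content. With this in place, no downstream cast is needed.
function isPersistedMessage(value: unknown): value is PersistedMessage {
  if (typeof value !== "object" || value === null) return false;
  const v = value as { role?: unknown, content?: unknown };
  return (
    typeof v.role === "string" &&
    (ALLOWED_ROLES as readonly string[]).includes(v.role) &&
    typeof v.content === "object" &&
    v.content !== null
  );
}
```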
```ts
handler: async ({ auth, body }) => {
  const conversation = await globalPrismaClient.aiConversation.create({
    data: {
      projectUserId: auth.user.id,
      title: body.title,
      projectId: body.projectId,
    },
  });

  if (body.messages.length > 0) {
    await globalPrismaClient.aiMessage.createMany({
      data: body.messages.map((msg) => ({
        conversationId: conversation.id,
        role: msg.role,
        content: msg.content as object,
      })),
    });
  }
```
Create the conversation and its initial messages atomically.
If aiMessage.createMany() fails after the conversation row is inserted, the request returns an error but leaves a partial conversation behind. A retry can then create duplicates. Use a transaction or a nested relation write here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx` around
lines 97 - 114, The code creates an aiConversation and then calls
globalPrismaClient.aiMessage.createMany separately, which can leave a dangling
conversation if message insertion fails; change this to an atomic operation by
either wrapping both calls in a Prisma transaction (use
globalPrismaClient.$transaction([ ... ]) to create the conversation and create
messages) or perform a nested create in the aiConversation.create call (include
the relation field and use messages: { create: body.messages.map(m => ({ role:
m.role, content: m.content })) } so messages are created as part of the same
write). Update the handler to use one of these approaches instead of separate
aiConversation.create and aiMessage.createMany calls.
```ts
export function getToolInvocations(message: UIMessage): ToolInvocationPart[] {
  return message.parts
    .filter((part) => part.type.startsWith("tool-"))
    .map((part) => part as unknown as ToolInvocationPart);
}
```
Double type cast bypasses type safety.
The as unknown as ToolInvocationPart pattern completely bypasses TypeScript's type checking. If the actual structure of part differs from ToolInvocationPart, this will cause runtime errors.
Consider using a type guard or runtime validation instead.
Proposed type guard approach

```diff
+function isToolInvocationPart(part: unknown): part is ToolInvocationPart {
+  return (
+    typeof part === "object" &&
+    part !== null &&
+    "type" in part &&
+    typeof (part as { type: unknown }).type === "string" &&
+    (part as { type: string }).type.startsWith("tool-") &&
+    "toolCallId" in part &&
+    "state" in part &&
+    "input" in part
+  );
+}
 export function getToolInvocations(message: UIMessage): ToolInvocationPart[] {
   return message.parts
     .filter((part) => part.type.startsWith("tool-"))
-    .map((part) => part as unknown as ToolInvocationPart);
+    .filter(isToolInvocationPart);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/dashboard/src/components/commands/ai-chat-shared.tsx` around lines 408 -
412, The getToolInvocations function uses a double cast (as unknown as
ToolInvocationPart) which bypasses TypeScript safety; replace this with a proper
type guard or runtime validation: add a predicate like
isToolInvocationPart(part): part is ToolInvocationPart (checking required
properties/shape of ToolInvocationPart) and use message.parts.filter((part):
part is ToolInvocationPart => part.type.startsWith("tool-") &&
isToolInvocationPart(part)) then return the filtered array directly; reference
getToolInvocations, UIMessage, and ToolInvocationPart when adding the guard.
```ts
useEffect(() => {
  if (!projectId) return;
  runAsynchronously(async () => {
    const conversations = await listConversations(currentUser, projectId);
    if (conversations.length > 0) {
      const conv = await getConversation(currentUser, conversations[0].id);
      const initialMessages: UIMessage[] = conv.messages.map((msg) => ({
        id: msg.id,
        role: msg.role as "user" | "assistant",
        parts: msg.content as UIMessage["parts"],
      }));
      setViewMode({ view: 'chat', conversationId: conversations[0].id, initialMessages });
      setConversationKey(prev => prev + 1);
    }
  });
  // eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
```
Missing projectId dependency may cause stale data after navigation.
The effect loads the most recent conversation on mount but omits projectId from dependencies. If the user navigates to a different project without remounting this component, the chat will display conversations from the previous project.
Either add projectId to the dependency array or verify the component remounts on project changes.
Proposed fix

```diff
 useEffect(() => {
   if (!projectId) return;
   runAsynchronously(async () => {
     const conversations = await listConversations(currentUser, projectId);
     if (conversations.length > 0) {
       const conv = await getConversation(currentUser, conversations[0].id);
       const initialMessages: UIMessage[] = conv.messages.map((msg) => ({
         id: msg.id,
         role: msg.role as "user" | "assistant",
         parts: msg.content as UIMessage["parts"],
       }));
       setViewMode({ view: 'chat', conversationId: conversations[0].id, initialMessages });
       setConversationKey(prev => prev + 1);
     }
   });
-  // eslint-disable-next-line react-hooks/exhaustive-deps
-}, []);
+}, [projectId, currentUser]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx` around
lines 185 - 201, The effect that fetches the latest conversation (inside
useEffect) omits projectId from its dependency array, causing stale
conversations when the user switches projects; update the dependency list to
include projectId (and any other used values like currentUser or
runAsynchronously if not stable) so the effect re-runs on project change,
ensuring listConversations/currentUser/getConversation, setViewMode, and
setConversationKey operate against the correct project; alternatively ensure the
component remounts on project change, but the simplest fix is to add projectId
to the useEffect dependencies.
```ts
const initialMessages: UIMessage[] = conv.messages.map((msg) => ({
  id: msg.id,
  role: msg.role as "user" | "assistant",
  parts: msg.content as UIMessage["parts"],
}));
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check if there's existing validation patterns or Zod usage in the codebase
rg -n "from 'zod'" --type=ts -g '!node_modules/**' | head -20
rg -nP "\.parse\(|\.safeParse\(" --type=ts apps/dashboard/src/ | head -10

cat -n apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx | sed -n '175,210p'

# Find where messages come from and what type the conv object is
rg -n "useQuery|useApiCall|fetch.*conversation" apps/dashboard/src/components/stack-companion/ -A 3

# Check the API endpoint that provides conversation data
fd -t f "conversation" apps/backend/src/ | grep -E "\.(ts|tsx)$" | head -10

# Look for validation or schema definitions for conversation messages
rg -n "message.*schema|Message.*type" apps/dashboard/src/ -A 2 | head -40

rg -n "function getConversation|const getConversation|export.*getConversation" apps/dashboard/src/ -A 5

# Find type definitions for UIMessage and conversation data
rg -n "type UIMessage|interface UIMessage|type.*Message.*=" apps/dashboard/src/ -B 1 -A 3 | head -40

# Search for where messages are coming from in the conversation
rg -n "messages" apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx -B 2 -A 2

# Look for the data fetching utilities
fd -t f "api\|fetch\|client" apps/dashboard/src/ -name "*.ts" -o -name "*.tsx" | grep -E "(api|fetch|client)" | head -15

rg -n "type ConversationDetail|interface ConversationDetail" apps/dashboard/src/ -A 10

cat -n apps/dashboard/src/hooks/use-ai-conversations.ts | sed -n '1,100p'

# Check if there's any validation or schema for the API response
rg -n "ConversationDetail|AIMessage|ConversationMessage" apps/dashboard/src/hooks/use-ai-conversations.ts -B 2 -A 5

rg -n "type UIMessage|interface UIMessage" apps/dashboard/src/ -B 2 -A 8 | head -40

# Check if there's any validation applied in the hooks or elsewhere
rg -n "safeParse|parse|zod" apps/dashboard/src/hooks/use-ai-conversations.ts
```
Remove unsafe type casts or improve type definitions.
The msg.role and msg.content are cast from loosely-typed API response data without validation. The ConversationDetail.messages type defines role as string and content as unknown, so casting these to "user" | "assistant" and UIMessage["parts"] makes unsafe assumptions.
Either narrow the type definition in ConversationDetail to match what the API actually returns:
```ts
messages: Array<{
  id: string,
  role: "user" | "assistant",
  content: UIMessage["parts"],
}>
```

Or validate the data at the source before casting.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx` around
lines 191 - 195, The code in initialMessages maps conv.messages using unsafe
casts for msg.role and msg.content; update the ConversationDetail.messages type
to reflect the real API shape (role: "user" | "assistant" and content:
UIMessage["parts"]) or add runtime validation before mapping (validate msg.role
is "user"|"assistant" and that msg.content conforms to UIMessage["parts"]) and
only then construct initialMessages from conv.messages to avoid unsafe casts in
the initialMessages mapping.
```ts
runAsynchronously(async () => {
  try {
    if (conversationIdRef.current) {
      await replaceConversationMessages(currentUser, conversationIdRef.current, messagesToSave);
    } else if (projectId) {
      const result = await createConversation(currentUser, {
        title,
        projectId,
        messages: messagesToSave,
      });
      conversationIdRef.current = result.id;
      onConversationCreated(result.id);
    }
  } finally {
    isSavingRef.current = false;
  }
});
```
Silent failure when conversation save fails.
If createConversation or replaceConversationMessages throws, the error propagates to runAsynchronously but the user receives no feedback that their conversation wasn't saved. Consider using runAsynchronouslyWithAlert or adding user-visible error handling to prevent silent data loss.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx` around lines 352-368: the save block using runAsynchronously currently lets errors from createConversation or replaceConversationMessages fail silently. Wrap the awaited replaceConversationMessages/createConversation calls in try/catch, call a user-visible notifier (e.g., runAsynchronouslyWithAlert or the app's toast/alert mechanism) in the catch with a clear message including the error, still set isSavingRef.current = false in finally, and keep the existing logic that sets conversationIdRef.current and calls onConversationCreated(result.id) only on success.
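A self-contained sketch of that shape. `notifyError`, `isSavingRef`, and `saveConversation` are stand-ins for illustration only: a real implementation would use the app's actual toast/alert mechanism (or `runAsynchronouslyWithAlert`) and the widget's real refs.

```typescript
// Hypothetical stand-in for a user-visible toast/alert.
const notifications: string[] = [];
function notifyError(message: string): void {
  notifications.push(message); // a real app would render this to the user
}

type SaveResult = { id: string };
const isSavingRef = { current: false }; // mirrors the widget's ref

async function saveConversation(
  save: () => Promise<SaveResult>,
  onCreated: (id: string) => void,
): Promise<void> {
  isSavingRef.current = true;
  try {
    const result = await save();
    onCreated(result.id); // only reached on success
  } catch (err) {
    // Failure is surfaced instead of disappearing into runAsynchronously.
    notifyError(`Failed to save conversation: ${String(err)}`);
  } finally {
    isSavingRef.current = false; // always cleared, success or failure
  }
}

// A failing save produces a notification rather than silent data loss.
void saveConversation(async () => { throw new Error("network down"); }, () => {});
```

The key property is that `onCreated` runs only on the success path, while the saving flag is cleared on every path.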
```ts
export async function getConversation(
  currentUser: CurrentUser | undefined | null,
  conversationId: string,
): Promise<ConversationDetail> {
  const response = await apiFetch(currentUser, `/${conversationId}`);
  return await response.json();
}
```
Validate conversationId format before path interpolation.
The conversationId is directly interpolated into the URL path. While unlikely with UUIDs, an attacker-controlled ID containing path traversal characters (e.g., ../) could potentially manipulate the request path. Consider validating the format or using URL encoding.
Proposed defensive fix

```diff
 export async function getConversation(
   currentUser: CurrentUser | undefined | null,
   conversationId: string,
 ): Promise<ConversationDetail> {
-  const response = await apiFetch(currentUser, `/${conversationId}`);
+  const response = await apiFetch(currentUser, `/${encodeURIComponent(conversationId)}`);
   return await response.json();
 }
```

Apply the same pattern to updateConversationTitle, replaceConversationMessages, and deleteConversation.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
export async function getConversation(
  currentUser: CurrentUser | undefined | null,
  conversationId: string,
): Promise<ConversationDetail> {
  const response = await apiFetch(currentUser, `/${encodeURIComponent(conversationId)}`);
  return await response.json();
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/dashboard/src/hooks/use-ai-conversations.ts` around lines 67-73: the code interpolates conversationId directly into request paths (see getConversation, updateConversationTitle, replaceConversationMessages, deleteConversation), which risks path manipulation if an attacker supplies malicious input. Fix by validating or encoding the id before using it in the URL: either enforce a strict format (e.g., a UUID regex) and throw on invalid values, or always URL-encode conversationId with encodeURIComponent() and pass the encoded value to apiFetch; apply the same validation/encoding pattern consistently across the listed functions.
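A sketch combining both options: reject ids that are not UUIDs, then encode as cheap defense-in-depth. `UUID_RE` and `buildConversationPath` are illustrative names, not identifiers from the hook.

```typescript
// Strict UUID format check; anything else never reaches the URL.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function buildConversationPath(conversationId: string): string {
  if (!UUID_RE.test(conversationId)) {
    throw new Error(`Invalid conversation id: ${conversationId}`);
  }
  // Encoding is redundant once the format is enforced, but it stays
  // correct if the regex is ever loosened.
  return `/${encodeURIComponent(conversationId)}`;
}

console.log(buildConversationPath("123e4567-e89b-12d3-a456-426614174000"));

// A traversal attempt fails validation before any request is made.
let rejected = false;
try {
  buildConversationPath("../admin");
} catch {
  rejected = true;
}
console.log(rejected); // true
```

Validation up front gives a clear client-side error; bare `encodeURIComponent` alone would instead send a harmless but doomed request to the server.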
```ts
if (!projectId) {
  setLoading(false);
  return;
}
```
```ts
await globalPrismaClient.aiMessage.deleteMany({
  where: { conversationId: params.conversationId },
});

if (body.messages.length > 0) {
  await globalPrismaClient.aiMessage.createMany({
    data: body.messages.map((msg) => ({
      conversationId: params.conversationId,
      role: msg.role,
      content: msg.content as object,
    })),
  });
}
```
```diff
-await globalPrismaClient.aiMessage.deleteMany({
-  where: { conversationId: params.conversationId },
-});
-
-if (body.messages.length > 0) {
-  await globalPrismaClient.aiMessage.createMany({
-    data: body.messages.map((msg) => ({
-      conversationId: params.conversationId,
-      role: msg.role,
-      content: msg.content as object,
-    })),
-  });
-}
+await globalPrismaClient.$transaction(async (tx) => {
+  await tx.aiMessage.deleteMany({
+    where: { conversationId: params.conversationId },
+  });
+  if (body.messages.length > 0) {
+    await tx.aiMessage.createMany({
+      data: body.messages.map((msg) => ({
+        conversationId: params.conversationId,
+        role: msg.role,
+        content: msg.content as object,
+      })),
+    });
+  }
+});
```
The non-atomic deleteMany + createMany sequence can leave the conversation with zero messages if createMany fails after deleteMany has already succeeded; wrapping both writes in a single transaction makes the replacement all-or-nothing.
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes