
Persistent AI Chat history #1296

Open
aadesh18 wants to merge 2 commits into dev from ai-chat-history

Conversation


@aadesh18 aadesh18 commented Mar 28, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Added AI Chat functionality with conversation management capabilities
    • New "Ask AI" sidebar option for quick access to AI assistance
    • Ability to create, browse, and delete AI conversations
    • Rich chat UI with support for code blocks, links, and tool invocations
    • Word-by-word message streaming for better readability
  • Bug Fixes

    • Fixed event propagation issue in list item action buttons to prevent unintended triggering

Copilot AI review requested due to automatic review settings March 28, 2026 00:34
@aadesh18 aadesh18 self-assigned this Mar 28, 2026

vercel bot commented Mar 28, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project | Status | Updated (UTC)
stack-auth-hosted-components | Ready | Mar 28, 2026 0:34am
stack-backend | Ready | Mar 28, 2026 0:34am
stack-dashboard | Ready | Mar 28, 2026 0:34am
stack-demo | Ready | Mar 28, 2026 0:34am
stack-docs | Ready | Mar 28, 2026 0:34am


coderabbitai bot commented Mar 28, 2026

📝 Walkthrough

This pull request adds persistent AI conversation functionality to the Stack Companion. It introduces database tables for storing conversations and messages, backend API endpoints for conversation lifecycle management, frontend UI components for chat rendering and conversation browsing, and client-side hooks for API interactions.

Changes

Cohort / File(s) Summary
Database Schema
apps/backend/prisma/migrations/20260327000000_add_ai_conversations/migration.sql, apps/backend/prisma/schema.prisma
Added AiConversation and AiMessage Prisma models with supporting indexes (projectUserId, updatedAt and conversationId, createdAt) and cascading delete constraints to maintain referential integrity.
Backend API Routes
apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx, apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/route.tsx, apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx
Implemented GET/POST handlers for listing and creating conversations, GET/PATCH/DELETE handlers for conversation details/updates/deletion, and PUT handler for replacing conversation messages. All handlers enforce ownership validation via projectUserId checks.
Frontend Chat Components
apps/dashboard/src/components/commands/ai-chat-shared.tsx, apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx
Added reusable chat UI components (copy button, code/link rendering, tool invocation cards) in shared module; implemented full AIChatWidget with conversation list browsing, message persistence, word-by-word streaming reveal, and autoscroll behavior.
Frontend Refactoring
apps/dashboard/src/components/commands/ask-ai.tsx
Extracted chat rendering utilities to shared module and updated component to import and reuse them, reducing local implementation duplication while maintaining chat control flow.
Client Hooks
apps/dashboard/src/hooks/use-ai-conversations.ts
Provided typed API wrapper functions (listConversations, createConversation, getConversation, updateConversationTitle, replaceConversationMessages, deleteConversation) with authentication header injection and error handling.
Minor UI Updates
apps/dashboard/src/components/design-components/list.tsx, apps/dashboard/src/components/stack-companion.tsx
Added onClick event stopPropagation on action buttons to prevent list row click interference; added Ask AI sidebar entry with AIChatWidget conditional rendering and layout adjustments for the chat view.
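The word-by-word streaming reveal mentioned in the chat components above can be sketched as a small pure function. This is only an illustration of the idea — the actual hook (useWordStreaming) is not shown on this page, so the name revealWords and its logic are assumptions:

```typescript
// Hypothetical sketch of a word-by-word reveal: given the full streamed
// text and how many words have been revealed so far, return the visible
// prefix. The real hook presumably advances `count` on a timer as the
// assistant response streams in.
function revealWords(text: string, count: number): string {
  const words = text.split(/\s+/).filter(Boolean);
  return words.slice(0, count).join(" ");
}
```

Rendering `revealWords(fullText, n)` with an increasing `n` produces the gradual reveal without mutating the underlying message.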

Sequence Diagram(s)

sequenceDiagram
    participant User as User (Frontend)
    participant ChatUI as Chat UI
    participant API as Backend API
    participant DB as Database

    User->>ChatUI: Send message
    ChatUI->>API: POST /ai-conversations/[id]/messages or POST /ai-conversations with initial messages
    API->>DB: Create aiMessage or create aiConversation + aiMessages
    DB-->>API: Record(s) created with id, timestamps
    API-->>ChatUI: Streaming response (AI content)
    ChatUI->>ChatUI: Word-by-word reveal via useWordStreaming
    ChatUI->>API: PUT /ai-conversations/[id]/messages (persist assistant response)
    API->>DB: Replace aiMessage records for conversation
    DB-->>API: Success
    API-->>ChatUI: 200 OK
    ChatUI-->>User: Display persisted message history
sequenceDiagram
    participant User as User (Frontend)
    participant ListUI as Conversation List
    participant API as Backend API
    participant DB as Database

    User->>ListUI: Open Stack Companion "Ask AI"
    ListUI->>API: GET /ai-conversations?projectId=...
    API->>DB: Query aiConversation filtered by projectUserId, projectId
    DB-->>API: [{ id, title, projectId, updatedAt }, ...]
    API-->>ListUI: Conversation summaries
    ListUI-->>User: Render conversation list with titles and timestamps

    User->>ListUI: Click conversation or delete button
    ListUI->>API: DELETE /ai-conversations/[id] or GET /ai-conversations/[id]
    API->>DB: Delete aiConversation (cascades to aiMessages) or select aiConversation + aiMessages
    DB-->>API: Success or conversation detail
    API-->>ListUI: 200 OK or conversation data
    ListUI-->>User: Update list or open conversation

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • Cmd K #1037: Overlapping changes to AI chat UI components (ask-ai.tsx) and shared markdown/copy utilities across dashboard and backend AI endpoints.

Suggested reviewers

  • Developing-Gamer
  • N2D4

Poem

🐰 Whispered a rabbit midst code and git branches
"Conversations now persist through my digital ransoms!
With schemas and cascades, and messages saved,
Each chat is remembered, forever engraved!"

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check — ⚠️ Warning: The PR description contains only an HTML template comment and lacks substantive content describing the changes, implementation approach, or rationale. Resolution: provide a detailed description of the changes made, including the new database schema, API endpoints created, UI components added, and the overall implementation approach for persistent chat history.
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (1 passed)
  • Title check — ✅ Passed: The title accurately summarizes the main change: adding persistent AI chat history with database tables, API endpoints, and UI components.




greptile-apps bot commented Mar 28, 2026

Greptile Summary

This PR adds persistent AI chat history to the Stack Companion sidebar: a new AiConversation/AiMessage database schema, five internal API endpoints (list, create, get, update, delete), a new AIChatWidget component with conversation list/chat views, and a refactor of shared UI components into ai-chat-shared.tsx.

Key concerns found during review:

  • Non-atomic message replacement (P1): The PUT /ai-conversations/:id/messages handler calls deleteMany then createMany without a transaction. A failure between the two operations permanently deletes all messages with no recovery path.
  • Save race condition (P1): The isSavingRef.current guard in AIChatWidgetInner silently skips a save if a second AI response completes while the first save is in-flight, leaving messages unwritten to the database.
  • Missing error alerts on async handlers (P2): Several runAsynchronously calls lack error display; per project convention these should use runAsynchronouslyWithAlert.
  • Duplicated getOwnedConversation helper (P2): Same ownership-check function copy-pasted into two route files.
  • Unvalidated role field (P2): Message role accepts any string instead of being constrained to "user" | "assistant".

Confidence Score: 4/5

Not safe to merge as-is due to a non-atomic delete+recreate that can permanently destroy conversation messages.

Two P1 issues exist: the missing transaction in the messages PUT endpoint (confirmed data-loss path) and the race condition in the frontend save logic. The rest of the changes are well-structured and the authorization model is correct.

apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx (transaction), apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx (race condition + error handling)

Important Files Changed

Filename Overview
apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx PUT handler replaces all messages without a DB transaction — if createMany fails after deleteMany, all conversation messages are permanently deleted; also duplicates getOwnedConversation helper.
apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx New AI chat widget with a race condition in the save-on-completion effect, missing error alerts on several async operations, and a suppressed exhaustive-deps warning.
apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/route.tsx GET/PATCH/DELETE handlers for a single conversation; ownership check is correct, but getOwnedConversation is duplicated from the messages route file.
apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx List and create conversation endpoints; uses SmartRouteHandler correctly, hardcodes limit of 50 conversations, and properly scopes by user + projectId.
apps/backend/prisma/migrations/20260327000000_add_ai_conversations/migration.sql Creates AiConversation and AiMessage tables with appropriate indexes and a CASCADE FK between them; projectUserId and projectId have no FK constraints to existing user/project tables.
apps/dashboard/src/hooks/use-ai-conversations.ts Clean API client module wrapping all five conversation endpoints; properly sets Content-Type and forwards auth headers.

Sequence Diagram

sequenceDiagram
    participant UI as AIChatWidgetInner
    participant API as Backend API
    participant DB as Database

    UI->>API: POST /ai/query/stream (messages)
    API-->>UI: stream response (status: streaming)
    API-->>UI: stream ends (status: ready)

    UI->>UI: useEffect detects streaming→ready
    Note over UI: isSavingRef = true

    alt First save — no existing conversation
        UI->>API: POST /ai-conversations
        API->>DB: INSERT AiConversation
        API->>DB: INSERT AiMessages (separate op, no tx)
        DB-->>API: ok
        API-->>UI: { id, title }
        Note over UI: isSavingRef = false
    else Subsequent save — conversation exists
        UI->>API: PUT /ai-conversations/:id/messages
        API->>DB: DELETE AiMessages WHERE conversationId=:id
        Note over DB: ⚠️ No transaction — if next step fails, messages gone
        API->>DB: INSERT AiMessages (createMany)
        DB-->>API: ok
        API-->>UI: 200 {}
        Note over UI: isSavingRef = false
    end

    Note over UI: ⚠️ If 2nd response completes while isSavingRef=true,
    Note over UI: that save is silently skipped

Reviews (1): Last reviewed commit: "initial commit"

Comment on lines +48 to +58

await globalPrismaClient.aiMessage.deleteMany({
  where: { conversationId: params.conversationId },
});

if (body.messages.length > 0) {
  await globalPrismaClient.aiMessage.createMany({
    data: body.messages.map((msg) => ({
      conversationId: params.conversationId,
      role: msg.role,
      content: msg.content as object,

P1 Non-atomic delete+recreate causes permanent message loss

deleteMany and the subsequent createMany are two separate database operations. If createMany throws (network blip, constraint violation, etc.), the delete has already committed — every message in the conversation is permanently gone with no way to recover them.

Wrap both operations in a Prisma transaction:

Suggested change

Before:
await globalPrismaClient.aiMessage.deleteMany({
  where: { conversationId: params.conversationId },
});
if (body.messages.length > 0) {
  await globalPrismaClient.aiMessage.createMany({
    data: body.messages.map((msg) => ({
      conversationId: params.conversationId,
      role: msg.role,
      content: msg.content as object,

After:
await globalPrismaClient.$transaction(async (tx) => {
  await tx.aiMessage.deleteMany({
    where: { conversationId: params.conversationId },
  });
  if (body.messages.length > 0) {
    await tx.aiMessage.createMany({
      data: body.messages.map((msg) => ({
        conversationId: params.conversationId,
        role: msg.role,
        content: msg.content as object,
      })),
    });
  }
});

Comment on lines +321 to +370
// Save conversation when streaming completes
useEffect(() => {
  const prevStatus = prevStatusRef.current;
  prevStatusRef.current = status;

  const completedOk = (prevStatus === "streaming" || prevStatus === "submitted") && status === "ready";
  const completedWithError = (prevStatus === "streaming" || prevStatus === "submitted") && status === "error";

  if (
    (completedOk || completedWithError) &&
    messages.length > 0 &&
    !isSavingRef.current
  ) {
    isSavingRef.current = true;
    // On error, only save user messages (strip any partial/failed assistant turn)
    const safeMessages = completedWithError
      ? messages.filter(m => m.role === "user")
      : messages;
    if (safeMessages.length === 0) {
      isSavingRef.current = false;
      return;
    }
    const messagesToSave = safeMessages.map(m => ({
      role: m.role,
      content: m.parts,
    }));
    const firstUserMessage = messages.find(m => m.role === "user");
    const title = firstUserMessage
      ? getMessageContent(firstUserMessage).slice(0, 50) || "New conversation"
      : "New conversation";

    runAsynchronously(async () => {
      try {
        if (conversationIdRef.current) {
          await replaceConversationMessages(currentUser, conversationIdRef.current, messagesToSave);
        } else if (projectId) {
          const result = await createConversation(currentUser, {
            title,
            projectId,
            messages: messagesToSave,
          });
          conversationIdRef.current = result.id;
          onConversationCreated(result.id);
        }
      } finally {
        isSavingRef.current = false;
      }
    });
  }
}, [status, messages, currentUser, projectId, onConversationCreated]);

P1 Race condition — concurrent saves silently skipped, causing data loss

isSavingRef.current is used to prevent concurrent saves. However, if a second AI response completes while the first save is still in-flight, the !isSavingRef.current guard causes that save to be silently skipped entirely. The conversation in the database will then be missing the messages from that turn until the user sends another message — and if they never do, those messages are permanently lost.

A safer approach is to queue the save or stash the latest snapshot and replay it after the in-flight save finishes:

// Stash the latest messages when a save is already running
if (isSavingRef.current) {
  pendingMessagesRef.current = messagesToSave;
  return;
}
// After isSavingRef.current = false in the finally block:
const pending = pendingMessagesRef.current;
pendingMessagesRef.current = null;
if (pending) { /* re-trigger save with pending */ }
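A more complete, self-contained sketch of this stash-and-replay pattern, decoupled from React for clarity — createSerialSaver and all other identifiers here are illustrative, not the PR's actual code:

```typescript
// Hypothetical serial-save helper: at most one save runs at a time, and
// the most recent snapshot that arrived during an in-flight save is
// replayed afterwards instead of being silently dropped.
type Message = { role: string, content: unknown };

function createSerialSaver(save: (msgs: Message[]) => Promise<void>) {
  let inFlight = false;
  let pending: Message[] | null = null;

  async function trigger(msgs: Message[]): Promise<void> {
    if (inFlight) {
      // A save is already running: stash the latest snapshot and return.
      pending = msgs;
      return;
    }
    inFlight = true;
    try {
      await save(msgs);
    } finally {
      inFlight = false;
    }
    // Replay the most recent stashed snapshot, if any arrived meanwhile.
    const next = pending;
    pending = null;
    if (next) await trigger(next);
  }

  return trigger;
}
```

Because each PUT replaces the full message list, replaying only the latest snapshot (rather than queueing every intermediate one) is sufficient.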

Comment on lines +67 to +75
runAsynchronously(async () => {
  try {
    const result = await listConversations(currentUser, projectId);
    setConversations(result);
  } finally {
    setLoading(false);
  }
});
}, [currentUser, projectId]);

P2 Use runAsynchronouslyWithAlert for user-triggered async operations

Per project convention, async operations initiated from button clicks or user interactions should use runAsynchronouslyWithAlert instead of bare runAsynchronously. This ensures errors are surfaced to the user rather than silently swallowed.

This applies to multiple locations in this file:

  • Line 67: initial listConversations load (error swallowed, only finally runs)
  • Line 80: handleDelete inside runAsynchronously
  • Line 187: AIChatWidget's initial conversation load
  • Line 238: onSelectConversation={(id) => runAsynchronously(handleSelectConversation(id))}

Rule Used: Use runAsynchronouslyWithAlert from `@stackframe... (source)

Learnt From
stack-auth/stack-auth#943

Comment on lines +6 to +13
async function getOwnedConversation(conversationId: string, userId: string) {
  const conversation = await globalPrismaClient.aiConversation.findUnique({
    where: { id: conversationId },
  });
  if (!conversation || conversation.projectUserId !== userId) {
    throw new StatusError(StatusError.NotFound, "Conversation not found");
  }
  return conversation;

P2 Duplicated getOwnedConversation helper

An identical getOwnedConversation function is also defined in [conversationId]/route.tsx (lines 6–13). Future security or logic changes to the ownership check must be applied in two places. Consider extracting it to a shared utility file.

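One way to sketch the extraction is to keep the ownership rule in a single shared module. The file path and the injected lookup below are assumptions made only to keep the sketch self-contained — the real version would call globalPrismaClient.aiConversation.findUnique and throw the project's StatusError:

```typescript
// Hypothetical shared helper, e.g. a lib/ai-conversations module imported
// by both route files. The lookup is injected so the sketch is runnable
// without a database.
type Conversation = { id: string, projectUserId: string };

async function getOwnedConversation(
  conversationId: string,
  userId: string,
  findById: (id: string) => Promise<Conversation | null>,
): Promise<Conversation> {
  const conversation = await findById(conversationId);
  // Return "not found" for both missing and not-owned conversations, so
  // the endpoint does not leak whether a conversation id exists.
  if (!conversation || conversation.projectUserId !== userId) {
    throw new Error("Conversation not found");
  }
  return conversation;
}
```

With this in one place, a future change to the ownership check (e.g. also scoping by projectId) only has to be made once.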

Comment on lines +35 to +36
role: yupString().defined(),
content: yupMixed().defined(),

P2 role field accepts any arbitrary string

yupString().defined() places no restriction on the role value. In practice only "user" and "assistant" are meaningful.

Suggested change

Before:
role: yupString().defined(),
content: yupMixed().defined(),

After:
role: yupString().oneOf(["user", "assistant"]).defined(),

The same applies to the role field in the POST /ai-conversations handler.

Comment on lines +198 to +202
}
});
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);


P2 Suppressed exhaustive-deps hides a stale-closure bug

The eslint-disable-next-line react-hooks/exhaustive-deps suppresses warnings for currentUser and projectId missing from the dependency array. If either changes while the companion is open, the widget will continue showing the old project's conversations without reloading.


Copilot AI left a comment


Pull request overview

Adds persistent “Ask AI” chat history to the Dashboard Stack Companion by introducing conversation storage in the backend, a new chat-history UI in the companion drawer, and shared rendering utilities for AI chat messages.

Changes:

  • Introduces new backend persistence models (AiConversation, AiMessage) with migration and internal CRUD-like endpoints under /api/latest/internal/ai-conversations.
  • Adds a new Stack Companion “Ask AI” sidebar item and a full chat widget with conversation list, create/update/delete flows.
  • Refactors shared AI chat rendering helpers into ai-chat-shared.tsx and wires them into the existing ask-ai command UI.

Reviewed changes

Copilot reviewed 11 out of 12 changed files in this pull request and generated 8 comments.

Show a summary per file
File Description
pnpm-lock.yaml Lockfile updates from dependency graph changes.
apps/dashboard/src/hooks/use-ai-conversations.ts Client-side API wrapper for listing/creating/updating/deleting persisted conversations.
apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx New Ask AI chat UI with conversation list + persistence integration.
apps/dashboard/src/components/stack-companion.tsx Adds “Ask AI” entry and renders the new widget in the companion drawer.
apps/dashboard/src/components/design-components/list.tsx Stops click propagation on list-item button container to avoid triggering row click.
apps/dashboard/src/components/commands/ask-ai.tsx Uses shared AI chat UI helpers after refactor.
apps/dashboard/src/components/commands/ai-chat-shared.tsx New shared components/utilities for AI chat message rendering and streaming effects.
apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx Adds list/create endpoints for conversations.
apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/route.tsx Adds get/update-title/delete endpoints for a specific conversation.
apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx Adds “replace all messages” endpoint for a conversation.
apps/backend/prisma/schema.prisma Adds Prisma models for persistent AI chat storage.
apps/backend/prisma/migrations/20260327000000_add_ai_conversations/migration.sql Creates DB tables/indexes/foreign key for AI chat persistence.
Files not reviewed (1)
  • pnpm-lock.yaml: Language not supported


Comment on lines +509 to +513
<button
  onClick={handleSubmit}
  disabled={!input.trim() || aiLoading}
  className={cn(
    "p-1 rounded transition-colors hover:transition-none",

Copilot AI Mar 28, 2026


Icon-only buttons should have an accessible name for screen readers. Add an aria-label (and/or title) to this send button (and similar icon-only buttons in the widget) so it’s discoverable to assistive tech.

Copilot uses AI. Check for mistakes.
onClick={(e) => handleDelete(e, conv.id)}
disabled={deletingId === conv.id}
className="opacity-0 group-hover:opacity-100 p-1 text-muted-foreground/40 hover:text-red-400 transition-all shrink-0"
type="button"

Copilot AI Mar 28, 2026


This delete button is icon-only and currently has no accessible label. Please add aria-label/title (e.g., “Delete conversation”) so screen readers can announce its purpose.

Suggested change

Before:
type="button"

After:
type="button"
aria-label="Delete conversation"
title="Delete conversation"

Comment on lines +49 to +61
await globalPrismaClient.aiMessage.deleteMany({
  where: { conversationId: params.conversationId },
});

if (body.messages.length > 0) {
  await globalPrismaClient.aiMessage.createMany({
    data: body.messages.map((msg) => ({
      conversationId: params.conversationId,
      role: msg.role,
      content: msg.content as object,
    })),
  });
}

Copilot AI Mar 28, 2026


Replacing messages by deleteMany then createMany is not atomic. If the request fails between these operations, the conversation can end up with no messages. Also, this path doesn’t update AiConversation.updatedAt, so the conversation won’t appear “recent” after new messages are saved. Consider wrapping the delete+insert (and a conversation update to touch updatedAt) in a single prisma.$transaction.

Suggested change

Before:
await globalPrismaClient.aiMessage.deleteMany({
  where: { conversationId: params.conversationId },
});
if (body.messages.length > 0) {
  await globalPrismaClient.aiMessage.createMany({
    data: body.messages.map((msg) => ({
      conversationId: params.conversationId,
      role: msg.role,
      content: msg.content as object,
    })),
  });
}

After:
await globalPrismaClient.$transaction(async (tx) => {
  await tx.aiMessage.deleteMany({
    where: { conversationId: params.conversationId },
  });
  if (body.messages.length > 0) {
    await tx.aiMessage.createMany({
      data: body.messages.map((msg) => ({
        conversationId: params.conversationId,
        role: msg.role,
        content: msg.content as object,
      })),
    });
  }
  await tx.aiConversation.update({
    where: { id: params.conversationId },
    data: { updatedAt: new Date() },
  });
});

updatedAt DateTime @updatedAt
messages AiMessage[]

@@index([projectUserId, updatedAt(sort: Desc)])

Copilot AI Mar 28, 2026


The main query pattern for listing conversations filters by both projectUserId and projectId and orders by updatedAt. The current index is only (projectUserId, updatedAt DESC), which can be suboptimal as conversation volume grows. Consider changing/adding an index like (projectUserId, projectId, updatedAt DESC) to match findMany({ where: { projectUserId, projectId }, orderBy: { updatedAt: "desc" } }).

Suggested change

Before:
@@index([projectUserId, updatedAt(sort: Desc)])

After:
@@index([projectUserId, projectId, updatedAt(sort: Desc)])

}, [status, messages, currentUser, projectId, onConversationCreated]);

// Word streaming for the last assistant message
const lastAssistantMessage = messages.findLast((m: UIMessage) => m.role === "assistant");

Copilot AI Mar 28, 2026


Array.prototype.findLast isn’t supported in some browsers/environments, and Next.js doesn’t automatically polyfill new built-ins. Since this runs on the client, consider replacing findLast with a backwards loop (or add a polyfill) to avoid runtime crashes on older browsers.

Suggested change

Before:
const lastAssistantMessage = messages.findLast((m: UIMessage) => m.role === "assistant");

After:
const lastAssistantMessage = (() => {
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].role === "assistant") {
      return messages[i];
    }
  }
  return undefined;
})();

};

function getBaseUrl() {
  return getPublicEnvVar("NEXT_PUBLIC_BROWSER_STACK_API_URL") ?? getPublicEnvVar("NEXT_PUBLIC_STACK_API_URL") ?? throwErr("NEXT_PUBLIC_BROWSER_STACK_API_URL is not set");

Copilot AI Mar 28, 2026


getBaseUrl() can fall back to NEXT_PUBLIC_STACK_API_URL, but the thrown error message always says NEXT_PUBLIC_BROWSER_STACK_API_URL is not set. This is misleading when the other env var is also missing; please update the message to reflect both accepted env vars (or the actual missing one).

Suggested change

Before:
return getPublicEnvVar("NEXT_PUBLIC_BROWSER_STACK_API_URL") ?? getPublicEnvVar("NEXT_PUBLIC_STACK_API_URL") ?? throwErr("NEXT_PUBLIC_BROWSER_STACK_API_URL is not set");

After:
const browserUrl = getPublicEnvVar("NEXT_PUBLIC_BROWSER_STACK_API_URL");
const stackUrl = getPublicEnvVar("NEXT_PUBLIC_STACK_API_URL");
if (browserUrl ?? stackUrl) {
  return browserUrl ?? stackUrl;
}
return throwErr("Either NEXT_PUBLIC_BROWSER_STACK_API_URL or NEXT_PUBLIC_STACK_API_URL must be set");

Comment on lines +200 to +201
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);

Copilot AI Mar 28, 2026


This effect intentionally disables exhaustive-deps and runs only once, but it reads currentUser and projectId. If the user navigates to a different project (or auth state changes) while the widget stays mounted, it will keep showing the old project’s conversation. Consider including currentUser/projectId in the deps and resetting state when they change (or explicitly remount the widget on project change).

Comment on lines +329 to +333
if (
  (completedOk || completedWithError) &&
  messages.length > 0 &&
  !isSavingRef.current
) {

Copilot AI Mar 28, 2026


The isSavingRef guard can cause message history to be dropped: after a stream completes, saving happens while status === "ready" (so the input is enabled). If the user sends another message before the save finishes, the next completion will be ignored because isSavingRef.current is still true. Consider queuing a pending save (always save the latest messages after the current save finishes) or using a debounced/serial save loop keyed by conversationId.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

🧹 Nitpick comments (4)
apps/dashboard/src/components/commands/ai-chat-shared.tsx (2)

188-190: Type assertions on external AI SDK data.

The invocation.input and invocation.output are cast to specific shapes without validation. If the AI SDK response structure changes, this could cause silent failures or crashes when accessing input?.query or result?.success.

Consider adding defensive checks or optional chaining throughout the usage sites.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard/src/components/commands/ai-chat-shared.tsx` around lines 188 -
190, The code currently force-casts invocation.input and invocation.output to
specific shapes (see invocation, input, queryArg, result) which is unsafe;
update usages to validate types before access by checking that invocation?.input
is an object and invocation?.output is an object with expected keys (e.g.,
typeof input.query === "string" before assigning queryArg and checking typeof
result.success === "boolean" or Array.isArray(result.result) before using them),
replace direct assumptions with optional chaining and safe guards, and add a
small fallback/default values when fields are missing so consumer code using
queryArg, result.success, result.result, and result.rowCount won’t throw if the
SDK shape changes.

18-22: Potential state update on unmounted component.

The setTimeout callback could fire after the component unmounts, causing a React warning. While minor for visual state, it's good practice to clean up timeouts.

Proposed fix using ref
 export const CopyButton = memo(function CopyButton({ text, className, size = "sm" }: {
   text: string,
   className?: string,
   size?: "sm" | "xs",
 }) {
   const [copied, setCopied] = useState(false);
+  const timeoutRef = useRef<ReturnType<typeof setTimeout>>();
+
+  useEffect(() => {
+    return () => {
+      if (timeoutRef.current) clearTimeout(timeoutRef.current);
+    };
+  }, []);

   const handleCopy = useCallback(async () => {
     await navigator.clipboard.writeText(text);
     setCopied(true);
-    setTimeout(() => setCopied(false), 1500);
+    timeoutRef.current = setTimeout(() => setCopied(false), 1500);
   }, [text]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard/src/components/commands/ai-chat-shared.tsx` around lines 18 -
22, The handleCopy handler schedules a setTimeout to reset copied state which
may run after the component unmounts; modify handleCopy and the component to
store the timeout id in a ref (e.g., timeoutRef) and clear it in a useEffect
cleanup to avoid calling setCopied on an unmounted component: when calling
setTimeout in handleCopy save the returned id to timeoutRef.current, and in a
useEffect return a cleanup that calls clearTimeout(timeoutRef.current) and
resets timeoutRef.current to null; also clear any existing timeout before
setting a new one in handleCopy to prevent overlaps.
apps/dashboard/src/hooks/use-ai-conversations.ts (1)

41-43: Error message lacks response body context.

The error message only includes the status code. For debugging, it would help to include the response body (if available) to understand why the request failed.

Proposed enhancement
   if (!response.ok) {
-    throw new Error(`AI conversations API error: ${response.status}`);
+    const errorBody = await response.text().catch(() => "");
+    throw new Error(`AI conversations API error: ${response.status}${errorBody ? ` - ${errorBody}` : ""}`);
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard/src/hooks/use-ai-conversations.ts` around lines 41 - 43, The
error thrown when response.ok is false in use-ai-conversations.ts only includes
the status code; update the failure path in the function that performs the fetch
(e.g., the fetch call inside useAIConversations or fetchAIConversations) to read
the response body (preferably via response.text() or response.json() guarded in
a try/catch), include a truncated/limited portion of that body in the error
message along with the status (e.g., `AI conversations API error:
${response.status} - ${bodySnippet}`), and fall back to a safe message if
reading the body fails to avoid masking the original error.
apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx (1)

564-568: Complex last-assistant detection logic could be simplified.

The calculation for isLastAssistant is difficult to follow. Consider extracting to a named helper or adding a brief comment explaining the intent.

Suggested clarification
+          // Check if this is the last assistant message (accounting for trailing user message)
           const isLastAssistant = message.role === "assistant" &&
             index === arr.length - 1 - (arr[arr.length - 1]?.role === "user" ? 1 : 0);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx` around
lines 564 - 568, The last-assistant detection logic (currently computed in the
isLastAssistant const using message.role, index, arr and the end-user check) is
hard to read—extract this into a small helper function (e.g.,
isLastAssistantAtIndex(messages, index) or computeIsLastAssistant) and use that
helper in place of the inline expression; inside the helper add a one-line
comment explaining the rule (treat the final assistant as not-last if the very
last message is a user), then update the displayContent line to use the new
helper while keeping existing dependencies like messageContent and aiLoading
unchanged.
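
A minimal sketch of such a helper; the function name and the message shape are assumptions for illustration:

```typescript
type Role = "user" | "assistant";

// Returns true when `index` points at the last assistant message.
// Rule: a single trailing user message (a just-sent prompt) does not
// displace the assistant from the "last" position.
function isLastAssistantAtIndex(messages: { role: Role }[], index: number): boolean {
  const trailingUserOffset = messages[messages.length - 1]?.role === "user" ? 1 : 0;
  return messages[index]?.role === "assistant" &&
    index === messages.length - 1 - trailingUserOffset;
}
```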
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/backend/prisma/migrations/20260327000000_add_ai_conversations/migration.sql`:
- Around line 2-11: The AiConversation table is missing a foreign-key constraint
on projectId which allows orphaned conversations; update the migration SQL to
add a foreign key constraint on "projectId" referencing the primary key of the
Project table (e.g., "Project"."id") with ON DELETE CASCADE, and then update the
Prisma schema to add the corresponding relation fields (e.g., in the
AiConversation model add an explicit relation to Project and mark projectId as
the foreign key) so Prisma migrations and runtime enforce cascade deletes; apply
the same change for the other similar table(s) referenced (lines 30-31) to keep
behavior consistent.

In `@apps/backend/prisma/schema.prisma`:
- Around line 1105-1113: The AiMessage model currently relies on createdAt for
ordering which is nondeterministic for batch inserts (createMany()); add an
explicit integer position field to AiMessage (e.g., position Int) and create a
composite index on (conversationId, position) to preserve stable ordering, then
update any insertion code that uses createMany() or individual message inserts
to set position according to the input array order (ensuring position increments
per conversation) and change queries that used orderBy: { createdAt: "asc" } to
orderBy: { position: "asc" } (keep conversationId and createdAt untouched aside
from indexing).

In
`@apps/backend/src/app/api/latest/internal/ai-conversations/`[conversationId]/messages/route.tsx:
- Around line 46-61: The current handler calls getOwnedConversation then deletes
messages and separately calls createMany, which can leave the conversation empty
on failure and doesn't update AiConversation.updatedAt; wrap the deleteMany +
optional createMany + an update to touch the parent conversation (update
AiConversation.updatedAt for params.conversationId) inside a single Prisma
transaction (use globalPrismaClient.$transaction) so the delete/insert/update
are atomic and the parent conversation's updatedAt is updated even when messages
array is empty; keep getOwnedConversation as the authorization check before the
transaction and reference params.conversationId,
globalPrismaClient.aiMessage.deleteMany,
globalPrismaClient.aiMessage.createMany, and
globalPrismaClient.aiConversation.update (or the appropriate model name) when
implementing.

In `@apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx`:
- Around line 97-114: The code creates an aiConversation and then calls
globalPrismaClient.aiMessage.createMany separately, which can leave a dangling
conversation if message insertion fails; change this to an atomic operation by
either wrapping both calls in a Prisma transaction (use
globalPrismaClient.$transaction([ ... ]) to create the conversation and create
messages) or perform a nested create in the aiConversation.create call (include
the relation field and use messages: { create: body.messages.map(m => ({ role:
m.role, content: m.content })) } so messages are created as part of the same
write). Update the handler to use one of these approaches instead of separate
aiConversation.create and aiMessage.createMany calls.
- Around line 77-85: The request body schema's messages array uses yupString()
for role and yupMixed() for content, causing downstream casts (e.g., the
"msg.content as object" usage); update the body: yupObject(...) messages schema
so role is constrained with
yupString().oneOf(["user","assistant","system"]).defined() and content is
validated as the exact persisted message object shape (replace yupMixed() with a
yupObject(...) that matches the stored message fields), and apply the identical
tightened schema to the sibling PUT handler in
[conversationId]/messages/route.tsx so callers and handlers no longer need
runtime type casts.

In `@apps/dashboard/src/components/commands/ai-chat-shared.tsx`:
- Around line 408-412: The getToolInvocations function uses a double cast (as
unknown as ToolInvocationPart) which bypasses TypeScript safety; replace this
with a proper type guard or runtime validation: add a predicate like
isToolInvocationPart(part): part is ToolInvocationPart (checking required
properties/shape of ToolInvocationPart) and use message.parts.filter((part):
part is ToolInvocationPart => part.type.startsWith("tool-") &&
isToolInvocationPart(part)) then return the filtered array directly; reference
getToolInvocations, UIMessage, and ToolInvocationPart when adding the guard.

In `@apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx`:
- Around line 185-201: The effect that fetches the latest conversation (inside
useEffect) omits projectId from its dependency array, causing stale
conversations when the user switches projects; update the dependency list to
include projectId (and any other used values like currentUser or
runAsynchronously if not stable) so the effect re-runs on project change,
ensuring listConversations/currentUser/getConversation, setViewMode, and
setConversationKey operate against the correct project; alternatively ensure the
component remounts on project change, but the simplest fix is to add projectId
to the useEffect dependencies.
- Around line 352-368: The save block using runAsynchronously currently lets
errors from createConversation or replaceConversationMessages fail silently;
update the inner async to catch errors (surround the await calls for
replaceConversationMessages/createConversation with try/catch), call a
user-visible notifier (e.g., runAsynchronouslyWithAlert or the app's toast/alert
mechanism) inside the catch with a clear message including the error, still set
isSavingRef.current = false in finally, and keep the existing logic that sets
conversationIdRef.current and calls onConversationCreated(result.id) only on
success.
- Around line 191-195: The code in initialMessages maps conv.messages using
unsafe casts for msg.role and msg.content; update the
ConversationDetail.messages type to reflect the real API shape (role: "user" |
"assistant" and content: UIMessage["parts"]) or add runtime validation before
mapping (validate msg.role is "user"|"assistant" and that msg.content conforms
to UIMessage["parts"]) and only then construct initialMessages from
conv.messages to avoid unsafe casts in the initialMessages mapping.

In `@apps/dashboard/src/hooks/use-ai-conversations.ts`:
- Around line 67-73: The code interpolates conversationId directly into request
paths (see getConversation, updateConversationTitle,
replaceConversationMessages, deleteConversation), which risks path traversal if
an attacker supplies malicious input; fix by validating or encoding the id
before using it in the URL: either enforce a strict format (e.g., UUID regex)
and throw on invalid values, or always URL-encode conversationId with
encodeURIComponent() and use the encoded value in apiFetch; apply the same
validation/encoding pattern consistently across the listed functions.
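
For instance, a guard like the following could be shared across those functions; the UUID-only policy and the path prefix are assumptions for illustration:

```typescript
// Hypothetical guard: reject anything that is not a UUID, then encode as a
// second line of defense before interpolating into the request path.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function toSafeConversationPath(conversationId: string): string {
  if (!UUID_RE.test(conversationId)) {
    throw new Error(`Invalid conversation id: ${conversationId}`);
  }
  return `/ai-conversations/${encodeURIComponent(conversationId)}`;
}
```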

---

Nitpick comments:
In `@apps/dashboard/src/components/commands/ai-chat-shared.tsx`:
- Around line 188-190: The code currently force-casts invocation.input and
invocation.output to specific shapes (see invocation, input, queryArg, result)
which is unsafe; update usages to validate types before access by checking that
invocation?.input is an object and invocation?.output is an object with expected
keys (e.g., typeof input.query === "string" before assigning queryArg and
checking typeof result.success === "boolean" or Array.isArray(result.result)
before using them), replace direct assumptions with optional chaining and safe
guards, and add a small fallback/default values when fields are missing so
consumer code using queryArg, result.success, result.result, and result.rowCount
won’t throw if the SDK shape changes.
- Around line 18-22: The handleCopy handler schedules a setTimeout to reset
copied state which may run after the component unmounts; modify handleCopy and
the component to store the timeout id in a ref (e.g., timeoutRef) and clear it
in a useEffect cleanup to avoid calling setCopied on an unmounted component:
when calling setTimeout in handleCopy save the returned id to
timeoutRef.current, and in a useEffect return a cleanup that calls
clearTimeout(timeoutRef.current) and resets timeoutRef.current to null; also
clear any existing timeout before setting a new one in handleCopy to prevent
overlaps.

In `@apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx`:
- Around line 564-568: The last-assistant detection logic (currently computed in
the isLastAssistant const using message.role, index, arr and the end-user check)
is hard to read—extract this into a small helper function (e.g.,
isLastAssistantAtIndex(messages, index) or computeIsLastAssistant) and use that
helper in place of the inline expression; inside the helper add a one-line
comment explaining the rule (treat the final assistant as not-last if the very
last message is a user), then update the displayContent line to use the new
helper while keeping existing dependencies like messageContent and aiLoading
unchanged.

In `@apps/dashboard/src/hooks/use-ai-conversations.ts`:
- Around line 41-43: The error thrown when response.ok is false in
use-ai-conversations.ts only includes the status code; update the failure path
in the function that performs the fetch (e.g., the fetch call inside
useAIConversations or fetchAIConversations) to read the response body
(preferably via response.text() or response.json() guarded in a try/catch),
include a truncated/limited portion of that body in the error message along with
the status (e.g., `AI conversations API error: ${response.status} -
${bodySnippet}`), and fall back to a safe message if reading the body fails to
avoid masking the original error.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9b7586bc-5742-4613-b78e-b4381f9af0ef

📥 Commits

Reviewing files that changed from the base of the PR and between 4ddf6a5 and 40d875c.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (11)
  • apps/backend/prisma/migrations/20260327000000_add_ai_conversations/migration.sql
  • apps/backend/prisma/schema.prisma
  • apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx
  • apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/route.tsx
  • apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx
  • apps/dashboard/src/components/commands/ai-chat-shared.tsx
  • apps/dashboard/src/components/commands/ask-ai.tsx
  • apps/dashboard/src/components/design-components/list.tsx
  • apps/dashboard/src/components/stack-companion.tsx
  • apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx
  • apps/dashboard/src/hooks/use-ai-conversations.ts

Comment on lines +2 to +11
CREATE TABLE "AiConversation" (
    "id" UUID NOT NULL DEFAULT gen_random_uuid(),
    "projectUserId" UUID NOT NULL,
    "projectId" TEXT NOT NULL,
    "title" TEXT NOT NULL,
    "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" TIMESTAMP(3) NOT NULL,

    CONSTRAINT "AiConversation_pkey" PRIMARY KEY ("id")
);

⚠️ Potential issue | 🟠 Major

Project-scoped conversations should have a real project FK.

These endpoints store and query conversations by projectId, but the table does not constrain that field. If a project is deleted, its chat history stays behind as orphaned rows. Please add the FK/cascade here (and the matching Prisma relation) unless long-term orphan retention is intentional.

Suggested SQL
+ALTER TABLE "AiConversation"
+  ADD CONSTRAINT "AiConversation_projectId_fkey"
+  FOREIGN KEY ("projectId") REFERENCES "Project"("id")
+  ON DELETE CASCADE
+  ON UPDATE CASCADE;

Also applies to: 30-31

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/backend/prisma/migrations/20260327000000_add_ai_conversations/migration.sql`
around lines 2 - 11, The AiConversation table is missing a foreign-key
constraint on projectId which allows orphaned conversations; update the
migration SQL to add a foreign key constraint on "projectId" referencing the
primary key of the Project table (e.g., "Project"."id") with ON DELETE CASCADE,
and then update the Prisma schema to add the corresponding relation fields
(e.g., in the AiConversation model add an explicit relation to Project and mark
projectId as the foreign key) so Prisma migrations and runtime enforce cascade
deletes; apply the same change for the other similar table(s) referenced (lines
30-31) to keep behavior consistent.

Comment on lines +1105 to +1113
model AiMessage {
  id             String         @id @default(uuid()) @db.Uuid
  conversationId String         @db.Uuid
  role           String
  content        Json
  createdAt      DateTime       @default(now())
  conversation   AiConversation @relation(fields: [conversationId], references: [id], onDelete: Cascade)

  @@index([conversationId, createdAt(sort: Asc)])

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's find files that use AiMessage and understand the insertion/query patterns
cd /tmp/repo && fd -e ts -e js | xargs grep -l "AiMessage" | head -20

Repository: stack-auth/stack-auth

Length of output: 124


🏁 Script executed:

# Let's also search for createMany patterns with AiMessage
rg "createMany|AiMessage" --type ts --type js -A 3 -B 1 | head -100

Repository: stack-auth/stack-auth

Length of output: 1841


🏁 Script executed:

# Look for the detail route and order patterns mentioned
rg "orderBy.*createdAt|createdAt.*orderBy" --type ts --type js -B 2 -A 2

Repository: stack-auth/stack-auth

Length of output: 2759


🏁 Script executed:

cat -n apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx | sed -n '1,150p'

Repository: stack-auth/stack-auth

Length of output: 4414


🏁 Script executed:

cat -n apps/backend/src/app/api/latest/internal/ai-conversations/\[conversationId\]/messages/route.tsx | sed -n '1,150p'

Repository: stack-auth/stack-auth

Length of output: 2706


🏁 Script executed:

# Let's look at the detail route that orders by createdAt
cat -n apps/backend/src/app/api/latest/internal/ai-conversations/\[conversationId\]/route.tsx | sed -n '1,100p'

Repository: stack-auth/stack-auth

Length of output: 3975


🏁 Script executed:

cat -n apps/backend/prisma/schema.prisma | sed -n '1093,1115p'

Repository: stack-auth/stack-auth

Length of output: 916


🏁 Script executed:

# Also check if there are any other models with similar patterns that might be using a position field
rg "@@unique.*position|position.*Int.*@default|@@index.*position" apps/backend/prisma/schema.prisma

Repository: stack-auth/stack-auth

Length of output: 47


Persist a stable message order instead of relying on createdAt.

Messages inserted with createMany() in a single batch will have identical or nearly-identical timestamps, making the orderBy: { createdAt: "asc" } query nondeterministic. The input array order is lost during insertion. Add an explicit position column indexed with conversationId to preserve deterministic ordering.

Suggested shape
 model AiMessage {
   id             String         @id @default(uuid()) @db.Uuid
   conversationId String         @db.Uuid
+  position       Int
   role           String
   content        Json
   createdAt      DateTime       @default(now())
   conversation   AiConversation @relation(fields: [conversationId], references: [id], onDelete: Cascade)

-  @@index([conversationId, createdAt(sort: Asc)])
+  @@unique([conversationId, position])
+  @@index([conversationId, position])
 }

Update insertions to include position:

- data: body.messages.map((msg) => ({
+ data: body.messages.map((msg, index) => ({
+   position: index,
    conversationId: conversation.id,
    role: msg.role,
    content: msg.content as object,
  })),
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:
model AiMessage {
  id             String         @id @default(uuid()) @db.Uuid
  conversationId String         @db.Uuid
  role           String
  content        Json
  createdAt      DateTime       @default(now())
  conversation   AiConversation @relation(fields: [conversationId], references: [id], onDelete: Cascade)

  @@index([conversationId, createdAt(sort: Asc)])
}

After:
model AiMessage {
  id             String         @id @default(uuid()) @db.Uuid
  conversationId String         @db.Uuid
  position       Int
  role           String
  content        Json
  createdAt      DateTime       @default(now())
  conversation   AiConversation @relation(fields: [conversationId], references: [id], onDelete: Cascade)

  @@unique([conversationId, position])
  @@index([conversationId, position])
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/prisma/schema.prisma` around lines 1105 - 1113, The AiMessage
model currently relies on createdAt for ordering which is nondeterministic for
batch inserts (createMany()); add an explicit integer position field to
AiMessage (e.g., position Int) and create a composite index on (conversationId,
position) to preserve stable ordering, then update any insertion code that uses
createMany() or individual message inserts to set position according to the
input array order (ensuring position increments per conversation) and change
queries that used orderBy: { createdAt: "asc" } to orderBy: { position: "asc" }
(keep conversationId and createdAt untouched aside from indexing).

Comment on lines +46 to +61
handler: async ({ auth, params, body }) => {
await getOwnedConversation(params.conversationId, auth.user.id);

await globalPrismaClient.aiMessage.deleteMany({
where: { conversationId: params.conversationId },
});

if (body.messages.length > 0) {
await globalPrismaClient.aiMessage.createMany({
data: body.messages.map((msg) => ({
conversationId: params.conversationId,
role: msg.role,
content: msg.content as object,
})),
});
}

⚠️ Potential issue | 🔴 Critical

Make the replace path atomic and touch the parent conversation.

A createMany() failure here leaves the conversation emptied after deleteMany(), and even successful replacements will not move the thread in the list because AiConversation.updatedAt never changes. Do the delete/insert/update in one transaction.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx`
around lines 46 - 61, The current handler calls getOwnedConversation then
deletes messages and separately calls createMany, which can leave the
conversation empty on failure and doesn't update AiConversation.updatedAt; wrap
the deleteMany + optional createMany + an update to touch the parent
conversation (update AiConversation.updatedAt for params.conversationId) inside
a single Prisma transaction (use globalPrismaClient.$transaction) so the
delete/insert/update are atomic and the parent conversation's updatedAt is
updated even when messages array is empty; keep getOwnedConversation as the
authorization check before the transaction and reference params.conversationId,
globalPrismaClient.aiMessage.deleteMany,
globalPrismaClient.aiMessage.createMany, and
globalPrismaClient.aiConversation.update (or the appropriate model name) when
implementing.
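
A sketch of the transactional shape, assuming the model names above (not runnable outside the backend; `content: msg.content as object` is kept from the current handler):

```typescript
// Sketch only: the authorization check stays outside the transaction.
await getOwnedConversation(params.conversationId, auth.user.id);

await globalPrismaClient.$transaction([
  globalPrismaClient.aiMessage.deleteMany({
    where: { conversationId: params.conversationId },
  }),
  ...(body.messages.length > 0 ? [
    globalPrismaClient.aiMessage.createMany({
      data: body.messages.map((msg) => ({
        conversationId: params.conversationId,
        role: msg.role,
        content: msg.content as object,
      })),
    }),
  ] : []),
  // Touch the parent so the thread moves in updatedAt-ordered lists.
  globalPrismaClient.aiConversation.update({
    where: { id: params.conversationId },
    data: { updatedAt: new Date() },
  }),
]);
```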

Comment on lines +77 to +85
body: yupObject({
title: yupString().defined(),
projectId: yupString().defined(),
messages: yupArray(
yupObject({
role: yupString().defined(),
content: yupMixed().defined(),
})
).defined(),

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cd /tmp && find . -type f -name "route.tsx" | grep -i "ai-conversations" | head -5

Repository: stack-auth/stack-auth

Length of output: 47


🏁 Script executed:

find . -type f -path "*internal/ai-conversations*" -name "*.tsx"

Repository: stack-auth/stack-auth

Length of output: 317


🏁 Script executed:

find . -type f -path "*ai-conversations*" -name "*.ts*" | head -10

Repository: stack-auth/stack-auth

Length of output: 368


🏁 Script executed:

fd -t f "route.tsx" apps/backend/src/app/api/latest/internal --max-one-result | head -5

Repository: stack-auth/stack-auth

Length of output: 298


🏁 Script executed:

fd -t d "ai-conversations" apps/backend/src/

Repository: stack-auth/stack-auth

Length of output: 123


🏁 Script executed:

fd -t d "internal" apps/backend/src/app/api/latest/

Repository: stack-auth/stack-auth

Length of output: 228


🏁 Script executed:

git ls-files "apps/backend/src/app/api/latest/internal/"

Repository: stack-auth/stack-auth

Length of output: 4831


🏁 Script executed:

cat -n apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx | sed -n '70,115p'

Repository: stack-auth/stack-auth

Length of output: 1670


🏁 Script executed:

wc -l apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx

Repository: stack-auth/stack-auth

Length of output: 136


🏁 Script executed:

cat -n apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx

Repository: stack-auth/stack-auth

Length of output: 4414


🏁 Script executed:

cat -n apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx

Repository: stack-auth/stack-auth

Length of output: 2706


🏁 Script executed:

wc -l apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx

Repository: stack-auth/stack-auth

Length of output: 161


🏁 Script executed:

fd -t f "schema.prisma" apps/backend

Repository: stack-auth/stack-auth

Length of output: 98


🏁 Script executed:

grep -A 5 -B 5 "aiMessage\|AiMessage" $(fd -t f "schema.prisma" apps/backend)

Repository: stack-auth/stack-auth

Length of output: 518


🏁 Script executed:

find . -path "*/prisma/schema.prisma" -o -path "*/schema.prisma" | head -3

Repository: stack-auth/stack-auth

Length of output: 100


🏁 Script executed:

rg -A 5 -B 5 "yupMixed" apps/backend/src/app/api/latest/internal/ai-conversations/

Repository: stack-auth/stack-auth

Length of output: 7205


🏁 Script executed:

grep -r "yupMixed" packages/stack-shared/dist/schema-fields.ts | head -5

Repository: stack-auth/stack-auth

Length of output: 141


🏁 Script executed:

cat $(find packages/stack-shared -name "*schema-fields*" -o -name "*schema*" | grep -i field | head -1)

Repository: stack-auth/stack-auth

Length of output: 50377


🏁 Script executed:

fd -t f "schema-fields" packages/stack-shared/

Repository: stack-auth/stack-auth

Length of output: 107


🏁 Script executed:

grep -A 20 "model AiMessage" apps/backend/prisma/schema.prisma

Repository: stack-auth/stack-auth

Length of output: 541


🏁 Script executed:

rg -B 5 -A 15 "AiMessage|aiMessage" apps/backend/prisma/schema.prisma

Repository: stack-auth/stack-auth

Length of output: 722


🏁 Script executed:

rg -r "AiMessage\|aiMessage" --type ts --type tsx apps/backend/src | grep -i "content\|schema" | head -20

Repository: stack-auth/stack-auth

Length of output: 92


🏁 Script executed:

rg "messages\[" apps/backend/src/app/api/latest/internal/ai-conversations/ -A 3 -B 3

Repository: stack-auth/stack-auth

Length of output: 47


🏁 Script executed:

rg "aiMessage\|AiMessage" apps/backend/src --type ts --type js | head -20

Repository: stack-auth/stack-auth

Length of output: 47


🏁 Script executed:

find apps/backend/src -type f -name "*.ts" -o -name "*.tsx" | xargs grep -l "aiMessage" | head -5

Repository: stack-auth/stack-auth

Length of output: 311


🏁 Script executed:

rg "role.*user|assistant|system" apps/backend/src/app/api/latest/internal/ai-conversations/ -i

Repository: stack-auth/stack-auth

Length of output: 47


🏁 Script executed:

grep -n "as object\|as any\|as unknown" apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx

Repository: stack-auth/stack-auth

Length of output: 110


🏁 Script executed:

grep -n "as object\|as any\|as unknown" apps/backend/src/app/api/latest/internal/ai-conversations/[conversationId]/messages/route.tsx

Repository: stack-auth/stack-auth

Length of output: 109


Tighten the request message schema to define expected role and content types.

The messages[*] schema accepts arbitrary yupMixed() for content and unvalidated strings for role, which requires downstream type casts (msg.content as object at lines 111 and 58). This violates the coding guidelines: "Do NOT use as/any/type casts or anything else to bypass the type system unless you specifically asked the user about it."

Define the exact persisted message structure here—for example, role should be constrained to known values (e.g., "user", "assistant", "system") and content should be validated as a specific type. Apply the same schema to the sibling PUT route at [conversationId]/messages/route.tsx to avoid duplication.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx` around
lines 77 - 85, The request body schema's messages array uses yupString() for
role and yupMixed() for content, causing downstream casts (e.g., the
"msg.content as object" usage); update the body: yupObject(...) messages schema
so role is constrained with
yupString().oneOf(["user","assistant","system"]).defined() and content is
validated as the exact persisted message object shape (replace yupMixed() with a
yupObject(...) that matches the stored message fields), and apply the identical
tightened schema to the sibling PUT handler in
[conversationId]/messages/route.tsx so callers and handlers no longer need
runtime type casts.
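
One possible tightened shape, using the project's schema-field helpers; the exact `content` fields are an assumption about what the UI persists (the `UIMessage` parts array):

```typescript
// Sketch only: shared between POST and the sibling PUT handler.
const messageSchema = yupObject({
  role: yupString().oneOf(["user", "assistant", "system"]).defined(),
  // Validate at least the discriminating `type` field of each part
  // instead of accepting arbitrary yupMixed() content.
  content: yupArray(
    yupObject({
      type: yupString().defined(),
    })
  ).defined(),
});
```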

Comment on lines +97 to +114
handler: async ({ auth, body }) => {
const conversation = await globalPrismaClient.aiConversation.create({
data: {
projectUserId: auth.user.id,
title: body.title,
projectId: body.projectId,
},
});

if (body.messages.length > 0) {
await globalPrismaClient.aiMessage.createMany({
data: body.messages.map((msg) => ({
conversationId: conversation.id,
role: msg.role,
content: msg.content as object,
})),
});
}

⚠️ Potential issue | 🟠 Major

Create the conversation and its initial messages atomically.

If aiMessage.createMany() fails after the conversation row is inserted, the request returns an error but leaves a partial conversation behind. A retry can then create duplicates. Use a transaction or a nested relation write here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/app/api/latest/internal/ai-conversations/route.tsx` around
lines 97 - 114, The code creates an aiConversation and then calls
globalPrismaClient.aiMessage.createMany separately, which can leave a dangling
conversation if message insertion fails; change this to an atomic operation by
either wrapping both calls in a Prisma transaction (use
globalPrismaClient.$transaction([ ... ]) to create the conversation and create
messages) or perform a nested create in the aiConversation.create call (include
the relation field and use messages: { create: body.messages.map(m => ({ role:
m.role, content: m.content })) } so messages are created as part of the same
write). Update the handler to use one of these approaches instead of separate
aiConversation.create and aiMessage.createMany calls.
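
The dangling-row failure mode described above can be reproduced with a toy in-memory store (the Store class below is a hypothetical stand-in, not the Prisma API; in the real handler, prisma.$transaction or a nested relation write supplies the rollback behavior):

```typescript
type Conversation = { id: string };
type Message = { conversationId: string; content: string };

class Store {
  conversations: Conversation[] = [];
  messages: Message[] = [];

  // Non-atomic: the conversation row survives even when message insertion throws.
  createNonAtomic(id: string, msgs: string[], failMessages: boolean): void {
    this.conversations.push({ id });
    if (failMessages) throw new Error("createMany failed");
    for (const content of msgs) this.messages.push({ conversationId: id, content });
  }

  // Atomic: restore the snapshot if message insertion throws,
  // mimicking what a transaction would guarantee.
  createAtomic(id: string, msgs: string[], failMessages: boolean): void {
    const snapshot = { c: [...this.conversations], m: [...this.messages] };
    try {
      this.createNonAtomic(id, msgs, failMessages);
    } catch (e) {
      this.conversations = snapshot.c;
      this.messages = snapshot.m;
      throw e;
    }
  }
}

const store1 = new Store();
try { store1.createNonAtomic("a", ["hi"], true); } catch {}
const danglingCount = store1.conversations.length; // partial state left behind

const store2 = new Store();
try { store2.createAtomic("a", ["hi"], true); } catch {}
const rolledBackCount = store2.conversations.length; // nothing persisted
```

A retry against store1 would now create a second conversation for the same chat, which is exactly the duplicate the review comment warns about.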

Comment on lines +408 to +412
export function getToolInvocations(message: UIMessage): ToolInvocationPart[] {
return message.parts
.filter((part) => part.type.startsWith("tool-"))
.map((part) => part as unknown as ToolInvocationPart);
}

⚠️ Potential issue | 🟡 Minor

Double type cast bypasses type safety.

The as unknown as ToolInvocationPart pattern completely bypasses TypeScript's type checking. If the actual structure of part differs from ToolInvocationPart, this will cause runtime errors.

Consider using a type guard or runtime validation instead.

Proposed type guard approach
+function isToolInvocationPart(part: unknown): part is ToolInvocationPart {
+  return (
+    typeof part === "object" &&
+    part !== null &&
+    "type" in part &&
+    typeof (part as { type: unknown }).type === "string" &&
+    (part as { type: string }).type.startsWith("tool-") &&
+    "toolCallId" in part &&
+    "state" in part &&
+    "input" in part
+  );
+}

 export function getToolInvocations(message: UIMessage): ToolInvocationPart[] {
   return message.parts
     .filter((part) => part.type.startsWith("tool-"))
-    .map((part) => part as unknown as ToolInvocationPart);
+    .filter(isToolInvocationPart);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard/src/components/commands/ai-chat-shared.tsx` around lines 408 -
412, The getToolInvocations function uses a double cast (as unknown as
ToolInvocationPart) which bypasses TypeScript safety; replace this with a proper
type guard or runtime validation: add a predicate like
isToolInvocationPart(part): part is ToolInvocationPart (checking required
properties/shape of ToolInvocationPart) and use message.parts.filter((part):
part is ToolInvocationPart => part.type.startsWith("tool-") &&
isToolInvocationPart(part)) then return the filtered array directly; reference
getToolInvocations, UIMessage, and ToolInvocationPart when adding the guard.
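
Exercised standalone (with local stand-in types, not the actual ai-chat-shared.tsx exports), the proposed guard behaves like this:

```typescript
// Local stand-ins; the real UIMessage/ToolInvocationPart live in ai-chat-shared.tsx.
type Part = { type: string; [key: string]: unknown };
type ToolInvocationPart = { type: string; toolCallId: string; state: string; input: unknown };

function isToolInvocationPart(part: unknown): part is ToolInvocationPart {
  return (
    typeof part === "object" &&
    part !== null &&
    typeof (part as Part).type === "string" &&
    (part as Part).type.startsWith("tool-") &&
    "toolCallId" in (part as object) &&
    "state" in (part as object) &&
    "input" in (part as object)
  );
}

const parts: Part[] = [
  { type: "text", text: "hello" },
  { type: "tool-search", toolCallId: "1", state: "result", input: {} },
  { type: "tool-broken" }, // starts with "tool-" but lacks the required fields
];

const invocations = parts.filter(isToolInvocationPart);
```

Unlike the double cast, the malformed "tool-broken" part is filtered out instead of flowing downstream as a fake ToolInvocationPart.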

Comment on lines +185 to +201
useEffect(() => {
if (!projectId) return;
runAsynchronously(async () => {
const conversations = await listConversations(currentUser, projectId);
if (conversations.length > 0) {
const conv = await getConversation(currentUser, conversations[0].id);
const initialMessages: UIMessage[] = conv.messages.map((msg) => ({
id: msg.id,
role: msg.role as "user" | "assistant",
parts: msg.content as UIMessage["parts"],
}));
setViewMode({ view: 'chat', conversationId: conversations[0].id, initialMessages });
setConversationKey(prev => prev + 1);
}
});
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);

⚠️ Potential issue | 🟠 Major

Missing projectId dependency may cause stale data after navigation.

The effect loads the most recent conversation on mount but omits projectId from dependencies. If the user navigates to a different project without remounting this component, the chat will display conversations from the previous project.

Either add projectId to the dependency array or verify the component remounts on project changes.

Proposed fix
   useEffect(() => {
     if (!projectId) return;
     runAsynchronously(async () => {
       const conversations = await listConversations(currentUser, projectId);
       if (conversations.length > 0) {
         const conv = await getConversation(currentUser, conversations[0].id);
         const initialMessages: UIMessage[] = conv.messages.map((msg) => ({
           id: msg.id,
           role: msg.role as "user" | "assistant",
           parts: msg.content as UIMessage["parts"],
         }));
         setViewMode({ view: 'chat', conversationId: conversations[0].id, initialMessages });
         setConversationKey(prev => prev + 1);
       }
     });
-  // eslint-disable-next-line react-hooks/exhaustive-deps
-  }, []);
+  }, [projectId, currentUser]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx` around
lines 185 - 201, The effect that fetches the latest conversation (inside
useEffect) omits projectId from its dependency array, causing stale
conversations when the user switches projects; update the dependency list to
include projectId (and any other used values like currentUser or
runAsynchronously if not stable) so the effect re-runs on project change,
ensuring listConversations/currentUser/getConversation, setViewMode, and
setConversationKey operate against the correct project; alternatively ensure the
component remounts on project change, but the simplest fix is to add projectId
to the useEffect dependencies.

Comment on lines +191 to +195
const initialMessages: UIMessage[] = conv.messages.map((msg) => ({
id: msg.id,
role: msg.role as "user" | "assistant",
parts: msg.content as UIMessage["parts"],
}));

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if there's existing validation patterns or Zod usage in the codebase
rg -n "from 'zod'" --type=ts -g '!node_modules/**' | head -20
rg -nP "\.parse\(|\.safeParse\(" --type=ts apps/dashboard/src/ | head -10

Repository: stack-auth/stack-auth

Length of output: 50378


🏁 Script executed:

cat -n apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx | sed -n '175,210p'

Repository: stack-auth/stack-auth

Length of output: 1803


🏁 Script executed:

# Find where messages come from and what type the conv object is
rg -n "useQuery|useApiCall|fetch.*conversation" apps/dashboard/src/components/stack-companion/ -A 3

Repository: stack-auth/stack-auth

Length of output: 47


🏁 Script executed:

# Check the API endpoint that provides conversation data
fd -t f "conversation" apps/backend/src/ | grep -E "\.(ts|tsx)$" | head -10

Repository: stack-auth/stack-auth

Length of output: 47


🏁 Script executed:

# Look for validation or schema definitions for conversation messages
rg -n "message.*schema|Message.*type" apps/dashboard/src/ -A 2 | head -40

Repository: stack-auth/stack-auth

Length of output: 50380


🏁 Script executed:

rg -n "function getConversation|const getConversation|export.*getConversation" apps/dashboard/src/ -A 5

Repository: stack-auth/stack-auth

Length of output: 623


🏁 Script executed:

# Find type definitions for UIMessage and conversation data
rg -n "type UIMessage|interface UIMessage|type.*Message.*=" apps/dashboard/src/ -B 1 -A 3 | head -40

Repository: stack-auth/stack-auth

Length of output: 50380


🏁 Script executed:

# Search for where messages are coming from in the conversation
rg -n "messages" apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx -B 2 -A 2

Repository: stack-auth/stack-auth

Length of output: 5229


🏁 Script executed:

# Look for the data fetching utilities
fd -t f "api\|fetch\|client" apps/dashboard/src/ -name "*.ts" -o -name "*.tsx" | grep -E "(api|fetch|client)" | head -15

Repository: stack-auth/stack-auth

Length of output: 295


🏁 Script executed:

rg -n "type ConversationDetail|interface ConversationDetail" apps/dashboard/src/ -A 10

Repository: stack-auth/stack-auth

Length of output: 809


🏁 Script executed:

cat -n apps/dashboard/src/hooks/use-ai-conversations.ts | sed -n '1,100p'

Repository: stack-auth/stack-auth

Length of output: 3697


🏁 Script executed:

# Check if there's any validation or schema for the API response
rg -n "ConversationDetail|AIMessage|ConversationMessage" apps/dashboard/src/hooks/use-ai-conversations.ts -B 2 -A 5

Repository: stack-auth/stack-auth

Length of output: 791


🏁 Script executed:

rg -n "type UIMessage|interface UIMessage" apps/dashboard/src/ -B 2 -A 8 | head -40

Repository: stack-auth/stack-auth

Length of output: 2798


🏁 Script executed:

# Check if there's any validation applied in the hooks or elsewhere
rg -n "safeParse|parse|zod" apps/dashboard/src/hooks/use-ai-conversations.ts

Repository: stack-auth/stack-auth

Length of output: 47


Remove unsafe type casts or improve type definitions.

The msg.role and msg.content are cast from loosely-typed API response data without validation. The ConversationDetail.messages type defines role as string and content as unknown, so casting these to "user" | "assistant" and UIMessage["parts"] makes unsafe assumptions.

Either narrow the type definition in ConversationDetail to match what the API actually returns:

messages: Array<{
  id: string,
  role: "user" | "assistant",
  content: UIMessage["parts"],
}>

Or validate the data at the source before casting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx` around
lines 191 - 195, The code in initialMessages maps conv.messages using unsafe
casts for msg.role and msg.content; update the ConversationDetail.messages type
to reflect the real API shape (role: "user" | "assistant" and content:
UIMessage["parts"]) or add runtime validation before mapping (validate msg.role
is "user"|"assistant" and that msg.content conforms to UIMessage["parts"]) and
only then construct initialMessages from conv.messages to avoid unsafe casts in
the initialMessages mapping.
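
A minimal runtime-validation sketch (the RawMessage/SafeMessage types below are local stand-ins for the shapes in use-ai-conversations.ts, and the parts check is intentionally shallow):

```typescript
type RawMessage = { id: string; role: string; content: unknown };
type SafeMessage = { id: string; role: "user" | "assistant"; parts: unknown[] };

// Narrow role and content at runtime instead of casting; drop anything malformed.
function toSafeMessage(msg: RawMessage): SafeMessage | null {
  if (msg.role === "user" || msg.role === "assistant") {
    if (Array.isArray(msg.content)) {
      return { id: msg.id, role: msg.role, parts: msg.content };
    }
  }
  return null;
}

const raw: RawMessage[] = [
  { id: "1", role: "user", content: [{ type: "text", text: "hi" }] },
  { id: "2", role: "system", content: [] },        // unexpected role -> dropped
  { id: "3", role: "assistant", content: "oops" }, // malformed content -> dropped
];

const initialMessages = raw
  .map(toSafeMessage)
  .filter((m): m is SafeMessage => m !== null);
```

Whether to drop invalid rows (as here) or throw is a product decision; either way the unsafe casts disappear.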

Comment on lines +352 to +368
runAsynchronously(async () => {
try {
if (conversationIdRef.current) {
await replaceConversationMessages(currentUser, conversationIdRef.current, messagesToSave);
} else if (projectId) {
const result = await createConversation(currentUser, {
title,
projectId,
messages: messagesToSave,
});
conversationIdRef.current = result.id;
onConversationCreated(result.id);
}
} finally {
isSavingRef.current = false;
}
});

⚠️ Potential issue | 🟡 Minor

Silent failure when conversation save fails.

If createConversation or replaceConversationMessages throws, the error propagates to runAsynchronously but the user receives no feedback that their conversation wasn't saved. Consider using runAsynchronouslyWithAlert or adding user-visible error handling to prevent silent data loss.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard/src/components/stack-companion/ai-chat-widget.tsx` around
lines 352 - 368, The save block using runAsynchronously currently lets errors
from createConversation or replaceConversationMessages fail silently; update the
inner async to catch errors (surround the await calls for
replaceConversationMessages/createConversation with try/catch), call a
user-visible notifier (e.g., runAsynchronouslyWithAlert or the app's toast/alert
mechanism) inside the catch with a clear message including the error, still set
isSavingRef.current = false in finally, and keep the existing logic that sets
conversationIdRef.current and calls onConversationCreated(result.id) only on
success.
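
A synchronous sketch of the feedback pattern (the real save path is async and wrapped in runAsynchronously; notify is a hypothetical stand-in for the app's toast/alert helper):

```typescript
const alerts: string[] = [];
function notify(message: string): void {
  alerts.push(message);
}

// Run a save, surface any failure to the user, and report success to the caller.
function saveWithFeedback(save: () => void): boolean {
  try {
    save();
    return true;
  } catch (e) {
    notify(`Failed to save conversation: ${e instanceof Error ? e.message : String(e)}`);
    return false;
  }
}

const succeeded = saveWithFeedback(() => {
  throw new Error("network down");
});
```

The key point is that the caller only advances its state (conversationIdRef, onConversationCreated) on the success branch, while the failure branch always produces something user-visible.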

Comment on lines +67 to +73
export async function getConversation(
currentUser: CurrentUser | undefined | null,
conversationId: string,
): Promise<ConversationDetail> {
const response = await apiFetch(currentUser, `/${conversationId}`);
return await response.json();
}

⚠️ Potential issue | 🟡 Minor

Validate conversationId format before path interpolation.

The conversationId is directly interpolated into the URL path. While unlikely with UUIDs, an attacker-controlled ID containing path traversal characters (e.g., ../) could potentially manipulate the request path. Consider validating the format or using URL encoding.

Proposed defensive fix
 export async function getConversation(
   currentUser: CurrentUser | undefined | null,
   conversationId: string,
 ): Promise<ConversationDetail> {
-  const response = await apiFetch(currentUser, `/${conversationId}`);
+  const response = await apiFetch(currentUser, `/${encodeURIComponent(conversationId)}`);
   return await response.json();
 }

Apply the same pattern to updateConversationTitle, replaceConversationMessages, and deleteConversation.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-export async function getConversation(
-  currentUser: CurrentUser | undefined | null,
-  conversationId: string,
-): Promise<ConversationDetail> {
-  const response = await apiFetch(currentUser, `/${conversationId}`);
-  return await response.json();
-}
+export async function getConversation(
+  currentUser: CurrentUser | undefined | null,
+  conversationId: string,
+): Promise<ConversationDetail> {
+  const response = await apiFetch(currentUser, `/${encodeURIComponent(conversationId)}`);
+  return await response.json();
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard/src/hooks/use-ai-conversations.ts` around lines 67 - 73, The
code interpolates conversationId directly into request paths (see
getConversation, updateConversationTitle, replaceConversationMessages,
deleteConversation), which risks path traversal if an attacker supplies
malicious input; fix by validating or encoding the id before using it in the
URL: either enforce a strict format (e.g., UUID regex) and throw on invalid
values, or always URL-encode conversationId with encodeURIComponent() and use
the encoded value in apiFetch; apply the same validation/encoding pattern
consistently across the listed functions.
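
The encoding approach can be verified in isolation (the path prefix below is illustrative, not the actual apiFetch base path):

```typescript
// encodeURIComponent neutralizes path-separator characters before the id
// is interpolated into the request path.
function conversationPath(conversationId: string): string {
  return `/api/ai-conversations/${encodeURIComponent(conversationId)}`;
}

const benign = conversationPath("123e4567-e89b-12d3-a456-426614174000");
const hostile = conversationPath("../admin");
// The "/" in "../admin" is percent-encoded as %2F, so the hostile id
// can no longer escape its path segment.
```

UUIDs contain only unreserved characters, so encoding is a no-op for legitimate ids and purely defensive for everything else.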

@aadesh18 aadesh18 changed the base branch from dev to ai-in-stack-companion March 28, 2026 00:54
@aadesh18 aadesh18 changed the base branch from ai-in-stack-companion to dev March 28, 2026 00:55
@aadesh18 aadesh18 changed the base branch from dev to ai-in-stack-companion March 28, 2026 00:55
@aadesh18 aadesh18 changed the base branch from ai-in-stack-companion to dev March 28, 2026 00:55
if (!projectId) {
  setLoading(false);
  return;
}

Missing error handling in ConversationList causes API failures to silently show "No conversations yet" instead of an error message.

Fix on Vercel

Comment on lines +49 to +61
await globalPrismaClient.aiMessage.deleteMany({
where: { conversationId: params.conversationId },
});

if (body.messages.length > 0) {
await globalPrismaClient.aiMessage.createMany({
data: body.messages.map((msg) => ({
conversationId: params.conversationId,
role: msg.role,
content: msg.content as object,
})),
});
}

Suggested change
-await globalPrismaClient.aiMessage.deleteMany({
-  where: { conversationId: params.conversationId },
-});
-
-if (body.messages.length > 0) {
-  await globalPrismaClient.aiMessage.createMany({
-    data: body.messages.map((msg) => ({
-      conversationId: params.conversationId,
-      role: msg.role,
-      content: msg.content as object,
-    })),
-  });
-}
+await globalPrismaClient.$transaction(async (tx) => {
+  await tx.aiMessage.deleteMany({
+    where: { conversationId: params.conversationId },
+  });
+  if (body.messages.length > 0) {
+    await tx.aiMessage.createMany({
+      data: body.messages.map((msg) => ({
+        conversationId: params.conversationId,
+        role: msg.role,
+        content: msg.content as object,
+      })),
+    });
+  }
+});

Non-atomic deleteMany + createMany operations can leave a conversation with zero messages if createMany fails after deleteMany succeeds.

Fix on Vercel
