Build A Custom AI Chat Application With Next.js
In recent years, AI chatbots powered by large language models such as OpenAI’s GPT have changed how users interact with applications. To provide specialized, context-aware responses, however, these models often need to be fine-tuned on your own dataset. This article walks through building a custom AI chat application with Next.js and fine-tuning GPT on your own data.
1. What is Next.js?
Next.js is a powerful React framework used for building modern, scalable, and production-ready web applications. It enhances the core capabilities of React by providing features such as server-side rendering (SSR), static site generation (SSG), API routes, image optimization, routing, data fetching mechanisms, and more—all built into a single unified framework.
Next.js is widely adopted because it offers a fast, productive developer experience and allows applications to be deployed easily on platforms such as Vercel, which was created by the team behind Next.js. Its file-based routing and hybrid rendering model make it a good fit for web apps, AI tools, e-commerce platforms, dashboards, and enterprise-grade products.
When building AI applications—especially those that require real-time interactions like chatbots—Next.js becomes particularly valuable. Its built-in API routes allow seamless backend integration with services such as OpenAI APIs, vector databases, and custom fine-tuned models. Using serverless functions, developers can securely handle model calls, manage sessions, process user requests, and generate responses without setting up a separate backend server.
In the context of creating AI-driven chat applications, Next.js offers:
- Fast server-rendered UI updates for smooth chat interactions.
- Secure server-side API endpoints to call fine-tuned GPT models.
- Easy deployment and scaling for handling high traffic.
- Automatic optimizations for JavaScript, images, caching, and streaming responses.
Overall, Next.js provides the perfect balance of frontend flexibility and backend power, making it a top choice for developers looking to build robust, high-performance AI chat applications.
2. Code Example
2.1 Dataset Preparation
Fine-tuning GPT requires your data to be formatted in a specific way, typically as a JSONL (JSON Lines) file containing prompt-completion pairs. Here’s a sample dataset for a customer support chatbot:
{"prompt": "Q: How do I reset my password?\nA:", "completion": " You can reset your password by clicking the 'Forgot Password' link on the login page."}
{"prompt": "Q: What are your business hours?\nA:", "completion": " Our business hours are Monday to Friday, 9 AM to 6 PM."}
2.2 Fine-Tuning GPT Model
You can use the OpenAI CLI or API to fine-tune the model. Here’s a brief example using the OpenAI CLI:
openai api fine_tunes.create -t "training_data.jsonl" -m "davinci" --suffix "custom-chatbot"
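Fine-tuning jobs can take a while. With the same legacy CLI you can stream a job's progress or list your jobs; the job ID is printed when the job is created. The commands below are a sketch of the legacy CLI's subcommands — check openai --help on your installed version:
openai api fine_tunes.follow -i <FINE_TUNE_JOB_ID>
openai api fine_tunes.list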
After fine-tuning completes, note the fine-tuned model ID (e.g., davinci:ft-your-org-2025-11-20-12-00-00), which you will use in your Next.js application. The prompt/completion dataset format and CLI command above apply to legacy completion models such as davinci; newer chat models (for example gpt-4 or gpt-5) are fine-tuned with a chat-style dataset in which each line contains a messages array, as sketched below.
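For reference, here is the first support question above rewritten in the chat-style JSONL format expected by newer chat models (the system message wording is an assumption you would tailor to your bot):
{"messages": [{"role": "system", "content": "You are a customer support assistant."}, {"role": "user", "content": "How do I reset my password?"}, {"role": "assistant", "content": "You can reset your password by clicking the 'Forgot Password' link on the login page."}]}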
2.3 Next.js Application Setup
We’ll create a simple chat interface using Next.js API routes to communicate with the fine-tuned GPT model.
2.3.1 Project Initialization
To begin building your custom AI chat application with Next.js, start by creating a new Next.js project and installing the required dependencies.
npx create-next-app@latest custom-ai-chat
cd custom-ai-chat
npm install openai
2.3.2 Create Environment Variables
Create a .env.local file with your OpenAI API key and fine-tuned model ID:
OPENAI_API_KEY=your_openai_api_key
OPENAI_MODEL_ID=davinci:ft-your-org-2025-11-20-12-00-00
Together, these steps leave you with a ready-to-build environment: create-next-app scaffolds a fresh project named custom-ai-chat with a clean structure, npm install openai adds the official SDK used to talk to your fine-tuned GPT model, and .env.local holds the API key and model ID so they are read only on the server (Next.js does not expose environment variables to the browser unless they are prefixed with NEXT_PUBLIC_). From here you can add the server-side API route, the frontend component, and the AI-powered chat features.
2.3.3 Create an API Route
Create the route file at pages/api/chat.js (with the pages router, files under pages/api are served at /api/*, so the frontend will call /api/chat). It processes user messages by sending them to the fine-tuned GPT model and returning the generated response to the frontend. Note that this code uses the v3-style openai Node SDK (Configuration and OpenAIApi were removed in v4); if you installed the current package, see the v4-client variant sketched after the code walkthrough below.
import { Configuration, OpenAIApi } from "openai";
const configuration = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
export default async function handler(req, res) {
if (req.method !== "POST") {
return res.status(405).json({ message: "Method not allowed" });
}
const { message } = req.body;
if (!message) {
return res.status(400).json({ message: "Message is required" });
}
try {
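// Build the prompt in the same Q:/A: format used in the training data, then query the fine-tuned model.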
const prompt = `Q: ${message}\nA:`;
const completion = await openai.createCompletion({
model: process.env.OPENAI_MODEL_ID,
prompt,
max_tokens: 150,
temperature: 0.7,
stop: ["\n", "Q:"],
});
const answer = completion.data.choices[0].text.trim();
res.status(200).json({ answer });
} catch (error) {
console.error("OpenAI error:", error);
res.status(500).json({ message: "Internal server error" });
}
}
This code defines a POST-only API route in Next.js. It initializes the OpenAI client with your API key, validates the incoming request, constructs a prompt from the user’s message, and sends it to your fine-tuned GPT model with createCompletion(). Once the model generates a response, the handler extracts the answer text, trims it, and returns it as JSON to the frontend, while handling incorrect HTTP methods, missing input, and internal server errors gracefully.
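If you are using version 4 or later of the openai package (what a plain npm install openai pulls in today), the client setup and call differ slightly. A minimal sketch of the equivalent request with the v4 client, assuming the rest of the handler stays the same:
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Same legacy completions endpoint, called through the v4 client; the response
// object exposes choices directly instead of under a .data property.
const completion = await openai.completions.create({
  model: process.env.OPENAI_MODEL_ID,
  prompt,
  max_tokens: 150,
  temperature: 0.7,
  stop: ["\n", "Q:"],
});
const answer = completion.choices[0].text.trim();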
2.3.4 Create a Frontend Chat Component
This React component, which replaces the default pages/index.js under the pages router, renders the chat interface, captures user input, sends messages to the backend API, and displays both user and bot responses in a simple conversational layout.
import { useState } from "react";
export default function Home() {
const [input, setInput] = useState("");
const [chat, setChat] = useState([]);
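// Send the current input to the API route and append both the user message and the bot's reply (or an error message) to the chat history.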
const sendMessage = async () => {
if (!input.trim()) return;
const userMessage = { sender: "user", text: input };
setChat([...chat, userMessage]);
setInput("");
try {
const res = await fetch("/api/chat", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ message: input }),
});
const data = await res.json();
if (res.ok) {
const botMessage = { sender: "bot", text: data.answer };
setChat((prev) => [...prev, botMessage]);
} else {
throw new Error(data.message || "Error from server");
}
} catch (error) {
const errorMessage = { sender: "bot", text: "Sorry, something went wrong." };
setChat((prev) => [...prev, errorMessage]);
}
};
return (
<div style={{ maxWidth: 600, margin: "2rem auto", padding: "1rem" }}>
<h1>Custom AI Chatbot</h1>
<div
style={{
border: "1px solid #ccc",
borderRadius: 5,
padding: "1rem",
height: "400px",
overflowY: "auto",
marginBottom: "1rem",
backgroundColor: "#fafafa",
}}
>
{chat.length === 0 && <p>Start chatting by typing a message below.</p>}
{chat.map((msg, idx) => (
<p
key={idx}
style={{
textAlign: msg.sender === "user" ? "right" : "left",
margin: "0.5rem 0",
color: msg.sender === "user" ? "#2980b9" : "#27ae60",
}}
>
{msg.text}
</p>
))}
</div>
<input
type="text"
value={input}
onChange={(e) => setInput(e.target.value)}
onKeyDown={(e) => {
if (e.key === "Enter") sendMessage();
}}
placeholder="Type your message..."
style={{ width: "100%", padding: "0.5rem", fontSize: "1rem" }}
/>
<button
onClick={sendMessage}
style={{
marginTop: "0.5rem",
width: "100%",
padding: "0.75rem",
backgroundColor: "#2980b9",
color: "white",
border: "none",
borderRadius: "5px",
fontSize: "1rem",
cursor: "pointer",
}}
>
Send
</button>
</div>
);
}
This component uses React’s useState hooks to store the current input and the full chat history, updates the UI as soon as the user submits a message, and sends the message to the /api/chat endpoint for processing. Once the fine-tuned GPT model returns a response, the component appends the bot’s reply to the chat list and displays messages with simple left/right alignment based on the sender. It also includes basic error handling, input clearing, and Enter-key support for a smooth, responsive chat experience.
2.3.5 Code Run and Output
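Start the development server (the default script created by create-next-app) and open http://localhost:3000 in your browser:
npm run dev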
Below is a sample interaction showing how the user and the AI model exchange messages in the chat interface.
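With the customer-support training data from section 2.1, an exchange might look like this (illustrative output; the exact wording depends on your fine-tuned model):
User: How do I reset my password?
Bot: You can reset your password by clicking the 'Forgot Password' link on the login page.
User: What are your business hours?
Bot: Our business hours are Monday to Friday, 9 AM to 6 PM.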

