`vercel logs --since <window> --json --limit N` returns duplicate entries once there are more than 50 logs in the time window. The first page (50 records) is fetched correctly, but every subsequent page returns that same first page again, and the CLI faithfully accumulates the duplicates: the first 50 results repeat until they fill `--limit`.
### Minimal reproduction
The exact `--since` windows below depend on how many logs your project emitted; tune them so that the second case crosses the 50-log page boundary.
```shell
# Window is small enough to fit in one page → no duplicates
$ vercel logs --since 7min --json --limit 100 \
  | awk '{n++; u[$0]++} END {print "total:", n; print "unique:", length(u)}'
total: 28
unique: 28

# Window crosses the page boundary → 50 unique entries returned twice
$ vercel logs --since 10min --json --limit 100 \
  | awk '{n++; u[$0]++} END {print "total:", n; print "unique:", length(u)}'
total: 100
unique: 50

# Any sufficiently large --since window reproduces this
$ vercel logs --since 12h --json --limit 100 \
  | awk '{n++; u[$0]++} END {print "total:", n; print "unique:", length(u)}'
total: 100
unique: 50
```
Duplicates are byte-identical — same id, same timestamp, same everything.
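Because the duplicates are exact copies, a client-side filter is a workable stopgap until the server is fixed (a sketch, not a real fix; `awk '!seen[$0]++'` simply keeps the first occurrence of each distinct line, preserving order):

```shell
# Stopgap: drop byte-identical repeated lines client-side.
# awk keeps the first occurrence of each distinct line, in order.
vercel logs --since 12h --json --limit 100 | awk '!seen[$0]++'
```

This only works because each duplicate is a byte-for-byte copy; if the pages were merely overlapping, the filter would need to key on the log `id` field instead of the whole line.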
### Expected behaviour
The API should return the next page of results when `?page=N` is incremented, and the CLI should return distinct logs: if there are more than 50 logs in the window, `--limit 100` should yield between 51 and 100 distinct entries.
### Assumption about the root cause
The request-logs endpoint (`/api/logs/?page=N&...`) appears to ignore `page=N` and returns the same first page each time, while still reporting `pagination.hasMore: true`.

The CLI loop that drives this, `fetchAllRequestLogs`, calls `fetchRequestLogs(..., { ...options, page })` with `page = 0, 1, 2, ...` and trusts `response.pagination.hasMore`. The pagination logic in the CLI itself looks correct; I suspect the server side is the issue.
```typescript
export async function* fetchAllRequestLogs(
  client: Client,
  options: FetchRequestLogsOptions
): AsyncGenerator<RequestLogEntry> {
  let page = 0;
  let remaining = options.limit ?? 100;
  let hasMore = true;

  while (hasMore && remaining > 0) {
    const response = await fetchRequestLogs(client, {
      ...options,
      page,
    });

    if (!response.logs || response.logs.length === 0) {
      break;
    }

    for (const log of response.logs) {
      yield log;
      remaining--;
      if (remaining <= 0) {
        return;
      }
    }

    hasMore = response.pagination?.hasMore ?? false;
    page++;
  }
}
```
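Since the bug manifests as pages that only echo already-seen entries, the CLI loop could also defend itself: drop entries it has yielded before and bail out when a page contributes nothing new. A minimal sketch, with simplified stand-in types — `Page` and `fetchPage` are hypothetical, not the real `fetchRequestLogs`, and a synchronous generator stands in for the async one:

```typescript
// Simplified stand-ins for the real CLI types (assumptions, not the actual API).
interface LogEntry { id: string; }
interface Page { logs: LogEntry[]; hasMore: boolean; }

// Yields at most `limit` distinct entries; stops when the server keeps
// replaying entries it has already returned (the failure mode above).
function* dedupedLogs(fetchPage: (page: number) => Page, limit: number): Generator<LogEntry> {
  const seen = new Set<string>();
  let page = 0;
  let remaining = limit;
  while (remaining > 0) {
    const { logs, hasMore } = fetchPage(page);
    if (logs.length === 0) return;
    const fresh = logs.filter(l => !seen.has(l.id));
    // Page contained only already-seen entries: pagination is broken, bail out.
    if (fresh.length === 0) return;
    for (const log of fresh) {
      seen.add(log.id);
      yield log;
      if (--remaining <= 0) return;
    }
    if (!hasMore) return;
    page++;
  }
}
```

Against a backend that genuinely paginates, the guard never triggers; against the behaviour reported here, it stops after the first 50 distinct entries instead of padding out `--limit` with duplicates.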
### Environment
- Vercel CLI: 53.1.0 (also 53.3.2)
- Node: 22.x
- macOS
(The `fetchAllRequestLogs` snippet above is from `vercel/packages/cli/src/util/logs-v2.ts`, lines 243 to 272 in `5b4ece0`.)