Description
Describe the Bug
When scraping a webpage that contains no visible or extractable content, the Firecrawl API returns a 500 status code with the following response:
{
"success": false,
"code": "SCRAPE_ALL_ENGINES_FAILED",
"error": "All scraping engines failed! -- Double check the URL to make sure it's not broken. If the issue persists, contact us at help@firecrawl.com."
}
This is misleading: the request is valid and the target page is accessible — there is simply nothing to scrape. A 500 implies a server-side failure, which is not what actually happened.
To Reproduce
Steps to reproduce the issue:
- Use the /scrape endpoint with a valid but empty webpage URL (e.g. a blank HTML page).
- Example request: POST /scrape
{
"url": "https://example.com/empty-page"
}
- Observe that the response is a 500 error with code SCRAPE_ALL_ENGINES_FAILED.
Expected Behavior
Instead of returning a 500, the API should:
- Return a successful response with empty content (e.g. { "content": "" }), or
- Return a 204 (No Content) or 4xx-level response that accurately indicates there’s nothing to scrape.
This helps distinguish between true scraping engine failures and pages that are empty.
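The proposed mapping above can be sketched as a small helper. This is a hypothetical illustration of the desired behavior, not Firecrawl's actual server code; the function name and response shapes are assumptions modeled on the JSON in this report.

```python
# Hypothetical status mapping (not Firecrawl's implementation): an
# empty-but-reachable page yields a 200 with empty content, while a
# genuine engine failure keeps the existing 500 response.

def build_scrape_response(content, engines_failed=False):
    """Map a scrape outcome to an (http_status, body) pair."""
    if engines_failed:
        # A real engine failure still surfaces as a 500.
        return 500, {"success": False, "code": "SCRAPE_ALL_ENGINES_FAILED"}
    # An empty page is a successful scrape with no content.
    return 200, {"success": True, "data": {"content": content or ""}}
```

With this mapping, a blank HTML page would return `200` with `{"content": ""}`, and clients could reserve the 500 path for true engine failures.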
Screenshots
N/A — see JSON response above.
Environment (please complete the following information):
- OS: macOS Tahoe 26.0.1
- Deployment Type: Cloud (firecrawl.dev)
- Firecrawl Version: Not applicable (using hosted API)
- Node.js Version: Not applicable (tested via Postman)
Logs
If needed, I can provide the request/response logs showing the 500 and the URL used.
Additional Context
This behavior causes false positives for “engine failure” conditions in client applications that expect Firecrawl to handle empty pages gracefully.
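Until the server-side behavior changes, clients can avoid the false positive with a small classifier. This is a workaround sketch assuming the response shapes shown above; the function name is hypothetical.

```python
# Client-side workaround (assumes the response shapes shown in this
# report): treat SCRAPE_ALL_ENGINES_FAILED as "possibly an empty page"
# rather than an unconditional engine failure.

def classify_scrape_result(status_code, body):
    """Return 'ok', 'maybe_empty_page', or 'error' for a /scrape response."""
    if status_code == 200 and body.get("success"):
        return "ok"
    if status_code == 500 and body.get("code") == "SCRAPE_ALL_ENGINES_FAILED":
        # Could be a real engine failure or just an empty page; the
        # caller should verify the URL before raising an alert.
        return "maybe_empty_page"
    return "error"
```

For example, the 500 response quoted in this report would classify as `maybe_empty_page`, so monitoring would not immediately flag it as an engine outage.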