--- title: Overview · Cloudflare Workers docs description: "With Cloudflare Workers, you can expect to:" lastUpdated: 2025-06-05T13:25:05.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/ md: https://developers.cloudflare.com/workers/index.md --- A serverless platform for building, deploying, and scaling apps across [Cloudflare's global network](https://www.cloudflare.com/network/) with a single command — no infrastructure to manage, no complex configuration With Cloudflare Workers, you can expect to: * Deliver fast performance with high reliability anywhere in the world * Build full-stack apps with your framework of choice, including [React](https://developers.cloudflare.com/workers/framework-guides/web-apps/react/), [Vue](https://developers.cloudflare.com/workers/framework-guides/web-apps/vue/), [Svelte](https://developers.cloudflare.com/workers/framework-guides/web-apps/svelte/), [Next](https://developers.cloudflare.com/workers/framework-guides/web-apps/nextjs/), [Astro](https://developers.cloudflare.com/workers/framework-guides/web-apps/astro/), [React Router](https://developers.cloudflare.com/workers/framework-guides/web-apps/react-router/), [and more](https://developers.cloudflare.com/workers/framework-guides/) * Use your preferred language, including [JavaScript](https://developers.cloudflare.com/workers/languages/javascript/), [TypeScript](https://developers.cloudflare.com/workers/languages/typescript/), [Python](https://developers.cloudflare.com/workers/languages/python/), [Rust](https://developers.cloudflare.com/workers/languages/rust/), [and more](https://developers.cloudflare.com/workers/runtime-apis/webassembly/) * Gain deep visibility and insight with built-in [observability](https://developers.cloudflare.com/workers/observability/logs/) * Get started for free and grow with flexible [pricing](https://developers.cloudflare.com/workers/platform/pricing/), affordable at any scale Get started with your first project: [Deploy a template](https://dash.cloudflare.com/?to=/:account/workers-and-pages/templates) [Deploy with Wrangler CLI](https://developers.cloudflare.com/workers/get-started/guide/) *** ## Build with Workers #### Front-end applications Deploy [static assets](https://developers.cloudflare.com/workers/static-assets/) to Cloudflare's [CDN & cache](https://developers.cloudflare.com/cache/) for fast rendering #### Back-end applications Build APIs and connect to data stores with [Smart Placement](https://developers.cloudflare.com/workers/configuration/smart-placement/) to optimize latency #### Serverless AI inference Run LLMs, generate images, and more with [Workers AI](https://developers.cloudflare.com/workers-ai/) #### Background jobs Schedule [cron jobs](https://developers.cloudflare.com/workers/configuration/cron-triggers/), run durable [Workflows](https://developers.cloudflare.com/workflows/), and integrate with [Queues](https://developers.cloudflare.com/queues/) *** ## Integrate with Workers Connect to external services like databases, APIs, and storage via [Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/), enabling functionality with just a few lines of code: **Storage** **[Durable Objects](https://developers.cloudflare.com/durable-objects/)** Scalable stateful storage for real-time coordination. **[D1](https://developers.cloudflare.com/d1/)** Serverless SQL database built for fast, global queries. 
**[KV](https://developers.cloudflare.com/kv/)** Low-latency key-value storage for fast, edge-cached reads. **[Queues](https://developers.cloudflare.com/queues/)** Guaranteed delivery with no charges for egress bandwidth. **[Hyperdrive](https://developers.cloudflare.com/hyperdrive/)** Connect to your external database with accelerated queries, cached at the edge. **Compute** **[Workers AI](https://developers.cloudflare.com/workers-ai/)** Machine learning models powered by serverless GPUs. **[Workflows](https://developers.cloudflare.com/workflows/)** Durable, long-running operations with automatic retries. **[Vectorize](https://developers.cloudflare.com/vectorize/)** Vector database for AI-powered semantic search. **[R2](https://developers.cloudflare.com/r2/)** Zero-egress object storage for cost-efficient data access. **[Browser Rendering](https://developers.cloudflare.com/browser-rendering/)** Programmatic serverless browser instances. **Media** **[Cache / CDN](https://developers.cloudflare.com/cache/)** Global caching for high-performance, low-latency delivery. **[Images](https://developers.cloudflare.com/images/)** Streamlined image infrastructure from a single API. *** Want to connect with the Workers community? [Join our Discord](https://discord.cloudflare.com) --- title: 404 - Page Not Found · Cloudflare Workers docs chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/404/ md: https://developers.cloudflare.com/workers/404/index.md --- # 404 Check the URL, try using our [search](https://developers.cloudflare.com/search/) or try our LLM-friendly [llms.txt directory](https://developers.cloudflare.com/llms.txt). --- title: AI Assistant · Cloudflare Workers docs chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/ai/ md: https://developers.cloudflare.com/workers/ai/index.md --- ![Cursor illustration](https://developers.cloudflare.com/_astro/cursor-dark.CqBNjfjr_ZR4meY.webp) ![Cursor illustration](https://developers.cloudflare.com/_astro/cursor-light.BIMnHhHE_tY6Bo.webp) # Meet your AI assistant, CursorAI Preview Cursor is an experimental AI assistant, trained to answer questions about Cloudflare and powered by [Cloudflare Workers](https://developers.cloudflare.com/workers/), [Workers AI](https://developers.cloudflare.com/workers-ai/), [Vectorize](https://developers.cloudflare.com/vectorize/), and [AI Gateway](https://developers.cloudflare.com/ai-gateway/). Cursor is here to help answer your Cloudflare questions, so ask away! Cursor is an experimental AI preview, meaning that the answers provided are often incorrect, incomplete, or lacking in context. Be sure to double-check what Cursor recommends using the linked sources provided. Use of Cloudflare Cursor is subject to the Cloudflare Website and Online Services [Terms of Use](https://www.cloudflare.com/website-terms/). You acknowledge and agree that the output generated by Cursor has not been verified by Cloudflare for accuracy and does not represent Cloudflare’s views. --- title: CI/CD · Cloudflare Workers docs description: Set up continuous integration and continuous deployment for your Workers. 
lastUpdated: 2025-02-05T10:06:53.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/ci-cd/ md: https://developers.cloudflare.com/workers/ci-cd/index.md --- You can set up continuous integration and continuous deployment (CI/CD) for your Workers by using either the integrated build system, [Workers Builds](#workers-builds), or [external providers](#external-cicd) to optimize your development workflow. ## Why use CI/CD? Using a CI/CD pipeline to deploy your Workers is a best practice because it: * Automates the build and deployment process, removing the need for manual `wrangler deploy` commands. * Ensures consistent builds and deployments across your team by using the same source control management (SCM) system. * Reduces variability and errors by deploying in a uniform environment. * Simplifies managing access to production credentials. ## Which CI/CD should I use? Choose [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds) if you want a fully integrated solution within Cloudflare's ecosystem that requires minimal setup and configuration for GitHub or GitLab users. We recommend using [external CI/CD providers](https://developers.cloudflare.com/workers/ci-cd/external-cicd) if: * You have a self-hosted instance of GitHub or GitLab, which is currently not supported in Workers Builds' [Git integration](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/) * You are using a Git provider that is not GitHub or GitLab ## Workers Builds [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds) is Cloudflare's native CI/CD system that allows you to integrate with GitHub or GitLab to automatically deploy changes with each new push to a selected branch (e.g. `main`). ![Workers Builds Workflow Diagram](https://developers.cloudflare.com/_astro/workers-builds-workflow.Bmy3qIVc_dylLs.webp) Ready to streamline your Workers deployments? Get started with [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/#get-started). ## External CI/CD You can also choose to set up your CI/CD pipeline with an external provider. * [GitHub Actions](https://developers.cloudflare.com/workers/ci-cd/external-cicd/github-actions/) * [GitLab CI/CD](https://developers.cloudflare.com/workers/ci-cd/external-cicd/gitlab-cicd/) --- title: Configuration · Cloudflare Workers docs description: Configure your Worker project with various features and customizations. lastUpdated: 2025-02-05T10:06:53.000Z chatbotDeprioritize: true source_url: html: https://developers.cloudflare.com/workers/configuration/ md: https://developers.cloudflare.com/workers/configuration/index.md --- Configure your Worker project with various features and customizations.
* [Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) * [Compatibility dates](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) * [Compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags/) * [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/) * [Environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) * [Integrations](https://developers.cloudflare.com/workers/configuration/integrations/) * [Multipart upload metadata](https://developers.cloudflare.com/workers/configuration/multipart-upload-metadata/) * [Page Rules](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/) * [Preview URLs](https://developers.cloudflare.com/workers/configuration/previews/) * [Routes and domains](https://developers.cloudflare.com/workers/configuration/routing/) * [Secrets](https://developers.cloudflare.com/workers/configuration/secrets/) * [Smart Placement](https://developers.cloudflare.com/workers/configuration/smart-placement/) * [Versions & Deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/) * [Workers Sites](https://developers.cloudflare.com/workers/configuration/sites/) --- title: Databases · Cloudflare Workers docs description: Explore database integrations for your Worker projects. lastUpdated: 2025-02-05T10:06:53.000Z chatbotDeprioritize: true source_url: html: https://developers.cloudflare.com/workers/databases/ md: https://developers.cloudflare.com/workers/databases/index.md --- Explore database integrations for your Worker projects. * [Connect to databases](https://developers.cloudflare.com/workers/databases/connecting-to-databases/) * [Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/) * [Vectorize (vector database)](https://developers.cloudflare.com/vectorize/) * [Cloudflare D1](https://developers.cloudflare.com/d1/) * [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) * [3rd Party Integrations](https://developers.cloudflare.com/workers/databases/third-party-integrations/) --- title: Demos and architectures · Cloudflare Workers docs description: Learn how you can use Workers within your existing application and architecture. lastUpdated: 2025-02-05T10:06:53.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/demos/ md: https://developers.cloudflare.com/workers/demos/index.md --- Learn how you can use Workers within your existing application and architecture. ## Demos Explore the following demo applications for Workers. * [Starter code for D1 Sessions API:](https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template) An introduction to D1 Sessions API. This demo simulates purchase orders administration. * [E-commerce Store:](https://github.com/harshil1712/e-com-d1) An application to showcase D1 read replication in the context of an online store. * [Gamertown Customer Support Assistant:](https://github.com/craigsdennis/gamertown-workers-ai-vectorize) A RAG based AI Chat app that uses Vectorize to access video game data for employees of Gamertown. * [shrty.dev:](https://github.com/craigsdennis/shorty-dot-dev) A URL shortener that makes use of KV and Workers Analytics Engine. The admin interface uses Function Calling. Go Shorty! 
* [Homie - Home Automation using Function Calling:](https://github.com/craigsdennis/lightbulb-moment-tool-calling) A home automation tool that uses AI Function calling to change the color of lightbulbs in your home. * [Hackathon Helper:](https://github.com/craigsdennis/hackathon-helper-workers-ai) A series of starters for Hackathons. Get building quicker! Python, Streamlit, Workers, and Pages starters for all your AI needs! * [Multimodal AI Translator:](https://github.com/elizabethsiegle/cfworkers-ai-translate) This application uses Cloudflare Workers AI to perform multimodal translation of languages via audio and text in the browser. * [Floor is Llava:](https://github.com/craigsdennis/floor-is-llava-workers-ai) This is an example repo to explore using the AI Vision model Llava hosted on Cloudflare Workers AI. This is a SvelteKit app hosted on Pages. * [Workers AI Object Detector:](https://github.com/elizabethsiegle/cf-workers-ai-obj-detection-webcam) Detect objects from a webcam in a Cloudflare Worker web app with detr-resnet-50 hosted on Cloudflare using Cloudflare Workers AI. * [JavaScript-native RPC on Cloudflare Workers <> Named Entrypoints:](https://github.com/cloudflare/js-rpc-and-entrypoints-demo) This is a collection of examples of communicating between multiple Cloudflare Workers using the remote-procedure call (RPC) system that is built into the Workers runtime. * [Workers for Platforms Example Project:](https://github.com/cloudflare/workers-for-platforms-example) Explore how you could manage thousands of Workers with a single Cloudflare Workers account. * [Whatever-ify:](https://github.com/craigsdennis/whatever-ify-workers-ai) Turn yourself into...whatever. Take a photo, get a description, generate a scene and character, then generate an image based on that character. * [Cloudflare Workers Chat Demo:](https://github.com/cloudflare/workers-chat-demo) This is a demo app written on Cloudflare Workers utilizing Durable Objects to implement real-time chat with stored history. * [Phoney AI:](https://github.com/craigsdennis/phoney-ai) This application uses Cloudflare Workers AI, Twilio, and AssemblyAI. Your phone is an input and output device. * [Vanilla JavaScript Chat Application using Cloudflare Workers AI:](https://github.com/craigsdennis/vanilla-chat-workers-ai) A web based chat interface built on Cloudflare Pages that allows for exploring Text Generation models on Cloudflare Workers AI. Design is built using Tailwind. * [Turnstile Demo:](https://github.com/cloudflare/turnstile-demo-workers) A simple demo with a Turnstile-protected form, using Cloudflare Workers. With the code in this repository, we demonstrate implicit rendering and explicit rendering. * [Wildebeest:](https://github.com/cloudflare/wildebeest) Wildebeest is an ActivityPub and Mastodon-compatible server whose goal is to allow anyone to operate their Fediverse server and identity on their domain without needing to keep infrastructure, with minimal setup and maintenance, and running in minutes. * [D1 Northwind Demo:](https://github.com/cloudflare/d1-northwind) This is a demo of the Northwind dataset, running on Cloudflare Workers, and D1 - Cloudflare's SQL database, running on SQLite. * [Multiplayer Doom Workers:](https://github.com/cloudflare/doom-workers) A WebAssembly Doom port with multiplayer support running on top of Cloudflare's global network using Workers, WebSockets, Pages, and Durable Objects.
* [Queues Web Crawler:](https://github.com/cloudflare/queues-web-crawler) An example use case for Queues, a web crawler built on Browser Rendering and Puppeteer. The crawler finds the number of links to Cloudflare.com on the site, and archives a screenshot to Workers KV. * [DMARC Email Worker:](https://github.com/cloudflare/dmarc-email-worker) A Cloudflare Worker script to process incoming DMARC reports, store them, and produce analytics. * [Access External Auth Rule Example Worker:](https://github.com/cloudflare/workers-access-external-auth-example) This is a Worker that allows you to quickly set up an external evaluation rule in Cloudflare Access. ## Reference architectures Explore the following reference architectures that use Workers: [Cloudflare Security Architecture](https://developers.cloudflare.com/reference-architecture/architectures/security/) [This document provides insight into how this network and platform are architected from a security perspective, how they are operated, and what services are available for businesses to address their own security challenges.](https://developers.cloudflare.com/reference-architecture/architectures/security/) [Composable AI architecture](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-composable/) [The architecture diagram illustrates how AI applications can be built end-to-end on Cloudflare, or single services can be integrated with external infrastructure and services.](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-composable/) [Retrieval Augmented Generation (RAG)](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-rag/) [RAG combines retrieval with generative models for better text. It uses external knowledge to create factual, relevant responses, improving coherence and accuracy in NLP tasks like chatbots.](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-rag/) [Automatic captioning for video uploads](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-video-caption/) [By integrating automatic speech recognition technology into video platforms, content creators, publishers, and distributors can reach a broader audience, including individuals with hearing impairments or those who prefer to consume content in different languages.](https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-video-caption/) [Ingesting BigQuery Data into Workers AI](https://developers.cloudflare.com/reference-architecture/diagrams/ai/bigquery-workers-ai/) [You can connect a Cloudflare Worker to get data from Google BigQuery and pass it to Workers AI, to run AI Models, powered by serverless GPUs.](https://developers.cloudflare.com/reference-architecture/diagrams/ai/bigquery-workers-ai/) [Optimizing and securing connected transportation systems](https://developers.cloudflare.com/reference-architecture/diagrams/iot/optimizing-and-securing-connected-transportation-systems/) [This diagram showcases Cloudflare components optimizing connected transportation systems.
It illustrates how their technologies minimize latency, ensure reliability, and strengthen security for critical data flow.](https://developers.cloudflare.com/reference-architecture/diagrams/iot/optimizing-and-securing-connected-transportation-systems/) [Extend ZTNA with external authorization and serverless computing](https://developers.cloudflare.com/reference-architecture/diagrams/sase/augment-access-with-serverless/) [Cloudflare's ZTNA enhances access policies using external API calls and Workers for robust security. It verifies user authentication and authorization, ensuring only legitimate access to protected resources.](https://developers.cloudflare.com/reference-architecture/diagrams/sase/augment-access-with-serverless/) [A/B-testing using Workers](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/a-b-testing-using-workers/) [Cloudflare's low-latency, fully serverless compute platform, Workers, offers powerful capabilities to enable A/B testing using a server-side implementation.](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/a-b-testing-using-workers/) [Fullstack applications](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/fullstack-application/) [A practical example of how these services come together in a real fullstack application architecture.](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/fullstack-application/) [Serverless ETL pipelines](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/serverless-etl/) [Cloudflare enables fully serverless ETL pipelines, significantly reducing complexity, accelerating time to production, and lowering overall costs.](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/serverless-etl/) [Serverless global APIs](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/serverless-global-apis/) [An example architecture of a serverless API on Cloudflare that aims to illustrate how different compute and data products could interact with each other.](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/serverless-global-apis/) [Serverless image content management](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/serverless-image-content-management/) [Leverage various components of Cloudflare's ecosystem to construct a scalable image management solution.](https://developers.cloudflare.com/reference-architecture/diagrams/serverless/serverless-image-content-management/) [Egress-free object storage in multi-cloud setups](https://developers.cloudflare.com/reference-architecture/diagrams/storage/egress-free-storage-multi-cloud/) [Learn how to use R2 to get egress-free object storage in multi-cloud setups.](https://developers.cloudflare.com/reference-architecture/diagrams/storage/egress-free-storage-multi-cloud/) [Event notifications for storage](https://developers.cloudflare.com/reference-architecture/diagrams/storage/event-notifications-for-storage/) [Use Cloudflare Workers or an external service to monitor for notifications about data changes and then handle them appropriately.](https://developers.cloudflare.com/reference-architecture/diagrams/storage/event-notifications-for-storage/) [Storing user generated content](https://developers.cloudflare.com/reference-architecture/diagrams/storage/storing-user-generated-content/) [Store user-generated content in R2 for fast, secure, and cost-effective
architecture.](https://developers.cloudflare.com/reference-architecture/diagrams/storage/storing-user-generated-content/) --- title: Development & testing · Cloudflare Workers docs description: Develop and test your Workers locally. lastUpdated: 2025-06-20T17:22:06.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/development-testing/ md: https://developers.cloudflare.com/workers/development-testing/index.md --- You can build, run, and test your Worker code on your own local machine before deploying it to Cloudflare's network. This is made possible through [Miniflare](https://developers.cloudflare.com/workers/testing/miniflare/), a simulator that executes your Worker code using the same runtime used in production, [`workerd`](https://github.com/cloudflare/workerd). [By default](https://developers.cloudflare.com/workers/development-testing/#defaults), your Worker's bindings [connect to locally simulated resources](https://developers.cloudflare.com/workers/development-testing/#bindings-during-local-development), but can be configured to interact with the real, production resource with [remote bindings](https://developers.cloudflare.com/workers/development-testing/#remote-bindings). ## Core concepts ### Worker execution vs Bindings When developing Workers, it's important to understand two distinct concepts: * **Worker execution**: Where your Worker code actually runs (on your local machine vs on Cloudflare's infrastructure). * [**Bindings**](https://developers.cloudflare.com/workers/runtime-apis/bindings/): How your Worker interacts with Cloudflare resources (like [KV namespaces](https://developers.cloudflare.com/kv), [R2 buckets](https://developers.cloudflare.com/r2), [D1 databases](https://developers.cloudflare.com/d1), [Queues](https://developers.cloudflare.com/queues/), [Durable Objects](https://developers.cloudflare.com/durable-objects/), etc). In your Worker code, these are accessed via the `env` object (such as `env.MY_KV`). ## Local development **You can start a local development server using:** 1. The Cloudflare Workers CLI [**Wrangler**](https://developers.cloudflare.com/workers/wrangler/), using the built-in [`wrangler dev`](https://developers.cloudflare.com/workers/wrangler/commands/#dev) command. * npm ```sh npx wrangler dev ``` * yarn ```sh yarn wrangler dev ``` * pnpm ```sh pnpm wrangler dev ``` 1. [**Vite**](https://vite.dev/), using the [**Cloudflare Vite plugin**](https://developers.cloudflare.com/workers/vite-plugin/). * npm ```sh npx vite dev ``` * yarn ```sh yarn vite dev ``` * pnpm ```sh pnpm vite dev ``` Both Wrangler and the Cloudflare Vite plugin use [Miniflare](https://developers.cloudflare.com/workers/testing/miniflare/) under the hood, and are developed and maintained by the Cloudflare team. For guidance on choosing when to use Wrangler versus Vite, see our guide [Choosing between Wrangler & Vite](https://developers.cloudflare.com/workers/development-testing/wrangler-vs-vite/). * [Get started with Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/) * [Get started with the Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/get-started/) ### Defaults By default, running `wrangler dev` / `vite dev` (when using the [Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/get-started/)) means that: * Your Worker code runs on your local machine. 
* All resources your Worker is bound to in your [Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/) are simulated locally. ### Bindings during local development [Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) are interfaces that allow your Worker to interact with various Cloudflare resources (like [KV namespaces](https://developers.cloudflare.com/kv), [R2 buckets](https://developers.cloudflare.com/r2), [D1 databases](https://developers.cloudflare.com/d1), [Queues](https://developers.cloudflare.com/queues/), [Durable Objects](https://developers.cloudflare.com/durable-objects/), etc). In your Worker code, these are accessed via the `env` object (such as `env.MY_KV`). During local development, your Worker code interacts with these bindings using the exact same API calls (such as `env.MY_KV.put()`) as it would in a deployed environment. These local resources are initially empty, but you can populate them with data, as documented in [Adding local data](https://developers.cloudflare.com/workers/development-testing/local-data/). * By default, bindings connect to **local resource simulations** (except for [AI bindings](https://developers.cloudflare.com/workers-ai/configuration/bindings/), as AI models always run remotely). * You can override this default behavior and **connect to the remote resource**, on a per-binding basis. This lets you connect to real, production resources while still running your Worker code locally. ## Remote bindings Beta **Remote bindings** are bindings that are configured to connect to the deployed, remote resource during local development *instead* of the locally simulated resource. You can configure remote bindings by setting `experimental_remote: true` in the binding definition. ### Example configuration * wrangler.jsonc ```jsonc { "name": "my-worker", "compatibility_date": "2025-07-16", "r2_buckets": [ { "bucket_name": "screenshots-bucket", "binding": "screenshots_bucket", "experimental_remote": true, }, ], } ``` * wrangler.toml ```toml name = "my-worker" compatibility_date = "2025-07-16" [[r2_buckets]] bucket_name = "screenshots-bucket" binding = "screenshots_bucket" experimental_remote = true ``` When remote bindings are configured, your Worker still **executes locally**; only the underlying resources your bindings connect to change. For all bindings marked with `experimental_remote: true`, Miniflare will route their operations (such as `env.MY_KV.put()`) to the deployed resource. All other bindings not explicitly configured with `experimental_remote: true` continue to use their default local simulations.
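Because only the destination of binding operations changes, your Worker code stays the same in both modes. The following is a minimal TypeScript sketch (not from the configuration reference; it assumes the `screenshots_bucket` R2 binding from the example configuration above and the `R2Bucket` / `ExportedHandler` types from `@cloudflare/workers-types`):

```ts
// Minimal sketch: the same Worker code runs whether `screenshots_bucket`
// is a local simulation or a remote binding (`experimental_remote: true`).
// Types are provided by @cloudflare/workers-types.
interface Env {
  screenshots_bucket: R2Bucket;
}

export default {
  async fetch(request, env): Promise<Response> {
    // With a remote binding this writes to the deployed R2 bucket;
    // with the default local simulation it only touches local storage.
    await env.screenshots_bucket.put(`screenshot-${Date.now()}.png`, request.body);
    return new Response("stored", { status: 201 });
  },
} satisfies ExportedHandler<Env>;
```

The `put()` call is identical in both cases; switching between local and remote is purely a configuration change.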
### Using Wrangler with remote bindings If you're using [Wrangler](https://developers.cloudflare.com/workers/wrangler/) for local development and have remote bindings configured, you'll need to use the following experimental command: * npm ```sh npx wrangler dev --x-remote-bindings ``` * yarn ```sh yarn wrangler dev --x-remote-bindings ``` * pnpm ```sh pnpm wrangler dev --x-remote-bindings ``` ### Using Vite with remote bindings If you're using Vite via [the Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), you'll need to add support for remote bindings in your Vite configuration (`vite.config.ts`): ```ts import { cloudflare } from "@cloudflare/vite-plugin"; import { defineConfig } from "vite"; export default defineConfig({ plugins: [ cloudflare({ configPath: "./entry-worker/wrangler.jsonc", experimental: { remoteBindings: true }, }), ], }); ``` ### Using Vitest with remote bindings You can also use Vitest with configured remote bindings by enabling support in your Vitest configuration file (`vitest.config.ts`): ```ts import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config"; export default defineWorkersConfig({ test: { poolOptions: { workers: { experimental_remoteBindings: true, wrangler: { configPath: "./wrangler.jsonc" }, }, }, }, }); ``` ### Targeting preview resources To protect production data, you can create and specify preview resources in your [Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/), such as: * [Preview namespaces for KV stores](https://developers.cloudflare.com/workers/wrangler/configuration/#kv-namespaces):`preview_id`. * [Preview buckets for R2 storage](https://developers.cloudflare.com/workers/wrangler/configuration/#r2-buckets): `preview_bucket_name`. * [Preview database IDs for D1](https://developers.cloudflare.com/workers/wrangler/configuration/#d1-databases): `preview_database_id` If preview configuration is present for a binding, setting `experimental_remote: true` will ensure that remote bindings connect to that designated remote preview resource. **For example:** * wrangler.jsonc ```jsonc { "name": "my-worker", "compatibility_date": "2025-07-16", "r2_buckets": [ { "bucket_name": "screenshots-bucket", "binding": "screenshots_bucket", "preview_bucket_name": "preview-screenshots-bucket", "experimental_remote": true, }, ], } ``` * wrangler.toml ```toml name = "my-worker" compatibility_date = "2025-07-16" [[r2_buckets]] bucket_name = "screenshots-bucket" binding = "screenshots_bucket" preview_bucket_name = "preview-screenshots-bucket" experimental_remote = true ``` Running `wrangler dev --x-remote-bindings` with the above configuration means that: * Your Worker code runs locally * All calls made to `env.screenshots_bucket` will use the `preview-screenshots-bucket` resource, rather than the production `screenshots-bucket`. ### Recommended remote bindings We recommend configuring specific bindings to connect to their remote counterparts. These services often rely on Cloudflare's network infrastructure or have complex backends that are not fully simulated locally. The following bindings are recommended to have `experimental_remote: true` in your Wrangler configuration: #### [Browser Rendering](https://developers.cloudflare.com/workers/wrangler/configuration/#browser-rendering): To interact with a real headless browser for rendering. There is no current local simulation for Browser Rendering. 
* wrangler.jsonc ```jsonc { "browser": { "binding": "MY_BROWSER", "experimental_remote": true }, } ``` * wrangler.toml ```toml [browser] binding = "MY_BROWSER" experimental_remote = true ``` #### [Workers AI](https://developers.cloudflare.com/workers/wrangler/configuration/#workers-ai): To utilize actual AI models deployed on Cloudflare's network for inference. There is no current local simulation for Workers AI. * wrangler.jsonc ```jsonc { "ai": { "binding": "AI", "experimental_remote": true }, } ``` * wrangler.toml ```toml [ai] binding = "AI" experimental_remote = true ``` #### [Vectorize](https://developers.cloudflare.com/workers/wrangler/configuration/#vectorize-indexes): To connect to your production Vectorize indexes for accurate vector search and similarity operations. There is no current local simulation for Vectorize. * wrangler.jsonc ```jsonc { "vectorize": [ { "binding": "MY_VECTORIZE_INDEX", "index_name": "my-prod-index", "experimental_remote": true } ], } ``` * wrangler.toml ```toml [[vectorize]] binding = "MY_VECTORIZE_INDEX" index_name = "my-prod-index" experimental_remote = true ``` #### [mTLS](https://developers.cloudflare.com/workers/wrangler/configuration/#mtls-certificates): To verify that the certificate exchange and validation process work as expected. There is no current local simulation for mTLS bindings. * wrangler.jsonc ```jsonc { "mtls_certificates": [ { "binding": "MY_CLIENT_CERT_FETCHER", "certificate_id": "", "experimental_remote": true } ] } ``` * wrangler.toml ```toml [[mtls_certificates]] binding = "MY_CLIENT_CERT_FETCHER" certificate_id = "" experimental_remote = true ``` #### [Images](https://developers.cloudflare.com/workers/wrangler/configuration/#images): To connect to a high-fidelity version of the Images API, and verify that all transformations work as expected. Local simulation for Cloudflare Images is [limited with only a subset of features](https://developers.cloudflare.com/images/transform-images/bindings/#interact-with-your-images-binding-locally). * wrangler.jsonc ```jsonc { "images": { "binding": "IMAGES" , "experimental_remote": true } } ``` * wrangler.toml ```toml [images] binding = "IMAGES" experimental_remote = true ``` Note If `experimental_remote: true` is not specified for Browser Rendering, Vectorize, mTLS, or Images, Cloudflare **will issue a warning**. This prompts you to consider enabling it for a more production-like testing experience. If a Workers AI binding has `experimental_remote` set to `false`, Cloudflare will **produce an error**. If the property is omitted, Cloudflare will connect to the remote resource and issue a warning to add the property to configuration. #### [Dispatch Namespaces](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/get-started/developing-with-wrangler/): Workers for Platforms users can configure `experimental_remote: true` in dispatch namespace binding definitions: * wrangler.jsonc ```jsonc { "dispatch_namespaces": [ { "binding": "DISPATCH_NAMESPACE", "namespace": "testing", "experimental_remote":true } ] } ``` * wrangler.toml ```toml [[dispatch_namespaces]] binding = "DISPATCH_NAMESPACE" namespace = "testing" experimental_remote = true ``` This allows you to run your [dynamic dispatch Worker](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#dynamic-dispatch-worker) locally, while connecting it to your remote dispatch namespace binding. 
This allows you to test changes to your core dispatching logic against real, deployed [user Workers](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#user-workers). ### Unsupported remote bindings Certain bindings are not supported for remote connections during local development (`experimental_remote: true`). These will always use local simulations or local values. If `experimental_remote: true` is specified in Wrangler configuration for any of the following unsupported binding types, Cloudflare **will issue an error**. See [all supported and unsupported bindings for remote bindings](https://developers.cloudflare.com/workers/development-testing/bindings-per-env/). * [**Durable Objects**](https://developers.cloudflare.com/workers/wrangler/configuration/#durable-objects): Enabling remote connections for Durable Objects may be supported in the future, but currently will always run locally. * [**Environment Variables (`vars`)**](https://developers.cloudflare.com/workers/wrangler/configuration/#environment-variables): Environment variables are intended to be distinct between local development and deployed environments. They are easily configurable locally (such as in a `.dev.vars` file or directly in Wrangler configuration). * [**Secrets**](https://developers.cloudflare.com/workers/wrangler/configuration/#secrets): Like environment variables, secrets are expected to have different values in local development versus deployed environments for security reasons. Use `.dev.vars` for local secret management. * **[Static Assets](https://developers.cloudflare.com/workers/wrangler/configuration/#assets)**: Static assets are always served from your local disk during development for speed and direct feedback on changes. * [**Version Metadata**](https://developers.cloudflare.com/workers/runtime-apis/bindings/version-metadata/): Since your Worker code is running locally, version metadata (like commit hash, version tags) associated with a specific deployed version is not applicable or accurate. * [**Analytics Engine**](https://developers.cloudflare.com/analytics/analytics-engine/): Local development sessions typically don't contribute data directly to production Analytics Engine. * [**Hyperdrive**](https://developers.cloudflare.com/workers/wrangler/configuration/#hyperdrive): This is being actively worked on, but is currently unsupported. * [**Rate Limiting**](https://developers.cloudflare.com/workers/runtime-apis/bindings/rate-limit/#configuration): Local development sessions typically should not share or affect rate limits of your deployed Workers. Rate limiting logic should be tested against local simulations. Tip If you have use-cases for connecting to any of the remote resources above, please [open a feature request](https://github.com/cloudflare/workers-sdk/issues) in our [`workers-sdk` repository](https://github.com/cloudflare/workers-sdk). ### Important Considerations * **Data modification**: Operations (writes, deletes, updates) on bindings connected remotely will affect your actual data in the targeted Cloudflare resource (be it preview or production). * **Billing**: Interactions with remote Cloudflare services through these connections will incur standard operational costs for those services (such as KV operations, R2 storage/operations, AI requests, D1 usage). * **Network latency**: Expect network latency for operations on these remotely connected bindings, as they involve communication over the internet. 
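To make the Secrets and Environment Variables entries above concrete: the Worker reads the same `env` property in every environment, and only the source of the value differs (a local `.dev.vars` file during local development, `wrangler secret put` for deployed Workers). A minimal TypeScript sketch, where `API_TOKEN` and the upstream URL are hypothetical placeholders:

```ts
// Minimal sketch: `API_TOKEN` is a hypothetical secret name.
// Locally, provide it in a `.dev.vars` file (API_TOKEN="...");
// in production, set it with `wrangler secret put API_TOKEN`.
interface Env {
  API_TOKEN: string;
}

export default {
  async fetch(_request, env): Promise<Response> {
    // Secrets are exposed on `env` exactly like other bindings.
    const upstream = await fetch("https://api.example.com/data", {
      headers: { Authorization: `Bearer ${env.API_TOKEN}` },
    });
    return new Response(await upstream.text(), { status: upstream.status });
  },
} satisfies ExportedHandler<Env>;
```

This keeps local development free of production credentials while leaving the Worker code unchanged between environments.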
### API Wrangler provides programmatic utilities to help tooling authors support remote binding connections when running Workers code with [Miniflare](https://developers.cloudflare.com/workers/testing/miniflare/). **Key APIs include:** * [`experimental_startRemoteProxySession`](#experimental_startremoteproxysession): Starts a proxy session that allows interaction with remote bindings. * [`unstable_convertConfigBindingsToStartWorkerBindings`](#unstable_convertconfigbindingstostartworkerbindings): Utility for converting binding definitions. * [`experimental_maybeStartOrUpdateRemoteProxySession`](#experimental_maybestartorupdateremoteproxysession): Convenience function to easily start or update a proxy session. #### `experimental_startRemoteProxySession` This function starts a proxy session for a given set of bindings. It accepts options to control session behavior, including an `auth` option with your Cloudflare account ID and API token for remote binding access. It returns an object with: * `ready` (`Promise<void>`): Resolves when the session is ready. * `dispose` (`() => Promise<void>`): Stops the session. * `updateBindings` (`(bindings: StartDevWorkerInput['bindings']) => Promise<void>`): Updates session bindings. * `remoteProxyConnectionString` (`RemoteProxyConnectionString`): String to pass to Miniflare for remote binding access. #### `unstable_convertConfigBindingsToStartWorkerBindings` The `unstable_readConfig` utility returns an `Unstable_Config` object which includes the definition of the bindings included in the configuration file. These binding definitions are, however, not directly compatible with `experimental_startRemoteProxySession`. It can still be convenient to read the binding declarations with `unstable_readConfig` and then pass them to `experimental_startRemoteProxySession`, so Wrangler exposes `unstable_convertConfigBindingsToStartWorkerBindings`, a simple utility that converts the bindings in an `Unstable_Config` object into a structure that can be passed to `experimental_startRemoteProxySession`. Note This type conversion is temporary. In the future, the types will be unified so you can pass the config object directly to `experimental_startRemoteProxySession`. #### `experimental_maybeStartOrUpdateRemoteProxySession` This wrapper simplifies proxy session management. It takes: * The path to your Wrangler config, or an object with remote bindings. * The current proxy session details (this parameter can be set to `null` or omitted if there is none). It returns an object with the proxy session details if started or updated, or `null` if no proxy session is needed. The function: * Prepares the input arguments for the proxy session based on the first argument. * If there are no remote bindings to be used (nor a pre-existing proxy session), it returns `null`, signaling that no proxy session is needed. * If the details of an existing proxy session have been provided, it updates the proxy session accordingly. * Otherwise, it starts a new proxy session. * Returns the proxy session details (which can later be passed as the second argument to `experimental_maybeStartOrUpdateRemoteProxySession`). #### Example Here's a basic example of using Miniflare with `experimental_maybeStartOrUpdateRemoteProxySession` to provide a local dev session with remote bindings. This example uses a single hardcoded KV binding.
* JavaScript ```js import { Miniflare } from "miniflare"; import { experimental_maybeStartOrUpdateRemoteProxySession } from "wrangler"; let mf; let remoteProxySessionDetails = null; async function startOrUpdateDevSession() { remoteProxySessionDetails = await experimental_maybeStartOrUpdateRemoteProxySession( { bindings: { MY_KV: { type: "kv_namespace", id: "kv-id", experimental_remote: true, }, }, }, remoteProxySessionDetails, ); const miniflareOptions = { scriptPath: "./worker.js", kvNamespaces: { MY_KV: { id: "kv-id", remoteProxyConnectionString: remoteProxySessionDetails?.session.remoteProxyConnectionString, }, }, }; if (!mf) { mf = new Miniflare(miniflareOptions); } else { mf.setOptions(miniflareOptions); } } // ... tool logic that invokes `startOrUpdateDevSession()` ... // ... once the dev session is no longer needed run // `remoteProxySessionDetails?.session.dispose()` ``` * TypeScript ```ts import { Miniflare, MiniflareOptions } from "miniflare"; import { experimental_maybeStartOrUpdateRemoteProxySession } from "wrangler"; let mf: Miniflare | null = null; let remoteProxySessionDetails: Awaited<ReturnType<typeof experimental_maybeStartOrUpdateRemoteProxySession>> | null = null; async function startOrUpdateDevSession() { remoteProxySessionDetails = await experimental_maybeStartOrUpdateRemoteProxySession( { bindings: { MY_KV: { type: 'kv_namespace', id: 'kv-id', experimental_remote: true, } } }, remoteProxySessionDetails ); const miniflareOptions: MiniflareOptions = { scriptPath: "./worker.js", kvNamespaces: { MY_KV: { id: "kv-id", remoteProxyConnectionString: remoteProxySessionDetails?.session.remoteProxyConnectionString, }, }, }; if (!mf) { mf = new Miniflare(miniflareOptions); } else { mf.setOptions(miniflareOptions); } } // ... tool logic that invokes `startOrUpdateDevSession()` ... // ... once the dev session is no longer needed run // `remoteProxySessionDetails?.session.dispose()` ``` ## `wrangler dev --remote` (Legacy) Separate from Miniflare-powered local development, Wrangler also offers a fully remote development mode via [`wrangler dev --remote`](https://developers.cloudflare.com/workers/wrangler/commands/#dev). Remote development is [**not** supported in the Vite plugin](https://developers.cloudflare.com/workers/development-testing/wrangler-vs-vite/). * npm ```sh npx wrangler dev --remote ``` * yarn ```sh yarn wrangler dev --remote ``` * pnpm ```sh pnpm wrangler dev --remote ``` During **remote development**, all of your Worker code is uploaded to a temporary preview environment on Cloudflare's infrastructure, and changes to your code are automatically uploaded as you save. When using remote development, all bindings automatically connect to their remote resources. Unlike local development, you cannot configure bindings to use local simulations - they will always use the deployed resources on Cloudflare's network. ### When to use Remote development * For most development tasks, the most efficient and productive experience will be local development along with [remote bindings](https://developers.cloudflare.com/workers/development-testing/#remote-bindings) when needed. * You may want to use `wrangler dev --remote` for testing features or behaviors that are highly specific to Cloudflare's network and cannot be adequately simulated locally or tested via remote bindings. ### Considerations * Iteration is significantly slower than local development due to the upload/deployment step for each change.
### Limitations * When you run a remote development session using the `--remote` flag, a limit of 50 [routes](https://developers.cloudflare.com/workers/configuration/routing/routes/) per zone is enforced. Learn more in [Workers platform limits](https://developers.cloudflare.com/workers/platform/limits/#number-of-routes-per-zone-when-using-wrangler-dev---remote). --- title: Examples · Cloudflare Workers docs description: Explore the following examples for Workers. lastUpdated: 2025-05-06T17:35:57.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/examples/ md: https://developers.cloudflare.com/workers/examples/index.md --- Explore the following examples for Workers. 45 examples [103 Early Hints](https://developers.cloudflare.com/workers/examples/103-early-hints/) Allow a client to request static assets while waiting for the HTML response. [A/B testing with same-URL direct access](https://developers.cloudflare.com/workers/examples/ab-testing/) Set up an A/B test by controlling what response is served based on cookies. This version supports passing the request through to test and control on the origin, bypassing random assignment. [Accessing the Cloudflare Object](https://developers.cloudflare.com/workers/examples/accessing-the-cloudflare-object/) Access custom Cloudflare properties and control how Cloudflare features are applied to every request. [Aggregate requests](https://developers.cloudflare.com/workers/examples/aggregate-requests/) Send two GET requests to two URLs and aggregate the responses into one response. [Alter headers](https://developers.cloudflare.com/workers/examples/alter-headers/) Example of how to add, change, or delete headers sent in a request or returned in a response. [Auth with headers](https://developers.cloudflare.com/workers/examples/auth-with-headers/) Allow or deny a request based on a known pre-shared key in a header. This is not meant to replace the WebCrypto API. [Block on TLS](https://developers.cloudflare.com/workers/examples/block-on-tls/) Inspects the incoming request's TLS version and blocks if under TLSv1.2. [Bulk origin override](https://developers.cloudflare.com/workers/examples/bulk-origin-proxy/) Resolve requests to your domain to a set of proxy third-party origin URLs. [Bulk redirects](https://developers.cloudflare.com/workers/examples/bulk-redirects/) Redirect requests to certain URLs based on a mapped object to the request's URL. [Cache POST requests](https://developers.cloudflare.com/workers/examples/cache-post-request/) Cache POST requests using the Cache API. [Cache Tags using Workers](https://developers.cloudflare.com/workers/examples/cache-tags/) Send additional Cache Tags using Workers. [Cache using fetch](https://developers.cloudflare.com/workers/examples/cache-using-fetch/) Determine how to cache a resource by setting TTLs, custom cache keys, and cache headers in a fetch request. [Conditional response](https://developers.cloudflare.com/workers/examples/conditional-response/) Return a response based on the incoming request's URL, HTTP method, User Agent, IP address, ASN or device type. [Cookie parsing](https://developers.cloudflare.com/workers/examples/extract-cookie-value/) Given the cookie name, get the value of a cookie. You can also use cookies for A/B testing. [CORS header proxy](https://developers.cloudflare.com/workers/examples/cors-header-proxy/) Add the necessary CORS headers to a third party API response.
[Country code redirect](https://developers.cloudflare.com/workers/examples/country-code-redirect/) Redirect a response based on the country code in the header of a visitor. [Custom Domain with Images](https://developers.cloudflare.com/workers/examples/images-workers/) Set up a custom domain for Images using a Worker, or serve images using a prefix path and a Cloudflare registered domain. [Data loss prevention](https://developers.cloudflare.com/workers/examples/data-loss-prevention/) Protect sensitive data to prevent data loss, and send alerts to a webhook server in the event of a data breach. [Debugging logs](https://developers.cloudflare.com/workers/examples/debugging-logs/) Send debugging information in an errored response to a logging service. [Fetch HTML](https://developers.cloudflare.com/workers/examples/fetch-html/) Send a request to a remote server, read HTML from the response, and serve that HTML. [Fetch JSON](https://developers.cloudflare.com/workers/examples/fetch-json/) Send a GET request and read in JSON from the response. Use to fetch external data. [Geolocation: Custom Styling](https://developers.cloudflare.com/workers/examples/geolocation-custom-styling/) Personalize website styling based on localized user time. [Geolocation: Hello World](https://developers.cloudflare.com/workers/examples/geolocation-hello-world/) Get all geolocation data fields and display them in HTML. [Geolocation: Weather application](https://developers.cloudflare.com/workers/examples/geolocation-app-weather/) Fetch weather data from an API using the user's geolocation data. [Hot-link protection](https://developers.cloudflare.com/workers/examples/hot-link-protection/) Block other websites from linking to your content. This is useful for protecting images. [HTTP Basic Authentication](https://developers.cloudflare.com/workers/examples/basic-auth/) Shows how to restrict access using the HTTP Basic schema. [Logging headers to console](https://developers.cloudflare.com/workers/examples/logging-headers/) Examine the contents of a Headers object by logging to console with a Map. [Modify request property](https://developers.cloudflare.com/workers/examples/modify-request-property/) Create a modified request with edited properties based on an incoming request. [Modify response](https://developers.cloudflare.com/workers/examples/modify-response/) Fetch and modify response properties which are immutable by creating a copy first. [Multiple Cron Triggers](https://developers.cloudflare.com/workers/examples/multiple-cron-triggers/) Set multiple Cron Triggers on three different schedules. [Post JSON](https://developers.cloudflare.com/workers/examples/post-json/) Send a POST request with JSON data. Use to share data with external servers. [Read POST](https://developers.cloudflare.com/workers/examples/read-post/) Serve an HTML form, then read POST requests. Use also to read JSON or POST data from an incoming request. [Redirect](https://developers.cloudflare.com/workers/examples/redirect/) Redirect requests from one URL to another or from one set of URLs to another set. [Respond with another site](https://developers.cloudflare.com/workers/examples/respond-with-another-site/) Respond to the Worker request with the response from another website (example.com in this example). [Return JSON](https://developers.cloudflare.com/workers/examples/return-json/) Return JSON directly from a Worker script, useful for building APIs and middleware.
[Return small HTML page](https://developers.cloudflare.com/workers/examples/return-html/) Deliver an HTML page from an HTML string directly inside the Worker script. [Rewrite links](https://developers.cloudflare.com/workers/examples/rewrite-links/) Rewrite URL links in HTML using the HTMLRewriter. This is useful for JAMstack websites. [Set security headers](https://developers.cloudflare.com/workers/examples/security-headers/) Set common security headers (X-XSS-Protection, X-Frame-Options, X-Content-Type-Options, Permissions-Policy, Referrer-Policy, Strict-Transport-Security, Content-Security-Policy). [Setting Cron Triggers](https://developers.cloudflare.com/workers/examples/cron-trigger/) Set a Cron Trigger for your Worker. [Sign requests](https://developers.cloudflare.com/workers/examples/signing-requests/) Verify a signed request using the HMAC and SHA-256 algorithms or return a 403. [Stream OpenAI API Responses](https://developers.cloudflare.com/workers/examples/openai-sdk-streaming/) Use the OpenAI v4 SDK to stream responses from OpenAI. [Turnstile with Workers](https://developers.cloudflare.com/workers/examples/turnstile-html-rewriter/) Inject [Turnstile](https://developers.cloudflare.com/turnstile/) implicitly into HTML elements using the HTMLRewriter runtime API. [Using the Cache API](https://developers.cloudflare.com/workers/examples/cache-api/) Use the Cache API to store responses in Cloudflare's cache. [Using the WebSockets API](https://developers.cloudflare.com/workers/examples/websockets/) Use the WebSockets API to communicate in real time with your Cloudflare Workers. [Using timingSafeEqual](https://developers.cloudflare.com/workers/examples/protect-against-timing-attacks/) Protect against timing attacks by safely comparing values using `timingSafeEqual`. --- title: Framework guides · Cloudflare Workers docs description: Create full-stack applications deployed to Cloudflare Workers. lastUpdated: 2025-06-05T13:25:05.000Z chatbotDeprioritize: true source_url: html: https://developers.cloudflare.com/workers/framework-guides/ md: https://developers.cloudflare.com/workers/framework-guides/index.md --- Create full-stack applications deployed to Cloudflare Workers. 
* [AI & agents](https://developers.cloudflare.com/workers/framework-guides/ai-and-agents/) * [Agents SDK](https://developers.cloudflare.com/agents/) * [LangChain](https://developers.cloudflare.com/workers/languages/python/packages/langchain/) * [Web applications](https://developers.cloudflare.com/workers/framework-guides/web-apps/) * [React + Vite](https://developers.cloudflare.com/workers/framework-guides/web-apps/react/) * [Astro](https://developers.cloudflare.com/workers/framework-guides/web-apps/astro/) * [React Router (formerly Remix)](https://developers.cloudflare.com/workers/framework-guides/web-apps/react-router/) * [Next.js](https://developers.cloudflare.com/workers/framework-guides/web-apps/nextjs/) * [Vue](https://developers.cloudflare.com/workers/framework-guides/web-apps/vue/) * [RedwoodSDK](https://developers.cloudflare.com/workers/framework-guides/web-apps/redwoodsdk/) * [TanStack](https://developers.cloudflare.com/workers/framework-guides/web-apps/tanstack/) * [Svelte](https://developers.cloudflare.com/workers/framework-guides/web-apps/svelte/) * [More guides...](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/) * [Angular](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/angular/) * [Docusaurus](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/docusaurus/) * [Gatsby](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/gatsby/) * [Hono](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/hono/) * [Nuxt](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/nuxt/) * [Qwik](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/qwik/) * [Solid](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/solid/) * [Mobile applications](https://developers.cloudflare.com/workers/framework-guides/mobile-apps/) * [Expo](https://docs.expo.dev/eas/hosting/reference/worker-runtime/) * [APIs](https://developers.cloudflare.com/workers/framework-guides/apis/) * [FastAPI](https://developers.cloudflare.com/workers/languages/python/packages/fastapi/) * [Hono](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/hono/) --- title: Getting started · Cloudflare Workers docs description: Build your first Worker. lastUpdated: 2025-03-13T17:52:53.000Z chatbotDeprioritize: true source_url: html: https://developers.cloudflare.com/workers/get-started/ md: https://developers.cloudflare.com/workers/get-started/index.md --- Build your first Worker. * [CLI](https://developers.cloudflare.com/workers/get-started/guide/) * [Dashboard](https://developers.cloudflare.com/workers/get-started/dashboard/) * [Prompting](https://developers.cloudflare.com/workers/get-started/prompting/) * [Templates](https://developers.cloudflare.com/workers/get-started/quickstarts/) --- title: Glossary · Cloudflare Workers docs description: Review the definitions for terms used across Cloudflare's Workers documentation. lastUpdated: 2025-02-05T10:06:53.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/glossary/ md: https://developers.cloudflare.com/workers/glossary/index.md --- Review the definitions for terms used across Cloudflare's Workers documentation. 
| Term | Definition | | - | - | | Auxiliary Worker | A Worker created locally via the [Workers Vitest integration](https://developers.cloudflare.com/workers/testing/vitest-integration/) that runs in a separate isolate to the test runner, with a different global scope. | | binding | [Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) allow your Workers to interact with resources on the Cloudflare Developer Platform. | | C3 | [C3](https://developers.cloudflare.com/learning-paths/workers/get-started/c3-and-wrangler/) is a command-line tool designed to help you set up and deploy new applications to Cloudflare. | | CPU time | [CPU time](https://developers.cloudflare.com/workers/platform/limits/#cpu-time) is the amount of time the central processing unit (CPU) actually spends doing work, during a given request. | | Cron Triggers | [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/) allow users to map a cron expression to a Worker using a [`scheduled()` handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/) that enables Workers to be executed on a schedule. | | D1 | [D1](https://developers.cloudflare.com/d1/) is Cloudflare's native serverless database. | | deployment | [Deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#deployments) track the version(s) of your Worker that are actively serving traffic. | | Durable Objects | [Durable Objects](https://developers.cloudflare.com/durable-objects/) is a globally distributed coordination API with strongly consistent storage. | | duration | [Duration](https://developers.cloudflare.com/workers/platform/limits/#duration) is a measurement of wall-clock time — the total amount of time from the start to end of an invocation of a Worker. | | environment | [Environments](https://developers.cloudflare.com/workers/wrangler/environments/) allow you to deploy the same Worker application with different configuration for each environment. Only available for use with a [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). | | environment variable | [Environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) are a type of binding that allow you to attach text strings or JSON values to your Worker. | | handler | [Handlers](https://developers.cloudflare.com/workers/runtime-apis/handlers/) are methods on Workers that can receive and process external inputs, and can be invoked from outside your Worker. | | isolate | [Isolates](https://developers.cloudflare.com/workers/reference/how-workers-works/#isolates) are lightweight contexts that provide your code with variables it can access and a safe environment to be executed within. | | KV | [Workers KV](https://developers.cloudflare.com/kv/) is Cloudflare's key-value data storage. | | module Worker | Refers to a Worker written in [module syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/). | | origin | [Origin](https://www.cloudflare.com/learning/cdn/glossary/origin-server/) generally refers to the web server behind Cloudflare where your application is hosted. | | Pages | [Cloudflare Pages](https://developers.cloudflare.com/pages/) is Cloudflare's product offering for building and deploying full-stack applications. 
| | Queues | [Queues](https://developers.cloudflare.com/queues/) integrates with Cloudflare Workers and enables you to build applications that can guarantee delivery. | | R2 | [R2](https://developers.cloudflare.com/r2/) is an S3-compatible distributed object storage designed to eliminate the obstacles of sharing data across clouds. | | rollback | [Rollbacks](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/rollbacks/) are a way to deploy an older deployment to the Cloudflare global network. | | secret | [Secrets](https://developers.cloudflare.com/workers/configuration/secrets/) are a type of binding that allow you to attach encrypted text values to your Worker. | | service Worker | Refers to a Worker written in [service worker](https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API) [syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/). | | subrequest | A subrequest is any request that a Worker makes to either Internet resources using the [Fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch/) or requests to other Cloudflare services like [R2](https://developers.cloudflare.com/r2/), [KV](https://developers.cloudflare.com/kv/), or [D1](https://developers.cloudflare.com/d1/). | | Tail Worker | A [Tail Worker](https://developers.cloudflare.com/workers/observability/logs/tail-workers/) receives information about the execution of other Workers (known as producer Workers), such as HTTP statuses, data passed to `console.log()` or uncaught exceptions. | | V8 | Chrome V8 is a [JavaScript engine](https://www.cloudflare.com/learning/serverless/glossary/what-is-chrome-v8/), which means that it [executes JavaScript code](https://developers.cloudflare.com/workers/reference/how-workers-works/). | | version | A [version](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#versions) is defined by the state of code as well as the state of configuration in a Worker's Wrangler file. | | wall-clock time | [Wall-clock time](https://developers.cloudflare.com/workers/platform/limits/#duration) is the total amount of time from the start to end of an invocation of a Worker. | | workerd | [`workerd`](https://github.com/cloudflare/workerd?cf_target_id=D15F29F105B3A910EF4B2ECB12D02E2A) is a JavaScript / Wasm server runtime based on the same code that powers Cloudflare Workers. | | Wrangler | [Wrangler](https://developers.cloudflare.com/learning-paths/workers/get-started/c3-and-wrangler/) is the Cloudflare Developer Platform command-line interface (CLI) that allows you to manage projects, such as Workers, created from the Cloudflare Developer Platform product offering. | | wrangler.toml / wrangler.json / wrangler.jsonc | The [configuration](https://developers.cloudflare.com/workers/wrangler/configuration/) used to customize the development and deployment setup for a Worker or a Pages Function. | --- title: Languages · Cloudflare Workers docs description: Languages supported on Workers, a polyglot platform. 
lastUpdated: 2025-02-05T10:06:53.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/languages/ md: https://developers.cloudflare.com/workers/languages/index.md --- Workers is a polyglot platform, and provides first-class support for the following programming languages: * [JavaScript](https://developers.cloudflare.com/workers/languages/javascript/) * [TypeScript](https://developers.cloudflare.com/workers/languages/typescript/) * [Python](https://developers.cloudflare.com/workers/languages/python/) * [Rust](https://developers.cloudflare.com/workers/languages/rust/) Workers also supports [WebAssembly](https://developers.cloudflare.com/workers/runtime-apis/webassembly/) (abbreviated as "Wasm") — a binary format that many languages can be compiled to. This allows you to write Workers using programming languages beyond the languages listed above, including C, C++, Kotlin, Go and more. --- title: Observability · Cloudflare Workers docs description: Understand how your Worker projects are performing via logs, traces, and other data sources. lastUpdated: 2025-04-09T02:45:13.000Z chatbotDeprioritize: true source_url: html: https://developers.cloudflare.com/workers/observability/ md: https://developers.cloudflare.com/workers/observability/index.md --- Understand how your Worker projects are performing via logs, traces, and other data sources. * [Errors and exceptions](https://developers.cloudflare.com/workers/observability/errors/) * [Metrics and analytics](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/) * [Logs](https://developers.cloudflare.com/workers/observability/logs/) * [Query Builder](https://developers.cloudflare.com/workers/observability/query-builder/) * [DevTools](https://developers.cloudflare.com/workers/observability/dev-tools/) * [Integrations](https://developers.cloudflare.com/workers/observability/third-party-integrations/) * [Source maps and stack traces](https://developers.cloudflare.com/workers/observability/source-maps/) --- title: Platform · Cloudflare Workers docs description: Pricing, limits and other information about the Workers platform. lastUpdated: 2025-02-05T10:06:53.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/platform/ md: https://developers.cloudflare.com/workers/platform/index.md --- Pricing, limits and other information about the Workers platform. * [Pricing](https://developers.cloudflare.com/workers/platform/pricing/) * [Changelog](https://developers.cloudflare.com/workers/platform/changelog/) * [Limits](https://developers.cloudflare.com/workers/platform/limits/) * [Choose a data or storage product](https://developers.cloudflare.com/workers/platform/storage-options/) * [Betas](https://developers.cloudflare.com/workers/platform/betas/) * [Deploy to Cloudflare buttons](https://developers.cloudflare.com/workers/platform/deploy-buttons/) * [Known issues](https://developers.cloudflare.com/workers/platform/known-issues/) * [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/) * [Infrastructure as Code (IaC)](https://developers.cloudflare.com/workers/platform/infrastructure-as-code/) --- title: Playground · Cloudflare Workers docs description: The quickest way to experiment with Cloudflare Workers is in the Playground. It does not require any setup or authentication. The Playground is a sandbox which gives you an instant way to preview and test a Worker directly in the browser.
lastUpdated: 2025-02-05T10:06:53.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/playground/ md: https://developers.cloudflare.com/workers/playground/index.md --- Browser support The Cloudflare Workers Playground is currently only supported in Firefox and Chrome desktop browsers. In Safari, it will show a `PreviewRequestFailed` error message. The quickest way to experiment with Cloudflare Workers is in the [Playground](https://workers.cloudflare.com/playground). It does not require any setup or authentication. The Playground is a sandbox which gives you an instant way to preview and test a Worker directly in the browser. The Playground uses the same editor as the authenticated experience. The Playground provides the ability to [share](#share) the code you write as well as [deploy](#deploy) it instantly to Cloudflare's global network. This way, you can try new things out and deploy them when you are ready. [Launch the Playground](https://workers.cloudflare.com/playground) ## Hello Cloudflare Workers When you arrive in the Playground, you will see this default code: ```js import welcome from "welcome.html"; /** * @typedef {Object} Env */ export default { /** * @param {Request} request * @param {Env} env * @param {ExecutionContext} ctx * @returns {Response} */ fetch(request, env, ctx) { console.log("Hello Cloudflare Workers!"); return new Response(welcome, { headers: { "content-type": "text/html", }, }); }, }; ``` This is an example of a multi-module Worker that is receiving a [request](https://developers.cloudflare.com/workers/runtime-apis/request/), logging a message to the console, and then returning a [response](https://developers.cloudflare.com/workers/runtime-apis/response/) body containing the content from `welcome.html`. Refer to the [Fetch handler documentation](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) to learn more. ## Use the Playground As you edit the default code, the Worker will auto-update such that the preview on the right shows your Worker running just as it would in a browser. If your Worker uses URL paths, you can enter those in the input field on the right to navigate to them. The Playground provides type-checking via JSDoc comments and [`workers-types`](https://www.npmjs.com/package/@cloudflare/workers-types). The Playground also provides pretty error pages in the event of application errors. To test a raw HTTP request (for example, to test a `POST` request), go to the **HTTP** tab and select **Send**. You can add and edit headers via this panel, as well as edit the body of a request. ## DevTools For debugging Workers inside the Playground, use the developer tools at the bottom of the Playground's preview panel to view `console.logs`, network requests, memory and CPU usage. The developer tools for the Workers Playground work similarly to the developer tools in Chrome or Firefox, and are the same developer tools users have access to in the [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/install-and-update/) and the authenticated dashboard. ### Network tab **Network** shows the outgoing requests from your Worker — that is, any calls to `fetch` inside your Worker code. ### Console Logs The console displays the output of any calls to `console.log` that were called for the current preview run as well as any other preview runs in that session. ### Sources **Sources** displays the sources that make up your Worker. 
Note that KV, text, and secret bindings are only accessible when authenticated with an account. This means you must be logged in to the dashboard, or use [`wrangler dev`](https://developers.cloudflare.com/workers/wrangler/commands/#dev) with your account credentials. ## Share To share what you have created, select **Copy Link** in the top right of the screen. This will copy a unique URL to your clipboard that you can share with anyone. These links do not expire, so you can bookmark your creation and share it at any time. Users that open a shared link will see the Playground with the shared code and preview. ## Deploy You can deploy a Worker from the Playground. If you are already logged in, you can review the Worker before deploying. Otherwise, you will be taken through the first-time user onboarding flow before you can review and deploy. Once deployed, your Worker will get its own unique URL and be available almost instantly on Cloudflare's global network. From here, you can add [Custom Domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), [storage resources](https://developers.cloudflare.com/workers/platform/storage-options/), and more. --- title: Reference · Cloudflare Workers docs description: Conceptual knowledge about how Workers works. lastUpdated: 2025-02-05T10:06:53.000Z chatbotDeprioritize: true source_url: html: https://developers.cloudflare.com/workers/reference/ md: https://developers.cloudflare.com/workers/reference/index.md --- Conceptual knowledge about how Workers works. * [How the Cache works](https://developers.cloudflare.com/workers/reference/how-the-cache-works/) * [How Workers works](https://developers.cloudflare.com/workers/reference/how-workers-works/) * [Migrate from Service Workers to ES Modules](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) * [Protocols](https://developers.cloudflare.com/workers/reference/protocols/) * [Security model](https://developers.cloudflare.com/workers/reference/security-model/) --- title: Static Assets · Cloudflare Workers docs description: Create full-stack applications deployed to Cloudflare Workers. lastUpdated: 2025-06-20T19:49:19.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/static-assets/ md: https://developers.cloudflare.com/workers/static-assets/index.md --- You can upload static assets (HTML, CSS, images and other files) as part of your Worker, and Cloudflare will handle caching and serving them to web browsers. **Start from CLI** - Scaffold a React SPA with an API Worker, and use the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/). * npm ```sh npm create cloudflare@latest -- my-react-app --framework=react ``` * yarn ```sh yarn create cloudflare my-react-app --framework=react ``` * pnpm ```sh pnpm create cloudflare@latest my-react-app --framework=react ``` *** **Or just deploy to Cloudflare** [![Deploy to Workers](https://deploy.workers.cloudflare.com/button)](https://dash.cloudflare.com/?to=/:account/workers-and-pages/create/deploy-to-workers\&repository=https://github.com/cloudflare/templates/tree/main/vite-react-template) Learn more about supported frameworks on Workers. [Supported frameworks ](https://developers.cloudflare.com/workers/framework-guides/)Start building on Workers with our framework guides. ### How it works When you deploy your project, Cloudflare deploys both your Worker code and your static assets in a single operation. 
This deployment operates as a tightly integrated "unit" running across Cloudflare's network, combining static file hosting, custom logic, and global caching. The **assets directory** specified in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/#assets) is central to this design. During deployment, Wrangler automatically uploads the files from this directory to Cloudflare's infrastructure. Once deployed, requests for these assets are routed efficiently to locations closest to your users. * wrangler.jsonc ```jsonc { "name": "my-spa", "main": "src/index.js", "compatibility_date": "2025-01-01", "assets": { "directory": "./dist", "binding": "ASSETS" } } ``` * wrangler.toml ```toml name = "my-spa" main = "src/index.js" compatibility_date = "2025-01-01" [assets] directory = "./dist" binding = "ASSETS" ``` Note If you are using the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), you do not need to specify `assets.directory`. For more information about using static assets with the Vite plugin, refer to the [plugin documentation](https://developers.cloudflare.com/workers/vite-plugin/reference/static-assets/). By adding an [**assets binding**](https://developers.cloudflare.com/workers/static-assets/binding/#binding), you can directly fetch and serve assets within your Worker code. ```js // index.js export default { async fetch(request, env) { const url = new URL(request.url); if (url.pathname.startsWith("/api/")) { return new Response(JSON.stringify({ name: "Cloudflare" }), { headers: { "Content-Type": "application/json" }, }); } return env.ASSETS.fetch(request); }, }; ``` ### Routing behavior By default, if a requested URL matches a file in the static assets directory, that file will be served — without invoking Worker code. If no matching asset is found and a Worker script is present, the request will be processed by the Worker. The Worker can return a response or choose to defer again to static assets by using the [assets binding](https://developers.cloudflare.com/workers/static-assets/binding/) (e.g. `env.ASSETS.fetch(request)`). If no Worker script is present, a `404 Not Found` response is returned. The default behavior for requests which don't match a static asset can be changed by setting the [`not_found_handling` option under `assets`](https://developers.cloudflare.com/workers/wrangler/configuration/#assets) in your Wrangler configuration file: * [`not_found_handling = "single-page-application"`](https://developers.cloudflare.com/workers/static-assets/routing/single-page-application/): Sets your application to return a `200 OK` response with `index.html` for requests which don't match a static asset. Use this if you have a Single Page Application. We recommend pairing this with selective routing using `run_worker_first` for [advanced routing control](https://developers.cloudflare.com/workers/static-assets/routing/single-page-application/#advanced-routing-control). * [`not_found_handling = "404-page"`](https://developers.cloudflare.com/workers/static-assets/routing/static-site-generation/#custom-404-pages): Sets your application to return a `404 Not Found` response with the nearest `404.html` for requests which don't match a static asset. 
- wrangler.jsonc ```jsonc { "assets": { "directory": "./dist", "not_found_handling": "single-page-application" } } ``` - wrangler.toml ```toml [assets] directory = "./dist" not_found_handling = "single-page-application" ``` If you want the Worker code to execute before serving assets, you can use the `run_worker_first` option. This can be set to `true` to invoke the Worker script for all requests, or configured as an array of route patterns for selective Worker-script-first routing: **Invoking your Worker script on specific paths:** * wrangler.jsonc ```jsonc { "name": "my-spa-worker", "compatibility_date": "2025-07-16", "main": "./src/index.ts", "assets": { "directory": "./dist/", "not_found_handling": "single-page-application", "binding": "ASSETS", "run_worker_first": ["/api/*", "!/api/docs/*"] } } ``` * wrangler.toml ```toml name = "my-spa-worker" compatibility_date = "2025-07-16" main = "./src/index.ts" [assets] directory = "./dist/" not_found_handling = "single-page-application" binding = "ASSETS" run_worker_first = [ "/api/*", "!/api/docs/*" ] ``` [Routing options ](https://developers.cloudflare.com/workers/static-assets/routing/)Learn more about how you can customize routing behavior. ### Caching behavior Cloudflare provides automatic caching for static assets across its network, ensuring fast delivery to users worldwide. When a static asset is requested, it is automatically cached for future requests. * **First Request:** When an asset is requested for the first time, it is fetched from storage and cached at the nearest Cloudflare location. * **Subsequent Requests:** If a request for the same asset reaches a data center that does not have it cached, Cloudflare's [tiered caching system](https://developers.cloudflare.com/cache/how-to/tiered-cache/) allows it to be retrieved from a nearby cache rather than going back to storage. This improves cache hit ratio, reduces latency, and reduces unnecessary origin fetches. ## Try it out [Vite + React SPA tutorial ](https://developers.cloudflare.com/workers/vite-plugin/tutorial/)Learn how to build and deploy a full-stack Single Page Application with static assets and API routes. ## Learn more [Supported frameworks ](https://developers.cloudflare.com/workers/framework-guides/)Start building on Workers with our framework guides. [Billing and limitations ](https://developers.cloudflare.com/workers/static-assets/billing-and-limitations/)Learn more about how requests are billed, current limitations, and troubleshooting. --- title: Runtime APIs · Cloudflare Workers docs description: The Workers runtime is designed to be JavaScript standards compliant and web-interoperable. Wherever possible, it uses web platform APIs, so that code can be reused across client and server, as well as across WinterCG JavaScript runtimes. lastUpdated: 2025-02-05T10:06:53.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/runtime-apis/ md: https://developers.cloudflare.com/workers/runtime-apis/index.md --- The Workers runtime is designed to be [JavaScript standards compliant](https://ecma-international.org/publications-and-standards/standards/ecma-262/) and web-interoperable. Wherever possible, it uses web platform APIs, so that code can be reused across client and server, as well as across [WinterCG](https://wintercg.org/) JavaScript runtimes. 
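For instance, a Worker that sticks to web platform APIs runs the same way it would in other web-interoperable runtimes. The following is a brief illustrative sketch (not taken from the runtime reference) using only the standard `URL`, `fetch`, `Headers`, and `Response` APIs:

```js
export default {
  async fetch(request) {
    // URL, fetch, Headers, and Response are the same web platform APIs
    // available in browsers and other web-interoperable runtimes.
    const url = new URL(request.url);
    if (url.pathname === "/upstream") {
      // Subrequests use the standard fetch() API.
      return fetch("https://example.com/");
    }
    return new Response(JSON.stringify({ path: url.pathname }), {
      headers: new Headers({ "content-type": "application/json" }),
    });
  },
};
```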
[Workers runtime features](https://developers.cloudflare.com/workers/runtime-apis/) include [compatibility with a subset of Node.js APIs](https://developers.cloudflare.com/workers/runtime-apis/nodejs) and the ability to set a [compatibility date or compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-dates/). * [Bindings (env)](https://developers.cloudflare.com/workers/runtime-apis/bindings/) * [Cache](https://developers.cloudflare.com/workers/runtime-apis/cache/) * [Console](https://developers.cloudflare.com/workers/runtime-apis/console/) * [Context (ctx)](https://developers.cloudflare.com/workers/runtime-apis/context/) * [Encoding](https://developers.cloudflare.com/workers/runtime-apis/encoding/) * [EventSource](https://developers.cloudflare.com/workers/runtime-apis/eventsource/) * [Fetch](https://developers.cloudflare.com/workers/runtime-apis/fetch/) * [Handlers](https://developers.cloudflare.com/workers/runtime-apis/handlers/) * [Headers](https://developers.cloudflare.com/workers/runtime-apis/headers/) * [HTMLRewriter](https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/) * [Node.js compatibility](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) * [Performance and timers](https://developers.cloudflare.com/workers/runtime-apis/performance/) * [Remote-procedure call (RPC)](https://developers.cloudflare.com/workers/runtime-apis/rpc/) * [Request](https://developers.cloudflare.com/workers/runtime-apis/request/) * [Response](https://developers.cloudflare.com/workers/runtime-apis/response/) * [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/) * [TCP sockets](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) * [Web Crypto](https://developers.cloudflare.com/workers/runtime-apis/web-crypto/) * [Web standards](https://developers.cloudflare.com/workers/runtime-apis/web-standards/) * [WebAssembly (Wasm)](https://developers.cloudflare.com/workers/runtime-apis/webassembly/) * [WebSockets](https://developers.cloudflare.com/workers/runtime-apis/websockets/) --- title: Testing · Cloudflare Workers docs description: The Workers platform has a variety of ways to test your applications, depending on your requirements. We recommend using the Vitest integration, which allows you to run tests inside the Workers runtime, and unit test individual functions within your Worker. lastUpdated: 2025-04-10T14:17:11.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/testing/ md: https://developers.cloudflare.com/workers/testing/index.md --- The Workers platform has a variety of ways to test your applications, depending on your requirements. We recommend using the [Vitest integration](https://developers.cloudflare.com/workers/testing/vitest-integration), which allows you to run tests *inside* the Workers runtime, and unit test individual functions within your Worker. [Get started with Vitest](https://developers.cloudflare.com/workers/testing/vitest-integration/write-your-first-test/) ## Testing comparison matrix However, if you don't use Vitest, both [Miniflare's API](https://developers.cloudflare.com/workers/testing/miniflare/writing-tests) and the [`unstable_startWorker()`](https://developers.cloudflare.com/workers/wrangler/api/#unstable_startworker) API provide options for testing your Worker in any testing framework.
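For reference before the comparison below, a minimal test written with the recommended Vitest integration might look like the following sketch (the file name and assertion are illustrative; `SELF` is provided by the integration's `cloudflare:test` module):

```js
// test/index.spec.js — assumes the Workers Vitest integration is configured for this project
import { SELF } from "cloudflare:test";
import { describe, it, expect } from "vitest";

describe("my Worker", () => {
  it("responds to a fetch request", async () => {
    // SELF dispatches the request to the Worker under test, inside the Workers runtime
    const response = await SELF.fetch("https://example.com/");
    expect(response.status).toBe(200);
  });
});
```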
| Feature | [Vitest integration](https://developers.cloudflare.com/workers/testing/vitest-integration) | [`unstable_startWorker()`](https://developers.cloudflare.com/workers/testing/unstable_startworker/) | [Miniflare's API](https://developers.cloudflare.com/workers/testing/miniflare/writing-tests/) | | - | - | - | - | | Unit testing | ✅ | ❌ | ❌ | | Integration testing | ✅ | ✅ | ✅ | | Loading Wrangler configuration files | ✅ | ✅ | ❌ | | Use bindings directly in tests | ✅ | ❌ | ✅ | | Isolated per-test storage | ✅ | ❌ | ❌ | | Outbound request mocking | ✅ | ❌ | ✅ | | Multiple Worker support | ✅ | ✅ | ✅ | | Direct access to Durable Objects | ✅ | ❌ | ❌ | | Run Durable Object alarms immediately | ✅ | ❌ | ❌ | | List Durable Objects | ✅ | ❌ | ❌ | | Testing service Workers | ❌ | ✅ | ✅ | Pages Functions The content described on this page is also applicable to [Pages Functions](https://developers.cloudflare.com/pages/functions/). Pages Functions are Cloudflare Workers and can be thought of synonymously with Workers in this context. --- title: Tutorials · Cloudflare Workers docs description: View tutorials to help you get started with Workers. lastUpdated: 2025-05-06T17:35:57.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/tutorials/ md: https://developers.cloudflare.com/workers/tutorials/index.md --- View tutorials to help you get started with Workers. ## Docs | Name | Last Updated | Type | Difficulty | | - | - | - | - | | [Query D1 using Prisma ORM](https://developers.cloudflare.com/d1/tutorials/d1-and-prisma-orm/) | about 1 month ago | 📝 Tutorial | Beginner | | [Migrate from Netlify to Workers](https://developers.cloudflare.com/workers/static-assets/migration-guides/netlify-to-workers/) | 2 months ago | 📝 Tutorial | Beginner | | [Migrate from Vercel to Workers](https://developers.cloudflare.com/workers/static-assets/migration-guides/vercel-to-workers/) | 3 months ago | 📝 Tutorial | Beginner | | [Setup Fullstack Authentication with Next.js, Auth.js, and Cloudflare D1](https://developers.cloudflare.com/developer-spotlight/tutorials/fullstack-authentication-with-next-js-and-cloudflare-d1/) | 3 months ago | 📝 Tutorial | Intermediate | | [Ingest data from a Worker, and analyze using MotherDuck](https://developers.cloudflare.com/pipelines/tutorials/query-data-with-motherduck/) | 3 months ago | 📝 Tutorial | Intermediate | | [Create a data lake of clickstream data](https://developers.cloudflare.com/pipelines/tutorials/send-data-from-client/) | 3 months ago | 📝 Tutorial | Intermediate | | [Connect to a MySQL database with Cloudflare Workers](https://developers.cloudflare.com/workers/tutorials/mysql/) | 4 months ago | 📝 Tutorial | Beginner | | [Set up and use a Prisma Postgres database](https://developers.cloudflare.com/workers/tutorials/using-prisma-postgres-with-workers/) | 5 months ago | 📝 Tutorial | Beginner | | [Build a Voice Notes App with auto transcriptions using Workers AI](https://developers.cloudflare.com/workers-ai/guides/tutorials/build-a-voice-notes-app-with-auto-transcription/) | 7 months ago | 📝 Tutorial | Intermediate | | [Protect payment forms from malicious bots using Turnstile](https://developers.cloudflare.com/turnstile/tutorials/protecting-your-payment-form-from-attackers-bots-using-turnstile/) | 7 months ago | 📝 Tutorial | Beginner | | [Build a Retrieval Augmented Generation (RAG) AI](https://developers.cloudflare.com/workers-ai/guides/tutorials/build-a-retrieval-augmented-generation-ai/) | 8 months ago | 📝 Tutorial | Beginner | | [Automate 
analytics reporting with Cloudflare Workers and email routing](https://developers.cloudflare.com/workers/tutorials/automated-analytics-reporting/) | 8 months ago | 📝 Tutorial | Beginner | | [Build Live Cursors with Next.js, RPC and Durable Objects](https://developers.cloudflare.com/workers/tutorials/live-cursors-with-nextjs-rpc-do/) | 8 months ago | 📝 Tutorial | Intermediate | | [Build an interview practice tool with Workers AI](https://developers.cloudflare.com/workers-ai/guides/tutorials/build-ai-interview-practice-tool/) | 8 months ago | 📝 Tutorial | Intermediate | | [Using BigQuery with Workers AI](https://developers.cloudflare.com/workers-ai/guides/tutorials/using-bigquery-with-workers-ai/) | 9 months ago | 📝 Tutorial | Beginner | | [How to Build an Image Generator using Workers AI](https://developers.cloudflare.com/workers-ai/guides/tutorials/image-generation-playground/) | 9 months ago | 📝 Tutorial | Beginner | | [Use event notification to summarize PDF files on upload](https://developers.cloudflare.com/r2/tutorials/summarize-pdf/) | 9 months ago | 📝 Tutorial | Intermediate | | [Build a Comments API](https://developers.cloudflare.com/d1/tutorials/build-a-comments-api/) | 10 months ago | 📝 Tutorial | Intermediate | | [Handle rate limits of external APIs](https://developers.cloudflare.com/queues/tutorials/handle-rate-limits/) | 10 months ago | 📝 Tutorial | Beginner | | [Build an API to access D1 using a proxy Worker](https://developers.cloudflare.com/d1/tutorials/build-an-api-to-access-d1/) | 10 months ago | 📝 Tutorial | Intermediate | | [Deploy a Worker](https://developers.cloudflare.com/pulumi/tutorial/hello-world/) | 10 months ago | 📝 Tutorial | Beginner | | [Connect to a PostgreSQL database with Cloudflare Workers](https://developers.cloudflare.com/workers/tutorials/postgres/) | 11 months ago | 📝 Tutorial | Beginner | | [Build a web crawler with Queues and Browser Rendering](https://developers.cloudflare.com/queues/tutorials/web-crawler-with-browser-rendering/) | 11 months ago | 📝 Tutorial | Intermediate | | [Recommend products on e-commerce sites using Workers AI and Stripe](https://developers.cloudflare.com/developer-spotlight/tutorials/creating-a-recommendation-api/) | about 1 year ago | 📝 Tutorial | Beginner | | [Custom access control for files in R2 using D1 and Workers](https://developers.cloudflare.com/developer-spotlight/tutorials/custom-access-control-for-files/) | about 1 year ago | 📝 Tutorial | Beginner | | [Send form submissions using Astro and Resend](https://developers.cloudflare.com/developer-spotlight/tutorials/handle-form-submission-with-astro-resend/) | about 1 year ago | 📝 Tutorial | Beginner | | [Create a fine-tuned OpenAI model with R2](https://developers.cloudflare.com/workers/tutorials/create-finetuned-chatgpt-ai-models-with-r2/) | about 1 year ago | 📝 Tutorial | Intermediate | | [Build a Slackbot](https://developers.cloudflare.com/workers/tutorials/build-a-slackbot/) | about 1 year ago | 📝 Tutorial | Beginner | | [Use Workers KV directly from Rust](https://developers.cloudflare.com/workers/tutorials/workers-kv-from-rust/) | about 1 year ago | 📝 Tutorial | Intermediate | | [Build a todo list Jamstack application](https://developers.cloudflare.com/workers/tutorials/build-a-jamstack-app/) | about 1 year ago | 📝 Tutorial | Beginner | | [Send Emails With Postmark](https://developers.cloudflare.com/workers/tutorials/send-emails-with-postmark/) | about 1 year ago | 📝 Tutorial | Beginner | | [Send Emails With 
Resend](https://developers.cloudflare.com/workers/tutorials/send-emails-with-resend/) | about 1 year ago | 📝 Tutorial | Beginner | | [Create a sitemap from Sanity CMS with Workers](https://developers.cloudflare.com/developer-spotlight/tutorials/create-sitemap-from-sanity-cms/) | about 1 year ago | 📝 Tutorial | Beginner | | [Log and store upload events in R2 with event notifications](https://developers.cloudflare.com/r2/tutorials/upload-logs-event-notifications/) | over 1 year ago | 📝 Tutorial | Beginner | | [Create custom headers for Cloudflare Access-protected origins with Workers](https://developers.cloudflare.com/cloudflare-one/tutorials/access-workers/) | over 1 year ago | 📝 Tutorial | Intermediate | | [Create a serverless, globally distributed time-series API with Timescale](https://developers.cloudflare.com/hyperdrive/tutorials/serverless-timeseries-api-with-timescale/) | over 1 year ago | 📝 Tutorial | Beginner | | [Deploy a Browser Rendering Worker with Durable Objects](https://developers.cloudflare.com/browser-rendering/workers-bindings/browser-rendering-with-do/) | almost 2 years ago | 📝 Tutorial | Beginner | | [GitHub SMS notifications using Twilio](https://developers.cloudflare.com/workers/tutorials/github-sms-notifications-using-twilio/) | almost 2 years ago | 📝 Tutorial | Beginner | | [Deploy a Worker that connects to OpenAI via AI Gateway](https://developers.cloudflare.com/ai-gateway/tutorials/deploy-aig-worker/) | almost 2 years ago | 📝 Tutorial | Beginner | | [Tutorial - React SPA with an API](https://developers.cloudflare.com/workers/vite-plugin/tutorial/) | | 📝 Tutorial | | | [Deploy a real-time chat application](https://developers.cloudflare.com/workers/tutorials/deploy-a-realtime-chat-app/) | almost 2 years ago | 📝 Tutorial | Intermediate | | [Build a QR code generator](https://developers.cloudflare.com/workers/tutorials/build-a-qr-code-generator/) | about 2 years ago | 📝 Tutorial | Beginner | | [Securely access and upload assets with Cloudflare R2](https://developers.cloudflare.com/workers/tutorials/upload-assets-with-r2/) | about 2 years ago | 📝 Tutorial | Beginner | | [OpenAI GPT function calling with JavaScript and Cloudflare Workers](https://developers.cloudflare.com/workers/tutorials/openai-function-calls-workers/) | about 2 years ago | 📝 Tutorial | Beginner | | [Handle form submissions with Airtable](https://developers.cloudflare.com/workers/tutorials/handle-form-submissions-with-airtable/) | about 2 years ago | 📝 Tutorial | Beginner | | [Connect to and query your Turso database using Workers](https://developers.cloudflare.com/workers/tutorials/connect-to-turso-using-workers/) | over 2 years ago | 📝 Tutorial | Beginner | | [Generate YouTube thumbnails with Workers and Cloudflare Image Resizing](https://developers.cloudflare.com/workers/tutorials/generate-youtube-thumbnails-with-workers-and-images/) | over 2 years ago | 📝 Tutorial | Intermediate | ## Videos OpenAI Relay Server on Cloudflare Workers In this video, Craig Dennis walks you through the deployment of OpenAI's relay server to use with their realtime API. Deploy your React App to Cloudflare Workers Learn how to deploy an existing React application to Cloudflare Workers. Cloudflare Workflows | Schedule and Sleep For Your Apps (Part 3 of 3) Cloudflare Workflows allows you to initiate sleep as an explicit step, which can be useful when you want a Workflow to wait, schedule work ahead, or pause until an input or other external state is ready. 
Cloudflare Workflows | Introduction (Part 1 of 3) In this video, we introduce Cloudflare Workflows, the Newest Developer Platform Primitive at Cloudflare. Cloudflare Workflows | Batching and Monitoring Your Durable Execution (Part 2 of 3) Workflows exposes metrics such as execution, error rates, steps, and total duration! Building Front-End Applications | Now Supported by Cloudflare Workers You can now build front-end applications, just like you do on Cloudflare Pages, but with the added benefit of Workers. Build a private AI chatbot using Meta's Llama 3.1 In this video, you will learn how to set up a private AI chat powered by Llama 3.1 for secure, fast interactions, deploy the model on Cloudflare Workers for serverless, scalable performance and use Cloudflare's Workers AI for seamless integration and edge computing benefits. How to Build Event-Driven Applications with Cloudflare Queues In this video, we demonstrate how to build an event-driven application using Cloudflare Queues. Event-driven system lets you decouple services, allowing them to process and scale independently. Welcome to the Cloudflare Developer Channel Welcome to the Cloudflare Developers YouTube channel. We've got tutorials and working demos and everything you need to level up your projects. Whether you're working on your next big thing or just dorking around with some side projects, we've got you covered! So why don't you come hang out, subscribe to our developer channel and together we'll build something awesome. You're gonna love it. AI meets Maps | Using Cloudflare AI, Langchain, Mapbox, Folium and Streamlit Welcome to RouteMe, a smart tool that helps you plan the most efficient route between landmarks in any city. Powered by Cloudflare Workers AI, Langchain and Mapbox. This Streamlit webapp uses LLMs and Mapbox off my scripts API to solve the classic traveling salesman problem, turning your sightseeing into an optimized adventure! Use Vectorize to add additional context to your AI Applications through RAG A RAG based AI Chat app that uses Vectorize to access video game data for employees of Gamertown. Build Rust Powered Apps In this video, we will show you how to build a global database using workers-rs to keep track of every country and city you’ve visited. Stateful Apps with Cloudflare Workers Learn how to access external APIs, cache and retrieve data using Workers KV, and create SQL-driven applications with Cloudflare D1. Learn Cloudflare Workers - Full Course for Beginners Learn how to build your first Cloudflare Workers application and deploy it to Cloudflare's global network. Learn AI Development (models, embeddings, vectors) In this workshop, Kristian Freeman, Cloudflare Developer Advocate, teaches the basics of AI Development - models, embeddings, and vectors (including vector databases). Optimize your AI App & fine-tune models (AI Gateway, R2) In this workshop, Kristian Freeman, Cloudflare Developer Advocate, shows how to optimize your existing AI applications with Cloudflare AI Gateway, and how to finetune OpenAI models using R2. How to use Cloudflare AI models and inference in Python with Jupyter Notebooks Cloudflare Workers AI provides a ton of AI models and inference capabilities. In this video, we will explore how to make use of Cloudflare’s AI model catalog using a Python Jupyter Notebook. 
--- title: Vite plugin · Cloudflare Workers docs description: A full-featured integration between Vite and the Workers runtime lastUpdated: 2025-06-05T13:25:05.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/vite-plugin/ md: https://developers.cloudflare.com/workers/vite-plugin/index.md --- The Cloudflare Vite plugin enables a full-featured integration between [Vite](https://vite.dev/) and the [Workers runtime](https://developers.cloudflare.com/workers/runtime-apis/). Your Worker code runs inside [workerd](https://github.com/cloudflare/workerd), matching the production behavior as closely as possible and providing confidence as you develop and deploy your applications. ## Features * Uses the Vite [Environment API](https://vite.dev/guide/api-environment) to integrate Vite with the Workers runtime * Provides direct access to [Workers runtime APIs](https://developers.cloudflare.com/workers/runtime-apis/) and [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) * Builds your front-end assets for deployment to Cloudflare, enabling you to build static sites, SPAs, and full-stack applications * Official support for [React Router v7](https://reactrouter.com/) with server-side rendering * Leverages Vite's hot module replacement for consistently fast updates * Supports `vite preview` for previewing your build output in the Workers runtime prior to deployment ## Use cases * [React Router v7](https://reactrouter.com/) (support for more full-stack frameworks is coming soon) * Static sites, such as single-page applications, with or without an integrated backend API * Standalone Workers * Multi-Worker applications ## Get started To create a new application from a ready-to-go template, refer to the [React Router](https://developers.cloudflare.com/workers/framework-guides/web-apps/react-router/), [React](https://developers.cloudflare.com/workers/framework-guides/web-apps/react/) or [Vue](https://developers.cloudflare.com/workers/framework-guides/web-apps/vue/) framework guides. To create a standalone Worker from scratch, refer to [Get started](https://developers.cloudflare.com/workers/vite-plugin/get-started/). For a more in-depth look at adapting an existing Vite project and an introduction to key concepts, refer to the [Tutorial](https://developers.cloudflare.com/workers/vite-plugin/tutorial/). --- title: Wrangler · Cloudflare Workers docs description: Wrangler, the Cloudflare Developer Platform command-line interface (CLI), allows you to manage Worker projects. lastUpdated: 2024-09-26T12:49:19.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/wrangler/ md: https://developers.cloudflare.com/workers/wrangler/index.md --- Wrangler, the Cloudflare Developer Platform command-line interface (CLI), allows you to manage Worker projects. * [API ](https://developers.cloudflare.com/workers/wrangler/api/): A set of programmatic APIs that can be integrated with local Cloudflare Workers-related workflows. * [Bundling ](https://developers.cloudflare.com/workers/wrangler/bundling/): Review Wrangler's default bundling. * [Commands ](https://developers.cloudflare.com/workers/wrangler/commands/): Create, develop, and deploy your Cloudflare Workers with Wrangler commands. * [Configuration ](https://developers.cloudflare.com/workers/wrangler/configuration/): Use a configuration file to customize the development and deployment setup for your Worker project and other Developer Platform products. 
* [Custom builds ](https://developers.cloudflare.com/workers/wrangler/custom-builds/): Customize how your code is compiled, before being processed by Wrangler. * [Deprecations ](https://developers.cloudflare.com/workers/wrangler/deprecations/): The differences between Wrangler versions, specifically deprecations and breaking changes. * [Environments ](https://developers.cloudflare.com/workers/wrangler/environments/): Use environments to create different configurations for the same Worker application. * [Install/Update Wrangler ](https://developers.cloudflare.com/workers/wrangler/install-and-update/): Get started by installing Wrangler, and update to newer versions by following this guide. * [Migrations ](https://developers.cloudflare.com/workers/wrangler/migration/): Review migration guides for specific versions of Wrangler. * [System environment variables ](https://developers.cloudflare.com/workers/wrangler/system-environment-variables/): Local environment variables that can change Wrangler's behavior. --- title: Builds · Cloudflare Workers docs description: Use Workers Builds to integrate with Git and automatically build and deploy your Worker when pushing a change lastUpdated: 2025-03-25T11:39:02.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/ci-cd/builds/ md: https://developers.cloudflare.com/workers/ci-cd/builds/index.md --- The Cloudflare [Git integration](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/) lets you connect a new or existing Worker to a GitHub or GitLab repository, enabling automated builds and deployments for your Worker on push. ## Get started ### Connect a new Worker To create a new Worker and connect it to a GitHub or GitLab repository: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Go to **Workers & Pages**. 3. Select **Create**. 4. Under **Import a repository**, select a **Git account**. 5. Select the repository you want to import from the list. You can also use the search bar to narrow the results. 6. Configure your project and select **Save and Deploy**. 7. Preview your Worker at its provided [`workers.dev`](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/) subdomain. ### Connect an existing Worker To connect an existing Worker to a GitHub or GitLab repository: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Go to **Workers & Pages**. 3. Select the Worker you want to connect to a repository. 4. Select **Settings** and then **Builds**. 5. Select **Connect** and follow the prompts to connect the repository to your Worker and configure your [build settings](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/). 6. Push a commit to your Git repository to trigger a build and deploy to your Worker. Warning When connecting a repository to a Workers project, the Worker name in the Cloudflare dashboard must match the `name` in the wrangler.toml file in the specified root directory, or the build will fail. This ensures that the Worker deployed from the repository is consistent with the Worker registered in the Cloudflare dashboard. For details, see [Workers name requirement](https://developers.cloudflare.com/workers/ci-cd/builds/troubleshoot/#workers-name-requirement). ## View build and preview URL You can monitor a build's status and its build logs by navigating to **View build history** at the bottom of the **Deployments** tab of your Worker. 
If the build is successful, you can view the build details by selecting **View build** in the associated new [version](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/) created under Version History. There you will also find the [preview URL](https://developers.cloudflare.com/workers/configuration/previews/) generated by the version under Version ID. Builds, versions, deployments If a build succeeds, it is uploaded as a version. If the build is configured to deploy (for example, with `wrangler deploy` set as the deploy command), the uploaded version will be automatically promoted to the Active Deployment. ## Disconnecting builds To disconnect a Worker from a GitHub or GitLab repository: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Go to **Workers & Pages**. 3. Select the Worker you want to disconnect from a repository. 4. Select **Settings** and then **Builds**. 5. Select **Disconnect**. If you want to switch to a different repository for your Worker, you must first disable builds, then reconnect to select the new repository. To disable automatic deployments while still allowing builds to run automatically and save as [versions](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/) (without promoting them to an active deployment), update your deploy command to: `npx wrangler versions upload`. --- title: External CI/CD · Cloudflare Workers docs description: Integrate Workers development into your existing continuous integration and continuous development workflows, such as GitHub Actions or GitLab Pipelines. lastUpdated: 2025-01-28T14:11:51.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/ci-cd/external-cicd/ md: https://developers.cloudflare.com/workers/ci-cd/external-cicd/index.md --- Deploying Cloudflare Workers with CI/CD ensures reliable, automated deployments for every code change. If you prefer to use your existing CI/CD provider instead of [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/), this section offers guides for popular providers: * [**GitHub Actions**](https://developers.cloudflare.com/workers/ci-cd/external-cicd/github-actions/) * [**GitLab CI/CD**](https://developers.cloudflare.com/workers/ci-cd/external-cicd/gitlab-cicd/) Other CI/CD options including but not limited to Terraform, CircleCI, Jenkins, and more, can also be used to deploy Workers following a similar set up process. --- title: Bindings · Cloudflare Workers docs description: The various bindings that are available to Cloudflare Workers. lastUpdated: 2024-08-13T19:56:56.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/configuration/bindings/ md: https://developers.cloudflare.com/workers/configuration/bindings/index.md --- --- title: Compatibility dates · Cloudflare Workers docs description: Opt into a specific version of the Workers runtime for your Workers project. lastUpdated: 2025-02-12T13:41:31.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/configuration/compatibility-dates/ md: https://developers.cloudflare.com/workers/configuration/compatibility-dates/index.md --- Cloudflare regularly updates the Workers runtime. These updates apply to all Workers globally and should never cause a Worker that is already deployed to stop functioning. Sometimes, though, some changes may be backwards-incompatible. 
In particular, there might be bugs in the runtime API that existing Workers may inadvertently depend upon. Cloudflare implements bug fixes that new Workers can opt into while existing Workers will continue to see the buggy behavior to prevent breaking deployed Workers. The compatibility date and flags are how you, as a developer, opt into these runtime changes. [Compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags) will often have a date in which they are enabled by default, and so, by specifying a `compatibility_date` for your Worker, you can quickly enable all of these various compatibility flags up to, and including, that date. ## Setting compatibility date When you start your project, you should always set `compatibility_date` to the current date. You should occasionally update the `compatibility_date` field. When updating, you should refer to the [compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags) page to find out what has changed, and you should be careful to test your Worker to see if the changes affect you, updating your code as necessary. The new compatibility date takes effect when you next run the [`npx wrangler deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#deploy) command. There is no need to update your `compatibility_date` if you do not want to. The Workers runtime will support old compatibility dates forever. If, for some reason, Cloudflare finds it is necessary to make a change that will break live Workers, Cloudflare will actively contact affected developers. That said, Cloudflare aims to avoid this if at all possible. However, even though you do not need to update the `compatibility_date` field, it is a good practice to do so for two reasons: 1. Sometimes, new features can only be made available to Workers that have a current `compatibility_date`. To access the latest features, you need to stay up-to-date. 2. Generally, other than the [compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags) page, the Workers documentation may only describe the current `compatibility_date`, omitting information about historical behavior. If your Worker uses an old `compatibility_date`, you will need to continuously refer to the compatibility flags page in order to check if any of the APIs you are using have changed. #### Via Wrangler The compatibility date can be set in a Worker's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). * wrangler.jsonc ```jsonc { "compatibility_date": "2022-04-05" } ``` * wrangler.toml ```toml # Opt into backwards-incompatible changes through April 5, 2022. compatibility_date = "2022-04-05" ``` #### Via the Cloudflare Dashboard When a Worker is created through the Cloudflare Dashboard, the compatibility date is automatically set to the current date. The compatibility date can be updated in the Workers settings on the [Cloudflare dashboard](https://dash.cloudflare.com/). #### Via the Cloudflare API The compatibility date can be set when uploading a Worker using the [Workers Script API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/) or [Workers Versions API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) in the request body's `metadata` field. 
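For illustration only, an upload request that sets the compatibility date in the `metadata` field might look like the sketch below (the account ID, API token, script name, and module file name are placeholders; the Workers Script API reference remains the authoritative description of the request format):

```sh
# Upload a module Worker and set its compatibility date via the `metadata` field (illustrative sketch).
curl -X PUT \
  "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/workers/scripts/<SCRIPT_NAME>" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -F 'metadata={"main_module":"index.js","compatibility_date":"2025-01-01"};type=application/json' \
  -F 'index.js=@index.js;type=application/javascript+module'
```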
If a compatibility date is not specified on upload via the API, it defaults to the oldest compatibility date, before any flags took effect (2021-11-02). When creating new Workers, it is highly recommended to set the compatibility date to the current date when uploading via the API. --- title: Compatibility flags · Cloudflare Workers docs description: Opt into specific features of the Workers runtime for your Workers project. lastUpdated: 2025-02-12T13:41:31.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/configuration/compatibility-flags/ md: https://developers.cloudflare.com/workers/configuration/compatibility-flags/index.md --- Compatibility flags enable specific features. They can be useful if you want to help the Workers team test upcoming changes that are not yet enabled by default, or if you need to hold back a change that your code depends on but still want to apply other compatibility changes. Compatibility flags will often have a date in which they are enabled by default, and so, by specifying a [`compatibility_date`](https://developers.cloudflare.com/workers/configuration/compatibility-dates) for your Worker, you can quickly enable all of these various compatibility flags up to, and including, that date. ## Setting compatibility flags You may provide a list of `compatibility_flags`, which enable or disable specific changes. #### Via Wrangler Compatibility flags can be set in a Worker's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This example enables the specific flag `formdata_parser_supports_files`, which is described [below](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#formdata-parsing-supports-file). As of the specified date, `2021-09-14`, this particular flag was not yet enabled by default, but, by specifying it in `compatibility_flags`, we can enable it anyway. `compatibility_flags` can also be used to disable changes that became the default in the past. * wrangler.jsonc ```jsonc { "compatibility_date": "2021-09-14", "compatibility_flags": [ "formdata_parser_supports_files" ] } ``` * wrangler.toml ```toml # Opt into backwards-incompatible changes through September 14, 2021. compatibility_date = "2021-09-14" # Also opt into an upcoming fix to the FormData API. compatibility_flags = [ "formdata_parser_supports_files" ] ``` #### Via the Cloudflare Dashboard Compatibility flags can be updated in the Workers settings on the [Cloudflare dashboard](https://dash.cloudflare.com/). #### Via the Cloudflare API Compatibility flags can be set when uploading a Worker using the [Workers Script API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/) or [Workers Versions API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) in the request body's `metadata` field. ## Node.js compatibility flag Note [The `nodejs_compat` flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. The v2 flag improves runtime Node.js compatibility by bundling additional polyfills and globals into your Worker. However, this improvement increases bundle size. If your compatibility date is 2024-09-22 or before and you want to enable v2, add the `nodejs_compat_v2` flag in addition to the `nodejs_compat` flag.
If your compatibility date is 2024-09-23 or later, but you want to disable v2 to avoid increasing your bundle size, add the `no_nodejs_compat_v2` flag in addition to the `nodejs_compat` flag.

To enable both built-in runtime APIs and polyfills for your Worker or Pages project, add the [`nodejs_compat` compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag) to your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/), and set your compatibility date to September 23rd, 2024 or later. This will enable [Node.js compatibility](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) for your Workers project.

* wrangler.jsonc

  ```jsonc
  {
    "compatibility_flags": [
      "nodejs_compat"
    ],
    "compatibility_date": "2024-09-23"
  }
  ```

* wrangler.toml

  ```toml
  compatibility_flags = [ "nodejs_compat" ]
  compatibility_date = "2024-09-23"
  ```

A [growing subset](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) of Node.js APIs are available directly as [Runtime APIs](https://developers.cloudflare.com/workers/runtime-apis/nodejs), with no need to add polyfills to your own code. To enable these APIs in your Worker, only the `nodejs_compat` compatibility flag is required:

* wrangler.jsonc

  ```jsonc
  {
    "compatibility_flags": [
      "nodejs_compat"
    ]
  }
  ```

* wrangler.toml

  ```toml
  compatibility_flags = [ "nodejs_compat" ]
  ```

As additional Node.js APIs are added, they will be made available under the `nodejs_compat` compatibility flag. Unlike most other compatibility flags, we do not expect `nodejs_compat` to become enabled by default at a future date.

The Node.js `AsyncLocalStorage` API is a particularly useful feature for Workers. To enable only the `AsyncLocalStorage` API, use the `nodejs_als` compatibility flag.

* wrangler.jsonc

  ```jsonc
  {
    "compatibility_flags": [
      "nodejs_als"
    ]
  }
  ```

* wrangler.toml

  ```toml
  compatibility_flags = [ "nodejs_als" ]
  ```

## Flags history

Newest flags are listed first.

### Enable `Request.signal` for incoming requests

| | |
| - | - |
| **Flag to enable** | `enable_request_signal` |
| **Flag to disable** | `disable_request_signal` |

When you use the `enable_request_signal` compatibility flag, you can attach an event listener to [`Request`](https://developers.cloudflare.com/workers/runtime-apis/request/) objects, using the [`signal` property](https://developer.mozilla.org/en-US/docs/Web/API/Request/signal). This allows you to perform tasks when the request to your Worker is canceled by the client.

### Enable `FinalizationRegistry` and `WeakRef`

| | |
| - | - |
| **Default as of** | 2025-05-05 |
| **Flag to enable** | `enable_weak_ref` |
| **Flag to disable** | `disable_weak_ref` |

Enables the use of [`FinalizationRegistry`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/FinalizationRegistry) and [`WeakRef`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakRef) built-ins.
* `FinalizationRegistry` allows you to register a cleanup callback that runs after an object has been garbage-collected. * `WeakRef` creates a weak reference to an object, allowing it to be garbage-collected if no other strong references exist. Behaviour `FinalizationRegistry` cleanup callbacks may execute at any point during your request lifecycle, even after your invoked handler has completed (similar to `ctx.waitUntil()`). These callbacks do not have an associated async context. You cannot perform any I/O within them, including emitting events to a tail Worker. Warning These APIs are fundamentally non-deterministic. The timing and execution of garbage collection are unpredictable, and you **should not rely on them for essential program logic**. Additionally, cleanup callbacks registered with `FinalizationRegistry` may **never be executed**, including but not limited to cases where garbage collection is not triggered, or your Worker gets evicted. ### Navigation requests prefer asset serving | | | | - | - | | **Default as of** | 2025-04-01 | | **Flag to enable** | `assets_navigation_prefers_asset_serving` | | **Flag to disable** | `assets_navigation_has_no_effect` | For Workers with [static assets](https://developers.cloudflare.com/workers/static-assets/) and this compatibility flag enabled, navigation requests (requests which have a `Sec-Fetch-Mode: navigate` header) will prefer to be served by our asset-serving logic, even when an exact asset match cannot be found. This is particularly useful for applications which operate in either [Single Page Application (SPA) mode](https://developers.cloudflare.com/workers/static-assets/routing/single-page-application/) or have [custom 404 pages](https://developers.cloudflare.com/workers/static-assets/routing/static-site-generation/#custom-404-pages), as this now means the fallback pages of `200 /index.html` and `404 /404.html` will be served ahead of invoking a Worker script and will therefore avoid incurring a charge. Without this flag, the runtime will continue to apply the old behavior of invoking a Worker script (if present) for any requests which do not exactly match a static asset. When `assets.run_worker_first = true` is set, this compatibility flag has no effect. The `assets.run_worker_first = true` setting ensures the Worker script executes before any asset-serving logic. ### Enable auto-populating `process.env` | | | | - | - | | **Default as of** | 2025-04-01 | | **Flag to enable** | `nodejs_compat_populate_process_env` | | **Flag to disable** | `nodejs_compat_do_not_populate_process_env` | When you enable the `nodejs_compat_populate_process_env` compatibility flag and the [`nodejs_compat`](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) flag is also enabled, `process.env` will be populated with values from any bindings with text or JSON values. This means that if you have added [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/), [secrets](https://developers.cloudflare.com/workers/configuration/secrets/), or [version metadata](https://developers.cloudflare.com/workers/runtime-apis/bindings/version-metadata/) bindings, these values can be accessed on `process.env`. ```js const apiClient = ApiClient.new({ apiKey: process.env.API_KEY }); const LOG_LEVEL = process.env.LOG_LEVEL || "info"; ``` This makes accessing these values easier and conforms to common Node.js patterns, which can reduce toil and help with compatibility for existing Node.js libraries. 
If users do not wish for these values to be accessible via `process.env`, they can use the `nodejs_compat_do_not_populate_process_env` flag. In this case, `process.env` will still be available, but will not have values automatically added. ### Queue consumers don't wait for `ctx.waitUntil()` to resolve | | | | - | - | | **Flag to enable** | `queue_consumer_no_wait_for_wait_until` | By default, [Queues](https://developers.cloudflare.com/queues/) Consumer Workers acknowledge messages only after promises passed to [`ctx.waitUntil()`](https://developers.cloudflare.com/workers/runtime-apis/context) have resolved. This behavior can cause queue consumers which utilize `ctx.waitUntil()` to process messages slowly. The default behavior is documented in the [Queues Consumer Configuration Guide](https://developers.cloudflare.com/queues/configuration/javascript-apis#consumer). This Consumer Worker is an example of a Worker which utilizes `ctx.waitUntil()`. Under the default behavior, this consumer Worker will only acknowledge a batch of messages after the sleep function has resolved. ```js export default { async fetch(request, env, ctx) { // omitted }, async queue(batch, env, ctx) { console.log(`received batch of ${batch.messages.length} messages to queue ${batch.queue}`); for (let i = 0; i < batch.messages.length; ++i) { console.log(`message #${i}: ${JSON.stringify(batch.messages[i])}`); } ctx.waitUntil(sleep(30 * 1000)); } }; function sleep(ms) { return new Promise(resolve => setTimeout(resolve, ms)); } ``` If the `queue_consumer_no_wait_for_wait_until` flag is enabled, Queues consumers will no longer wait for promises passed to `ctx.waitUntil()` to resolve before acknowledging messages. This can improve the performance of queue consumers which utilize `ctx.waitUntil()`. With the flag enabled, in the above example, the consumer Worker will acknowledge the batch without waiting for the sleep function to resolve. Using this flag will not affect the behavior of `ctx.waitUntil()`. `ctx.waitUntil()` will continue to extend the lifetime of your consumer Worker to continue to work even after the batch of messages has been acknowledged. ### Apply TransformStream backpressure fix | | | | - | - | | **Default as of** | 2024-12-16 | | **Flag to enable** | `fixup-transform-stream-backpressure` | | **Flag to disable** | `original-transform-stream-backpressure` | The original implementation of `TransformStream` included a bug that would cause backpressure signaling to fail after the first write to the transform. Unfortunately, the fix can cause existing code written to address the bug to fail. Therefore, the `fixup-transform-stream-backpressure` compat flag is provided to enable the fix. The fix is enabled by default with compatibility dates of 2024-12-16 or later. To restore the original backpressure logic, disable the fix using the `original-transform-stream-backpressure` flag. ### Disable top-level await in require(...) | | | | - | - | | **Default as of** | 2024-12-02 | | **Flag to enable** | `disable_top_level_await_in_require` | | **Flag to disable** | `enable_top_level_await_in_require` | Workers implements the ability to use the Node.js style `require(...)` method to import modules in the Worker bundle. Historically, this mechanism allowed required modules to use top-level await. This, however, is not Node.js compatible. The `disable_top_level_await_in_require` compat flag will cause `require()` to fail if the module uses a top-level await. 
This flag is enabled by default with a compatibility date of 2024-12-02 or later. To restore the original behavior allowing top-level await, use the `enable_top_level_await_in_require` compatibility flag.

### Enable `cache: no-store` HTTP standard API

| | |
| - | - |
| **Default as of** | 2024-11-11 |
| **Flag to enable** | `cache_option_enabled` |
| **Flag to disable** | `cache_option_disabled` |

When you enable the `cache_option_enabled` compatibility flag, you can specify a value for the `cache` property of the Request interface. When this compatibility flag is not enabled, or `cache_option_disabled` is set, the Workers runtime will throw an `Error` saying `The 'cache' field on 'RequestInitializerDict' is not implemented.`

When this flag is enabled, you can instruct Cloudflare not to cache the response from a subrequest you make from your Worker using the [`fetch()` API](https://developers.cloudflare.com/workers/runtime-apis/fetch/).

The only cache option enabled with `cache_option_enabled` is `'no-store'`. Specifying any other value will cause the Workers runtime to throw a `TypeError` with the message `Unsupported cache mode: `. When `no-store` is specified:

* The headers `Pragma: no-cache` and `Cache-Control: no-cache` are set on all requests.
* Subrequests to origins not hosted by Cloudflare bypass Cloudflare's cache.

Examples using `cache: 'no-store'`:

```js
const response = await fetch("https://example.com", { cache: "no-store" });
```

The cache value can also be set on a `Request` object.

```js
const request = new Request("https://example.com", { cache: "no-store" });
const response = await fetch(request);
```

### Global fetch() strictly public

| | |
| - | - |
| **Flag to enable** | `global_fetch_strictly_public` |
| **Flag to disable** | `global_fetch_private_origin` |

When the `global_fetch_strictly_public` compatibility flag is enabled, the global [`fetch()` function](https://developers.cloudflare.com/workers/runtime-apis/fetch/) will strictly route requests as if they were made on the public Internet. This means requests to a Worker's own zone will loop back to the "front door" of Cloudflare and will be treated like a request from the Internet, possibly even looping back to the same Worker again.

When the `global_fetch_strictly_public` flag is not enabled, such requests are routed to the zone's origin server, ignoring any Workers mapped to the URL and also bypassing Cloudflare security settings.

### Upper-case HTTP methods

| | |
| - | - |
| **Default as of** | 2024-10-14 |
| **Flag to enable** | `upper_case_all_http_methods` |
| **Flag to disable** | `no_upper_case_all_http_methods` |

HTTP methods are expected to be upper-cased. Per the fetch spec, if the method is specified as `get`, `post`, `put`, `delete`, `head`, or `options`, implementations are expected to uppercase the method. All other method names would generally be expected to throw as unrecognized (for example, `patch` would be an error while `PATCH` is accepted). This is a bit restrictive, even if it is in the spec. This flag modifies the behavior to uppercase all methods prior to parsing so that the method is always recognized if it is a known method.

To restore the standard behavior, use the `no_upper_case_all_http_methods` compatibility flag.
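For example, with `upper_case_all_http_methods` in effect, a lower-cased method name is normalized rather than rejected (a minimal sketch; the URL is a placeholder):

```js
// With upper_case_all_http_methods enabled, "patch" is upper-cased to "PATCH"
// before the method is validated, so this request is recognized instead of
// throwing as an unknown method.
const response = await fetch("https://example.com/resource", {
  method: "patch",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ updated: true }),
});
console.log(response.status);
```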
### Automatically set the Symbol.toStringTag for Workers API objects | | | | - | - | | **Default as of** | 2024-09-26 | | **Flag to enable** | `set_tostring_tag` | | **Flag to disable** | `do_not_set_tostring_tag` | A change was made to set the Symbol.toStringTag on all Workers API objects in order to fix several spec compliance bugs. Unfortunately, this change was more breaking than anticipated. The `do_not_set_tostring_tag` compat flag restores the original behavior with compatibility dates of 2024-09-26 or earlier. ### Allow specifying a custom port when making a subrequest with the fetch() API | | | | - | - | | **Default as of** | 2024-09-02 | | **Flag to enable** | `allow_custom_ports` | | **Flag to disable** | `ignore_custom_ports` | When this flag is enabled, and you specify a port when making a subrequest with the [`fetch()` API](https://developers.cloudflare.com/workers/runtime-apis/fetch/), the port number you specify will be used. When you make a subrequest to a website that uses Cloudflare ("Orange Clouded") — only [ports supported by Cloudflare's reverse proxy](https://developers.cloudflare.com/fundamentals/reference/network-ports/#network-ports-compatible-with-cloudflares-proxy) can be specified. If you attempt to specify an unsupported port, it will be ignored. When you make a subrequest to a website that does not use Cloudflare ("Grey Clouded") - any port can be specified. For example: ```js const response = await fetch("https://example.com:8000"); ``` With allow\_custom\_ports the above example would fetch `https://example.com:8000` rather than `https://example.com:443`. Note that creating a WebSocket client with a call to `new WebSocket(url)` will also obey this flag. ### Properly extract blob MIME type from `content-type` headers | | | | - | - | | **Default as of** | 2024-06-03 | | **Flag to enable** | `blob_standard_mime_type` | | **Flag to disable** | `blob_legacy_mime_type` | When calling `response.blob.type()`, the MIME type will now be properly extracted from `content-type` headers, per the [WHATWG spec](https://fetch.spec.whatwg.org/#concept-header-extract-mime-type). ### Use standard URL parsing in `fetch()` | | | | - | - | | **Default as of** | 2024-06-03 | | **Flag to enable** | `fetch_standard_url` | | **Flag to disable** | `fetch_legacy_url` | The `fetch_standard_url` flag makes `fetch()` use [WHATWG URL Standard](https://url.spec.whatwg.org/) parsing rules. The original implementation would throw `TypeError: Fetch API cannot load` errors with some URLs where standard parsing does not, for instance with the inclusion of whitespace before the URL. URL errors will now be thrown immediately upon calling `new Request()` with an improper URL. Previously, URL errors were thrown only once `fetch()` was called. ### Returning empty Uint8Array on final BYOB read | | | | - | - | | **Default as of** | 2024-05-13 | | **Flag to enable** | `internal_stream_byob_return_view` | | **Flag to disable** | `internal_stream_byob_return_undefined` | In the original implementation of BYOB ("Bring your own buffer") `ReadableStreams`, the `read()` method would return `undefined` when the stream was closed and there was no more data to read. This behavior was inconsistent with the standard `ReadableStream` behavior, which returns an empty `Uint8Array` when the stream is closed. When the `internal_stream_byob_return_view` flag is used, the BYOB `read()` will implement standard behavior. 
```js
const resp = await fetch('https://example.org');
const reader = resp.body.getReader({ mode: 'byob' });
const result = await reader.read(new Uint8Array(10));
if (result.done) {
  // The result gives us an empty Uint8Array...
  console.log(result.value.byteLength); // 0
  // However, it is backed by the same underlying memory that was passed
  // into the read call.
  console.log(result.value.buffer.byteLength); // 10
}
```

### Brotli Content-Encoding support

| | |
| - | - |
| **Default as of** | 2024-04-29 |
| **Flag to enable** | `brotli_content_encoding` |
| **Flag to disable** | `no_brotli_content_encoding` |

When the `brotli_content_encoding` compatibility flag is enabled, Workers supports the `br` content encoding and can request and respond with data encoded using the [Brotli](https://developer.mozilla.org/en-US/docs/Glossary/Brotli_compression) compression algorithm. This reduces the amount of data that needs to be fetched and can be used to pass through the original compressed data to the client. See the Fetch API [documentation](https://developers.cloudflare.com/workers/runtime-apis/fetch/#how-the-accept-encoding-header-is-handled) for details.

### Durable Object stubs and Service Bindings support RPC

| | |
| - | - |
| **Default as of** | 2024-04-03 |
| **Flag to enable** | `rpc` |
| **Flag to disable** | `no_rpc` |

With this flag on, [Durable Object](https://developers.cloudflare.com/durable-objects/) stubs and [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) support [RPC](https://developers.cloudflare.com/workers/runtime-apis/rpc/). This means that these objects now appear as if they define every possible method name. Calling any method name sends an RPC to the remote Durable Object or Worker service.

For most applications, this change will have no impact unless you use it. However, it is possible some existing code will be impacted if it explicitly checks for the existence of method names that were previously not defined on these types. For example, we have seen code in the wild which iterates over [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) and tries to auto-detect their types based on what methods they implement. Such code will now see service bindings as implementing every method, so may misinterpret service bindings as being some other type. In the cases we have seen, the impact was benign (nothing actually broke), but out of caution we are guarding this change behind a flag.

### Handling custom thenables

| | |
| - | - |
| **Default as of** | 2024-04-01 |
| **Flag to enable** | `unwrap_custom_thenables` |
| **Flag to disable** | `no_unwrap_custom_thenables` |

With the `unwrap_custom_thenables` flag set, various Workers APIs that accept promises will also correctly handle custom thenables (objects with a `then` method) that are not native promises, but are intended to be treated as such. For example, the `waitUntil` method of the `ExecutionContext` object will correctly handle custom thenables, allowing them to be used in place of native promises.

```js
async fetch(req, env, ctx) {
  ctx.waitUntil({
    then(res) {
      // Resolve the thenable after 1 second
      setTimeout(res, 1000);
    }
  });
  // ...
}
```

### Fetchers no longer have get/put/delete helper methods

| | |
| - | - |
| **Default as of** | 2024-03-26 |
| **Flag to enable** | `fetcher_no_get_put_delete` |
| **Flag to disable** | `fetcher_has_get_put_delete` |

[Durable Object](https://developers.cloudflare.com/durable-objects/) stubs and [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) both implement a `fetch()` method which behaves similarly to the global `fetch()` method, but requests are instead sent to the destination represented by the object, rather than being routed based on the URL.

Historically, API objects that had such a `fetch()` method also had methods `get()`, `put()`, and `delete()`. These methods were thin wrappers around `fetch()` which would perform the corresponding HTTP method and automatically handle writing/reading the request/response bodies as needed. These methods were a very early idea from many years ago, but were never actually documented, and therefore rarely (if ever) used.

Enabling the `fetcher_no_get_put_delete` flag, or setting a compatibility date on or after `2024-03-26`, disables these methods for your Worker. This change paves a future path for you to be able to define your own custom methods using these names. Without this change, you would be unable to define your own `get`, `put`, and `delete` methods, since they would conflict with these built-in helper methods.

### Queues send messages in `JSON` format

| | |
| - | - |
| **Default as of** | 2024-03-18 |
| **Flag to enable** | `queues_json_messages` |
| **Flag to disable** | `no_queues_json_messages` |

With the `queues_json_messages` flag set, Queue bindings will serialize values passed to `send()` or `sendBatch()` into JSON format by default (when no specific `contentType` is provided).

### Suppress global `importScripts()`

| | |
| - | - |
| **Default as of** | 2024-03-04 |
| **Flag to enable** | `no_global_importscripts` |
| **Flag to disable** | `global_importscripts` |

Suppresses the global `importScripts()` function. This method was included in the Workers global scope but was marked explicitly as non-implemented. However, the presence of the function could cause issues with some libraries. This compatibility flag removes the function from the global scope.

### Node.js AsyncLocalStorage

| | |
| - | - |
| **Flag to enable** | `nodejs_als` |
| **Flag to disable** | `no_nodejs_als` |

Enables the availability of the Node.js [AsyncLocalStorage](https://nodejs.org/api/async_hooks.html#async_hooks_class_asynclocalstorage) API in Workers.

### Python Workers

| | |
| - | - |
| **Default as of** | 2024-01-29 |
| **Flag to enable** | `python_workers` |

This flag enables first-class support for Python. [Python Workers](https://developers.cloudflare.com/workers/languages/python/) implement the majority of Python's [standard library](https://developers.cloudflare.com/workers/languages/python/stdlib), support all [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings), [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables), and [secrets](https://developers.cloudflare.com/workers/configuration/secrets), and integrate with JavaScript objects and functions via a [foreign function interface](https://developers.cloudflare.com/workers/languages/python/ffi).
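As an illustration of the `queues_json_messages` behavior described above, a producer can pass structured values directly to a queue binding. A minimal sketch, in which `MY_QUEUE` is an assumed producer binding name:

```js
// With queues_json_messages enabled (the default for compatibility dates of
// 2024-03-18 or later), object bodies are serialized as JSON unless an
// explicit contentType is provided. MY_QUEUE is a hypothetical queue binding.
export default {
  async fetch(request, env, ctx) {
    await env.MY_QUEUE.send({ url: request.url, receivedAt: Date.now() });
    await env.MY_QUEUE.sendBatch([
      { body: { kind: "event" } }, // serialized as JSON
      { body: "plain text payload", contentType: "text" }, // explicit contentType
    ]);
    return new Response("queued");
  },
};
```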
### WebCrypto preserve publicExponent field | | | | - | - | | **Default as of** | 2023-12-01 | | **Flag to enable** | `crypto_preserve_public_exponent` | | **Flag to disable** | `no_crypto_preserve_public_exponent` | In the WebCrypto API, the `publicExponent` field of the algorithm of RSA keys would previously be an `ArrayBuffer`. Using this flag, `publicExponent` is a `Uint8Array` as mandated by the specification. ### `Vectorize` query with metadata optionally returned | | | | - | - | | **Default as of** | 2023-11-08 | | **Flag to enable** | `vectorize_query_metadata_optional` | | **Flag to disable** | `vectorize_query_original` | A set value on `vectorize_query_metadata_optional` indicates that the Vectorize query operation should accept newer arguments with `returnValues` and `returnMetadata` specified discretely over the older argument `returnVectors`. This also changes the return format. If the vector values have been indicated for return, the return value is now a flattened vector object with `score` attached where it previously contained a nested vector object. ### WebSocket Compression | | | | - | - | | **Default as of** | 2023-08-15 | | **Flag to enable** | `web_socket_compression` | | **Flag to disable** | `no_web_socket_compression` | The Workers runtime did not support WebSocket compression when the initial WebSocket implementation was released. Historically, the runtime has stripped or ignored the `Sec-WebSocket-Extensions` header -- but is now capable of fully complying with the WebSocket Compression RFC. Since many clients are likely sending `Sec-WebSocket-Extensions: permessage-deflate` to their Workers today (`new WebSocket(url)` automatically sets this in browsers), we have decided to maintain prior behavior if this flag is absent. If the flag is present, the Workers runtime is capable of using WebSocket Compression on both inbound and outbound WebSocket connections. Like browsers, calling `new WebSocket(url)` in a Worker will automatically set the `Sec-WebSocket-Extensions: permessage-deflate` header. If you are using the non-standard `fetch()` API to obtain a WebSocket, you can include the `Sec-WebSocket-Extensions` header with value `permessage-deflate` and include any of the compression parameters defined in [RFC-7692](https://datatracker.ietf.org/doc/html/rfc7692#section-7). ### Strict crypto error checking | | | | - | - | | **Default as of** | 2023-08-01 | | **Flag to enable** | `strict_crypto_checks` | | **Flag to disable** | `no_strict_crypto_checks` | Perform additional error checking in the Web Crypto API to conform with the specification and reject possibly unsafe key parameters: * For RSA key generation, key sizes are required to be multiples of 128 bits as boringssl may otherwise truncate the key. * The size of imported RSA keys must be at least 256 bits and at most 16384 bits, as with newly generated keys. * The public exponent for imported RSA keys is restricted to the commonly used values `[3, 17, 37, 65537]`. * In conformance with the specification, an error will be thrown when trying to import a public ECDH key with non-empty usages. ### Strict compression error checking | | | | - | - | | **Default as of** | 2023-08-01 | | **Flag to enable** | `strict_compression_checks` | | **Flag to disable** | `no_strict_compression_checks` | Perform additional error checking in the Compression Streams API and throw an error if a `DecompressionStream` has trailing data or gets closed before the full compressed data has been provided. 
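For example, under the `strict_crypto_checks` flag described above, RSA key generation should use a modulus length that is a multiple of 128 bits and one of the commonly accepted public exponents. A minimal sketch:

```js
// Generate an RSA signing key with parameters that satisfy strict_crypto_checks:
// a 2048-bit modulus (a multiple of 128 bits) and the common exponent 65537.
const keyPair = await crypto.subtle.generateKey(
  {
    name: "RSASSA-PKCS1-v1_5",
    modulusLength: 2048,
    publicExponent: new Uint8Array([1, 0, 1]), // 65537
    hash: "SHA-256",
  },
  false, // not extractable
  ["sign", "verify"],
);
console.log(keyPair.publicKey.algorithm);
```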
### Override cache rules cache settings in `request.cf` object for Fetch API | | | | - | - | | **Default as of** | 2025-04-02 | | **Flag to enable** | `request_cf_overrides_cache_rules` | | **Flag to disable** | `no_request_cf_overrides_cache_rules` | This flag changes the behavior of cache when requesting assets via the [Fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch). Cache settings specified in the `request.cf` object, such as `cacheEverything` and `cacheTtl`, are now given precedence over any [Cache Rules](https://developers.cloudflare.com/cache/how-to/cache-rules/) set. ### Bot Management data | | | | - | - | | **Default as of** | 2023-08-01 | | **Flag to enable** | `no_cf_botmanagement_default` | | **Flag to disable** | `cf_botmanagement_default` | This flag streamlines Workers requests by reducing unnecessary properties in the `request.cf` object. With the flag enabled - either by default after 2023-08-01 or by setting the `no_cf_botmanagement_default` flag - Cloudflare will only include the [Bot Management object](https://developers.cloudflare.com/bots/reference/bot-management-variables/) in a Worker's `request.cf` if the account has access to Bot Management. With the flag disabled, Cloudflare will include a default Bot Management object, regardless of whether the account is entitled to Bot Management. ### URLSearchParams delete() and has() value argument | | | | - | - | | **Default as of** | 2023-07-01 | | **Flag to enable** | `urlsearchparams_delete_has_value_arg` | | **Flag to disable** | `no_urlsearchparams_delete_has_value_arg` | The WHATWG introduced additional optional arguments to the `URLSearchParams` object [`delete()`](https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams/delete) and [`has()`](https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams/has) methods that allow for more precise control over the removal of query parameters. Because the arguments are optional and change the behavior of the methods when present there is a risk of breaking existing code. If your compatibility date is set to July 1, 2023 or after, this compatibility flag will be enabled by default. For an example of how this change could break existing code, consider code that uses the `Array` `forEach()` method to iterate through a number of parameters to delete: ```js const usp = new URLSearchParams(); // ... ['abc', 'xyz'].forEach(usp.delete.bind(usp)); ``` The `forEach()` automatically passes multiple parameters to the function that is passed in. Prior to the addition of the new standard parameters, these extra arguments would have been ignored. Now, however, the additional arguments have meaning and change the behavior of the function. With this flag, the example above would need to be changed to: ```js const usp = new URLSearchParams(); // ... ['abc', 'xyz'].forEach((key) => usp.delete(key)); ``` ### Use a spec compliant URL implementation in redirects | | | | - | - | | **Default as of** | 2023-03-14 | | **Flag to enable** | `response_redirect_url_standard` | | **Flag to disable** | `response_redirect_url_original` | Change the URL implementation used in `Response.redirect()` to be spec-compliant (WHATWG URL Standard). 
### Dynamic Dispatch Exception Propagation | | | | - | - | | **Default as of** | 2023-03-01 | | **Flag to enable** | `dynamic_dispatch_tunnel_exceptions` | | **Flag to disable** | `dynamic_dispatch_treat_exceptions_as_500` | Previously, when using Workers for Platforms' [dynamic dispatch API](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/get-started/dynamic-dispatch/) to send an HTTP request to a user Worker, if the user Worker threw an exception, the dynamic dispatch Worker would receive an HTTP `500` error with no body. When the `dynamic_dispatch_tunnel_exceptions` compatibility flag is enabled, the exception will instead propagate back to the dynamic dispatch Worker. The `fetch()` call in the dynamic dispatch Worker will throw the same exception. This matches the similar behavior of [service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) and [Durable Objects](https://developers.cloudflare.com/durable-objects/). ### `Headers` supports `getSetCookie()` | | | | - | - | | **Default as of** | 2023-03-01 | | **Flag to enable** | `http_headers_getsetcookie` | | **Flag to disable** | `no_http_headers_getsetcookie` | Adds the [`getSetCookie()`](https://developer.mozilla.org/en-US/docs/Web/API/Headers/getSetCookie) method to the [Headers](https://developer.mozilla.org/en-US/docs/Web/API/Headers) API in Workers. ```js const response = await fetch("https://example.com"); let cookieValues = response.headers.getSetCookie(); ``` ### Node.js compatibility | | | | - | - | | **Flag to enable** | `nodejs_compat` | | **Flag to disable** | `no_nodejs_compat` | Enables the full set of [available Node.js APIs](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) in the Workers Runtime. ### Streams Constructors | | | | - | - | | **Default as of** | 2022-11-30 | | **Flag to enable** | `streams_enable_constructors` | | **Flag to disable** | `streams_disable_constructors` | Adds the work-in-progress `new ReadableStream()` and `new WritableStream()` constructors backed by JavaScript underlying sources and sinks. ### Compliant TransformStream constructor | | | | - | - | | **Default as of** | 2022-11-30 | | **Flag to enable** | `transformstream_enable_standard_constructor` | | **Flag to disable** | `transformstream_disable_standard_constructor` | Previously, the `new TransformStream()` constructor was not compliant with the Streams API standard. Use the `transformstream_enable_standard_constructor` to opt-in to the backwards-incompatible change to make the constructor compliant. Must be used in combination with the `streams_enable_constructors` flag. ### CommonJS modules do not export a module namespace | | | | - | - | | **Default as of** | 2022-10-31 | | **Flag to enable** | `export_commonjs_default` | | **Flag to disable** | `export_commonjs_namespace` | CommonJS modules were previously exporting a module namespace (an object like `{ default: module.exports }`) rather than exporting only the `module.exports`. When this flag is enabled, the export is fixed. ### Do not throw from async functions | | | | - | - | | **Default as of** | 2022-10-31 | | **Flag to enable** | `capture_async_api_throws` | | **Flag to disable** | `do_not_capture_async_api_throws` | The `capture_async_api_throws` compatibility flag will ensure that, in conformity with the standards API, async functions will only ever reject if they throw an error. 
The inverse `do_not_capture_async_api_throws` flag means that async functions which contain an error may throw that error synchronously rather than rejecting. ### New URL parser implementation | | | | - | - | | **Default as of** | 2022-10-31 | | **Flag to enable** | `url_standard` | | **Flag to disable** | `url_original` | The original implementation of the [`URL`](https://developer.mozilla.org/en-US/docs/Web/API/URL) API in Workers was not fully compliant with the [WHATWG URL Standard](https://url.spec.whatwg.org/), differing in several ways, including: * The original implementation collapsed sequences of multiple slashes into a single slash: `new URL("https://example.com/a//b").toString() === "https://example.com/a/b"` * The original implementation would throw `"TypeError: Invalid URL string."` if it encountered invalid percent-encoded escape sequences, like `https://example.com/a%%b`. * The original implementation would percent-encode or percent-decode certain content differently: `new URL("https://example.com/a%40b?c d%20e?f").toString() === "https://example.com/a@b?c+d+e%3Ff"` * The original implementation lacked more recently implemented `URL` features, like [`URL.canParse()`](https://developer.mozilla.org/en-US/docs/Web/API/URL/canParse_static). Set the compatibility date of your Worker to a date after `2022-10-31` or enable the `url_standard` compatibility flag to opt-in the fully spec compliant `URL` API implementation. Refer to the [`response_redirect_url_standard` compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#use-a-spec-compliant-url-implementation-in-redirects) , which affects the URL implementation used in `Response.redirect()`. ### `R2` bucket `list` respects the `include` option | | | | - | - | | **Default as of** | 2022-08-04 | | **Flag to enable** | `r2_list_honor_include` | With the `r2_list_honor_include` flag set, the `include` argument to R2 `list` options is honored. With an older compatibility date and without this flag, the `include` argument behaves implicitly as `include: ["httpMetadata", "customMetadata"]`. ### Do not substitute `null` on `TypeError` | | | | - | - | | **Default as of** | 2022-06-01 | | **Flag to enable** | `dont_substitute_null_on_type_error` | | **Flag to disable** | `substitute_null_on_type_error` | There was a bug in the runtime that meant that when being passed into built-in APIs, invalid values were sometimes mistakenly coalesced with `null`. Instead, a `TypeError` should have been thrown. The `dont_substitute_null_on_type_error` fixes this behavior so that an error is correctly thrown in these circumstances. ### Minimal subrequests | | | | - | - | | **Default as of** | 2022-04-05 | | **Flag to enable** | `minimal_subrequests` | | **Flag to disable** | `no_minimal_subrequests` | With the `minimal_subrequests` flag set, `fetch()` subrequests sent to endpoints on the Worker's own zone (also called same-zone subrequests) have a reduced set of features applied to them. In general, these features should not have been initially applied to same-zone subrequests, and very few user-facing behavior changes are anticipated. Specifically, Workers might observe the following behavior changes with the new flag: * Response bodies will not be opportunistically gzipped before being transmitted to the Workers runtime. If a Worker reads the response body, it will read it in plaintext, as has always been the case, so disabling this prevents unnecessary decompression. 
Meanwhile, if the Worker passes the response through to the client, Cloudflare's HTTP proxy will opportunistically gzip the response body on that side of the Workers runtime instead. The behavior change observable by a Worker script should be that some `Content-Encoding: gzip` headers will no longer appear. * Automatic Platform Optimization may previously have been applied on both the Worker's initiating request and its subrequests in some circumstances. It will now only apply to the initiating request. * Link prefetching will now only apply to the Worker's response, not responses to the Worker's subrequests. ### Global `navigator` | | | | - | - | | **Default as of** | 2022-03-21 | | **Flag to enable** | `global_navigator` | | **Flag to disable** | `no_global_navigator` | With the `global_navigator` flag set, a new global `navigator` property is available from within Workers. Currently, it exposes only a single `navigator.userAgent` property whose value is set to `'Cloudflare-Workers'`. This property can be used to reliably determine whether code is running within the Workers environment. ### Do not use the Custom Origin Trust Store for external subrequests | | | | - | - | | **Default as of** | 2022-03-08 | | **Flag to enable** | `no_cots_on_external_fetch` | | **Flag to disable** | `cots_on_external_fetch` | The `no_cots_on_external_fetch` flag disables the use of the [Custom Origin Trust Store](https://developers.cloudflare.com/ssl/origin-configuration/custom-origin-trust-store/) when making external (grey-clouded) subrequests from a Cloudflare Worker. ### Setters/getters on API object prototypes | | | | - | - | | **Default as of** | 2022-01-31 | | **Flag to enable** | `workers_api_getters_setters_on_prototype` | | **Flag to disable** | `workers_api_getters_setters_on_instance` | Originally, properties on Workers API objects were defined as instance properties as opposed to prototype properties. This broke subclassing at the JavaScript layer, preventing a subclass from correctly overriding the superclass getters/setters. This flag controls the breaking change made to set those getters/setters on the prototype template instead. This changes applies to: * `AbortSignal` * `AbortController` * `Blob` * `Body` * `DigestStream` * `Event` * `File` * `Request` * `ReadableStream` * `ReadableStreamDefaultReader` * `ReadableStreamBYOBReader` * `Response` * `TextDecoder` * `TextEncoder` * `TransformStream` * `URL` * `WebSocket` * `WritableStream` * `WritableStreamDefaultWriter` ### Durable Object `stub.fetch()` requires a full URL | | | | - | - | | **Default as of** | 2021-11-10 | | **Flag to enable** | `durable_object_fetch_requires_full_url` | | **Flag to disable** | `durable_object_fetch_allows_relative_url` | Originally, when making a request to a Durable Object by calling `stub.fetch(url)`, a relative URL was accepted as an input. The URL would be interpreted relative to the placeholder URL `http://fake-host`, and the resulting absolute URL was delivered to the destination object's `fetch()` handler. This behavior was incorrect — full URLs were meant to be required. This flag makes full URLs required. ### `fetch()` improperly interprets unknown protocols as HTTP | | | | - | - | | **Default as of** | 2021-11-10 | | **Flag to enable** | `fetch_refuses_unknown_protocols` | | **Flag to disable** | `fetch_treats_unknown_protocols_as_http` | Originally, if the `fetch()` function was passed a URL specifying any protocol other than `http:` or `https:`, it would silently treat it as if it were `http:`. 
For example, `fetch()` would appear to accept `ftp:` URLs, but it was actually making HTTP requests instead.

Note that Cloudflare Workers supports a non-standard extension to `fetch()` to make it support WebSockets. However, when making an HTTP request that is intended to initiate a WebSocket handshake, you should still use `http:` or `https:` as the protocol, not `ws:` nor `wss:`.

The `ws:` and `wss:` URL schemes are intended to be used together with the `new WebSocket()` constructor, which exclusively supports WebSocket. The extension to `fetch()` is designed to support HTTP and WebSocket in the same request (the response may or may not choose to initiate a WebSocket), and so all requests are considered to be HTTP.

### Streams BYOB reader detaches buffer

| | |
| - | - |
| **Default as of** | 2021-11-10 |
| **Flag to enable** | `streams_byob_reader_detaches_buffer` |
| **Flag to disable** | `streams_byob_reader_does_not_detach_buffer` |

Originally, the Workers runtime did not detach the `ArrayBuffer`s from user-provided TypedArrays when using the [BYOB reader's `read()` method](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestreambyobreader/#methods), as required by the Streams spec, meaning it was possible to inadvertently reuse the same buffer for multiple `read()` calls. This change makes Workers conform to the spec.

User code should never try to reuse an `ArrayBuffer` that has been passed into a [BYOB reader's `read()` method](https://developers.cloudflare.com/workers/runtime-apis/streams/readablestreambyobreader/#methods). Instead, user code can reuse the `ArrayBuffer` backing the result of the `read()` promise, as in the example below.

```js
// Consume and discard `readable` using a single 4KiB buffer.
let reader = readable.getReader({ mode: "byob" });
let arrayBufferView = new Uint8Array(4096);
while (true) {
  let result = await reader.read(arrayBufferView);
  if (result.done) break;

  // Optionally do something with `result` here.

  // Re-use the same memory for the next `read()` by creating
  // a new Uint8Array backed by the result's ArrayBuffer.
  arrayBufferView = new Uint8Array(result.value.buffer);
}
```

The more recently added extension method `readAtLeast()` will always detach the `ArrayBuffer` and is unaffected by this feature flag setting.

### `FormData` parsing supports `File`

| | |
| - | - |
| **Default as of** | 2021-11-03 |
| **Flag to enable** | `formdata_parser_supports_files` |
| **Flag to disable** | `formdata_parser_converts_files_to_strings` |

[The `FormData` API](https://developer.mozilla.org/en-US/docs/Web/API/FormData) is used to parse data (especially HTTP request bodies) in `multipart/form-data` format.

Originally, the Workers runtime's implementation of the `FormData` API incorrectly converted uploaded files to strings. Therefore, `formData.get("filename")` would return a string containing the file contents instead of a `File` object. This change fixes the problem, causing files to be represented using `File` as specified in the standard.

### `HTMLRewriter` handling of `<esi:include>`

| | |
| - | - |
| **Flag to enable** | `html_rewriter_treats_esi_include_as_void_tag` |

The HTML5 standard defines a fixed set of elements as void elements, meaning they do not use an end tag: `<area>`, `<base>`, `<br>`, `<col>`, `<embed>`, `<hr>`, `<img>`, `<input>`, `<link>`, `<meta>`, `<param>`, `<source>`, `<track>`, and `<wbr>`.

HTML5 does not recognize XML self-closing tag syntax. For example, `<script src="script.js" />` does not specify a script element with no body; the `</script>` ending tag is still required. The `/>` syntax simply is not recognized by HTML5 at all and it is treated the same as `>`. However, many developers still like to use this syntax, as a holdover from XHTML, a standard which failed to gain traction in the early 2000's.

`<esi:include>` and `<esi:comment>` are two tags that are not part of the HTML5 standard, but are instead used as part of [Edge Side Includes](https://en.wikipedia.org/wiki/Edge_Side_Includes), a technology for server-side HTML modification. These tags are not expected to contain any body and are commonly written with XML self-closing syntax.

`HTMLRewriter` was designed to parse standard HTML5, not ESI. However, it would be useful to be able to implement some parts of ESI using `HTMLRewriter`. To that end, this compatibility flag causes `HTMLRewriter` to treat `<esi:include>` and `<esi:comment>` as void tags, so that they can be parsed and handled properly.

## Experimental flags

These flags can be enabled via `compatibility_flags`, but are not yet scheduled to become default on any particular date.

### Queue consumers don't wait for `ctx.waitUntil()` to resolve

| | |
| - | - |
| **Flag to enable** | `queue_consumer_no_wait_for_wait_until` |

By default, [Queues](https://developers.cloudflare.com/queues/) Consumer Workers acknowledge messages only after promises passed to [`ctx.waitUntil()`](https://developers.cloudflare.com/workers/runtime-apis/context) have resolved. This behavior can cause queue consumers which utilize `ctx.waitUntil()` to process messages slowly. The default behavior is documented in the [Queues Consumer Configuration Guide](https://developers.cloudflare.com/queues/configuration/javascript-apis#consumer).

This Consumer Worker is an example of a Worker which utilizes `ctx.waitUntil()`. Under the default behavior, this consumer Worker will only acknowledge a batch of messages after the sleep function has resolved.

```js
export default {
  async fetch(request, env, ctx) {
    // omitted
  },
  async queue(batch, env, ctx) {
    console.log(`received batch of ${batch.messages.length} messages to queue ${batch.queue}`);
    for (let i = 0; i < batch.messages.length; ++i) {
      console.log(`message #${i}: ${JSON.stringify(batch.messages[i])}`);
    }
    ctx.waitUntil(sleep(30 * 1000));
  }
};

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```

If the `queue_consumer_no_wait_for_wait_until` flag is enabled, Queues consumers will no longer wait for promises passed to `ctx.waitUntil()` to resolve before acknowledging messages. This can improve the performance of queue consumers which utilize `ctx.waitUntil()`. With the flag enabled, in the above example, the consumer Worker will acknowledge the batch without waiting for the sleep function to resolve.

Using this flag will not affect the behavior of `ctx.waitUntil()`. `ctx.waitUntil()` will continue to extend the lifetime of your consumer Worker, allowing it to continue working even after the batch of messages has been acknowledged.

### `HTMLRewriter` handling of `<esi:include>`

| | |
| - | - |
| **Flag to enable** | `html_rewriter_treats_esi_include_as_void_tag` |

The HTML5 standard defines a fixed set of elements as void elements, meaning they do not use an end tag: `<area>`, `<base>`, `<br>`, `<col>`, `<embed>`, `<hr>`, `<img>`, `<input>`, `<link>`, `<meta>`, `<param>`, `<source>`, `<track>`, and `<wbr>`.

HTML5 does not recognize XML self-closing tag syntax. For example, `<script src="script.js" />` does not specify a script element with no body; the `</script>` ending tag is still required. The `/>` syntax simply is not recognized by HTML5 at all and it is treated the same as `>`. However, many developers still like to use this syntax, as a holdover from XHTML, a standard which failed to gain traction in the early 2000's.

`<esi:include>` and `<esi:comment>` are two tags that are not part of the HTML5 standard, but are instead used as part of [Edge Side Includes](https://en.wikipedia.org/wiki/Edge_Side_Includes), a technology for server-side HTML modification. These tags are not expected to contain any body and are commonly written with XML self-closing syntax.

`HTMLRewriter` was designed to parse standard HTML5, not ESI. However, it would be useful to be able to implement some parts of ESI using `HTMLRewriter`. To that end, this compatibility flag causes `HTMLRewriter` to treat `<esi:include>` and `<esi:comment>` as void tags, so that they can be parsed and handled properly.
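As an illustrative sketch (not from the official docs), a Worker with this flag enabled could intercept `<esi:include>` tags with `HTMLRewriter`. Matching every element and checking `tagName` avoids assuming any particular selector syntax for namespaced tag names; the URL and replacement content are placeholders:

```js
// Minimal sketch: with html_rewriter_treats_esi_include_as_void_tag enabled,
// <esi:include> is parsed as a void element, so a handler can observe it and
// replace it with other content.
export default {
  async fetch(request) {
    const upstream = await fetch("https://example.com/page.html");
    return new HTMLRewriter()
      .on("*", {
        async element(element) {
          if (element.tagName === "esi:include") {
            const src = element.getAttribute("src");
            // A real ESI implementation would fetch and inline the referenced
            // fragment here; this sketch just leaves a marker comment.
            element.replace(`<!-- esi fragment: ${src} -->`, { html: true });
          }
        },
      })
      .transform(upstream);
  },
};
```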
--- title: Cron Triggers · Cloudflare Workers docs description: Enable your Worker to be executed on a schedule. lastUpdated: 2025-06-20T15:54:31.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/configuration/cron-triggers/ md: https://developers.cloudflare.com/workers/configuration/cron-triggers/index.md --- ## Background Cron Triggers allow users to map a cron expression to a Worker using a [`scheduled()` handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/) that enables Workers to be executed on a schedule. Cron Triggers are ideal for running periodic jobs, such as for maintenance or calling third-party APIs to collect up-to-date data. Workers scheduled by Cron Triggers will run on underutilized machines to make the best use of Cloudflare's capacity and route traffic efficiently. Note Cron Triggers can also be combined with [Workflows](https://developers.cloudflare.com/workflows/) to trigger multi-step, long-running tasks. You can [bind to a Workflow](https://developers.cloudflare.com/workflows/build/workers-api/) from directly from your Cron Trigger to execute a Workflow on a schedule. Cron Triggers execute on UTC time. ## Add a Cron Trigger ### 1. Define a scheduled event listener To respond to a Cron Trigger, you must add a [`"scheduled"` handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/) to your Worker. * JavaScript ```js export default { async scheduled(controller, env, ctx) { console.log("cron processed"); }, }; ``` * TypeScript ```ts interface Env {} export default { async scheduled( controller: ScheduledController, env: Env, ctx: ExecutionContext, ) { console.log("cron processed"); }, }; ``` * Python ```python from workers import handler @handler async def on_scheduled(controller, env, ctx): print("cron processed") ``` Refer to the following additional examples to write your code: * [Setting Cron Triggers](https://developers.cloudflare.com/workers/examples/cron-trigger/) * [Multiple Cron Triggers](https://developers.cloudflare.com/workers/examples/multiple-cron-triggers/) ### 2. Update configuration Cron Trigger changes take time to propagate. Changes such as adding a new Cron Trigger, updating an old Cron Trigger, or deleting a Cron Trigger may take several minutes (up to 15 minutes) to propagate to the Cloudflare global network. After you have updated your Worker code to include a `"scheduled"` event, you must update your Worker project configuration. #### Via the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) If a Worker is managed with Wrangler, Cron Triggers should be exclusively managed through the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). Refer to the example below for a Cron Triggers configuration: * wrangler.jsonc ```jsonc { "triggers": { "crons": [ "*/3 * * * *", "0 15 1 * *", "59 23 LW * *" ] } } ``` * wrangler.toml ```toml [triggers] # Schedule cron triggers: # - At every 3rd minute # - At 15:00 (UTC) on first day of the month # - At 23:59 (UTC) on the last weekday of the month crons = [ "*/3 * * * *", "0 15 1 * *", "59 23 LW * *" ] ``` You also can set a different Cron Trigger for each [environment](https://developers.cloudflare.com/workers/wrangler/environments/) in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). You need to put the `triggers` array under your chosen environment. 
For example: * wrangler.jsonc ```jsonc { "env": { "dev": { "triggers": { "crons": [ "0 * * * *" ] } } } } ``` * wrangler.toml ```toml [env.dev.triggers] crons = ["0 * * * *"] ``` #### Via the dashboard To add Cron Triggers in the Cloudflare dashboard: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In Account Home, select **Workers & Pages**. 3. In **Overview**, select your Worker > **Settings** > **Triggers** > **Cron Triggers**. ## Supported cron expressions Cloudflare supports cron expressions with five fields, along with most [Quartz scheduler](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html#introduction)-like cron syntax extensions: | Field | Values | Characters | | - | - | - | | Minute | 0-59 | \* , - / | | Hours | 0-23 | \* , - / | | Days of Month | 1-31 | \* , - / L W | | Months | 1-12, case-insensitive 3-letter abbreviations ("JAN", "aug", etc.) | \* , - / | | Weekdays | 1-7, case-insensitive 3-letter abbreviations ("MON", "fri", etc.) | \* , - / L # | Note Days of the week go from 1 = Sunday to 7 = Saturday, which is different on some other cron systems (where 0 = Sunday and 6 = Saturday). To avoid ambiguity you may prefer to use the three-letter abbreviations (e.g. `SUN` rather than 1). ### Examples Some common time intervals that may be useful for setting up your Cron Trigger: * `* * * * *` * At every minute * `*/30 * * * *` * At every 30th minute * `45 * * * *` * On the 45th minute of every hour * `0 17 * * sun` or `0 17 * * 1` * 17:00 (UTC) on Sunday * `10 7 * * mon-fri` or `10 7 * * 2-6` * 07:10 (UTC) on weekdays * `0 15 1 * *` * 15:00 (UTC) on first day of the month * `0 18 * * 6L` or `0 18 * * friL` * 18:00 (UTC) on the last Friday of the month * `59 23 LW * *` * 23:59 (UTC) on the last weekday of the month ## Test Cron Triggers locally Test Cron Triggers using Wrangler with [`wrangler dev`](https://developers.cloudflare.com/workers/wrangler/commands/#dev). This will expose a `/cdn-cgi/handler/scheduled` route which can be used to test using a HTTP request. ```sh curl "http://localhost:8787/cdn-cgi/handler/scheduled" ``` To simulate different cron patterns, a `cron` query parameter can be passed in. ```sh curl "http://localhost:8787/cdn-cgi/handler/scheduled?cron=*+*+*+*+*" ``` Optionally, you can also pass a `time` query parameter to override `controller.scheduledTime` in your scheduled event listener. ```sh curl "http://localhost:8787/cdn-cgi/handler/scheduled?cron=*+*+*+*+*&time=1745856238" ``` ## View past events To view the execution history of Cron Triggers, view **Cron Events**: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In Account Home, go to **Workers & Pages**. 3. In **Overview**, select your **Worker**. 4. Select **Settings**. 5. Under **Trigger Events**, select **View events**. Cron Events stores the 100 most recent invocations of the Cron scheduled event. [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs) also records invocation logs for the Cron Trigger with a longer retention period and a filter & query interface. If you are interested in an API to access Cron Events, use Cloudflare's [GraphQL Analytics API](https://developers.cloudflare.com/analytics/graphql-api). Note It can take up to 30 minutes before events are displayed in **Past Cron Events** when creating a new Worker or changing a Worker's name. 
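Each Cron Trigger that fires passes its own expression to the `scheduled()` handler as `controller.cron`, so a single Worker with several schedules can branch on which one invoked it. A minimal sketch (the expressions and URLs are placeholders):

```js
// Branch on the cron expression that fired. controller.cron contains the
// original expression string, and controller.scheduledTime the scheduled
// time in milliseconds since the UNIX epoch.
export default {
  async scheduled(controller, env, ctx) {
    switch (controller.cron) {
      case "*/30 * * * *":
        ctx.waitUntil(fetch("https://example.com/poll"));
        break;
      case "0 15 1 * *":
        ctx.waitUntil(fetch("https://example.com/monthly-report"));
        break;
      default:
        console.log(`unhandled cron: ${controller.cron}`);
    }
  },
};
```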
For more information on Cron Trigger events and analytics, refer to [Metrics and Analytics](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/).

## Remove a Cron Trigger

### Via the dashboard

To delete a Cron Trigger on a deployed Worker via the dashboard:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Go to **Workers & Pages**, and select your Worker.
3. Go to **Triggers** > select the three dot icon next to the Cron Trigger you want to remove > **Delete**.

### Via the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/)

If a Worker is managed with Wrangler, Cron Triggers should be exclusively managed through the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/).

When deploying a Worker with Wrangler, any previous Cron Triggers are replaced with those specified in the `triggers` array.

* If the `crons` property is an empty array, then all the Cron Triggers are removed.
* If the `triggers` or `crons` property is `undefined`, then the currently deployed Cron Triggers are left in place.

- wrangler.jsonc

  ```jsonc
  {
    "triggers": {
      "crons": []
    }
  }
  ```

- wrangler.toml

  ```toml
  [triggers]
  # Remove all cron triggers:
  crons = [ ]
  ```

## Limits

Refer to [Limits](https://developers.cloudflare.com/workers/platform/limits/) to track the maximum number of Cron Triggers per Worker.

## Green Compute

With Green Compute enabled, your Cron Triggers will only run on Cloudflare points of presence that are located in data centers that are powered purely by renewable energy. Organizations may claim that they are powered by 100 percent renewable energy if they have procured sufficient renewable energy to account for their overall energy use.

Renewable energy can be purchased in a number of ways, including through on-site generation (wind turbines, solar panels), directly from renewable energy producers through contractual agreements called Power Purchase Agreements (PPA), or in the form of Renewable Energy Credits (REC, IRECs, GoOs) from an energy credit market.

Green Compute can be configured at the account level:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In Account Home, select **Workers & Pages**.
3. In the **Account details** section, find **Compute Setting**.
4. Select **Change**.
5. Select **Green Compute**.
6. Select **Confirm**.

## Related resources

* [Triggers](https://developers.cloudflare.com/workers/wrangler/configuration/#triggers) - Review Wrangler configuration file syntax for Cron Triggers.
* Learn how to access Cron Triggers in [ES modules syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) for an optimized experience.

---
title: Environment variables · Cloudflare Workers docs
description: You can add environment variables, which are a type of binding, to attach text strings or JSON values to your Worker.
lastUpdated: 2025-05-06T09:04:36.000Z
chatbotDeprioritize: false
source_url:
  html: https://developers.cloudflare.com/workers/configuration/environment-variables/
  md: https://developers.cloudflare.com/workers/configuration/environment-variables/index.md
---

## Background

You can add environment variables, which are a type of binding, to attach text strings or JSON values to your Worker.
Environment variables are available on the [`env` parameter](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/#parameters) passed to your Worker's [`fetch` event handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/). Text strings and JSON values are not encrypted and are useful for storing application configuration. ## Add environment variables via Wrangler To add environment variables using Wrangler, define text and JSON via the `[vars]` configuration in your Wrangler file. In the following example, `API_HOST` and `API_ACCOUNT_ID` are text values and `SERVICE_X_DATA` is a JSON value. * wrangler.jsonc ```jsonc { "name": "my-worker-dev", "vars": { "API_HOST": "example.com", "API_ACCOUNT_ID": "example_user", "SERVICE_X_DATA": { "URL": "service-x-api.dev.example", "MY_ID": 123 } } } ``` * wrangler.toml ```toml name = "my-worker-dev" [vars] API_HOST = "example.com" API_ACCOUNT_ID = "example_user" SERVICE_X_DATA = { URL = "service-x-api.dev.example", MY_ID = 123 } ``` Refer to the following example to learn how to access the `API_HOST` environment variable in your Worker code: * JavaScript ```js export default { async fetch(request, env, ctx) { return new Response(`API host: ${env.API_HOST}`); }, }; ``` * TypeScript ```ts export interface Env { API_HOST: string; } export default { async fetch(request, env, ctx): Promise<Response> { return new Response(`API host: ${env.API_HOST}`); }, } satisfies ExportedHandler<Env>; ``` ### Configuring different environments in Wrangler [Environments in Wrangler](https://developers.cloudflare.com/workers/wrangler/environments) let you specify different configurations for the same Worker, including different values for `vars` in each environment. As `vars` is a [non-inheritable key](https://developers.cloudflare.com/workers/wrangler/configuration/#non-inheritable-keys), its values are not inherited by environments and must be specified for each environment. The example below sets up two environments, `staging` and `production`, with different values for `API_HOST`. * wrangler.jsonc ```jsonc { "name": "my-worker-dev", "vars": { "API_HOST": "api.example.com" }, "env": { "staging": { "vars": { "API_HOST": "staging.example.com" } }, "production": { "vars": { "API_HOST": "production.example.com" } } } } ``` * wrangler.toml ```toml name = "my-worker-dev" # top level environment [vars] API_HOST = "api.example.com" [env.staging.vars] API_HOST = "staging.example.com" [env.production.vars] API_HOST = "production.example.com" ``` To run Wrangler commands in specific environments, you can pass in the `--env` or `-e` flag. For example, you can develop the Worker in an environment called `staging` by running `npx wrangler dev --env staging`, and deploy it with `npx wrangler deploy --env staging`. Learn about [environments in Wrangler](https://developers.cloudflare.com/workers/wrangler/environments). ## Add environment variables via the dashboard To add environment variables via the dashboard: 1. Log in to [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account. 2. Select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Settings**. 5. Under **Variables and Secrets**, select **Add**. 6. Select a **Type**, input a **Variable name**, and input its **Value**. This variable will be made available to your Worker. 7. (Optional) To add multiple environment variables, select **Add variable**. 8. Select **Deploy** to implement your changes.
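Whether variables are defined in the Wrangler file or in the dashboard, they are exposed on `env` at runtime. As a minimal sketch, the JSON value `SERVICE_X_DATA` from the earlier example arrives as an already-parsed object, so nested fields can be read directly:

```js
export default {
  async fetch(request, env, ctx) {
    // SERVICE_X_DATA is the JSON value defined under [vars] above; it is provided as a parsed object.
    const { URL: serviceUrl, MY_ID: serviceId } = env.SERVICE_X_DATA;
    return new Response(`Service X: ${serviceUrl} (id ${serviceId})`);
  },
};
```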
Plaintext strings and secrets Select the **Secret** type if your environment variable is a [secret](https://developers.cloudflare.com/workers/configuration/secrets/). Alternatively, consider [Cloudflare Secrets Store](https://developers.cloudflare.com/secrets-store/) for account-level secrets. ## Compare secrets and environment variables Use secrets for sensitive information Do not use plaintext environment variables to store sensitive information. Use [secrets](https://developers.cloudflare.com/workers/configuration/secrets/) or [Secrets Store bindings](https://developers.cloudflare.com/secrets-store/integrations/workers/) instead. [Secrets](https://developers.cloudflare.com/workers/configuration/secrets/) are [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/). The difference is secret values are not visible within Wrangler or the Cloudflare dashboard after you define them. This means that sensitive data, including passwords or API tokens, should always be encrypted to prevent data leaks. To your Worker, there is no difference between an environment variable and a secret. The secret's value is passed through as defined. When developing your Worker or Pages Function, create a `.dev.vars` file in the root of your project to define secrets that will be used when running `wrangler dev` or `wrangler pages dev`, as opposed to using environment variables in the [Wrangler configuration file](https://developers.cloudflare.com/workers/configuration/environment-variables/#compare-secrets-and-environment-variables). This works both in local and remote development modes. The `.dev.vars` file should be formatted like a `dotenv` file, such as `KEY="VALUE"`: ```bash SECRET_KEY="value" API_TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9" ``` To set different secrets for each environment, create files named `.dev.vars.<environment-name>`. When you use `wrangler <command> --env <environment-name>`, the corresponding environment-specific file will be loaded instead of the `.dev.vars` file. Like other environment variables, secrets are [non-inheritable](https://developers.cloudflare.com/workers/wrangler/configuration/#non-inheritable-keys) and must be defined per environment. ## Related resources * Migrating environment variables from [Service Worker format to ES modules syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/#environment-variables). --- title: Integrations · Cloudflare Workers docs description: Integrate with third-party services and products. lastUpdated: 2024-08-13T19:56:56.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/configuration/integrations/ md: https://developers.cloudflare.com/workers/configuration/integrations/index.md --- One of the key features of Cloudflare Workers is the ability to integrate with other services and products. In this document, we will explain the types of integrations available with Cloudflare Workers and provide step-by-step instructions for using them. ## Types of integrations Cloudflare Workers offers several types of integrations, including: * [Databases](https://developers.cloudflare.com/workers/databases/): Cloudflare Workers can be integrated with a variety of databases, including SQL and NoSQL databases. This allows you to store and retrieve data from your databases directly from your Cloudflare Workers code.
* [APIs](https://developers.cloudflare.com/workers/configuration/integrations/apis/): Cloudflare Workers can be used to integrate with external APIs, allowing you to access and use the data and functionality exposed by those APIs in your own code. * [Third-party services](https://developers.cloudflare.com/workers/configuration/integrations/external-services/): Cloudflare Workers can be used to integrate with a wide range of third-party services, such as payment gateways, authentication providers, and more. This makes it possible to use these services in your Cloudflare Workers code. ## How to use integrations To use any of the available integrations: * Determine which integration you want to use and make sure you have the necessary accounts and credentials for it. * In your Cloudflare Workers code, import the necessary libraries or modules for the integration. * Use the provided APIs and functions to connect to the integration and access its data or functionality. * Store necessary secrets and keys as secrets via [`wrangler secret put <KEY>`](https://developers.cloudflare.com/workers/wrangler/commands/#secret). ## Tips and best practices To help you get the most out of using integrations with Cloudflare Workers: * Secure your integrations and protect sensitive data. Ensure you use secure authentication and authorization where possible, and ensure the validity of libraries you import. * Use [caching](https://developers.cloudflare.com/workers/reference/how-the-cache-works) to improve performance and reduce the load on an external service. * Split your Workers into a service-oriented architecture using [Service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) to make your application more modular, easier to maintain, and more performant. * Use [Custom Domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/) when communicating with external APIs and services, which create a DNS record on your behalf and treat your Worker as an application instead of a proxy. --- title: Multipart upload metadata · Cloudflare Workers docs description: If you're using the Workers Script Upload API or Version Upload API directly, multipart/form-data uploads require you to specify a metadata part. This metadata defines the Worker's configuration in JSON format, analogous to the wrangler.toml file. lastUpdated: 2025-07-03T13:00:13.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/configuration/multipart-upload-metadata/ md: https://developers.cloudflare.com/workers/configuration/multipart-upload-metadata/index.md --- If you're using the [Workers Script Upload API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/) or [Version Upload API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) directly, `multipart/form-data` uploads require you to specify a `metadata` part. This metadata defines the Worker's configuration in JSON format, analogous to the [wrangler.toml file](https://developers.cloudflare.com/workers/wrangler/configuration/). ## Sample `metadata` ```json { "main_module": "main.js", "bindings": [ { "type": "plain_text", "name": "MESSAGE", "text": "Hello, world!" } ], "compatibility_date": "2021-09-14" } ``` Note See examples of metadata being used with the Workers Script Upload API [here](https://developers.cloudflare.com/workers/platform/infrastructure-as-code#cloudflare-rest-api).
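To make the shape of the upload concrete, here is a sketch of sending the sample `metadata` above to the Workers Script Upload API with `fetch` and `FormData`. The `<ACCOUNT_ID>`, `<SCRIPT_NAME>`, and `<API_TOKEN>` values are placeholders, and the module content is illustrative only:

```js
// Sketch: PUT a Worker script with a metadata part and one ES module part.
const metadata = {
  main_module: "main.js",
  bindings: [{ type: "plain_text", name: "MESSAGE", text: "Hello, world!" }],
  compatibility_date: "2021-09-14",
};

const form = new FormData();
// The metadata part is JSON.
form.append("metadata", new Blob([JSON.stringify(metadata)], { type: "application/json" }));
// The part name matches main_module; the content type marks it as an ES module.
form.append(
  "main.js",
  new Blob(['export default { fetch() { return new Response("Hello, world!"); } };'], {
    type: "application/javascript+module",
  }),
  "main.js",
);

const response = await fetch(
  "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/workers/scripts/<SCRIPT_NAME>",
  { method: "PUT", headers: { Authorization: "Bearer <API_TOKEN>" }, body: form },
);
console.log(await response.json());
```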
## Attributes The following attributes are configurable at the top-level. Note At a minimum, the `main_module` key is required to upload a Worker. * `main_module` string required * The part name that contains the module entry point of the Worker that will be executed. For example, `main.js`. * `assets` object optional * [Asset](https://developers.cloudflare.com/workers/static-assets/) configuration for a Worker. * `config` object optional * [html\_handling](https://developers.cloudflare.com/workers/static-assets/routing/advanced/html-handling/) determines the redirects and rewrites of requests for HTML content. * [not\_found\_handling](https://developers.cloudflare.com/workers/static-assets/#routing-behavior) determines the response when a request does not match a static asset. * `jwt` field provides a token authorizing assets to be attached to a Worker. * `keep_assets` boolean optional * Specifies whether assets should be retained from a previously uploaded Worker version; used in lieu of providing a completion token. * `bindings` array\[object] optional * [Bindings](#bindings) to expose in the Worker. * `placement` object optional * [Smart placement](https://developers.cloudflare.com/workers/configuration/smart-placement/) object for the Worker. * `mode` field only supports `smart` for automatic placement. * `compatibility_date` string optional * [Compatibility Date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/#setting-compatibility-date) indicating targeted support in the Workers runtime. Backwards incompatible fixes to the runtime following this date will not affect this Worker. Highly recommended to set a `compatibility_date`, otherwise if on upload via the API, it defaults to the oldest compatibility date before any flags took effect (2021-11-02). * `compatibility_flags` array\[string] optional * [Compatibility Flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#setting-compatibility-flags) that enable or disable certain features in the Workers runtime. Used to enable upcoming features or opt in or out of specific changes not included in a `compatibility_date`. ## Additional attributes: [Workers Script Upload API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/) For [immediately deployed uploads](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#upload-a-new-version-and-deploy-it-immediately), the following **additional** attributes are configurable at the top-level. Note These attributes are **not available** for version uploads. * `migrations` array\[object] optional * [Durable Objects migrations](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/) to apply. * `logpush` boolean optional * Whether [Logpush](https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/hostname-analytics/#logpush) is turned on for the Worker. * `tail_consumers` array\[object] optional * [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/) that will consume logs from the attached Worker. * `tags` array\[string] optional * List of strings to use as tags for this Worker. 
## Additional attributes: [Version Upload API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) For [version uploads](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#upload-a-new-version-to-be-gradually-deployed-or-deployed-at-a-later-time), the following **additional** attributes are configurable at the top-level. Note These attributes are **not available** for immediately deployed uploads. * `annotations` object optional * Annotations object specific to the Worker version. * `workers/message` specifies a custom message for the version. * `workers/tag` specifies a custom identifier for the version. * `workers/alias` specifies a custom alias for this version. ## Bindings Workers can interact with resources on the Cloudflare Developer Platform using [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/). Refer to the JSON example below that shows how to add bindings in the `metadata` part. ```json { "bindings": [ { "type": "ai", "name": "<VARIABLE_NAME>" }, { "type": "analytics_engine", "name": "<VARIABLE_NAME>", "dataset": "<DATASET_NAME>" }, { "type": "assets", "name": "<VARIABLE_NAME>" }, { "type": "browser_rendering", "name": "<VARIABLE_NAME>" }, { "type": "d1", "name": "<VARIABLE_NAME>", "id": "<D1_ID>" }, { "type": "durable_object_namespace", "name": "<VARIABLE_NAME>", "class_name": "<DO_CLASS_NAME>" }, { "type": "hyperdrive", "name": "<VARIABLE_NAME>", "id": "<HYPERDRIVE_ID>" }, { "type": "kv_namespace", "name": "<VARIABLE_NAME>", "namespace_id": "<NAMESPACE_ID>" }, { "type": "mtls_certificate", "name": "<VARIABLE_NAME>", "certificate_id": "<CERTIFICATE_ID>" }, { "type": "plain_text", "name": "<VARIABLE_NAME>", "text": "<VARIABLE_VALUE>" }, { "type": "queue", "name": "<VARIABLE_NAME>", "queue_name": "<QUEUE_NAME>" }, { "type": "r2_bucket", "name": "<VARIABLE_NAME>", "bucket_name": "<BUCKET_NAME>" }, { "type": "secret_text", "name": "<VARIABLE_NAME>", "text": "<SECRET_VALUE>" }, { "type": "service", "name": "<VARIABLE_NAME>", "service": "<SERVICE_NAME>", "environment": "production" }, { "type": "tail_consumer", "service": "<WORKER_NAME>" }, { "type": "vectorize", "name": "<VARIABLE_NAME>", "index_name": "<INDEX_NAME>" }, { "type": "version_metadata", "name": "<VARIABLE_NAME>" } ] } ``` --- title: Preview URLs · Cloudflare Workers docs description: Preview URLs allow you to preview new versions of your project without deploying it to production. lastUpdated: 2025-07-03T13:00:13.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/configuration/previews/ md: https://developers.cloudflare.com/workers/configuration/previews/index.md --- # Overview Preview URLs allow you to preview new versions of your Worker without deploying it to production. There are two types of preview URLs: * **Versioned Preview URLs**: A unique URL generated automatically for each new version of your Worker. * **Aliased Preview URLs**: A static, human-readable alias that you can manually assign to a Worker version. Both preview URL types follow the format: `<PREFIX>-<WORKER_NAME>.<SUBDOMAIN>.workers.dev`. Preview URLs can be: * Integrated into CI/CD pipelines, allowing automatic generation of preview environments for every pull request. * Used for collaboration between teams to test code changes in a live environment and verify updates. * Used to test new API endpoints, validate data formats, and ensure backward compatibility with existing services. When testing zone level performance or security features for a version, we recommend using [version overrides](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/#version-overrides) so that your zone's performance and security settings apply. Note Preview URLs are only available for Worker versions uploaded after 2024-09-25.
## Types of Preview URLs ### Versioned Preview URLs Every time you create a new [version](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#versions) of your Worker, a unique static version preview URL is generated automatically. These URLs use a version prefix and follow the format `<VERSION_PREFIX>-<WORKER_NAME>.<SUBDOMAIN>.workers.dev`. New versions of a Worker are created when you run: * [`wrangler deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#deploy) * [`wrangler versions upload`](https://developers.cloudflare.com/workers/wrangler/commands/#upload) * Or when you make edits via the Cloudflare dashboard These URLs are public by default and available immediately after version creation. Note Minimum required Wrangler version: 3.74.0. Check your version by running `wrangler --version`. To update Wrangler, refer to [Install/Update Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/). #### View versioned preview URLs using Wrangler The [`wrangler versions upload`](https://developers.cloudflare.com/workers/wrangler/commands/#upload) command uploads a new [version](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/#versions) of your Worker and returns a preview URL for each version uploaded. #### View versioned preview URLs on the Workers dashboard 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers) and select your project. 2. Go to the **Deployments** tab, and find the version you would like to view. ### Aliased preview URLs Aliased preview URLs let you assign a persistent, readable alias to a specific Worker version. These are useful for linking to stable previews across many versions (for example, to share a new feature that is still under active development). A common workflow would be to assign an alias for the branch that you're working on. These types of preview URLs follow the same pattern as other preview URLs: `<ALIAS>-<WORKER_NAME>.<SUBDOMAIN>.workers.dev` Note Minimum required Wrangler version: `4.21.0`. Check your version by running `wrangler --version`. To update Wrangler, refer to [Install/Update Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/). #### Create an Alias Aliases may be created during `versions upload`, by providing the `--preview-alias` flag with a valid alias name: ```bash wrangler versions upload --preview-alias staging ``` The resulting alias would be associated with this version, and immediately available at: `staging-<WORKER_NAME>.<SUBDOMAIN>.workers.dev` #### Rules and limitations * Aliases may only be created during version upload. * Aliases must use only lowercase letters, numbers, and dashes. * Aliases must begin with a lowercase letter. * The alias and Worker name combined (with a dash) must not exceed 63 characters due to DNS label limits. * Only the 20 most recently used aliases are retained. When a new alias is created beyond this limit, the least recently used alias is deleted. ## Manage access to Preview URLs By default, all preview URLs are enabled and available publicly. You can use [Cloudflare Access](https://developers.cloudflare.com/cloudflare-one/policies/access/) to require visitors to authenticate before accessing preview URLs. You can limit access to yourself, your teammates, your organization, or anyone else you specify in your [access policy](https://developers.cloudflare.com/cloudflare-one/policies/access). To limit your preview URLs to authorized emails only: 1.
Log in to the [Cloudflare Access dashboard](https://one.dash.cloudflare.com/?to=/:account/access/apps). 2. Select your account. 3. Add an application. 4. Select **Self Hosted**. 5. Name your application (for example, "my-worker") and add your `workers.dev` subdomain as the **Application domain**. For example, if you want to secure preview URLs for a Worker running on `my-worker.my-subdomain.workers.dev`. * Subdomain: `*-my-worker` * Domain: `my-subdomain.workers.dev` Note You must press enter after you input your Application domain for it to save. You will see a "Zone is not associated with the current account" warning that you may ignore. 1. Go to the next page. 2. Add a name for your access policy (for example, "Allow employees access to preview URLs for my-worker"). 3. In the **Configure rules** section create a new rule with the **Emails** selector, or any other attributes which you wish to gate access to previews with. 4. Enter the emails you want to authorize. View [access policies](https://developers.cloudflare.com/cloudflare-one/policies/access/#selectors) to learn about configuring alternate rules. 5. Go to the next page. 6. Add application. ## Disabling Preview URLs Disabling Preview URLs will disable routing to both versioned and aliased preview URLs. ### Disabling Preview URLs in the dashboard To disable Preview URLs for a Worker: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Go to **Workers & Pages** and in **Overview**, select your Worker. 3. Go to **Settings** > **Domains & Routes**. 4. On "Preview URLs" click "Disable". 5. Confirm you want to disable. ### Disabling Preview URLs in the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) Note Wrangler 3.91.0 or higher is required to use this feature. To disable Preview URLs for a Worker, include the following in your Worker's Wrangler file: * wrangler.jsonc ```jsonc { "preview_urls": false } ``` * wrangler.toml ```toml preview_urls = false ``` When you redeploy your Worker with this change, Preview URLs will be disabled. Warning If you disable Preview URLs in the Cloudflare dashboard but do not update your Worker's Wrangler file with `preview_urls = false`, then Preview URLs will be re-enabled the next time you deploy your Worker with Wrangler. ## Limitations * Preview URLs are not generated for Workers that implement a [Durable Object](https://developers.cloudflare.com/durable-objects/). * Preview URLs are not currently generated for [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/) [user Workers](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#user-workers). This is a temporary limitation, we are working to remove it. * You cannot currently configure Preview URLs to run on a subdomain other than [`workers.dev`](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/). * You cannot view logs for Preview URLs today, this includes Workers Logs, Wrangler tail and Logpush. --- title: Routes and domains · Cloudflare Workers docs description: Connect your Worker to an external endpoint (via Routes, Custom Domains or a `workers.dev` subdomain) such that it can be accessed by the Internet. 
lastUpdated: 2024-11-04T16:38:55.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/configuration/routing/ md: https://developers.cloudflare.com/workers/configuration/routing/index.md --- To allow a Worker to receive inbound HTTP requests, you must connect it to an external endpoint such that it can be accessed by the Internet. There are three types of routes: * [Custom Domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains): Routes to a domain or subdomain (such as `example.com` or `shop.example.com`) within a Cloudflare zone where the Worker is the origin. * [Routes](https://developers.cloudflare.com/workers/configuration/routing/routes/): Routes that are set within a Cloudflare zone where your origin server, if you have one, is behind a Worker that the Worker can communicate with. * [`workers.dev`](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/): A `workers.dev` subdomain route is automatically created for each Worker to help you getting started quickly. You may choose to [disable](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/) your `workers.dev` subdomain. ## What is best for me? It's recommended to run production Workers on a [Workers route or custom domain](https://developers.cloudflare.com/workers/configuration/routing/), rather than on your `workers.dev` subdomain. Your `workers.dev` subdomain is treated as a [Free website](https://www.cloudflare.com/plans/) and is intended for personal or hobby projects that aren't business-critical. Custom Domains are recommended for use cases where your Worker is your application's origin server. Custom Domains can also be invoked within the same zone via `fetch()`, unlike Routes. Routes are recommended for use cases where your application's origin server is external to Cloudflare. Note that Routes cannot be the target of a same-zone `fetch()` call. --- title: Secrets · Cloudflare Workers docs description: Store sensitive information, like API keys and auth tokens, in your Worker. lastUpdated: 2025-07-02T16:34:28.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/configuration/secrets/ md: https://developers.cloudflare.com/workers/configuration/secrets/index.md --- ## Background Secrets are a type of binding that allow you to attach encrypted text values to your Worker. You cannot see secrets after you set them and can only access secrets via [Wrangler](https://developers.cloudflare.com/workers/wrangler/commands/#secret) or programmatically via the [`env` parameter](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/#parameters). Secrets are used for storing sensitive information like API keys and auth tokens. Secrets are available on the [`env` parameter](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/#parameters) passed to your Worker's [`fetch` event handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/). ## Access your secrets with Workers Secrets can be accessed from Workers as you would any other [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/). 
For instance, given a `DB_CONNECTION_STRING` secret, you can access it in your Worker code: ```js import postgres from "postgres"; export default { async fetch(request, env, ctx) { const sql = postgres(env.DB_CONNECTION_STRING); const result = await sql`SELECT * FROM products;`; return new Response(JSON.stringify(result), { headers: { "Content-Type": "application/json" }, }); }, }; ``` Secrets Store (beta) Secrets described on this page are defined and managed on a per-Worker level. If you want to use account-level secrets, refer to [Secrets Store](https://developers.cloudflare.com/secrets-store/). Account-level secrets are configured on your Worker as a [Secrets Store binding](https://developers.cloudflare.com/secrets-store/integrations/workers/). ## Local Development with Secrets When developing your Worker or Pages Function, create a `.dev.vars` file in the root of your project to define secrets that will be used when running `wrangler dev` or `wrangler pages dev`, as opposed to using environment variables in the [Wrangler configuration file](https://developers.cloudflare.com/workers/configuration/environment-variables/#compare-secrets-and-environment-variables). This works both in local and remote development modes. The `.dev.vars` file should be formatted like a `dotenv` file, such as `KEY="VALUE"`: ```bash SECRET_KEY="value" API_TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9" ``` To set different secrets for each environment, create files named `.dev.vars.<environment-name>`. When you use `wrangler <command> --env <environment-name>`, the corresponding environment-specific file will be loaded instead of the `.dev.vars` file. Like other environment variables, secrets are [non-inheritable](https://developers.cloudflare.com/workers/wrangler/configuration/#non-inheritable-keys) and must be defined per environment. ## Secrets on deployed Workers ### Adding secrets to your project #### Via Wrangler Secrets can be added through the [`wrangler secret put`](https://developers.cloudflare.com/workers/wrangler/commands/#secret) or [`wrangler versions secret put`](https://developers.cloudflare.com/workers/wrangler/commands/#secret-put) commands. `wrangler secret put` creates a new version of the Worker and deploys it immediately. ```sh npx wrangler secret put <KEY> ``` If using [gradual deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/), instead use the `wrangler versions secret put` command. This will only create a new version of the Worker, which can then be deployed using [`wrangler versions deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#deploy-2). Note Wrangler versions before 3.73.0 require you to specify a `--x-versions` flag. ```sh npx wrangler versions secret put <KEY> ``` #### Via the dashboard To add a secret via the dashboard: 1. Log in to [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account. 2. Select **Workers & Pages**. 3. In **Overview**, select your Worker > **Settings**. 4. Under **Variables and Secrets**, select **Add**. 5. Select the type **Secret**, input a **Variable name**, and input its **Value**. This secret will be made available to your Worker but the value will be hidden in Wrangler and the dashboard. 6. (Optional) To add more secrets, select **Add variable**. 7. Select **Deploy** to implement your changes.
### Delete secrets from your project #### Via Wrangler Secrets can be deleted through the [`wrangler secret delete`](https://developers.cloudflare.com/workers/wrangler/commands/#delete-1) or [`wrangler versions secret delete`](https://developers.cloudflare.com/workers/wrangler/commands/#secret-delete) commands. `wrangler secret delete` creates a new version of the Worker and deploys it immediately. ```sh npx wrangler secret delete <KEY> ``` If using [gradual deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/), instead use the `wrangler versions secret delete` command. This will only create a new version of the Worker, which can then be deployed using [`wrangler versions deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#deploy-2). ```sh npx wrangler versions secret delete <KEY> ``` #### Via the dashboard To delete a secret from your Worker project via the dashboard: 1. Log in to [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account. 2. Select **Workers & Pages**. 3. In **Overview**, select your Worker > **Settings**. 4. Under **Variables and Secrets**, select **Edit**. 5. In the **Edit** drawer, select **X** next to the secret you want to delete. 6. Select **Deploy** to implement your changes. 7. (Optional) Instead of using the edit drawer, you can click the delete icon next to the secret. ## Compare secrets and environment variables Use secrets for sensitive information Do not use plaintext environment variables to store sensitive information. Use [secrets](https://developers.cloudflare.com/workers/configuration/secrets/) or [Secrets Store bindings](https://developers.cloudflare.com/secrets-store/integrations/workers/) instead. [Secrets](https://developers.cloudflare.com/workers/configuration/secrets/) are [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/). The difference is secret values are not visible within Wrangler or the Cloudflare dashboard after you define them. This means that sensitive data, including passwords or API tokens, should always be encrypted to prevent data leaks. To your Worker, there is no difference between an environment variable and a secret. The secret's value is passed through as defined. ## Related resources * [Wrangler secret commands](https://developers.cloudflare.com/workers/wrangler/commands/#secret) - Review the Wrangler commands to create, delete and list secrets. * [Cloudflare Secrets Store](https://developers.cloudflare.com/secrets-store/) - Encrypt and store sensitive information as secrets that are securely reusable across your account. --- title: Smart Placement · Cloudflare Workers docs description: Speed up your Worker application by automatically placing your workloads in an optimal location that minimizes latency. lastUpdated: 2025-01-29T12:28:42.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/configuration/smart-placement/ md: https://developers.cloudflare.com/workers/configuration/smart-placement/index.md --- By default, [Workers](https://developers.cloudflare.com/workers/) and [Pages Functions](https://developers.cloudflare.com/pages/functions/) are invoked in a data center closest to where the request was received. If you are running back-end logic in a Worker, it may be more performant to run that Worker closer to your back-end infrastructure rather than the end user.
Smart Placement automatically places your workloads in an optimal location that minimizes latency and speeds up your applications. ## Background The following example demonstrates how moving your Worker close to your back-end services could decrease application latency: You have a user in Sydney, Australia who is accessing an application running on Workers. This application makes multiple round trips to a database located in Frankfurt, Germany in order to serve the user’s request. ![A user located in Sydney, AU connecting to a Worker in the same region which then makes multiple round trips to a database located in Frankfurt, DE. ](https://developers.cloudflare.com/_astro/workers-smart-placement-disabled.CgvAE24H_ZlRB8R.webp) The issue is the time that it takes the Worker to perform multiple round trips to the database. Instead of the request being processed close to the user, the Cloudflare network, with Smart Placement enabled, would process the request in a data center closest to the database. ![A user located in Sydney, AU connecting to a Worker in Frankfurt, DE which then makes multiple round trips to a database also located in Frankfurt, DE. ](https://developers.cloudflare.com/_astro/workers-smart-placement-enabled.D6RN33at_20sSCa.webp) ## Understand how Smart Placement works Smart Placement is enabled on a per-Worker basis. Once enabled, Smart Placement analyzes the [request duration](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/#request-duration) of the Worker in different Cloudflare locations around the world on a regular basis. Smart Placement decides where to run the Worker by comparing the estimated request duration in the location closest to where the request was received (the default location where the Worker would run) to a set of candidate locations around the world. For each candidate location, Smart Placement considers the performance of the Worker in that location as well as the network latency added by forwarding the request to that location. If the estimated request duration in the best candidate location is significantly faster than the location where the request was received, the request will be forwarded to that candidate location. Otherwise, the Worker will run in the default location closest to where the request was received. Smart Placement only considers candidate locations where the Worker has previously run, since the estimated request duration in each candidate location is based on historical data from the Worker running in that location. This means that Smart Placement cannot run the Worker in a location that it does not normally receive traffic from. Smart Placement only affects the execution of [fetch event handlers](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/). Smart Placement does not affect the execution of [RPC methods](https://developers.cloudflare.com/workers/runtime-apis/rpc/) or [named entrypoints](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc/#named-entrypoints). Workers without a fetch event handler will be ignored by Smart Placement. For Workers with both fetch and non-fetch event handlers, Smart Placement will only affect the execution of the fetch event handler. Similarly, Smart Placement will not affect where [static assets](https://developers.cloudflare.com/workers/static-assets/) are served from. Static assets will continue to be served from the location nearest to the incoming request. 
If a Worker is invoked and your code retrieves assets via the [static assets binding](https://developers.cloudflare.com/workers/static-assets/binding/), then assets will be served from the location that your Worker runs in. ## Enable Smart Placement Smart Placement is available to users on all Workers plans. ### Enable Smart Placement via Wrangler To enable Smart Placement via Wrangler: 1. Make sure that you have `wrangler@2.20.0` or later [installed](https://developers.cloudflare.com/workers/wrangler/install-and-update/). 2. Add the following to your Worker project's Wrangler file: * wrangler.jsonc ```jsonc { "placement": { "mode": "smart" } } ``` * wrangler.toml ```toml [placement] mode = "smart" ``` 3. Wait for Smart Placement to analyze your Worker. This process may take up to 15 minutes. 4. View your Worker's [request duration analytics](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/#request-duration). ### Enable Smart Placement via the dashboard To enable Smart Placement via the dashboard: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In **Account Home**, select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Settings** > **General**. 5. Under **Placement**, choose **Smart**. 6. Wait for Smart Placement to analyze your Worker. Smart Placement requires consistent traffic to the Worker from multiple locations around the world to make a placement decision. The analysis process may take up to 15 minutes. 7. View your Worker's [request duration analytics](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/#request-duration). ## Observability ### Placement Status A Worker's metadata contains details about a Worker's placement status. Query your Worker's placement status through the following Workers API endpoint: ```bash curl -X GET https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/workers/services/{WORKER_NAME} \ -H "Authorization: Bearer <API_TOKEN>" \ -H "Content-Type: application/json" | jq . ``` Possible placement states include: * *(not present)*: The Worker has not been analyzed for Smart Placement yet. The Worker will always run in the default Cloudflare location closest to where the request was received. * `SUCCESS`: The Worker was successfully analyzed and will be optimized by Smart Placement. The Worker will run in the Cloudflare location that minimizes expected request duration, which may be the default location closest to where the request was received or may be a faster location elsewhere in the world. * `INSUFFICIENT_INVOCATIONS`: The Worker has not received enough requests to make a placement decision. Smart Placement requires consistent traffic to the Worker from multiple locations around the world. The Worker will always run in the default Cloudflare location closest to where the request was received. * `UNSUPPORTED_APPLICATION`: Smart Placement began optimizing the Worker and measured the results, which showed that Smart Placement made the Worker slower. In response, Smart Placement reverted the placement decision. The Worker will always run in the default Cloudflare location closest to where the request was received, and Smart Placement will not analyze the Worker again until it's redeployed. This state is rare and accounts for less than 1% of Workers with Smart Placement enabled. ### Request Duration Analytics Once Smart Placement is enabled, data about request duration gets collected.
Request duration is measured at the data center closest to the end user. By default, one percent (1%) of requests are not routed with Smart Placement. These requests serve as a baseline to compare to. ### `cf-placement` header Once Smart Placement is enabled, Cloudflare adds a `cf-placement` header to all requests. This can be used to check whether a request has been routed with Smart Placement and where the Worker is processing the request (which is shown as the nearest airport code to the data center). For example, the `cf-placement: remote-LHR` header's `remote` value indicates that the request was routed using Smart Placement to a Cloudflare data center near London. The `cf-placement: local-EWR` header's `local` value indicates that the request was not routed using Smart Placement and the Worker was invoked in a data center closest to where the request was received, close to Newark Liberty International Airport (EWR). Beta use only We may remove the `cf-placement` header before Smart Placement enters general availability. ## Best practices If you are building full-stack applications on Workers, we recommend splitting up the front-end and back-end logic into different Workers and using [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) to connect your front-end logic and back-end logic Workers. ![Smart Placement and Service Bindings](https://developers.cloudflare.com/_astro/smart-placement-service-bindings.Ce58BYeF_1YYSoG.webp) Enabling Smart Placement on your back-end Worker will invoke it close to your back-end service, while the front-end Worker serves requests close to the user. This architecture maintains fast, reactive front-ends while also improving latency when the back-end Worker is called. ## Give feedback on Smart Placement Smart Placement is in beta. To share your thoughts and experience with Smart Placement, join the [Cloudflare Developer Discord](https://discord.cloudflare.com). --- title: Workers Sites · Cloudflare Workers docs description: Use [Workers Static Assets](/workers/static-assets/) to host full-stack applications instead of Workers Sites. Do not use Workers Sites for new projects. lastUpdated: 2025-02-10T15:04:35.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/configuration/sites/ md: https://developers.cloudflare.com/workers/configuration/sites/index.md --- Use Workers Static Assets Instead You should use [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/) to host full-stack applications instead of Workers Sites. It has been deprecated in Wrangler v4, and the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) does not support Workers Sites. Do not use Workers Sites for new projects. Workers Sites enables developers to deploy static applications directly to Workers. It can be used for deploying applications built with static site generators like [Hugo](https://gohugo.io) and [Gatsby](https://www.gatsbyjs.org), or front-end frameworks like [Vue](https://vuejs.org) and [React](https://reactjs.org). To deploy with Workers Sites, select from one of these three approaches depending on the state of your target project: *** ## 1. Start from scratch If you are ready to start a brand new project, this quick start guide will help you set up the infrastructure to deploy a HTML website to Workers. [Start from scratch](https://developers.cloudflare.com/workers/configuration/sites/start-from-scratch/) *** ## 2. 
Deploy an existing static site If you have an existing project or static assets that you want to deploy with Workers, this quick start guide will help you install Wrangler and configure Workers Sites for your project. [Start from an existing static site](https://developers.cloudflare.com/workers/configuration/sites/start-from-existing/) *** ## 3. Add static assets to an existing Workers project If you already have a Worker deployed to Cloudflare, this quick start guide will show you how to configure the existing codebase to use Workers Sites. [Start from an existing Worker](https://developers.cloudflare.com/workers/configuration/sites/start-from-worker/) Note Workers Sites is built on Workers KV, and usage rates may apply. Refer to [Pricing](https://developers.cloudflare.com/workers/platform/pricing/) to learn more. --- title: Versions & Deployments · Cloudflare Workers docs description: Upload versions of Workers and create deployments to release new versions. lastUpdated: 2025-04-15T15:42:08.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/configuration/versions-and-deployments/ md: https://developers.cloudflare.com/workers/configuration/versions-and-deployments/index.md --- Versions track changes to your Worker. Deployments configure how those changes are deployed to your traffic. You can upload changes (versions) to your Worker independent of changing the version that is actively serving traffic (deployment). ![Versions and Deployments](https://developers.cloudflare.com/_astro/versions-and-deployments.Dnwtp7bX_AGXxo.webp) Using versions and deployments is useful if: * You are running critical applications on Workers and want to reduce risk when deploying new versions of your Worker using a rolling deployment strategy. * You want to monitor for performance differences when deploying new versions of your Worker. * You have a CI/CD pipeline configured for Workers but want to cut manual releases. ## Versions A version is defined by the state of code as well as the state of configuration in a Worker's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). Versions track historical changes to [bundled code](https://developers.cloudflare.com/workers/wrangler/bundling/), [static assets](https://developers.cloudflare.com/workers/static-assets/) and changes to configuration like [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) and [compatibility date and compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) over time. Versions also track metadata associated with a version, including: the version ID, the user that created the version, deploy source, and timestamp. Optionally, a version message and version tag can be configured on version upload. Note State changes for associated Workers [storage resources](https://developers.cloudflare.com/workers/platform/storage-options/) such as [KV](https://developers.cloudflare.com/kv/), [R2](https://developers.cloudflare.com/r2/), [Durable Objects](https://developers.cloudflare.com/durable-objects/) and [D1](https://developers.cloudflare.com/d1/) are not tracked with versions. ## Deployments Deployments track the version(s) of your Worker that are actively serving traffic. A deployment can consist of one or two versions of a Worker. By default, Workers supports an all-at-once deployment model where traffic is immediately shifted from one version to the newly deployed version automatically. 
Alternatively, you can use [gradual deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/) to create a rolling deployment strategy. You can also track metadata associated with a deployment, including: the user that created the deployment, deploy source, timestamp and the version(s) in the deployment. Optionally, you can configure a deployment message when you create a deployment. ## Use versions and deployments ### Create a new version Review the different ways you can create versions of your Worker and deploy them. #### Upload a new version and deploy it immediately A new version that is automatically deployed to 100% of traffic when: * Changes are uploaded with [`wrangler deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#deploy) via the Cloudflare Dashboard * Changes are deployed with the command [`npx wrangler deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#deploy) via [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds) * Changes are uploaded with the [Workers Script Upload API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/) #### Upload a new version to be gradually deployed or deployed at a later time Note Wrangler versions before 3.73.0 require you to specify a `--x-versions` flag. To create a new version of your Worker that is not deployed immediately, use the [`wrangler versions upload`](https://developers.cloudflare.com/workers/wrangler/commands/#upload) command or create a new version via the Cloudflare dashboard using the **Save** button. You can find the **Save** option under the down arrow beside the "Deploy" button. Versions created in this way can then be deployed all at once or gradually deployed using the [`wrangler versions deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#deploy-2) command or via the Cloudflare dashboard under the **Deployments** tab. Note When using [Wrangler](https://developers.cloudflare.com/workers/wrangler/), changes made to a Worker's triggers [routes, domains](https://developers.cloudflare.com/workers/configuration/routing/) or [cron triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/) need to be applied with the command [`wrangler triggers deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#triggers). Note New versions are not created when you make changes to [resources connected to your Worker](https://developers.cloudflare.com/workers/runtime-apis/bindings/). For example, if two Workers (Worker A and Worker B) are connected via a [service binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/), changing the code of Worker B will not create a new version of Worker A. Changing the code of Worker B will only create a new version of Worker B. Changes to the service binding (such as, deleting the binding or updating the [environment](https://developers.cloudflare.com/workers/wrangler/environments/) it points to) on Worker A will also not create a new version of Worker B. ### View versions and deployments #### Via Wrangler Wrangler allows you to view the 10 most recent versions and deployments. Refer to the [`versions list`](https://developers.cloudflare.com/workers/wrangler/commands/#list-5) and [`deployments`](https://developers.cloudflare.com/workers/wrangler/commands/#list-6) documentation to view the commands. 
#### Via the Cloudflare dashboard To view your deployments in the Cloudflare dashboard: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers) and select your account. 2. Go to **Workers & Pages**. 3. Select your Worker > **Deployments**. ## Limits ### First upload You must use [C3](https://developers.cloudflare.com/workers/get-started/guide/#1-create-a-new-worker-project) or [`wrangler deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#deploy) the first time you create a new Workers project. Using [`wrangler versions upload`](https://developers.cloudflare.com/workers/wrangler/commands/#upload) the first time you upload a Worker will fail. ### Service worker syntax Service worker syntax is not supported for versions that are uploaded through [`wrangler versions upload`](https://developers.cloudflare.com/workers/wrangler/commands/#upload). You must use ES modules format. Refer to [Migrate from Service Workers to ES modules](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/#advantages-of-migrating) to learn how to migrate your Workers from the service worker format to the ES modules format. ### Durable Object migrations Uploading a version with [Durable Object migrations](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/) is not supported. Use [`wrangler deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#deploy) if you are applying a [Durable Object migration](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/). This will be supported in the near future. --- title: Page Rules with Workers · Cloudflare Workers docs description: Review the interaction between various Page Rules and Workers. lastUpdated: 2024-08-13T19:56:56.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/ md: https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/index.md --- Page Rules trigger certain actions whenever a request matches one of the URL patterns you define. You can define a page rule to trigger one or more actions whenever a certain URL pattern is matched. Refer to [Page Rules](https://developers.cloudflare.com/rules/page-rules/) to learn more about configuring Page Rules. ## Page Rules with Workers Cloudflare acts as a [reverse proxy](https://www.cloudflare.com/learning/what-is-cloudflare/) to provide services, like Page Rules, to Internet properties. Your application's traffic will pass through a Cloudflare data center that is closest to the visitor. There are hundreds of these around the world, each of which are capable of running services like Workers and Page Rules. If your application is built on Workers and/or Pages, the [Cloudflare global network](https://www.cloudflare.com/learning/serverless/glossary/what-is-edge-computing/) acts as your origin server and responds to requests directly from the Cloudflare global network. When using Page Rules with Workers, the following workflow is applied. 1. Request arrives at Cloudflare data center. 2. Cloudflare decides if this request is a Worker route. Because this is a Worker route, Cloudflare evaluates and disabled a number of features, including some that would be set by Page Rules. 3. Page Rules run as part of normal request processing with some features now disabled. 4. Worker executes. 5. Worker makes a same-zone or other-zone subrequest. 
Because this is a Worker route, Cloudflare disables a number of features, including some that would be set by Page Rules. Page Rules are evaluated both at the client-to-Worker request stage (step 2) and the Worker subrequest stage (step 5). If you are experiencing Page Rule errors when running Workers, contact your Cloudflare account team or [Cloudflare Support](https://developers.cloudflare.com/support/contacting-cloudflare-support/). ## Affected Page Rules The following Page Rules may not work as expected when an incoming request is matched to a Worker route: * Always Online * [Always Use HTTPS](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#always-use-https) * [Automatic HTTPS Rewrites](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#automatic-https-rewrites) * [Browser Cache TTL](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#browser-cache-ttl) * [Browser Integrity Check](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#browser-integrity-check) * [Cache Deception Armor](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#cache-deception-armor) * [Cache Level](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#cache-level) * Disable Apps * [Disable Zaraz](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#disable-zaraz) * [Edge Cache TTL](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#edge-cache-ttl) * [Email Obfuscation](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#email-obfuscation) * [Forwarding URL](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#forwarding-url) * Host Header Override * [IP Geolocation Header](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#ip-geolocation-header) * Mirage * [Origin Cache Control](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#origin-cache-control) * [Rocket Loader](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#rocket-loader) * [Security Level](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#security-level) * [SSL](https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/#ssl) This is because the default setting of these Page Rules will be disabled when Cloudflare recognizes that the request is headed to a Worker. Testing Due to ongoing changes to the Workers runtime, detailed documentation on how these rules will be affected are updated following testing. To learn what these Page Rules do, refer to [Page Rules](https://developers.cloudflare.com/rules/page-rules/). Same zone versus other zone A same zone subrequest is a request the Worker makes to an orange-clouded hostname in the same zone the Worker runs on. Depending on your DNS configuration, any request that falls outside that definition may be considered an other zone request by the Cloudflare network. 
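To make the distinction concrete, a subrequest is simply a `fetch()` made from the Worker's handler; whether it counts as same zone or other zone depends on the hostname it targets. A sketch with hypothetical hostnames:

```js
export default {
  async fetch(request, env, ctx) {
    // Same-zone subrequest: an orange-clouded hostname in the zone this Worker runs on (hypothetical).
    const sameZone = await fetch("https://app.example.com/api/status");
    // Other-zone subrequest: a hostname outside that definition (hypothetical).
    const otherZone = await fetch("https://api.other-zone.example/status");
    return new Response(`same zone: ${sameZone.status}, other zone: ${otherZone.status}`);
  },
};
```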
### Always Use HTTPS | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Ignored | | Worker | Other Zone | Rule Ignored | ### Automatic HTTPS Rewrites | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Ignored | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Browser Cache TTL | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Ignored | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Browser Integrity Check | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Ignored | | Worker | Other Zone | Rule Ignored | ### Cache Deception Armor | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Cache Level | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Disable Zaraz | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Edge Cache TTL | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Email Obfuscation | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Ignored | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Forwarding URL | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Ignored | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### IP Geolocation Header | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Origin Cache Control | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | ### Rocket Loader | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Ignored | | Worker | Same Zone | Rule Ignored | | Worker | Other Zone | Rule Ignored | ### Security Level | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Ignored | | Worker | Other Zone | Rule Ignored | ### SSL | Source | Target | Behavior | | - | - | - | | Client | Worker | Rule Respected | | Worker | Same Zone | Rule Respected | | Worker | Other Zone | Rule Ignored | --- title: Analytics Engine · Cloudflare Workers docs description: Use Workers to receive performance analytics about your applications, products and projects. lastUpdated: 2024-08-13T19:56:56.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/databases/analytics-engine/ md: https://developers.cloudflare.com/workers/databases/analytics-engine/index.md --- --- title: Connect to databases · Cloudflare Workers docs description: Learn about the different kinds of database integrations Cloudflare supports. 
lastUpdated: 2025-07-02T16:48:57.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/databases/connecting-to-databases/ md: https://developers.cloudflare.com/workers/databases/connecting-to-databases/index.md --- Cloudflare Workers can connect to and query your data in both SQL and NoSQL databases, including: * Cloudflare's own [D1](https://developers.cloudflare.com/d1/), a serverless SQL-based database. * Traditional hosted relational databases, including Postgres and MySQL, using [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) (recommended) to significantly speed up access. * Serverless databases, including Supabase, MongoDB Atlas, PlanetScale, and Prisma. ### D1 SQL database D1 is Cloudflare's own SQL-based, serverless database. It is optimized for global access from Workers, and can scale out with multiple, smaller (10GB) databases, such as per-user, per-tenant or per-entity databases. Similar to some serverless databases, D1 pricing is based on query and storage costs. | Database | Library or Driver | Connection Method | | - | - | - | | [D1](https://developers.cloudflare.com/d1/) | [Workers binding](https://developers.cloudflare.com/d1/worker-api/), integrates with [Prisma](https://www.prisma.io/), [Drizzle](https://orm.drizzle.team/), and other ORMs | [Workers binding](https://developers.cloudflare.com/d1/worker-api/), [REST API](https://developers.cloudflare.com/api/resources/d1/subresources/database/methods/create/) | ### Traditional SQL databases Traditional databases use SQL drivers that use [TCP sockets](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) to connect to the database. TCP is the de-facto standard protocol that many databases, such as PostgreSQL and MySQL, use for client connectivity. These drivers are also widely compatible with your preferred ORM libraries and query builders. This also includes serverless databases that are PostgreSQL or MySQL-compatible like [Supabase](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-database-providers/supabase/), [Neon](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-database-providers/neon/) or [PlanetScale](https://developers.cloudflare.com/hyperdrive/examples/connect-to-mysql/mysql-database-providers/planetscale/), which can be connected to using both native [TCP sockets and Hyperdrive](https://developers.cloudflare.com/hyperdrive/) or [serverless HTTP-based drivers](https://developers.cloudflare.com/workers/databases/connecting-to-databases/#serverless-databases) (detailed below). 
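As an illustration of the TCP-socket-plus-Hyperdrive path described above, the following sketch queries PostgreSQL with [node-postgres](https://node-postgres.com/) through a Hyperdrive binding. It assumes a binding named `HYPERDRIVE` in your Wrangler configuration and the `pg` package installed; adjust both to your project.

```ts
import { Client } from "pg";

interface Env {
  // Hyperdrive binding configured in your Wrangler configuration file (assumed name).
  HYPERDRIVE: Hyperdrive;
}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    // Hyperdrive exposes a connection string that routes through its global connection pool.
    const client = new Client({ connectionString: env.HYPERDRIVE.connectionString });
    await client.connect();

    const result = await client.query("SELECT now() AS current_time");

    // Close the connection without blocking the response.
    ctx.waitUntil(client.end());

    return Response.json(result.rows);
  },
} satisfies ExportedHandler<Env>;
```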
| Database | Integration | Library or Driver | Connection Method | | - | - | - | - | | [Postgres](https://developers.cloudflare.com/workers/tutorials/postgres/) | Direct connection | [node-postgres](https://node-postgres.com/), [Postgres.js](https://github.com/porsager/postgres) | [TCP Socket](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) via database driver, using [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) for optimal performance (optional, recommended) | | [MySQL](https://developers.cloudflare.com/workers/tutorials/mysql/) | Direct connection | [mysql2](https://github.com/sidorares/node-mysql2), [mysql](https://github.com/mysqljs/mysql) | [TCP Socket](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) via database driver, using [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) for optimal performance (optional, recommended) | Speed up database connectivity with Hyperdrive Connecting to SQL databases with TCP sockets requires multiple roundtrips to establish a secure connection before a query to the database is made. Since a connection must be re-established on every Worker invocation, this adds unnecessary latency. [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) solves this by pooling database connections globally to eliminate unnecessary roundtrips and speed up your database access. Learn more about [how Hyperdrive works](https://developers.cloudflare.com/hyperdrive/configuration/how-hyperdrive-works/). ### Serverless databases Serverless databases may provide direct connection to the underlying database, or provide HTTP-based proxies and drivers (also known as serverless drivers). For PostgreSQL and MySQL serverless databases, you can connect to the underlying database directly using the native database drivers and ORMs you are familiar with, using Hyperdrive (recommended) to speed up connectivity and pool database connections. When you use Hyperdrive, your connection pool is managed across Cloudflare's network and optimized for usage from Workers. You can also use serverless driver libraries to connect to the HTTP-based proxies managed by the database provider. These may also provide connection pooling for traditional SQL databases and reduce the number of roundtrips needed to establish a secure connection, similarly to Hyperdrive.
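For comparison, the following is a minimal sketch of the HTTP-based serverless-driver approach using [`@planetscale/database`](https://github.com/planetscale/database-js). It assumes the package is installed and that the connection URL is stored in a secret named `DATABASE_URL` (an illustrative name); other providers' serverless drivers follow a similar pattern.

```ts
import { connect } from "@planetscale/database";

interface Env {
  // Secret holding the PlanetScale connection URL (illustrative name).
  DATABASE_URL: string;
}

export default {
  async fetch(request, env): Promise<Response> {
    // The serverless driver speaks HTTP, so no TCP connection setup is required per request.
    const conn = connect({ url: env.DATABASE_URL });
    const results = await conn.execute("SELECT 1 AS ok");
    return Response.json(results.rows);
  },
} satisfies ExportedHandler<Env>;
```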
| Database | Library or Driver | Connection Method | | - | - | - | | [PlanetScale](https://planetscale.com/blog/introducing-the-planetscale-serverless-driver-for-javascript) | [Hyperdrive](https://developers.cloudflare.com/hyperdrive/examples/connect-to-mysql/mysql-database-providers/planetscale), [@planetscale/database](https://github.com/planetscale/database-js) | [mysql2](https://developers.cloudflare.com/hyperdrive/examples/connect-to-mysql/mysql-drivers-and-libraries/mysql2/) or [mysql](https://developers.cloudflare.com/hyperdrive/examples/connect-to-mysql/mysql-drivers-and-libraries/mysql/), or API via client library | | [Supabase](https://github.com/supabase/supabase/tree/master/examples/with-cloudflare-workers) | [Hyperdrive](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-database-providers/supabase/), [@supabase/supabase-js](https://github.com/supabase/supabase-js) | [node-postgres](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/node-postgres/), [Postgres.js](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/postgres-js/), or API via client library | | [Prisma](https://www.prisma.io/docs/guides/deployment/deployment-guides/deploying-to-cloudflare-workers) | [prisma](https://github.com/prisma/prisma) | API via client library | | [Neon](https://blog.cloudflare.com/neon-postgres-database-from-workers/) | [Hyperdrive](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-database-providers/neon/), [@neondatabase/serverless](https://neon.tech/blog/serverless-driver-for-postgres/) | [node-postgres](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/node-postgres/), [Postgres.js](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/postgres-drivers-and-libraries/postgres-js/), or API via client library | | [Hasura](https://hasura.io/blog/building-applications-with-cloudflare-workers-and-hasura-graphql-engine/) | API | GraphQL API via fetch() | | [Upstash Redis](https://blog.cloudflare.com/cloudflare-workers-database-integration-with-upstash/) | [@upstash/redis](https://github.com/upstash/upstash-redis) | API via client library | | [TiDB Cloud](https://docs.pingcap.com/tidbcloud/integrate-tidbcloud-with-cloudflare) | [@tidbcloud/serverless](https://github.com/tidbcloud/serverless-js) | API via client library | Once you have installed the necessary packages, use the APIs provided by these packages to connect to your database and perform operations on it. Refer to the linked guides for service-specific instructions. ## Authentication If your database requires authentication, use Wrangler secrets to securely store your credentials. To do this, create a secret in your Cloudflare Workers project using the following [`wrangler secret`](https://developers.cloudflare.com/workers/wrangler/commands/#secret) command: ```sh wrangler secret put <SECRET_NAME> ``` Then, retrieve the secret value in your code using the following code snippet: ```js const secretValue = env.<SECRET_NAME>; ``` Use the secret value to authenticate with the external service. For example, if the external service requires an API key or database username and password for authentication, include these values using the relevant service's library or API.
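For instance, here is a minimal sketch of passing a stored secret to an external service as a bearer token (the secret name `API_TOKEN` and the endpoint are illustrative):

```ts
interface Env {
  // Created with: wrangler secret put API_TOKEN
  API_TOKEN: string;
}

export default {
  async fetch(request, env): Promise<Response> {
    // Send the secret to the external service as an Authorization header.
    const upstream = await fetch("https://api.example.com/v1/items", {
      headers: { Authorization: `Bearer ${env.API_TOKEN}` },
    });
    return new Response(upstream.body, upstream);
  },
} satisfies ExportedHandler<Env>;
```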
For services that require mTLS authentication, use [mTLS certificates](https://developers.cloudflare.com/workers/runtime-apis/bindings/mtls) to present a client certificate. ## Next steps * Learn how to connect to [an existing PostgreSQL database](https://developers.cloudflare.com/hyperdrive/) with Hyperdrive. * Discover [other storage options available](https://developers.cloudflare.com/workers/platform/storage-options/) for use with Workers. * [Create your first database](https://developers.cloudflare.com/d1/get-started/) with Cloudflare D1. --- title: Cloudflare D1 · Cloudflare Workers docs description: Cloudflare’s native serverless database. lastUpdated: 2024-08-13T19:56:56.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/databases/d1/ md: https://developers.cloudflare.com/workers/databases/d1/index.md --- --- title: Hyperdrive · Cloudflare Workers docs description: Use Workers to accelerate queries you make to existing databases. lastUpdated: 2024-08-13T19:56:56.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/databases/hyperdrive/ md: https://developers.cloudflare.com/workers/databases/hyperdrive/index.md --- --- title: 3rd Party Integrations · Cloudflare Workers docs description: Connect to third-party databases such as Supabase, Turso and PlanetScale lastUpdated: 2025-06-25T15:22:01.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/databases/third-party-integrations/ md: https://developers.cloudflare.com/workers/databases/third-party-integrations/index.md --- ## Background Connect to databases by configuring connection strings and credentials as [secrets](https://developers.cloudflare.com/workers/configuration/secrets/) in your Worker. Connecting to a regional database from a Worker? If your Worker is connecting to a regional database, you can reduce your query latency by using [Hyperdrive](https://developers.cloudflare.com/hyperdrive) and [Smart Placement](https://developers.cloudflare.com/workers/configuration/smart-placement/), which are both included in any Workers plan. Hyperdrive will pool your database connections globally across Cloudflare's network. Smart Placement will monitor your application to run your Workers closest to your backend infrastructure when this reduces the latency of your Worker invocations. Learn more about [how Smart Placement works](https://developers.cloudflare.com/workers/configuration/smart-placement/). ## Database credentials When you rotate or update database credentials, you must update the corresponding [secrets](https://developers.cloudflare.com/workers/configuration/secrets/) in your Worker. Use the [`wrangler secret put`](https://developers.cloudflare.com/workers/wrangler/commands/#secret) command to update secrets securely or update the secret directly in the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers/services/view/:worker/production/settings). ## Database limits You can connect to multiple databases by configuring separate sets of secrets for each database connection. Use descriptive secret names to distinguish between different database connections (for example, `DATABASE_URL_PROD` and `DATABASE_URL_STAGING`).
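As a sketch of that naming convention, the Worker below reads both connection strings from its environment and selects one per request; the hostname check is only illustrative, and in practice you would pass the chosen string to your database driver.

```ts
interface Env {
  // Secrets created with `wrangler secret put DATABASE_URL_PROD` and `wrangler secret put DATABASE_URL_STAGING`.
  DATABASE_URL_PROD: string;
  DATABASE_URL_STAGING: string;
}

export default {
  async fetch(request, env): Promise<Response> {
    // Pick a connection string based on which hostname served the request (illustrative logic).
    const { hostname } = new URL(request.url);
    const isStaging = hostname.startsWith("staging.");
    const connectionString = isStaging ? env.DATABASE_URL_STAGING : env.DATABASE_URL_PROD;

    // Hand connectionString to your database driver of choice here; never return it in a response.
    return new Response(
      `Using the ${isStaging ? "staging" : "production"} database (${connectionString ? "configured" : "missing"}).`,
    );
  },
} satisfies ExportedHandler<Env>;
```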
## Popular providers * [Neon](https://developers.cloudflare.com/workers/databases/third-party-integrations/neon/) * [PlanetScale](https://developers.cloudflare.com/workers/databases/third-party-integrations/planetscale/) * [Supabase](https://developers.cloudflare.com/workers/databases/third-party-integrations/supabase/) * [Turso](https://developers.cloudflare.com/workers/databases/third-party-integrations/turso/) * [Upstash](https://developers.cloudflare.com/workers/databases/third-party-integrations/upstash/) * [Xata](https://developers.cloudflare.com/workers/databases/third-party-integrations/xata/) --- title: Vectorize (vector database) · Cloudflare Workers docs description: A globally distributed vector database that enables you to build full-stack, AI-powered applications with Cloudflare Workers. lastUpdated: 2024-08-13T19:56:56.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/databases/vectorize/ md: https://developers.cloudflare.com/workers/databases/vectorize/index.md --- --- title: Supported bindings per development mode · Cloudflare Workers docs description: Supported bindings per development mode lastUpdated: 2025-06-18T17:02:32.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/development-testing/bindings-per-env/ md: https://developers.cloudflare.com/workers/development-testing/bindings-per-env/index.md --- ## Local development During local development, your Worker code always executes locally and bindings connect to locally simulated resources [by default](https://developers.cloudflare.com/workers/development-testing/#remote-bindings). You can configure [**remote bindings** during local development](https://developers.cloudflare.com/workers/development-testing/#remote-bindings), allowing your bindings to connect to a deployed resource on a per-binding basis. | Binding | Local simulations | Remote binding connections | | - | - | - | | **AI** | ❌ | ✅ | | **Assets** | ✅ | ❌ | | **Analytics Engine** | ✅ | ❌ | | **Browser Rendering** | ✅ | ✅ | | **D1** | ✅ | ✅ | | **Durable Objects** | ✅ | ❌ | | **Email Bindings** | ✅ | ✅ | | **Hyperdrive** | ✅ | ❌ | | **Images** | ✅ | ✅ | | **KV** | ✅ | ✅ | | **mTLS** | ❌ | ✅ | | **Queues** | ✅ | ✅ | | **R2** | ✅ | ✅ | | **Rate Limiting** | ✅ | ❌ | | **Service Bindings (multiple Workers)** | ✅ | ✅ | | **Vectorize** | ❌ | ✅ | | **Workflows** | ✅ | ✅ | * **Local simulations:** Bindings connect to local resource simulations. Supported in [`wrangler dev`](https://developers.cloudflare.com/workers/wrangler/commands/#dev) and the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/). * **Remote binding connections:** Bindings connect to remote resources via `experimental_remote: true` configuration. Supported in [`wrangler dev --x-remote-bindings`](https://developers.cloudflare.com/workers/development-testing/#using-wrangler-with-remote-bindings) and the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/development-testing/#using-vite-with-remote-bindings). ## Remote development During remote development, all of your Worker code is uploaded and executed on Cloudflare's infrastructure, and bindings always connect to remote resources. **We recommend using local development with remote binding connections instead** for faster iteration and debugging. Supported only in [`wrangler dev --remote`](https://developers.cloudflare.com/workers/wrangler/commands/#dev) - there is **no Vite plugin equivalent**. 
| Binding | Remote development | | - | - | | **AI** | ✅ | | **Assets** | ✅ | | **Analytics Engine** | ✅ | | **Browser Rendering** | ✅ | | **D1** | ✅ | | **Durable Objects** | ✅ | | **Email Bindings** | ✅ | | **Hyperdrive** | ✅ | | **Images** | ✅ | | **KV** | ✅ | | **mTLS** | ✅ | | **Queues** | ❌ | | **R2** | ✅ | | **Rate Limiting** | ✅ | | **Service Bindings (multiple Workers)** | ✅ | | **Vectorize** | ✅ | | **Workflows** | ❌ | *** --- title: Environment variables and secrets · Cloudflare Workers docs description: Configuring environment variables and secrets for local development lastUpdated: 2025-06-18T17:02:32.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/development-testing/environment-variables/ md: https://developers.cloudflare.com/workers/development-testing/environment-variables/index.md --- During local development, you may need to configure **environment variables** (such as API URLs, feature flags) and **secrets** (API tokens, private keys). You can use a `.dev.vars` file in the root of your project to override environment variables for local development, and both [Wrangler](https://developers.cloudflare.com/workers/configuration/environment-variables/#compare-secrets-and-environment-variables) and the [Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/reference/secrets/) will respect this override. Warning Be sure to add `.dev.vars` to your `.gitignore` so it never gets committed. ### Why use a `.dev.vars` file? Use `.dev.vars` to set local overrides for environment variables that should not be checked into your repository. If you want to manage environment-based configuration that you **want checked into your repository** (for example, non-sensitive or shared environment defaults), you can define [environment variables as `[vars]`](https://developers.cloudflare.com/workers/wrangler/environments/#_top) in your Wrangler configuration. Using a `.dev.vars` file is specifically for local-only secrets or configuration that you do not want in version control and only want to inject in local dev sessions. ## Basic setup 1. Create a `.dev.vars` file in your project root. 2. Add key-value pairs: ```ini API_HOST="localhost:3000" DEBUG="true" SECRET_TOKEN="my-local-secret-token" ``` 3. Run your `dev` command **Wrangler** * npm ```sh npx wrangler dev ``` * yarn ```sh yarn wrangler dev ``` * pnpm ```sh pnpm wrangler dev ``` **Vite plugin** * npm ```sh npx vite dev ``` * yarn ```sh yarn vite dev ``` * pnpm ```sh pnpm vite dev ``` ## Multiple local environments with `.dev.vars` To simulate different local environments, you can: 1. Create a file named `.dev.vars.` . For example, we'll use `.dev.vars.staging`. 2. Add key-value pairs: ```ini API_HOST="staging.localhost:3000" DEBUG="false" SECRET_TOKEN="staging-token" ``` 3. Specify the environment when running the `dev` command: **Wrangler** * npm ```sh npx wrangler dev --env staging ``` * yarn ```sh yarn wrangler dev --env staging ``` * pnpm ```sh pnpm wrangler dev --env staging ``` **Vite plugin** * npm ```sh CLOUDFLARE_ENV=staging npx vite dev ``` * yarn ```sh CLOUDFLARE_ENV=staging yarn vite dev ``` * pnpm ```sh CLOUDFLARE_ENV=staging pnpm vite dev ``` Only the values from `.dev.vars.staging` will be applied instead of `.dev.vars`. ## Learn more * To learn how to configure multiple environments in Wrangler configuration, [read the documentation](https://developers.cloudflare.com/workers/wrangler/environments/#_top). 
* To learn how to use Wrangler environments and Vite environments together, [read the Vite plugin documentation](https://developers.cloudflare.com/workers/vite-plugin/reference/cloudflare-environments/) --- title: Adding local data · Cloudflare Workers docs description: Populating local resources with data lastUpdated: 2025-06-19T13:29:12.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/development-testing/local-data/ md: https://developers.cloudflare.com/workers/development-testing/local-data/index.md --- Whether you are using Wrangler or the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), your workflow for **accessing** data during local development remains the same. However, you can only [populate local resources with data](https://developers.cloudflare.com/workers/development-testing/local-data/#populating-local-resources-with-data) via the Wrangler CLI. ### How it works When you run either `wrangler dev` or [`vite`](https://vite.dev/guide/cli#dev-server), [Miniflare](https://developers.cloudflare.com/workers/testing/miniflare/) automatically creates **local versions** of your resources (like [KV](https://developers.cloudflare.com/kv), [D1](https://developers.cloudflare.com/d1/), or [R2](https://developers.cloudflare.com/r2)). This means you **don’t** need to manually set up separate local instances for each service. However, newly created local resources **won’t** contain any data — you'll need to use Wrangler commands with the `--local` flag to populate them. Changes made to local resources won’t affect production data. ## Populating local resources with data When you first start developing, your local resources will be empty. You'll need to populate them with data using the Wrangler CLI. ### KV namespaces Syntax note Since version 3.60.0, Wrangler supports the `kv ...` syntax. If you are using versions below 3.60.0, the command follows the `kv:...` syntax. Learn more in the [Wrangler commands for KV page](https://developers.cloudflare.com/kv/reference/kv-commands/). #### [Add a single key-value pair](https://developers.cloudflare.com/workers/wrangler/commands/#kv-key) * npm ```sh npx wrangler kv key put --binding= --local ``` * yarn ```sh yarn wrangler kv key put --binding= --local ``` * pnpm ```sh pnpm wrangler kv key put --binding= --local ``` #### [Bulk upload](https://developers.cloudflare.com/workers/wrangler/commands/#kv-bulk) * npm ```sh npx wrangler kv bulk put --binding= --local ``` * yarn ```sh yarn wrangler kv bulk put --binding= --local ``` * pnpm ```sh pnpm wrangler kv bulk put --binding= --local ``` ### R2 buckets #### [Upload a file](https://developers.cloudflare.com/workers/wrangler/commands/#r2-object) * npm ```sh npx wrangler r2 object put / --file= --local ``` * yarn ```sh yarn wrangler r2 object put / --file= --local ``` * pnpm ```sh pnpm wrangler r2 object put / --file= --local ``` You may also include [other metadata](https://developers.cloudflare.com/workers/wrangler/commands/#r2-object-put). 
### D1 databases #### [Execute a SQL statement](https://developers.cloudflare.com/workers/wrangler/commands/#d1-execute) * npm ```sh npx wrangler d1 execute <DATABASE_NAME> --command="<SQL_QUERY>" --local ``` * yarn ```sh yarn wrangler d1 execute <DATABASE_NAME> --command="<SQL_QUERY>" --local ``` * pnpm ```sh pnpm wrangler d1 execute <DATABASE_NAME> --command="<SQL_QUERY>" --local ``` #### [Execute a SQL file](https://developers.cloudflare.com/workers/wrangler/commands/#d1-execute) * npm ```sh npx wrangler d1 execute <DATABASE_NAME> --file=./schema.sql --local ``` * yarn ```sh yarn wrangler d1 execute <DATABASE_NAME> --file=./schema.sql --local ``` * pnpm ```sh pnpm wrangler d1 execute <DATABASE_NAME> --file=./schema.sql --local ``` ### Durable Objects For Durable Objects, unlike KV, D1, and R2, there are no CLI commands to populate them with local data. To add data to Durable Objects during local development, you must write application code that creates Durable Object instances and [calls methods on them that store state](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/). This typically involves creating development endpoints or test routes that initialize your Durable Objects with the desired data. ## Where local data gets stored By default, both Wrangler and the Vite plugin store local binding data in the same location: the `.wrangler/state` folder in your project directory. This folder stores data in subdirectories for all local bindings: KV namespaces, R2 buckets, D1 databases, Durable Objects, etc. ### Clearing local storage You can delete the `.wrangler/state` folder at any time to reset your local environment, and Miniflare will recreate it the next time you run your `dev` command. You can also delete specific sub-folders within `.wrangler/state` for more targeted clean-up. ### Changing the local data directory If you prefer to specify a different directory for local storage, you can do so through the Wrangler CLI or in the Vite plugin's configuration. #### Using Wrangler Use the [`--persist-to`](https://developers.cloudflare.com/workers/wrangler/commands/#dev) flag with `wrangler dev`. You need to specify this flag every time you run the `dev` command: * npm ```sh npx wrangler dev --persist-to <DIRECTORY> ``` * yarn ```sh yarn wrangler dev --persist-to <DIRECTORY> ``` * pnpm ```sh pnpm wrangler dev --persist-to <DIRECTORY> ``` Note The local persistence folder (like `.wrangler/state` or any custom folder you set) should be added to your `.gitignore` to avoid committing local development data to version control. Using `--local` with `--persist-to` If you run `wrangler dev --persist-to <DIRECTORY>` to specify a custom location for local data, you must also include the same `--persist-to <DIRECTORY>` when running other Wrangler commands that modify local data (and be sure to include the `--local` flag). For example, to create a KV key named `test` with a value of `12345` in a local KV namespace, run: * npm ```sh npx wrangler kv key put test 12345 --binding MY_KV_NAMESPACE --local --persist-to worker-local ``` * yarn ```sh yarn wrangler kv key put test 12345 --binding MY_KV_NAMESPACE --local --persist-to worker-local ``` * pnpm ```sh pnpm wrangler kv key put test 12345 --binding MY_KV_NAMESPACE --local --persist-to worker-local ``` This command: * Sets the KV key `test` to `12345` in the binding `MY_KV_NAMESPACE` (defined in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/)). * Uses `--persist-to worker-local` to ensure the data is created in the **worker-local** directory instead of the default `.wrangler/state`.
* Adds the `--local` flag, indicating you want to modify local data. If `--persist-to` is not specified, Wrangler defaults to using `.wrangler/state` for local data. #### Using the Cloudflare Vite plugin To customize where the Vite plugin stores local data, configure the [`persistState` option](https://developers.cloudflare.com/workers/vite-plugin/reference/api/#interface-pluginconfig) in your Vite config file: ```js import { defineConfig } from "vite"; import { cloudflare } from "@cloudflare/vite-plugin"; export default defineConfig({ plugins: [ cloudflare({ persistState: "./my-custom-directory", }), ], }); ``` #### Sharing state between tools If you want Wrangler and the Vite plugin to share the same state, configure them to use the same persistence path. --- title: Developing with multiple Workers · Cloudflare Workers docs description: Learn how to develop with multiple Workers using different approaches and configurations. lastUpdated: 2025-06-26T14:38:25.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/development-testing/multi-workers/ md: https://developers.cloudflare.com/workers/development-testing/multi-workers/index.md --- When building complex applications, you may want to run multiple Workers during development. This guide covers the different approaches for running multiple Workers locally and when to use each approach. ## Single dev command Tip We recommend this approach as the default for most development workflows as it ensures the best compatibility with bindings. You can run multiple Workers in a single dev command by passing multiple configuration files to your dev server: **Using Wrangler** * npm ```sh npx wrangler dev -c ./app/wrangler.jsonc -c ./api/wrangler.jsonc ``` * yarn ```sh yarn wrangler dev -c ./app/wrangler.jsonc -c ./api/wrangler.jsonc ``` * pnpm ```sh pnpm wrangler dev -c ./app/wrangler.jsonc -c ./api/wrangler.jsonc ``` The first config (`./app/wrangler.jsonc`) is treated as the primary Worker, exposed at `http://localhost:8787`. Additional configs (e.g. `./api/wrangler.jsonc`) run as auxiliary Workers, available via service bindings or tail consumers from the primary Worker. **Using the Vite plugin** Configure `auxiliaryWorkers` in your Vite configuration: ```js import { defineConfig } from "vite"; import { cloudflare } from "@cloudflare/vite-plugin"; export default defineConfig({ plugins: [ cloudflare({ configPath: "./app/wrangler.jsonc", auxiliaryWorkers: [ { configPath: "./api/wrangler.jsonc", }, ], }), ], }); ``` Then run: * npm ```sh npx vite dev ``` * yarn ```sh yarn vite dev ``` * pnpm ```sh pnpm vite dev ``` **Use this approach when:** * You want the simplest setup for development * Workers are part of the same application or codebase * You need to access a Durable Object namespace from another Worker using `script_name`, or set up Queues where the producer and consumer Workers are separated. ## Multiple dev commands You can also run each Worker in a separate dev command, each with its own terminal and configuration.
* npm ```sh # Terminal 1 npx wrangler dev -c ./app/wrangler.jsonc ``` * yarn ```sh # Terminal 1 yarn wrangler dev -c ./app/wrangler.jsonc ``` * pnpm ```sh # Terminal 1 pnpm wrangler dev -c ./app/wrangler.jsonc ``` - npm ```sh # Terminal 2 npx wrangler dev -c ./api/wrangler.jsonc ``` - yarn ```sh # Terminal 2 yarn wrangler dev -c ./api/wrangler.jsonc ``` - pnpm ```sh # Terminal 2 pnpm wrangler dev -c ./api/wrangler.jsonc ``` These Workers run in different dev commands but can still communicate with each other via service bindings or tail consumers **regardless of whether they are started with `wrangler dev` or `vite dev`**. Note You can also combine both approaches — for example, run a group of Workers together through `vite dev` using `auxiliaryWorkers`, while running another Worker separately with `wrangler dev`. This allows you to keep tightly coupled Workers running under a single dev command, while keeping independent or shared Workers in separate ones. However, running `wrangler dev` with multiple configuration files (e.g. `wrangler dev -c ./app/wrangler.jsonc -c ./api/wrangler.jsonc`) does **not** support cross-process bindings at the moment. **Use this approach when:** * You want each Worker to be accessible on its own local URL during development, since only the primary Worker is exposed when using a single dev command * Each Worker has its own build setup or tooling — for example, one uses Vite with custom plugins while another is a vanilla Wrangler project * You need the flexibility to run and develop Workers independently without restructuring your project or consolidating configs This setup is especially useful in larger projects where each team maintains a subset of Workers. Running everything in a single dev command might require significant restructuring or build integration that isn't always practical. --- title: Testing · Cloudflare Workers docs lastUpdated: 2025-06-18T17:02:32.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/development-testing/testing/ md: https://developers.cloudflare.com/workers/development-testing/testing/index.md --- --- title: Vite Plugin · Cloudflare Workers docs lastUpdated: 2025-06-18T17:02:32.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/development-testing/vite-plugin/ md: https://developers.cloudflare.com/workers/development-testing/vite-plugin/index.md --- --- title: Choosing between Wrangler & Vite · Cloudflare Workers docs description: Choosing between Wrangler and Vite for local development lastUpdated: 2025-06-18T17:02:32.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/development-testing/wrangler-vs-vite/ md: https://developers.cloudflare.com/workers/development-testing/wrangler-vs-vite/index.md --- # When to use Wrangler vs Vite Deciding between Wrangler and the Cloudflare Vite plugin depends on your project's focus and development workflow. Here are some quick guidelines to help you choose: ## When to use Wrangler * **Backend & Workers-focused:** If you're primarily building APIs, serverless functions, or background tasks, use Wrangler. * **Remote development:** If your project needs the ability to develop and test using production resources and data on Cloudflare's network, use Wrangler's `--remote` flag. * **Simple frontends:** If you have minimal frontend requirements and don’t need hot reloading or advanced bundling, Wrangler may be sufficient. 
## When to use the Cloudflare Vite Plugin Use the [Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/) for: * **Frontend-centric development:** If you already use Vite with modern frontend frameworks like React, Vue, Svelte, or Solid, the Vite plugin integrates into your development workflow. * **React Router v7:** If you are using [React Router v7](https://reactrouter.com/) (the successor to Remix), it is officially supported by the Vite plugin as a full-stack SSR framework. * **Rapid iteration (HMR):** If you need near-instant updates in the browser, the Vite plugin provides [Hot Module Replacement (HMR)](https://vite.dev/guide/features.html#hot-module-replacement) during local development. * **Advanced optimizations:** If you require more advanced optimizations (code splitting, efficient bundling, CSS handling, build time transformations, etc.), Vite is a strong fit. * **Greater flexibility:** Due to Vite's advanced configuration options and large ecosystem of plugins, there is more flexibility to customize your development experience and build output. --- title: 103 Early Hints · Cloudflare Workers docs description: Allow a client to request static assets while waiting for the HTML response. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Middleware,Headers source_url: html: https://developers.cloudflare.com/workers/examples/103-early-hints/ md: https://developers.cloudflare.com/workers/examples/103-early-hints/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/103-early-hints) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. `103` Early Hints is an HTTP status code designed to speed up content delivery. When enabled, Cloudflare can cache the `Link` headers marked with preload and/or preconnect from HTML pages and serve them in a `103` Early Hints response before reaching the origin server. Browsers can use these hints to fetch linked assets while waiting for the origin’s final response, dramatically improving page load speeds. To ensure Early Hints are enabled on your zone: 1. Log in to the [Cloudflare Dashboard](https://dash.cloudflare.com) and select your account and website. 2. Go to **Speed** > **Optimization** > **Content Optimization**. 3. Enable the **Early Hints** toggle to on. You can return `Link` headers from a Worker running on your zone to speed up your page load times.

* JavaScript

```js
const CSS = "body { color: red; }";
const HTML = `<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Early Hints test</title>
    <link rel="stylesheet" href="/test.css" />
  </head>
  <body>
    <h1>Early Hints test page</h1>
  </body>
</html>
`;

export default {
  async fetch(req) {
    // If request is for test.css, serve the raw CSS
    if (/test\.css$/.test(req.url)) {
      return new Response(CSS, {
        headers: {
          "content-type": "text/css",
        },
      });
    } else {
      // Serve raw HTML using Early Hints for the CSS file
      return new Response(HTML, {
        headers: {
          "content-type": "text/html",
          link: "</test.css>; rel=preload; as=style",
        },
      });
    }
  },
};
```

* TypeScript

```ts
const CSS = "body { color: red; }";
const HTML = `<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Early Hints test</title>
    <link rel="stylesheet" href="/test.css" />
  </head>
  <body>
    <h1>Early Hints test page</h1>
  </body>
</html>
`;

export default {
  async fetch(req): Promise<Response> {
    // If request is for test.css, serve the raw CSS
    if (/test\.css$/.test(req.url)) {
      return new Response(CSS, {
        headers: {
          "content-type": "text/css",
        },
      });
    } else {
      // Serve raw HTML using Early Hints for the CSS file
      return new Response(HTML, {
        headers: {
          "content-type": "text/html",
          link: "</test.css>; rel=preload; as=style",
        },
      });
    }
  },
} satisfies ExportedHandler;
```

* Python

```py
import re
from workers import Response

CSS = "body { color: red; }"
HTML = """<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Early Hints test</title>
    <link rel="stylesheet" href="/test.css" />
  </head>
  <body>
    <h1>Early Hints test page</h1>
  </body>
</html>
"""

def on_fetch(request):
    # If request is for test.css, serve the raw CSS
    if re.search("test.css", request.url):
        headers = {"content-type": "text/css"}
        return Response(CSS, headers=headers)
    else:
        # Serve raw HTML using Early Hints for the CSS file
        headers = {"content-type": "text/html", "link": "</test.css>; rel=preload; as=style"}
        return Response(HTML, headers=headers)
```

* Hono

```ts
import { Hono } from "hono";

const app = new Hono();

const CSS = "body { color: red; }";
const HTML = `<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Early Hints test</title>
    <link rel="stylesheet" href="/test.css" />
  </head>
  <body>
    <h1>Early Hints test page</h1>
  </body>
</html>
`;

// Serve CSS file
app.get("/test.css", (c) => {
  return c.body(CSS, {
    headers: {
      "content-type": "text/css",
    },
  });
});

// Serve HTML with early hints
app.get("*", (c) => {
  return c.html(HTML, {
    headers: {
      link: "</test.css>; rel=preload; as=style",
    },
  });
});

export default app;
```
--- title: A/B testing with same-URL direct access · Cloudflare Workers docs description: Set up an A/B test by controlling what response is served based on cookies. This version supports passing the request through to test and control on the origin, bypassing random assignment. lastUpdated: 2025-04-15T13:29:20.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/examples/ab-testing/ md: https://developers.cloudflare.com/workers/examples/ab-testing/index.md --- * JavaScript ```js const NAME = "myExampleWorkersABTest"; export default { async fetch(req) { const url = new URL(req.url); // Enable Passthrough to allow direct access to control and test routes. if (url.pathname.startsWith("/control") || url.pathname.startsWith("/test")) return fetch(req); // Determine which group this requester is in. const cookie = req.headers.get("cookie"); if (cookie && cookie.includes(`${NAME}=control`)) { url.pathname = "/control" + url.pathname; } else if (cookie && cookie.includes(`${NAME}=test`)) { url.pathname = "/test" + url.pathname; } else { // If there is no cookie, this is a new client. Choose a group and set the cookie. const group = Math.random() < 0.5 ? "test" : "control"; // 50/50 split if (group === "control") { url.pathname = "/control" + url.pathname; } else { url.pathname = "/test" + url.pathname; } // Reconstruct response to avoid immutability let res = await fetch(url); res = new Response(res.body, res); // Set cookie to enable persistent A/B sessions. res.headers.append("Set-Cookie", `${NAME}=${group}; path=/`); return res; } return fetch(url); }, }; ``` * TypeScript ```ts const NAME = "myExampleWorkersABTest"; export default { async fetch(req): Promise { const url = new URL(req.url); // Enable Passthrough to allow direct access to control and test routes. if (url.pathname.startsWith("/control") || url.pathname.startsWith("/test")) return fetch(req); // Determine which group this requester is in. const cookie = req.headers.get("cookie"); if (cookie && cookie.includes(`${NAME}=control`)) { url.pathname = "/control" + url.pathname; } else if (cookie && cookie.includes(`${NAME}=test`)) { url.pathname = "/test" + url.pathname; } else { // If there is no cookie, this is a new client. Choose a group and set the cookie. const group = Math.random() < 0.5 ? "test" : "control"; // 50/50 split if (group === "control") { url.pathname = "/control" + url.pathname; } else { url.pathname = "/test" + url.pathname; } // Reconstruct response to avoid immutability let res = await fetch(url); res = new Response(res.body, res); // Set cookie to enable persistent A/B sessions. res.headers.append("Set-Cookie", `${NAME}=${group}; path=/`); return res; } return fetch(url); }, } satisfies ExportedHandler; ``` * Python ```py import random from urllib.parse import urlparse, urlunparse from workers import Response, fetch NAME = "myExampleWorkersABTest" async def on_fetch(request): url = urlparse(request.url) # Uncomment below when testing locally # url = url._replace(netloc="example.com") if "localhost" in url.netloc else url # Enable Passthrough to allow direct access to control and test routes. if url.path.startswith("/control") or url.path.startswith("/test"): return fetch(urlunparse(url)) # Determine which group this requester is in. 
cookie = request.headers.get("cookie") if cookie and f'{NAME}=control' in cookie: url = url._replace(path="/control" + url.path) elif cookie and f'{NAME}=test' in cookie: url = url._replace(path="/test" + url.path) else: # If there is no cookie, this is a new client. Choose a group and set the cookie. group = "test" if random.random() < 0.5 else "control" if group == "control": url = url._replace(path="/control" + url.path) else: url = url._replace(path="/test" + url.path) # Reconstruct response to avoid immutability res = await fetch(urlunparse(url)) headers = dict(res.headers) headers["Set-Cookie"] = f'{NAME}={group}; path=/' return Response(res.body, headers=headers) return fetch(urlunparse(url)) ``` * Hono ```ts import { Hono } from "hono"; import { getCookie, setCookie } from "hono/cookie"; const app = new Hono(); const NAME = "myExampleWorkersABTest"; // Enable passthrough to allow direct access to control and test routes app.all("/control/*", (c) => fetch(c.req.raw)); app.all("/test/*", (c) => fetch(c.req.raw)); // Middleware to handle A/B testing logic app.use("*", async (c) => { const url = new URL(c.req.url); // Determine which group this requester is in const abTestCookie = getCookie(c, NAME); if (abTestCookie === "control") { // User is in control group url.pathname = "/control" + c.req.path; } else if (abTestCookie === "test") { // User is in test group url.pathname = "/test" + c.req.path; } else { // If there is no cookie, this is a new client // Choose a group and set the cookie (50/50 split) const group = Math.random() < 0.5 ? "test" : "control"; // Update URL path based on assigned group if (group === "control") { url.pathname = "/control" + c.req.path; } else { url.pathname = "/test" + c.req.path; } // Set cookie to enable persistent A/B sessions setCookie(c, NAME, group, { path: "/", }); } const res = await fetch(url); return c.body(res.body, res); }); export default app; ``` --- title: Accessing the Cloudflare Object · Cloudflare Workers docs description: Access custom Cloudflare properties and control how Cloudflare features are applied to every request. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/examples/accessing-the-cloudflare-object/ md: https://developers.cloudflare.com/workers/examples/accessing-the-cloudflare-object/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/accessing-the-cloudflare-object) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(req) { const data = req.cf !== undefined ? req.cf : { error: "The `cf` object is not available inside the preview." }; return new Response(JSON.stringify(data, null, 2), { headers: { "content-type": "application/json;charset=UTF-8", }, }); }, }; ``` * TypeScript ```ts export default { async fetch(req): Promise { const data = req.cf !== undefined ? req.cf : { error: "The `cf` object is not available inside the preview." 
}; return new Response(JSON.stringify(data, null, 2), { headers: { "content-type": "application/json;charset=UTF-8", }, }); }, } satisfies ExportedHandler; ``` * Hono ```ts import { Hono } from "hono"; const app = new Hono(); app.get("*", async (c) => { // Access the raw request to get the cf object const req = c.req.raw; // Check if the cf object is available const data = req.cf !== undefined ? req.cf : { error: "The `cf` object is not available inside the preview." }; // Return the data formatted with 2-space indentation return c.json(data); }); export default app; ``` * Python ```py import json from workers import Response from js import JSON def on_fetch(request): error = json.dumps({ "error": "The `cf` object is not available inside the preview." }) data = request.cf if request.cf is not None else error headers = {"content-type":"application/json"} return Response(JSON.stringify(data, None, 2), headers=headers) ``` --- title: Aggregate requests · Cloudflare Workers docs description: Send two GET request to two urls and aggregates the responses into one response. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/examples/aggregate-requests/ md: https://developers.cloudflare.com/workers/examples/aggregate-requests/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/aggregate-requests) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { // someHost is set up to return JSON responses const someHost = "https://jsonplaceholder.typicode.com"; const url1 = someHost + "/todos/1"; const url2 = someHost + "/todos/2"; const responses = await Promise.all([fetch(url1), fetch(url2)]); const results = await Promise.all(responses.map((r) => r.json())); const options = { headers: { "content-type": "application/json;charset=UTF-8" }, }; return new Response(JSON.stringify(results), options); }, }; ``` * TypeScript ```ts export default { async fetch(request) { // someHost is set up to return JSON responses const someHost = "https://jsonplaceholder.typicode.com"; const url1 = someHost + "/todos/1"; const url2 = someHost + "/todos/2"; const responses = await Promise.all([fetch(url1), fetch(url2)]); const results = await Promise.all(responses.map((r) => r.json())); const options = { headers: { "content-type": "application/json;charset=UTF-8" }, }; return new Response(JSON.stringify(results), options); }, } satisfies ExportedHandler; ``` * Hono ```ts import { Hono } from "hono"; const app = new Hono(); app.get("*", async (c) => { // someHost is set up to return JSON responses const someHost = "https://jsonplaceholder.typicode.com"; const url1 = someHost + "/todos/1"; const url2 = someHost + "/todos/2"; // Fetch both URLs concurrently const responses = await Promise.all([fetch(url1), fetch(url2)]); // Parse JSON responses concurrently const results = await Promise.all(responses.map((r) => r.json())); // Return aggregated results return c.json(results); }); export default app; ``` * Python ```py from workers import Response, fetch import asyncio import json async def on_fetch(request): # some_host is set up to return JSON responses some_host = "https://jsonplaceholder.typicode.com" url1 = some_host + "/todos/1" url2 = some_host + 
"/todos/2" responses = await asyncio.gather(fetch(url1), fetch(url2)) results = await asyncio.gather(*(r.json() for r in responses)) headers = {"content-type": "application/json;charset=UTF-8"} return Response.json(results, headers=headers) ``` --- title: Alter headers · Cloudflare Workers docs description: Example of how to add, change, or delete headers sent in a request or returned in a response. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Headers,Middleware source_url: html: https://developers.cloudflare.com/workers/examples/alter-headers/ md: https://developers.cloudflare.com/workers/examples/alter-headers/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/alter-headers) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { const response = await fetch("https://example.com"); // Clone the response so that it's no longer immutable const newResponse = new Response(response.body, response); // Add a custom header with a value newResponse.headers.append( "x-workers-hello", "Hello from Cloudflare Workers", ); // Delete headers newResponse.headers.delete("x-header-to-delete"); newResponse.headers.delete("x-header2-to-delete"); // Adjust the value for an existing header newResponse.headers.set("x-header-to-change", "NewValue"); return newResponse; }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { const response = await fetch(request); // Clone the response so that it's no longer immutable const newResponse = new Response(response.body, response); // Add a custom header with a value newResponse.headers.append( "x-workers-hello", "Hello from Cloudflare Workers", ); // Delete headers newResponse.headers.delete("x-header-to-delete"); newResponse.headers.delete("x-header2-to-delete"); // Adjust the value for an existing header newResponse.headers.set("x-header-to-change", "NewValue"); return newResponse; }, } satisfies ExportedHandler; ``` * Python ```py from workers import Response, fetch async def on_fetch(request): response = await fetch("https://example.com") # Grab the response headers so they can be modified new_headers = response.headers # Add a custom header with a value new_headers["x-workers-hello"] = "Hello from Cloudflare Workers" # Delete headers if "x-header-to-delete" in new_headers: del new_headers["x-header-to-delete"] if "x-header2-to-delete" in new_headers: del new_headers["x-header2-to-delete"] # Adjust the value for an existing header new_headers["x-header-to-change"] = "NewValue" return Response(response.body, headers=new_headers) ``` * Hono ```ts import { Hono } from 'hono'; const app = new Hono(); app.use('*', async (c, next) => { // Process the request with the next middleware/handler await next(); // After the response is generated, we can modify its headers // Add a custom header with a value c.res.headers.append( "x-workers-hello", "Hello from Cloudflare Workers with Hono" ); // Delete headers c.res.headers.delete("x-header-to-delete"); c.res.headers.delete("x-header2-to-delete"); // Adjust the value for an existing header c.res.headers.set("x-header-to-change", "NewValue"); }); app.get('*', async (c) => { // Fetch content from example.com const response = await fetch("https://example.com"); // Return the response 
body with original headers // (our middleware will modify the headers before sending) return new Response(response.body, { headers: response.headers }); }); export default app; ``` You can also use the [`custom-headers-example` template](https://github.com/kristianfreeman/custom-headers-example) to deploy this code to your custom domain. --- title: Auth with headers · Cloudflare Workers docs description: Allow or deny a request based on a known pre-shared key in a header. This is not meant to replace the WebCrypto API. lastUpdated: 2025-04-16T21:02:18.000Z chatbotDeprioritize: false tags: Authentication,Web Crypto source_url: html: https://developers.cloudflare.com/workers/examples/auth-with-headers/ md: https://developers.cloudflare.com/workers/examples/auth-with-headers/index.md --- Caution when using in production The example code contains a generic header key and value of `X-Custom-PSK` and `mypresharedkey`. To best protect your resources, change the header key and value in the Workers editor before saving your code. * JavaScript ```js export default { async fetch(request) { /** * @param {string} PRESHARED_AUTH_HEADER_KEY Custom header to check for key * @param {string} PRESHARED_AUTH_HEADER_VALUE Hard coded key value */ const PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK"; const PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey"; const psk = request.headers.get(PRESHARED_AUTH_HEADER_KEY); if (psk === PRESHARED_AUTH_HEADER_VALUE) { // Correct preshared header key supplied. Fetch request from origin. return fetch(request); } // Incorrect key supplied. Reject the request. return new Response("Sorry, you have supplied an invalid key.", { status: 403, }); }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { /** * @param {string} PRESHARED_AUTH_HEADER_KEY Custom header to check for key * @param {string} PRESHARED_AUTH_HEADER_VALUE Hard coded key value */ const PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK"; const PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey"; const psk = request.headers.get(PRESHARED_AUTH_HEADER_KEY); if (psk === PRESHARED_AUTH_HEADER_VALUE) { // Correct preshared header key supplied. Fetch request from origin. return fetch(request); } // Incorrect key supplied. Reject the request. return new Response("Sorry, you have supplied an invalid key.", { status: 403, }); }, } satisfies ExportedHandler; ``` * Python ```py from workers import Response, fetch async def on_fetch(request): PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK" PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey" psk = request.headers[PRESHARED_AUTH_HEADER_KEY] if psk == PRESHARED_AUTH_HEADER_VALUE: # Correct preshared header key supplied. Fetch request from origin. return fetch(request) # Incorrect key supplied. Reject the request. return Response("Sorry, you have supplied an invalid key.", status=403) ``` * Hono ```ts import { Hono } from 'hono'; const app = new Hono(); // Add authentication middleware app.use('*', async (c, next) => { /** * Define authentication constants */ const PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK"; const PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey"; // Get the pre-shared key from the request header const psk = c.req.header(PRESHARED_AUTH_HEADER_KEY); if (psk === PRESHARED_AUTH_HEADER_VALUE) { // Correct preshared header key supplied. Continue to the next handler. await next(); } else { // Incorrect key supplied. Reject the request. 
return c.text("Sorry, you have supplied an invalid key.", 403); } }); // Handle all authenticated requests by passing through to origin app.all('*', async (c) => { return fetch(c.req.raw); }); export default app; ``` --- title: HTTP Basic Authentication · Cloudflare Workers docs description: Shows how to restrict access using the HTTP Basic schema. lastUpdated: 2025-04-15T13:29:20.000Z chatbotDeprioritize: false tags: Security,Authentication source_url: html: https://developers.cloudflare.com/workers/examples/basic-auth/ md: https://developers.cloudflare.com/workers/examples/basic-auth/index.md --- Note This example Worker makes use of the [Node.js Buffer API](https://developers.cloudflare.com/workers/runtime-apis/nodejs/buffer/), which is available as part of the Worker's runtime [Node.js compatibility mode](https://developers.cloudflare.com/workers/runtime-apis/nodejs/). To run this Worker, you will need to [enable the `nodejs_compat` compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#nodejs-compatibility-flag). Caution when using in production This code is provided as a sample, and is not suitable for production use. Basic Authentication sends credentials unencrypted, and must be used with an HTTPS connection to be considered secure. For a production-ready authentication system, consider using [Cloudflare Access](https://developers.cloudflare.com/cloudflare-one/applications/configure-apps/self-hosted-public-app/). * JavaScript ```js /** * Shows how to restrict access using the HTTP Basic schema. * @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication * @see https://tools.ietf.org/html/rfc7617 * */ import { Buffer } from "node:buffer"; const encoder = new TextEncoder(); /** * Protect against timing attacks by safely comparing values using `timingSafeEqual`. * Refer to https://developers.cloudflare.com/workers/runtime-apis/web-crypto/#timingsafeequal for more details * @param {string} a * @param {string} b * @returns {boolean} */ function timingSafeEqual(a, b) { const aBytes = encoder.encode(a); const bBytes = encoder.encode(b); if (aBytes.byteLength !== bBytes.byteLength) { // Strings must be the same length in order to compare // with crypto.subtle.timingSafeEqual return false; } return crypto.subtle.timingSafeEqual(aBytes, bBytes); } export default { /** * * @param {Request} request * @param {{PASSWORD: string}} env * @returns */ async fetch(request, env) { const BASIC_USER = "admin"; // You will need an admin password. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ const BASIC_PASS = env.PASSWORD ?? "password"; const url = new URL(request.url); switch (url.pathname) { case "/": return new Response("Anyone can access the homepage."); case "/logout": // Invalidate the "Authorization" header by returning a HTTP 401. // We do not send a "WWW-Authenticate" header, as this would trigger // a popup in the browser, immediately asking for credentials again. return new Response("Logged out.", { status: 401 }); case "/admin": { // The "Authorization" header is sent when authenticated. const authorization = request.headers.get("Authorization"); if (!authorization) { return new Response("You need to login.", { status: 401, headers: { // Prompts the user for credentials. 
"WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"', }, }); } const [scheme, encoded] = authorization.split(" "); // The Authorization header must start with Basic, followed by a space. if (!encoded || scheme !== "Basic") { return new Response("Malformed authorization header.", { status: 400, }); } const credentials = Buffer.from(encoded, "base64").toString(); // The username & password are split by the first colon. //=> example: "username:password" const index = credentials.indexOf(":"); const user = credentials.substring(0, index); const pass = credentials.substring(index + 1); if ( !timingSafeEqual(BASIC_USER, user) || !timingSafeEqual(BASIC_PASS, pass) ) { return new Response("You need to login.", { status: 401, headers: { // Prompts the user for credentials. "WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"', }, }); } return new Response("🎉 You have private access!", { status: 200, headers: { "Cache-Control": "no-store", }, }); } } return new Response("Not Found.", { status: 404 }); }, }; ``` * TypeScript ```ts /** * Shows how to restrict access using the HTTP Basic schema. * @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication * @see https://tools.ietf.org/html/rfc7617 * */ import { Buffer } from "node:buffer"; const encoder = new TextEncoder(); /** * Protect against timing attacks by safely comparing values using `timingSafeEqual`. * Refer to https://developers.cloudflare.com/workers/runtime-apis/web-crypto/#timingsafeequal for more details */ function timingSafeEqual(a: string, b: string) { const aBytes = encoder.encode(a); const bBytes = encoder.encode(b); if (aBytes.byteLength !== bBytes.byteLength) { // Strings must be the same length in order to compare // with crypto.subtle.timingSafeEqual return false; } return crypto.subtle.timingSafeEqual(aBytes, bBytes); } interface Env { PASSWORD: string; } export default { async fetch(request, env): Promise { const BASIC_USER = "admin"; // You will need an admin password. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ const BASIC_PASS = env.PASSWORD ?? "password"; const url = new URL(request.url); switch (url.pathname) { case "/": return new Response("Anyone can access the homepage."); case "/logout": // Invalidate the "Authorization" header by returning a HTTP 401. // We do not send a "WWW-Authenticate" header, as this would trigger // a popup in the browser, immediately asking for credentials again. return new Response("Logged out.", { status: 401 }); case "/admin": { // The "Authorization" header is sent when authenticated. const authorization = request.headers.get("Authorization"); if (!authorization) { return new Response("You need to login.", { status: 401, headers: { // Prompts the user for credentials. "WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"', }, }); } const [scheme, encoded] = authorization.split(" "); // The Authorization header must start with Basic, followed by a space. if (!encoded || scheme !== "Basic") { return new Response("Malformed authorization header.", { status: 400, }); } const credentials = Buffer.from(encoded, "base64").toString(); // The username and password are split by the first colon. 
//=> example: "username:password" const index = credentials.indexOf(":"); const user = credentials.substring(0, index); const pass = credentials.substring(index + 1); if ( !timingSafeEqual(BASIC_USER, user) || !timingSafeEqual(BASIC_PASS, pass) ) { return new Response("You need to login.", { status: 401, headers: { // Prompts the user for credentials. "WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"', }, }); } return new Response("🎉 You have private access!", { status: 200, headers: { "Cache-Control": "no-store", }, }); } } return new Response("Not Found.", { status: 404 }); }, } satisfies ExportedHandler; ``` * Rust ```rs use base64::prelude::*; use worker::*; #[event(fetch)] async fn fetch(req: Request, env: Env, _ctx: Context) -> Result { let basic_user = "admin"; // You will need an admin password. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ let basic_pass = match env.secret("PASSWORD") { Ok(s) => s.to_string(), Err(_) => "password".to_string(), }; let url = req.url()?; match url.path() { "/" => Response::ok("Anyone can access the homepage."), // Invalidate the "Authorization" header by returning a HTTP 401. // We do not send a "WWW-Authenticate" header, as this would trigger // a popup in the browser, immediately asking for credentials again. "/logout" => Response::error("Logged out.", 401), "/admin" => { // The "Authorization" header is sent when authenticated. let authorization = req.headers().get("Authorization")?; if authorization == None { let mut headers = Headers::new(); // Prompts the user for credentials. headers.set( "WWW-Authenticate", "Basic realm='my scope', charset='UTF-8'", )?; return Ok(Response::error("You need to login.", 401)?.with_headers(headers)); } let authorization = authorization.unwrap(); let auth: Vec<&str> = authorization.split(" ").collect(); let scheme = auth[0]; let encoded = auth[1]; // The Authorization header must start with Basic, followed by a space. if encoded == "" || scheme != "Basic" { return Response::error("Malformed authorization header.", 400); } let buff = BASE64_STANDARD.decode(encoded).unwrap(); let credentials = String::from_utf8_lossy(&buff); // The username & password are split by the first colon. //=> example: "username:password" let credentials: Vec<&str> = credentials.split(':').collect(); let user = credentials[0]; let pass = credentials[1]; if user != basic_user || pass != basic_pass { let mut headers = Headers::new(); // Prompts the user for credentials. headers.set( "WWW-Authenticate", "Basic realm='my scope', charset='UTF-8'", )?; return Ok(Response::error("You need to login.", 401)?.with_headers(headers)); } let mut headers = Headers::new(); headers.set("Cache-Control", "no-store")?; Ok(Response::ok("🎉 You have private access!")?.with_headers(headers)) } _ => Response::error("Not Found.", 404), } } ``` * Hono ```ts /** * Shows how to restrict access using the HTTP Basic schema with Hono. 
* @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication * @see https://tools.ietf.org/html/rfc7617 */ import { Hono } from "hono"; import { basicAuth } from "hono/basic-auth"; // Define environment interface interface Env { Bindings: { USERNAME: string; PASSWORD: string; }; } const app = new Hono(); // Public homepage - accessible to everyone app.get("/", (c) => { return c.text("Anyone can access the homepage."); }); // Admin route - protected with Basic Auth app.get( "/admin", async (c, next) => { const auth = basicAuth({ username: c.env.USERNAME, password: c.env.PASSWORD }) return await auth(c, next); }, (c) => { return c.text("🎉 You have private access!", 200, { "Cache-Control": "no-store", }); } ); export default app; ``` --- title: Block on TLS · Cloudflare Workers docs description: Inspects the incoming request's TLS version and blocks if under TLSv1.2. lastUpdated: 2025-04-15T13:29:20.000Z chatbotDeprioritize: false tags: Security,Middleware source_url: html: https://developers.cloudflare.com/workers/examples/block-on-tls/ md: https://developers.cloudflare.com/workers/examples/block-on-tls/index.md --- * JavaScript ```js export default { async fetch(request) { try { const tlsVersion = request.cf.tlsVersion; // Allow only TLS versions 1.2 and 1.3 if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") { return new Response("Please use TLS version 1.2 or higher.", { status: 403, }); } return fetch(request); } catch (err) { console.error( "request.cf does not exist in the previewer, only in production", ); return new Response(`Error in workers script ${err.message}`, { status: 500, }); } }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { try { const tlsVersion = request.cf.tlsVersion; // Allow only TLS versions 1.2 and 1.3 if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") { return new Response("Please use TLS version 1.2 or higher.", { status: 403, }); } return fetch(request); } catch (err) { console.error( "request.cf does not exist in the previewer, only in production", ); return new Response(`Error in workers script ${err.message}`, { status: 500, }); } }, } satisfies ExportedHandler; ``` * Hono ```ts import { Hono } from "hono"; const app = new Hono(); // Middleware to check TLS version app.use("*", async (c, next) => { // Access the raw request to get the cf object with TLS info const request = c.req.raw; const tlsVersion = request.cf?.tlsVersion; // Allow only TLS versions 1.2 and 1.3 if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") { return c.text("Please use TLS version 1.2 or higher.", 403); } await next(); }); app.onError((err, c) => { console.error( "request.cf does not exist in the previewer, only in production", ); return c.text(`Error in workers script: ${err.message}`, 500); }); app.get("/", async (c) => { return c.text(`TLS Version: ${c.req.raw.cf.tlsVersion}`); }); export default app; ``` * Python ```py from workers import Response, fetch async def on_fetch(request): tls_version = request.cf.tlsVersion if tls_version not in ("TLSv1.2", "TLSv1.3"): return Response("Please use TLS version 1.2 or higher.", status=403) return fetch(request) ``` --- title: Bulk origin override · Cloudflare Workers docs description: Resolve requests to your domain to a set of proxy third-party origin URLs. 
lastUpdated: 2025-04-15T13:29:20.000Z chatbotDeprioritize: false tags: Middleware source_url: html: https://developers.cloudflare.com/workers/examples/bulk-origin-proxy/ md: https://developers.cloudflare.com/workers/examples/bulk-origin-proxy/index.md --- * JavaScript ```js export default { async fetch(request) { /** * An object with different URLs to fetch * @param {Object} ORIGINS */ const ORIGINS = { "starwarsapi.yourdomain.com": "swapi.dev", "google.yourdomain.com": "www.google.com", }; const url = new URL(request.url); // Check if incoming hostname is a key in the ORIGINS object if (url.hostname in ORIGINS) { const target = ORIGINS[url.hostname]; url.hostname = target; // If it is, proxy request to that third party origin return fetch(url.toString(), request); } // Otherwise, process request as normal return fetch(request); }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { /** * An object with different URLs to fetch * @param {Object} ORIGINS */ const ORIGINS = { "starwarsapi.yourdomain.com": "swapi.dev", "google.yourdomain.com": "www.google.com", }; const url = new URL(request.url); // Check if incoming hostname is a key in the ORIGINS object if (url.hostname in ORIGINS) { const target = ORIGINS[url.hostname]; url.hostname = target; // If it is, proxy request to that third party origin return fetch(url.toString(), request); } // Otherwise, process request as normal return fetch(request); }, } satisfies ExportedHandler; ``` * Hono ```ts import { Hono } from "hono"; import { proxy } from "hono/proxy"; // An object with different URLs to fetch const ORIGINS: Record = { "starwarsapi.yourdomain.com": "swapi.dev", "google.yourdomain.com": "www.google.com", }; const app = new Hono(); app.all("*", async (c) => { const url = new URL(c.req.url); // Check if incoming hostname is a key in the ORIGINS object if (url.hostname in ORIGINS) { const target = ORIGINS[url.hostname]; url.hostname = target; // If it is, proxy request to that third party origin return proxy(url, c.req.raw); } // Otherwise, process request as normal return proxy(c.req.raw); }); export default app; ``` * Python ```py from js import fetch, URL async def on_fetch(request): # A dict with different URLs to fetch ORIGINS = { "starwarsapi.yourdomain.com": "swapi.dev", "google.yourdomain.com": "www.google.com", } url = URL.new(request.url) # Check if incoming hostname is a key in the ORIGINS object if url.hostname in ORIGINS: url.hostname = ORIGINS[url.hostname] # If it is, proxy request to that third party origin return fetch(url.toString(), request) # Otherwise, process request as normal return fetch(request) ``` --- title: Bulk redirects · Cloudflare Workers docs description: Redirect requests to certain URLs based on a mapped object to the request's URL. 
lastUpdated: 2025-04-15T13:29:20.000Z chatbotDeprioritize: false tags: Middleware,Redirects source_url: html: https://developers.cloudflare.com/workers/examples/bulk-redirects/ md: https://developers.cloudflare.com/workers/examples/bulk-redirects/index.md --- * JavaScript ```js export default { async fetch(request) { const externalHostname = "examples.cloudflareworkers.com"; const redirectMap = new Map([ ["/bulk1", "https://" + externalHostname + "/redirect2"], ["/bulk2", "https://" + externalHostname + "/redirect3"], ["/bulk3", "https://" + externalHostname + "/redirect4"], ["/bulk4", "https://google.com"], ]); const requestURL = new URL(request.url); const path = requestURL.pathname; const location = redirectMap.get(path); if (location) { return Response.redirect(location, 301); } // If request not in map, return the original request return fetch(request); }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { const externalHostname = "examples.cloudflareworkers.com"; const redirectMap = new Map([ ["/bulk1", "https://" + externalHostname + "/redirect2"], ["/bulk2", "https://" + externalHostname + "/redirect3"], ["/bulk3", "https://" + externalHostname + "/redirect4"], ["/bulk4", "https://google.com"], ]); const requestURL = new URL(request.url); const path = requestURL.pathname; const location = redirectMap.get(path); if (location) { return Response.redirect(location, 301); } // If request not in map, return the original request return fetch(request); }, } satisfies ExportedHandler; ``` * Python ```py from workers import Response, fetch from urllib.parse import urlparse async def on_fetch(request): external_hostname = "examples.cloudflareworkers.com" redirect_map = { "/bulk1": "https://" + external_hostname + "/redirect2", "/bulk2": "https://" + external_hostname + "/redirect3", "/bulk3": "https://" + external_hostname + "/redirect4", "/bulk4": "https://google.com", } url = urlparse(request.url) location = redirect_map.get(url.path, None) if location: return Response.redirect(location, 301) # If request not in map, return the original request return fetch(request) ``` * Hono ```ts import { Hono } from "hono"; const app = new Hono(); // Configure your redirects const externalHostname = "examples.cloudflareworkers.com"; const redirectMap = new Map([ ["/bulk1", `https://${externalHostname}/redirect2`], ["/bulk2", `https://${externalHostname}/redirect3`], ["/bulk3", `https://${externalHostname}/redirect4`], ["/bulk4", "https://google.com"], ]); // Middleware to handle redirects app.use("*", async (c, next) => { const path = c.req.path; const location = redirectMap.get(path); if (location) { // If path is in our redirect map, perform the redirect return c.redirect(location, 301); } // Otherwise, continue to the next handler await next(); }); // Default handler for requests that don't match any redirects app.all("*", async (c) => { // Pass through to origin return fetch(c.req.raw); }); export default app; ``` --- title: Using the Cache API · Cloudflare Workers docs description: Use the Cache API to store responses in Cloudflare's cache. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Middleware,Caching source_url: html: https://developers.cloudflare.com/workers/examples/cache-api/ md: https://developers.cloudflare.com/workers/examples/cache-api/index.md --- If you want to get started quickly, click on the button below. 
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cache-api) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request, env, ctx) { const cacheUrl = new URL(request.url); // Construct the cache key from the cache URL const cacheKey = new Request(cacheUrl.toString(), request); const cache = caches.default; // Check whether the value is already available in the cache // if not, you will need to fetch it from origin, and store it in the cache let response = await cache.match(cacheKey); if (!response) { console.log( `Response for request url: ${request.url} not present in cache. Fetching and caching request.`, ); // If not in cache, get it from origin response = await fetch(request); // Must use Response constructor to inherit all of response's fields response = new Response(response.body, response); // Cache API respects Cache-Control headers. Setting s-max-age to 10 // will limit the response to be in cache for 10 seconds max // Any changes made to the response here will be reflected in the cached value response.headers.append("Cache-Control", "s-maxage=10"); ctx.waitUntil(cache.put(cacheKey, response.clone())); } else { console.log(`Cache hit for: ${request.url}.`); } return response; }, }; ``` * TypeScript ```ts interface Env {} export default { async fetch(request, env, ctx): Promise { const cacheUrl = new URL(request.url); // Construct the cache key from the cache URL const cacheKey = new Request(cacheUrl.toString(), request); const cache = caches.default; // Check whether the value is already available in the cache // if not, you will need to fetch it from origin, and store it in the cache let response = await cache.match(cacheKey); if (!response) { console.log( `Response for request url: ${request.url} not present in cache. Fetching and caching request.`, ); // If not in cache, get it from origin response = await fetch(request); // Must use Response constructor to inherit all of response's fields response = new Response(response.body, response); // Cache API respects Cache-Control headers. Setting s-max-age to 10 // will limit the response to be in cache for 10 seconds max // Any changes made to the response here will be reflected in the cached value response.headers.append("Cache-Control", "s-maxage=10"); ctx.waitUntil(cache.put(cacheKey, response.clone())); } else { console.log(`Cache hit for: ${request.url}.`); } return response; }, } satisfies ExportedHandler; ``` * Python ```py from pyodide.ffi import create_proxy from js import Response, Request, URL, caches, fetch async def on_fetch(request, _env, ctx): cache_url = request.url # Construct the cache key from the cache URL cache_key = Request.new(cache_url, request) cache = caches.default # Check whether the value is already available in the cache # if not, you will need to fetch it from origin, and store it in the cache response = await cache.match(cache_key) if response is None: print(f"Response for request url: {request.url} not present in cache. Fetching and caching request.") # If not in cache, get it from origin response = await fetch(request) # Must use Response constructor to inherit all of response's fields response = Response.new(response.body, response) # Cache API respects Cache-Control headers. 
Setting s-max-age to 10 # will limit the response to be in cache for 10 seconds s-maxage # Any changes made to the response here will be reflected in the cached value response.headers.append("Cache-Control", "s-maxage=10") ctx.waitUntil(create_proxy(cache.put(cache_key, response.clone()))) else: print(f"Cache hit for: {request.url}.") return response ``` * Hono ```ts import { Hono } from "hono"; import { cache } from "hono/cache"; const app = new Hono(); // We leverage hono built-in cache helper here app.get( "*", cache({ cacheName: "my-cache", cacheControl: "max-age=3600", // 1 hour }), ); // Add a route to handle the request if it's not in cache app.get("*", (c) => { return c.text("Hello from Hono!"); }); export default app; ``` --- title: Cache POST requests · Cloudflare Workers docs description: Cache POST requests using the Cache API. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Middleware,Caching source_url: html: https://developers.cloudflare.com/workers/examples/cache-post-request/ md: https://developers.cloudflare.com/workers/examples/cache-post-request/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cache-post-request) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request, env, ctx) { async function sha256(message) { // encode as UTF-8 const msgBuffer = await new TextEncoder().encode(message); // hash the message const hashBuffer = await crypto.subtle.digest("SHA-256", msgBuffer); // convert bytes to hex string return [...new Uint8Array(hashBuffer)] .map((b) => b.toString(16).padStart(2, "0")) .join(""); } try { if (request.method.toUpperCase() === "POST") { const body = await request.clone().text(); // Hash the request body to use it as a part of the cache key const hash = await sha256(body); const cacheUrl = new URL(request.url); // Store the URL in cache by prepending the body's hash cacheUrl.pathname = "/posts" + cacheUrl.pathname + hash; // Convert to a GET to be able to cache const cacheKey = new Request(cacheUrl.toString(), { headers: request.headers, method: "GET", }); const cache = caches.default; // Find the cache key in the cache let response = await cache.match(cacheKey); // Otherwise, fetch response to POST request from origin if (!response) { response = await fetch(request); ctx.waitUntil(cache.put(cacheKey, response.clone())); } return response; } return fetch(request); } catch (e) { return new Response("Error thrown " + e.message); } }, }; ``` * TypeScript ```ts interface Env {} export default { async fetch(request, env, ctx): Promise { async function sha256(message) { // encode as UTF-8 const msgBuffer = await new TextEncoder().encode(message); // hash the message const hashBuffer = await crypto.subtle.digest("SHA-256", msgBuffer); // convert bytes to hex string return [...new Uint8Array(hashBuffer)] .map((b) => b.toString(16).padStart(2, "0")) .join(""); } try { if (request.method.toUpperCase() === "POST") { const body = await request.clone().text(); // Hash the request body to use it as a part of the cache key const hash = await sha256(body); const cacheUrl = new URL(request.url); // Store the URL in cache by prepending the body's hash cacheUrl.pathname = "/posts" + cacheUrl.pathname + hash; // Convert to a GET to be able to cache 
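// (cache.put only accepts GET requests, which is why the POST body's hash is folded into the URL above)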
const cacheKey = new Request(cacheUrl.toString(), { headers: request.headers, method: "GET", }); const cache = caches.default; // Find the cache key in the cache let response = await cache.match(cacheKey); // Otherwise, fetch response to POST request from origin if (!response) { response = await fetch(request); ctx.waitUntil(cache.put(cacheKey, response.clone())); } return response; } return fetch(request); } catch (e) { return new Response("Error thrown " + e.message); } }, } satisfies ExportedHandler; ``` * Python ```py import hashlib from pyodide.ffi import create_proxy from js import fetch, URL, Headers, Request, caches async def on_fetch(request, _, ctx): if 'POST' in request.method: # Hash the request body to use it as a part of the cache key body = await request.clone().text() body_hash = hashlib.sha256(body.encode('UTF-8')).hexdigest() # Store the URL in cache by prepending the body's hash cache_url = URL.new(request.url) cache_url.pathname = "/posts" + cache_url.pathname + body_hash # Convert to a GET to be able to cache headers = Headers.new(dict(request.headers).items()) cache_key = Request.new(cache_url.toString(), method='GET', headers=headers) # Find the cache key in the cache cache = caches.default response = await cache.match(cache_key) # Otherwise, fetch response to POST request from origin if response is None: response = await fetch(request) ctx.waitUntil(create_proxy(cache.put(cache_key, response.clone()))) return response return fetch(request) ``` * Hono ```ts import { Hono } from "hono"; import { sha256 } from "hono/utils/crypto"; const app = new Hono(); // Middleware for caching POST requests app.post("*", async (c) => { try { // Get the request body const body = await c.req.raw.clone().text(); // Hash the request body to use it as part of the cache key const hash = await sha256(body); // Create the cache URL const cacheUrl = new URL(c.req.url); // Store the URL in cache by prepending the body's hash cacheUrl.pathname = "/posts" + cacheUrl.pathname + hash; // Convert to a GET to be able to cache const cacheKey = new Request(cacheUrl.toString(), { headers: c.req.raw.headers, method: "GET", }); const cache = caches.default; // Find the cache key in the cache let response = await cache.match(cacheKey); // If not in cache, fetch response to POST request from origin if (!response) { response = await fetch(c.req.raw); c.executionCtx.waitUntil(cache.put(cacheKey, response.clone())); } return response; } catch (e) { return c.text("Error thrown " + e.message, 500); } }); // Handle all other HTTP methods app.all("*", (c) => { return fetch(c.req.raw); }); export default app; ``` --- title: Cache Tags using Workers · Cloudflare Workers docs description: Send Additional Cache Tags using Workers lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Caching source_url: html: https://developers.cloudflare.com/workers/examples/cache-tags/ md: https://developers.cloudflare.com/workers/examples/cache-tags/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cache-tags) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { const requestUrl = new URL(request.url); const params = requestUrl.searchParams; const tags = params && params.has("tags") ? 
params.get("tags").split(",") : []; const url = params && params.has("uri") ? params.get("uri") : ""; if (!url) { const errorObject = { error: "URL cannot be empty", }; return new Response(JSON.stringify(errorObject), { status: 400 }); } const init = { cf: { cacheTags: tags, }, }; return fetch(url, init) .then((result) => { const cacheStatus = result.headers.get("cf-cache-status"); const lastModified = result.headers.get("last-modified"); const response = { cache: cacheStatus, lastModified: lastModified, }; return new Response(JSON.stringify(response), { status: result.status, }); }) .catch((err) => { const errorObject = { error: err.message, }; return new Response(JSON.stringify(errorObject), { status: 500 }); }); }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { const requestUrl = new URL(request.url); const params = requestUrl.searchParams; const tags = params && params.has("tags") ? params.get("tags").split(",") : []; const url = params && params.has("uri") ? params.get("uri") : ""; if (!url) { const errorObject = { error: "URL cannot be empty", }; return new Response(JSON.stringify(errorObject), { status: 400 }); } const init = { cf: { cacheTags: tags, }, }; return fetch(url, init) .then((result) => { const cacheStatus = result.headers.get("cf-cache-status"); const lastModified = result.headers.get("last-modified"); const response = { cache: cacheStatus, lastModified: lastModified, }; return new Response(JSON.stringify(response), { status: result.status, }); }) .catch((err) => { const errorObject = { error: err.message, }; return new Response(JSON.stringify(errorObject), { status: 500 }); }); }, } satisfies ExportedHandler; ``` * Hono ```ts import { Hono } from "hono"; const app = new Hono(); app.all("*", async (c) => { const tags = c.req.query("tags") ? c.req.query("tags").split(",") : []; const uri = c.req.query("uri") ? c.req.query("uri") : ""; if (!uri) { return c.json({ error: "URL cannot be empty" }, 400); } const init = { cf: { cacheTags: tags, }, }; const result = await fetch(uri, init); const cacheStatus = result.headers.get("cf-cache-status"); const lastModified = result.headers.get("last-modified"); const response = { cache: cacheStatus, lastModified: lastModified, }; return c.json(response, result.status); }); app.onError((err, c) => { return c.json({ error: err.message }, 500); }); export default app; ``` * Python ```py from pyodide.ffi import to_js as _to_js from js import Response, URL, Object, fetch def to_js(x): return _to_js(x, dict_converter=Object.fromEntries) async def on_fetch(request): request_url = URL.new(request.url) params = request_url.searchParams tags = params["tags"].split(",") if "tags" in params else [] url = params["uri"] or None if url is None: error = {"error": "URL cannot be empty"} return Response.json(to_js(error), status=400) options = {"cf": {"cacheTags": tags}} result = await fetch(url, to_js(options)) cache_status = result.headers["cf-cache-status"] last_modified = result.headers["last-modified"] response = {"cache": cache_status, "lastModified": last_modified} return Response.json(to_js(response), status=result.status) ``` --- title: Cache using fetch · Cloudflare Workers docs description: Determine how to cache a resource by setting TTLs, custom cache keys, and cache headers in a fetch request. 
lastUpdated: 2025-05-13T11:59:34.000Z chatbotDeprioritize: false tags: Caching,Middleware source_url: html: https://developers.cloudflare.com/workers/examples/cache-using-fetch/ md: https://developers.cloudflare.com/workers/examples/cache-using-fetch/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cache-using-fetch) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { const url = new URL(request.url); // Only use the path for the cache key, removing query strings // and always store using HTTPS, for example, https://www.example.com/file-uri-here const someCustomKey = `https://${url.hostname}${url.pathname}`; let response = await fetch(request, { cf: { // Always cache this fetch regardless of content type // for a max of 5 seconds before revalidating the resource cacheTtl: 5, cacheEverything: true, //Enterprise only feature, see Cache API for other plans cacheKey: someCustomKey, }, }); // Reconstruct the Response object to make its headers mutable. response = new Response(response.body, response); // Set cache control headers to cache on browser for 25 minutes response.headers.set("Cache-Control", "max-age=1500"); return response; }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { const url = new URL(request.url); // Only use the path for the cache key, removing query strings // and always store using HTTPS, for example, https://www.example.com/file-uri-here const someCustomKey = `https://${url.hostname}${url.pathname}`; let response = await fetch(request, { cf: { // Always cache this fetch regardless of content type // for a max of 5 seconds before revalidating the resource cacheTtl: 5, cacheEverything: true, //Enterprise only feature, see Cache API for other plans cacheKey: someCustomKey, }, }); // Reconstruct the Response object to make its headers mutable. 
response = new Response(response.body, response); // Set cache control headers to cache on browser for 25 minutes response.headers.set("Cache-Control", "max-age=1500"); return response; }, } satisfies ExportedHandler; ``` * Hono ```ts import { Hono } from 'hono'; type Bindings = {}; const app = new Hono<{ Bindings: Bindings }>(); app.all('*', async (c) => { const url = new URL(c.req.url); // Only use the path for the cache key, removing query strings // and always store using HTTPS, for example, https://www.example.com/file-uri-here const someCustomKey = `https://${url.hostname}${url.pathname}`; // Fetch the request with custom cache settings let response = await fetch(c.req.raw, { cf: { // Always cache this fetch regardless of content type // for a max of 5 seconds before revalidating the resource cacheTtl: 5, cacheEverything: true, // Enterprise only feature, see Cache API for other plans cacheKey: someCustomKey, }, }); // Reconstruct the Response object to make its headers mutable response = new Response(response.body, response); // Set cache control headers to cache on browser for 25 minutes response.headers.set("Cache-Control", "max-age=1500"); return response; }); export default app; ``` * Python ```py from pyodide.ffi import to_js as _to_js from js import Response, URL, Object, fetch def to_js(x): return _to_js(x, dict_converter=Object.fromEntries) async def on_fetch(request): url = URL.new(request.url) # Only use the path for the cache key, removing query strings # and always store using HTTPS, for example, https://www.example.com/file-uri-here some_custom_key = f"https://{url.hostname}{url.pathname}" response = await fetch( request, cf=to_js({ # Always cache this fetch regardless of content type # for a max of 5 seconds before revalidating the resource "cacheTtl": 5, "cacheEverything": True, # Enterprise only feature, see Cache API for other plans "cacheKey": some_custom_key, }), ) # Reconstruct the Response object to make its headers mutable response = Response.new(response.body, response) # Set cache control headers to cache on browser for 25 minutes response.headers["Cache-Control"] = "max-age=1500" return response ``` * Rust ```rs use worker::*; #[event(fetch)] async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result { let url = req.url()?; // Only use the path for the cache key, removing query strings // and always store using HTTPS, for example, https://www.example.com/file-uri-here let custom_key = format!( "https://{host}{path}", host = url.host_str().unwrap(), path = url.path() ); let request = Request::new_with_init( url.as_str(), &RequestInit { headers: req.headers().clone(), method: req.method(), cf: CfProperties { // Always cache this fetch regardless of content type // for a max of 5 seconds before revalidating the resource cache_ttl: Some(5), cache_everything: Some(true), // Enterprise only feature, see Cache API for other plans cache_key: Some(custom_key), ..CfProperties::default() }, ..RequestInit::default() }, )?; let mut response = Fetch::Request(request).send().await?; // Set cache control headers to cache on browser for 25 minutes let _ = response.headers_mut().set("Cache-Control", "max-age=1500"); Ok(response) } ``` ## Caching HTML resources ```js // Force Cloudflare to cache an asset fetch(event.request, { cf: { cacheEverything: true } }); ``` Setting the cache level to **Cache Everything** will override the default cacheability of the asset. For time-to-live (TTL), Cloudflare will still rely on headers set by the origin. 
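If you also want to control the TTL from the Worker instead of relying on origin headers, you can set `cacheTtl` in the same `cf` object, as in the earlier example on this page. A minimal sketch (the one-hour TTL is an arbitrary value chosen for illustration):

```js
export default {
  async fetch(request) {
    // Cache the HTML response regardless of its Content-Type and keep it at
    // the edge for up to one hour before revalidating against the origin.
    return fetch(request, {
      cf: {
        cacheEverything: true,
        cacheTtl: 3600, // arbitrary TTL, adjust to your needs
      },
    });
  },
};
```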
## Custom cache keys Note This feature is available only to Enterprise customers. A request's cache key is what determines if two requests are the same for caching purposes. If a request has the same cache key as some previous request, then Cloudflare can serve the same cached response for both. For more about cache keys, refer to the [Create custom cache keys](https://developers.cloudflare.com/cache/how-to/cache-keys/#create-custom-cache-keys) documentation. ```js // Set cache key for this request to "some-string". fetch(event.request, { cf: { cacheKey: "some-string" } }); ``` Normally, Cloudflare computes the cache key for a request based on the request's URL. Sometimes, though, you may like different URLs to be treated as if they were the same for caching purposes. For example, if your website content is hosted from both Amazon S3 and Google Cloud Storage - you have the same content in both places, and you can use a Worker to randomly balance between the two. However, you do not want to end up caching two copies of your content. You could utilize custom cache keys to cache based on the original request URL rather than the subrequest URL: * JavaScript ```js export default { async fetch(request) { let url = new URL(request.url); if (Math.random() < 0.5) { url.hostname = "example.s3.amazonaws.com"; } else { url.hostname = "example.storage.googleapis.com"; } let newRequest = new Request(url, request); return fetch(newRequest, { cf: { cacheKey: request.url }, }); }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { let url = new URL(request.url); if (Math.random() < 0.5) { url.hostname = "example.s3.amazonaws.com"; } else { url.hostname = "example.storage.googleapis.com"; } let newRequest = new Request(url, request); return fetch(newRequest, { cf: { cacheKey: request.url }, }); }, } satisfies ExportedHandler; ``` * Hono ```ts import { Hono } from 'hono'; type Bindings = {}; const app = new Hono<{ Bindings: Bindings }>(); app.all('*', async (c) => { const originalUrl = c.req.url; const url = new URL(originalUrl); // Randomly select a storage backend if (Math.random() < 0.5) { url.hostname = "example.s3.amazonaws.com"; } else { url.hostname = "example.storage.googleapis.com"; } // Create a new request to the selected backend const newRequest = new Request(url, c.req.raw); // Fetch using the original URL as the cache key return fetch(newRequest, { cf: { cacheKey: originalUrl }, }); }); export default app; ``` Workers operating on behalf of different zones cannot affect each other's cache. You can only override cache keys when making requests within your own zone (in the above example `event.request.url` was the key stored), or requests to hosts that are not on Cloudflare. When making a request to another Cloudflare zone (for example, belonging to a different Cloudflare customer), that zone fully controls how its own content is cached within Cloudflare; you cannot override it. ## Override based on origin response code ```js // Force response to be cached for 86400 seconds for 200 status // codes, 1 second for 404, and do not cache 500 errors. fetch(request, { cf: { cacheTtlByStatus: { "200-299": 86400, 404: 1, "500-599": 0 } }, }); ``` This option is a version of the `cacheTtl` feature which chooses a TTL based on the response's status code and does not automatically set `cacheEverything: true`. If the response to this request has a status code that matches, Cloudflare will cache for the instructed time, and override cache directives sent by the origin. 
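Inside a complete handler, the same option looks like this (a minimal sketch using the TTL values from the snippet above):

```js
export default {
  async fetch(request) {
    // Proxy to the origin and pick an edge TTL by status class:
    // one day for successful responses, one second for 404s,
    // and no caching for server errors.
    return fetch(request, {
      cf: {
        cacheTtlByStatus: { "200-299": 86400, 404: 1, "500-599": 0 },
      },
    });
  },
};
```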
You can review [details on the `cacheTtl` feature on the Request page](https://developers.cloudflare.com/workers/runtime-apis/request/#the-cf-property-requestinitcfproperties). ## Customize cache behavior based on request file type Using custom cache keys and overrides based on response code, you can write a Worker that sets the TTL based on the response status code from origin, and request file type. The following example demonstrates how you might use this to cache requests for streaming media assets: * Module Worker ```js export default { async fetch(request) { // Instantiate new URL to make it mutable const newRequest = new URL(request.url); const customCacheKey = `${newRequest.hostname}${newRequest.pathname}`; const queryCacheKey = `${newRequest.hostname}${newRequest.pathname}${newRequest.search}`; // Different asset types usually have different caching strategies. Most of the time media content such as audio, videos and images that are not user-generated content would not need to be updated often so a long TTL would be best. However, with HLS streaming, manifest files usually are set with short TTLs so that playback will not be affected, as this files contain the data that the player would need. By setting each caching strategy for categories of asset types in an object within an array, you can solve complex needs when it comes to media content for your application const cacheAssets = [ { asset: "video", key: customCacheKey, regex: /(.*\/Video)|(.*\.(m4s|mp4|ts|avi|mpeg|mpg|mkv|bin|webm|vob|flv|m2ts|mts|3gp|m4v|wmv|qt))/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "image", key: queryCacheKey, regex: /(.*\/Images)|(.*\.(jpg|jpeg|png|bmp|pict|tif|tiff|webp|gif|heif|exif|bat|bpg|ppm|pgn|pbm|pnm))/, info: 0, ok: 3600, redirects: 30, clientError: 10, serverError: 0, }, { asset: "frontEnd", key: queryCacheKey, regex: /^.*\.(css|js)/, info: 0, ok: 3600, redirects: 30, clientError: 10, serverError: 0, }, { asset: "audio", key: customCacheKey, regex: /(.*\/Audio)|(.*\.(flac|aac|mp3|alac|aiff|wav|ogg|aiff|opus|ape|wma|3gp))/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "directPlay", key: customCacheKey, regex: /.*(\/Download)/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "manifest", key: customCacheKey, regex: /^.*\.(m3u8|mpd)/, info: 0, ok: 3, redirects: 2, clientError: 1, serverError: 0, }, ]; const { asset, regex, ...cache } = cacheAssets.find(({ regex }) => newRequest.pathname.match(regex)) ?? {}; const newResponse = await fetch(request, { cf: { cacheKey: cache.key, polish: false, cacheEverything: true, cacheTtlByStatus: { "100-199": cache.info, "200-299": cache.ok, "300-399": cache.redirects, "400-499": cache.clientError, "500-599": cache.serverError, }, cacheTags: ["static"], }, }); const response = new Response(newResponse.body, newResponse); // For debugging purposes response.headers.set("debug", JSON.stringify(cache)); return response; }, }; ``` * Service Worker Service Workers are deprecated Service Workers are deprecated, but still supported. We recommend using [Module Workers](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) instead. New features may not be supported for Service Workers. 
```js addEventListener("fetch", (event) => { return event.respondWith(handleRequest(event.request)); }); async function handleRequest(request) { // Instantiate new URL to make it mutable const newRequest = new URL(request.url); // Set `const` to be used in the array later on const customCacheKey = `${newRequest.hostname}${newRequest.pathname}`; const queryCacheKey = `${newRequest.hostname}${newRequest.pathname}${newRequest.search}`; // Set all variables needed to manipulate Cloudflare's cache using the fetch API in the `cf` object. You will be passing these variables in the objects down below. const cacheAssets = [ { asset: "video", key: customCacheKey, regex: /(.*\/Video)|(.*\.(m4s|mp4|ts|avi|mpeg|mpg|mkv|bin|webm|vob|flv|m2ts|mts|3gp|m4v|wmv|qt))/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "image", key: queryCacheKey, regex: /(.*\/Images)|(.*\.(jpg|jpeg|png|bmp|pict|tif|tiff|webp|gif|heif|exif|bat|bpg|ppm|pgn|pbm|pnm))/, info: 0, ok: 3600, redirects: 30, clientError: 10, serverError: 0, }, { asset: "frontEnd", key: queryCacheKey, regex: /^.*\.(css|js)/, info: 0, ok: 3600, redirects: 30, clientError: 10, serverError: 0, }, { asset: "audio", key: customCacheKey, regex: /(.*\/Audio)|(.*\.(flac|aac|mp3|alac|aiff|wav|ogg|aiff|opus|ape|wma|3gp))/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "directPlay", key: customCacheKey, regex: /.*(\/Download)/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "manifest", key: customCacheKey, regex: /^.*\.(m3u8|mpd)/, info: 0, ok: 3, redirects: 2, clientError: 1, serverError: 0, }, ]; // the `.find` method is used to find elements in an array (`cacheAssets`), in this case, `regex`, which can passed to the .`match` method to match on file extensions to cache, since they are many media types in the array. If you want to add more types, update the array. Refer to https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/find for more information. const { asset, regex, ...cache } = cacheAssets.find(({ regex }) => newRequest.pathname.match(regex)) ?? {}; const newResponse = await fetch(request, { cf: { cacheKey: cache.key, polish: false, cacheEverything: true, cacheTtlByStatus: { "100-199": cache.info, "200-299": cache.ok, "300-399": cache.redirects, "400-499": cache.clientError, "500-599": cache.serverError, }, cacheTags: ["static"], }, }); const response = new Response(newResponse.body, newResponse); // For debugging purposes response.headers.set("debug", JSON.stringify(cache)); return response; } ``` ## Using the HTTP Cache API The `cache` mode can be set in `fetch` options. Currently Workers only support the `no-store` mode for controlling the cache. When `no-store` is supplied the cache is bypassed on the way to the origin and the request is not cacheable. ```js fetch(request, { cache: 'no-store'}); ``` --- title: Conditional response · Cloudflare Workers docs description: Return a response based on the incoming request's URL, HTTP method, User Agent, IP address, ASN or device type. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Middleware source_url: html: https://developers.cloudflare.com/workers/examples/conditional-response/ md: https://developers.cloudflare.com/workers/examples/conditional-response/index.md --- If you want to get started quickly, click on the button below. 
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/conditional-response) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { const BLOCKED_HOSTNAMES = ["nope.mywebsite.com", "bye.website.com"]; // Return a new Response based on a URL's hostname const url = new URL(request.url); if (BLOCKED_HOSTNAMES.includes(url.hostname)) { return new Response("Blocked Host", { status: 403 }); } // Block paths ending in .doc or .xml based on the URL's file extension const forbiddenExtRegExp = new RegExp(/\.(doc|xml)$/); if (forbiddenExtRegExp.test(url.pathname)) { return new Response("Blocked Extension", { status: 403 }); } // On HTTP method if (request.method === "POST") { return new Response("Response for POST"); } // On User Agent const userAgent = request.headers.get("User-Agent") || ""; if (userAgent.includes("bot")) { return new Response("Block User Agent containing bot", { status: 403 }); } // On Client's IP address const clientIP = request.headers.get("CF-Connecting-IP"); if (clientIP === "1.2.3.4") { return new Response("Block the IP 1.2.3.4", { status: 403 }); } // On ASN if (request.cf && request.cf.asn == 64512) { return new Response("Block the ASN 64512 response"); } // On Device Type // Requires Enterprise "CF-Device-Type Header" zone setting or // Page Rule with "Cache By Device Type" setting applied. const device = request.headers.get("CF-Device-Type"); if (device === "mobile") { return Response.redirect("https://mobile.example.com"); } console.error( "Getting Client's IP address, device type, and ASN are not supported in playground. Must test on a live worker", ); return fetch(request); }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { const BLOCKED_HOSTNAMES = ["nope.mywebsite.com", "bye.website.com"]; // Return a new Response based on a URL's hostname const url = new URL(request.url); if (BLOCKED_HOSTNAMES.includes(url.hostname)) { return new Response("Blocked Host", { status: 403 }); } // Block paths ending in .doc or .xml based on the URL's file extension const forbiddenExtRegExp = new RegExp(/\.(doc|xml)$/); if (forbiddenExtRegExp.test(url.pathname)) { return new Response("Blocked Extension", { status: 403 }); } // On HTTP method if (request.method === "POST") { return new Response("Response for POST"); } // On User Agent const userAgent = request.headers.get("User-Agent") || ""; if (userAgent.includes("bot")) { return new Response("Block User Agent containing bot", { status: 403 }); } // On Client's IP address const clientIP = request.headers.get("CF-Connecting-IP"); if (clientIP === "1.2.3.4") { return new Response("Block the IP 1.2.3.4", { status: 403 }); } // On ASN if (request.cf && request.cf.asn == 64512) { return new Response("Block the ASN 64512 response"); } // On Device Type // Requires Enterprise "CF-Device-Type Header" zone setting or // Page Rule with "Cache By Device Type" setting applied. const device = request.headers.get("CF-Device-Type"); if (device === "mobile") { return Response.redirect("https://mobile.example.com"); } console.error( "Getting Client's IP address, device type, and ASN are not supported in playground. 
Must test on a live worker", ); return fetch(request); }, } satisfies ExportedHandler; ``` * Python ```py import re from workers import Response from urllib.parse import urlparse async def on_fetch(request): blocked_hostnames = ["nope.mywebsite.com", "bye.website.com"] url = urlparse(request.url) # Block on hostname if url.hostname in blocked_hostnames: return Response("Blocked Host", status=403) # On paths ending in .doc or .xml if re.search(r'\.(doc|xml)$', url.path): return Response("Blocked Extension", status=403) # On HTTP method if "POST" in request.method: return Response("Response for POST") # On User Agent user_agent = request.headers["User-Agent"] or "" if "bot" in user_agent: return Response("Block User Agent containing bot", status=403) # On Client's IP address client_ip = request.headers["CF-Connecting-IP"] if client_ip == "1.2.3.4": return Response("Block the IP 1.2.3.4", status=403) # On ASN if request.cf and request.cf.asn == 64512: return Response("Block the ASN 64512 response") # On Device Type # Requires Enterprise "CF-Device-Type Header" zone setting or # Page Rule with "Cache By Device Type" setting applied. device = request.headers["CF-Device-Type"] if device == "mobile": return Response.redirect("https://mobile.example.com") return fetch(request) ``` * Hono ```ts import { Hono } from "hono"; import { HTTPException } from "hono/http-exception"; const app = new Hono(); // Middleware to handle all conditions before reaching the main handler app.use("*", async (c, next) => { const request = c.req.raw; const BLOCKED_HOSTNAMES = ["nope.mywebsite.com", "bye.website.com"]; const hostname = new URL(c.req.url)?.hostname; // Return a new Response based on a URL's hostname if (BLOCKED_HOSTNAMES.includes(hostname)) { return c.text("Blocked Host", 403); } // Block paths ending in .doc or .xml based on the URL's file extension const forbiddenExtRegExp = new RegExp(/\.(doc|xml)$/); if (forbiddenExtRegExp.test(c.req.pathname)) { return c.text("Blocked Extension", 403); } // On User Agent const userAgent = c.req.header("User-Agent") || ""; if (userAgent.includes("bot")) { return c.text("Block User Agent containing bot", 403); } // On Client's IP address const clientIP = c.req.header("CF-Connecting-IP"); if (clientIP === "1.2.3.4") { return c.text("Block the IP 1.2.3.4", 403); } // On ASN if (request.cf && request.cf.asn === 64512) { return c.text("Block the ASN 64512 response"); } // On Device Type // Requires Enterprise "CF-Device-Type Header" zone setting or // Page Rule with "Cache By Device Type" setting applied. const device = c.req.header("CF-Device-Type"); if (device === "mobile") { return c.redirect("https://mobile.example.com"); } // Continue to the next handler await next(); }); // Handle POST requests differently app.post("*", (c) => { return c.text("Response for POST"); }); // Default handler for other methods app.get("*", async (c) => { console.error( "Getting Client's IP address, device type, and ASN are not supported in playground. Must test on a live worker", ); // Fetch the original request return fetch(c.req.raw); }); export default app; ``` --- title: CORS header proxy · Cloudflare Workers docs description: Add the necessary CORS headers to a third party API response. 
lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Security,Headers source_url: html: https://developers.cloudflare.com/workers/examples/cors-header-proxy/ md: https://developers.cloudflare.com/workers/examples/cors-header-proxy/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cors-header-proxy) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { const corsHeaders = { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS", "Access-Control-Max-Age": "86400", }; // The URL for the remote third party API you want to fetch from // but does not implement CORS const API_URL = "https://examples.cloudflareworkers.com/demos/demoapi"; // The endpoint you want the CORS reverse proxy to be on const PROXY_ENDPOINT = "/corsproxy/"; // The rest of this snippet for the demo page function rawHtmlResponse(html) { return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); } const DEMO_PAGE = `

API GET without CORS Proxy

Shows TypeError: Failed to fetch since CORS is misconfigured

Waiting

API GET with CORS Proxy

Waiting

API POST with CORS Proxy + Preflight

Waiting `; async function handleRequest(request) { const url = new URL(request.url); let apiUrl = url.searchParams.get("apiurl"); if (apiUrl == null) { apiUrl = API_URL; } // Rewrite request to point to API URL. This also makes the request mutable // so you can add the correct Origin header to make the API server think // that this request is not cross-site. request = new Request(apiUrl, request); request.headers.set("Origin", new URL(apiUrl).origin); let response = await fetch(request); // Recreate the response so you can modify the headers response = new Response(response.body, response); // Set CORS headers response.headers.set("Access-Control-Allow-Origin", url.origin); // Append to/Add Vary header so browser will cache response correctly response.headers.append("Vary", "Origin"); return response; } async function handleOptions(request) { if ( request.headers.get("Origin") !== null && request.headers.get("Access-Control-Request-Method") !== null && request.headers.get("Access-Control-Request-Headers") !== null ) { // Handle CORS preflight requests. return new Response(null, { headers: { ...corsHeaders, "Access-Control-Allow-Headers": request.headers.get( "Access-Control-Request-Headers", ), }, }); } else { // Handle standard OPTIONS request. return new Response(null, { headers: { Allow: "GET, HEAD, POST, OPTIONS", }, }); } } const url = new URL(request.url); if (url.pathname.startsWith(PROXY_ENDPOINT)) { if (request.method === "OPTIONS") { // Handle CORS preflight requests return handleOptions(request); } else if ( request.method === "GET" || request.method === "HEAD" || request.method === "POST" ) { // Handle requests to the API server return handleRequest(request); } else { return new Response(null, { status: 405, statusText: "Method Not Allowed", }); } } else { return rawHtmlResponse(DEMO_PAGE); } }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { const corsHeaders = { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS", "Access-Control-Max-Age": "86400", }; // The URL for the remote third party API you want to fetch from // but does not implement CORS const API_URL = "https://examples.cloudflareworkers.com/demos/demoapi"; // The endpoint you want the CORS reverse proxy to be on const PROXY_ENDPOINT = "/corsproxy/"; // The rest of this snippet for the demo page function rawHtmlResponse(html) { return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); } const DEMO_PAGE = `

API GET without CORS Proxy

Shows TypeError: Failed to fetch since CORS is misconfigured

Waiting

API GET with CORS Proxy

Waiting

API POST with CORS Proxy + Preflight

Waiting `; async function handleRequest(request) { const url = new URL(request.url); let apiUrl = url.searchParams.get("apiurl"); if (apiUrl == null) { apiUrl = API_URL; } // Rewrite request to point to API URL. This also makes the request mutable // so you can add the correct Origin header to make the API server think // that this request is not cross-site. request = new Request(apiUrl, request); request.headers.set("Origin", new URL(apiUrl).origin); let response = await fetch(request); // Recreate the response so you can modify the headers response = new Response(response.body, response); // Set CORS headers response.headers.set("Access-Control-Allow-Origin", url.origin); // Append to/Add Vary header so browser will cache response correctly response.headers.append("Vary", "Origin"); return response; } async function handleOptions(request) { if ( request.headers.get("Origin") !== null && request.headers.get("Access-Control-Request-Method") !== null && request.headers.get("Access-Control-Request-Headers") !== null ) { // Handle CORS preflight requests. return new Response(null, { headers: { ...corsHeaders, "Access-Control-Allow-Headers": request.headers.get( "Access-Control-Request-Headers", ), }, }); } else { // Handle standard OPTIONS request. return new Response(null, { headers: { Allow: "GET, HEAD, POST, OPTIONS", }, }); } } const url = new URL(request.url); if (url.pathname.startsWith(PROXY_ENDPOINT)) { if (request.method === "OPTIONS") { // Handle CORS preflight requests return handleOptions(request); } else if ( request.method === "GET" || request.method === "HEAD" || request.method === "POST" ) { // Handle requests to the API server return handleRequest(request); } else { return new Response(null, { status: 405, statusText: "Method Not Allowed", }); } } else { return rawHtmlResponse(DEMO_PAGE); } }, } satisfies ExportedHandler; ``` * Hono ```ts import { Hono } from "hono"; import { cors } from "hono/cors"; // The URL for the remote third party API you want to fetch from // but does not implement CORS const API_URL = "https://examples.cloudflareworkers.com/demos/demoapi"; // The endpoint you want the CORS reverse proxy to be on const PROXY_ENDPOINT = "/corsproxy/"; const app = new Hono(); // Demo page handler app.get("*", async (c) => { // Only handle non-proxy requests with this handler if (c.req.path.startsWith(PROXY_ENDPOINT)) { return next(); } // Create the demo page HTML const DEMO_PAGE = `

API GET without CORS Proxy
Shows TypeError: Failed to fetch since CORS is misconfigured
Waiting
API GET with CORS Proxy
Waiting
API POST with CORS Proxy + Preflight
Waiting `; return c.html(DEMO_PAGE); }); // CORS proxy routes app.on(["GET", "HEAD", "POST", "OPTIONS"], PROXY_ENDPOINT + "*", async (c) => { const url = new URL(c.req.url); // Handle OPTIONS preflight requests if (c.req.method === "OPTIONS") { const origin = c.req.header("Origin"); const requestMethod = c.req.header("Access-Control-Request-Method"); const requestHeaders = c.req.header("Access-Control-Request-Headers"); if (origin && requestMethod && requestHeaders) { // Handle CORS preflight requests return new Response(null, { headers: { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS", "Access-Control-Max-Age": "86400", "Access-Control-Allow-Headers": requestHeaders, }, }); } else { // Handle standard OPTIONS request return new Response(null, { headers: { Allow: "GET, HEAD, POST, OPTIONS", }, }); } } // Handle actual requests let apiUrl = url.searchParams.get("apiurl") || API_URL; // Rewrite request to point to API URL const modifiedRequest = new Request(apiUrl, c.req.raw); modifiedRequest.headers.set("Origin", new URL(apiUrl).origin); let response = await fetch(modifiedRequest); // Recreate the response so we can modify the headers response = new Response(response.body, response); // Set CORS headers response.headers.set("Access-Control-Allow-Origin", url.origin); // Append to/Add Vary header so browser will cache response correctly response.headers.append("Vary", "Origin"); return response; }); // Handle method not allowed for proxy endpoint app.all(PROXY_ENDPOINT + "*", (c) => { return new Response(null, { status: 405, statusText: "Method Not Allowed", }); }); export default app; ``` * Python ```py from pyodide.ffi import to_js as _to_js from js import Response, URL, fetch, Object, Request def to_js(x): return _to_js(x, dict_converter=Object.fromEntries) async def on_fetch(request): cors_headers = { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS", "Access-Control-Max-Age": "86400", } api_url = "https://examples.cloudflareworkers.com/demos/demoapi" proxy_endpoint = "/corsproxy/" def raw_html_response(html): return Response.new(html, headers=to_js({"content-type": "text/html;charset=UTF-8"})) demo_page = f'''

API GET without CORS Proxy
Shows TypeError: Failed to fetch since CORS is misconfigured
Waiting
API GET with CORS Proxy
Waiting
API POST with CORS Proxy + Preflight
Waiting ''' async def handle_request(request): url = URL.new(request.url) api_url2 = url.searchParams["apiurl"] if not api_url2: api_url2 = api_url request = Request.new(api_url2, request) request.headers["Origin"] = (URL.new(api_url2)).origin print(request.headers) response = await fetch(request) response = Response.new(response.body, response) response.headers["Access-Control-Allow-Origin"] = url.origin response.headers["Vary"] = "Origin" return response async def handle_options(request): if "Origin" in request.headers and "Access-Control-Request-Method" in request.headers and "Access-Control-Request-Headers" in request.headers: return Response.new(None, headers=to_js({ **cors_headers, "Access-Control-Allow-Headers": request.headers["Access-Control-Request-Headers"] })) return Response.new(None, headers=to_js({"Allow": "GET, HEAD, POST, OPTIONS"})) url = URL.new(request.url) if url.pathname.startswith(proxy_endpoint): if request.method == "OPTIONS": return handle_options(request) if request.method in ("GET", "HEAD", "POST"): return handle_request(request) return Response.new(None, status=405, statusText="Method Not Allowed") return raw_html_response(demo_page) ``` * Rust ```rs use std::{borrow::Cow, collections::HashMap}; use worker::*; fn raw_html_response(html: &str) -> Result<Response> { Response::from_html(html) } async fn handle_request(req: Request, api_url: &str) -> Result<Response> { let url = req.url().unwrap(); let mut api_url2 = url .query_pairs() .find(|x| x.0 == Cow::Borrowed("apiurl")) .unwrap() .1 .to_string(); if api_url2 == String::from("") { api_url2 = api_url.to_string(); } let mut request = req.clone_mut()?; *request.path_mut()? = api_url2.clone(); if let url::Origin::Tuple(origin, _, _) = Url::parse(&api_url2)?.origin() { (*request.headers_mut()?).set("Origin", &origin)?; } let mut response = Fetch::Request(request).send().await?.cloned()?; let headers = response.headers_mut(); if let url::Origin::Tuple(origin, _, _) = url.origin() { headers.set("Access-Control-Allow-Origin", &origin)?; headers.set("Vary", "Origin")?; } Ok(response) } fn handle_options(req: Request, cors_headers: &HashMap<&str, &str>) -> Result<Response> { let headers: Vec<_> = req.headers().keys().collect(); if [ "access-control-request-method", "access-control-request-headers", "origin", ] .iter() .all(|i| headers.contains(&i.to_string())) { let mut headers = Headers::new(); for (k, v) in cors_headers.iter() { headers.set(k, v)?; } return Ok(Response::empty()?.with_headers(headers)); } Response::empty() } #[event(fetch)] async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> { let cors_headers = HashMap::from([ ("Access-Control-Allow-Origin", "*"), ("Access-Control-Allow-Methods", "GET,HEAD,POST,OPTIONS"), ("Access-Control-Max-Age", "86400"), ]); let api_url = "https://examples.cloudflareworkers.com/demos/demoapi"; let proxy_endpoint = "/corsproxy/"; let demo_page = format!( r#"

API GET without CORS Proxy
Shows TypeError: Failed to fetch since CORS is misconfigured
Waiting
API GET with CORS Proxy
Waiting
API POST with CORS Proxy + Preflight
Waiting "# ); if req.url()?.path().starts_with(proxy_endpoint) { match req.method() { Method::Options => return handle_options(req, &cors_headers), Method::Get | Method::Head | Method::Post => return handle_request(req, api_url).await, _ => return Response::error("Method Not Allowed", 405), } } raw_html_response(&demo_page) } ``` ```plaintext ``` --- title: Country code redirect · Cloudflare Workers docs description: Redirect a response based on the country code in the header of a visitor. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Redirects,Geolocation source_url: html: https://developers.cloudflare.com/workers/examples/country-code-redirect/ md: https://developers.cloudflare.com/workers/examples/country-code-redirect/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/country-code-redirect) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { /** * A map of the URLs to redirect to * @param {Object} countryMap */ const countryMap = { US: "https://example.com/us", EU: "https://example.com/eu", }; // Use the cf object to obtain the country of the request // more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties const country = request.cf.country; if (country != null && country in countryMap) { const url = countryMap[country]; // Remove this logging statement from your final output. console.log( `Based on ${country}-based request, your user would go to ${url}.`, ); return Response.redirect(url); } else { return fetch("https://example.com", request); } }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { /** * A map of the URLs to redirect to * @param {Object} countryMap */ const countryMap = { US: "https://example.com/us", EU: "https://example.com/eu", }; // Use the cf object to obtain the country of the request // more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties const country = request.cf.country; if (country != null && country in countryMap) { const url = countryMap[country]; return Response.redirect(url); } else { return fetch(request); } }, } satisfies ExportedHandler; ``` * Python ```py from workers import Response, fetch async def on_fetch(request): countries = { "US": "https://example.com/us", "EU": "https://example.com/eu", } # Use the cf object to obtain the country of the request # more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties country = request.cf.country if country and country in countries: url = countries[country] return Response.redirect(url) return fetch("https://example.com", request) ``` * Hono ```ts import { Hono } from 'hono'; // Define the RequestWithCf interface to add Cloudflare-specific properties interface RequestWithCf extends Request { cf: { country: string; // Other CF properties can be added as needed }; } const app = new Hono(); app.get('*', async (c) => { /** * A map of the URLs to redirect to */ const countryMap: Record = { US: "https://example.com/us", EU: "https://example.com/eu", }; // Cast the raw request to include Cloudflare-specific properties const request = c.req.raw as RequestWithCf; // Use the cf object to 
obtain the country of the request // more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties const country = request.cf.country; if (country != null && country in countryMap) { const url = countryMap[country]; // Redirect using Hono's redirect helper return c.redirect(url); } else { // Default fallback return fetch("https://example.com", request); } }); export default app; ``` --- title: Setting Cron Triggers · Cloudflare Workers docs description: Set a Cron Trigger for your Worker. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Middleware source_url: html: https://developers.cloudflare.com/workers/examples/cron-trigger/ md: https://developers.cloudflare.com/workers/examples/cron-trigger/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cron-trigger) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async scheduled(controller, env, ctx) { console.log("cron processed"); }, }; ``` * TypeScript ```ts interface Env {} export default { async scheduled( controller: ScheduledController, env: Env, ctx: ExecutionContext, ) { console.log("cron processed"); }, }; ``` * Python ```python from workers import handler @handler async def on_scheduled(controller, env, ctx): print("cron processed") ``` * Hono ```ts import { Hono } from 'hono'; interface Env {} // Create Hono app const app = new Hono<{ Bindings: Env }>(); // Regular routes for normal HTTP requests app.get('/', (c) => c.text('Hello World!')); // Export both the app and a scheduled function export default { // The Hono app handles regular HTTP requests fetch: app.fetch, // The scheduled function handles Cron triggers async scheduled( controller: ScheduledController, env: Env, ctx: ExecutionContext, ) { console.log("cron processed"); // You could also perform actions like: // - Fetching data from external APIs // - Updating KV or Durable Object storage // - Running maintenance tasks // - Sending notifications }, }; ``` ## Set Cron Triggers in Wrangler Refer to [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/) for more information on how to add a Cron Trigger. If you are deploying with Wrangler, set the cron syntax (once per hour as shown below) by adding this to your Wrangler file: * wrangler.jsonc ```jsonc { "name": "worker", "triggers": { "crons": [ "0 * * * *" ] } } ``` * wrangler.toml ```toml name = "worker" # ... [triggers] crons = ["0 * * * *"] ``` You also can set a different Cron Trigger for each [environment](https://developers.cloudflare.com/workers/wrangler/environments/) in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). You need to put the `[triggers]` table under your chosen environment. For example: * wrangler.jsonc ```jsonc { "env": { "dev": { "triggers": { "crons": [ "0 * * * *" ] } } } } ``` * wrangler.toml ```toml [env.dev.triggers] crons = ["0 * * * *"] ``` ## Test Cron Triggers using Wrangler The recommended way of testing Cron Triggers is using Wrangler. Cron Triggers can be tested using Wrangler by passing in the `--test-scheduled` flag to [`wrangler dev`](https://developers.cloudflare.com/workers/wrangler/commands/#dev). 
This will expose a `/__scheduled` (or `/cdn-cgi/handler/scheduled` for Python Workers) route which can be used to test using a HTTP request. To simulate different cron patterns, a `cron` query parameter can be passed in. ```sh npx wrangler dev --test-scheduled curl "http://localhost:8787/__scheduled?cron=0+*+*+*+*" curl "http://localhost:8787/cdn-cgi/handler/scheduled?cron=*+*+*+*+*" # Python Workers ``` --- title: Data loss prevention · Cloudflare Workers docs description: Protect sensitive data to prevent data loss, and send alerts to a webhooks server in the event of a data breach. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Security source_url: html: https://developers.cloudflare.com/workers/examples/data-loss-prevention/ md: https://developers.cloudflare.com/workers/examples/data-loss-prevention/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/data-loss-prevention) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { const DEBUG = true; const SOME_HOOK_SERVER = "https://webhook.flow-wolf.io/hook"; /** * Alert a data breach by posting to a webhook server */ async function postDataBreach(request) { return await fetch(SOME_HOOK_SERVER, { method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, body: JSON.stringify({ ip: request.headers.get("cf-connecting-ip"), time: Date.now(), request: request, }), }); } /** * Define personal data with regular expressions. * Respond with block if credit card data, and strip * emails and phone numbers from the response. * Execution will be limited to MIME type "text/*". */ const response = await fetch(request); // Return origin response, if response wasn’t text const contentType = response.headers.get("content-type") || ""; if (!contentType.toLowerCase().includes("text/")) { return response; } let text = await response.text(); // When debugging replace the response // from the origin with an email text = DEBUG ? text.replace("You may use this", "me@example.com may use this") : text; const sensitiveRegexsMap = { creditCard: String.raw`\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b`, email: String.raw`\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b`, phone: String.raw`\b07\d{9}\b`, }; for (const kind in sensitiveRegexsMap) { const sensitiveRegex = new RegExp(sensitiveRegexsMap[kind], "ig"); const match = await sensitiveRegex.test(text); if (match) { // Alert a data breach await postDataBreach(request); // Respond with a block if credit card, // otherwise replace sensitive text with `*`s return kind === "creditCard" ? 
new Response(kind + " found\nForbidden\n", { status: 403, statusText: "Forbidden", }) : new Response(text.replace(sensitiveRegex, "**********"), response); } } return new Response(text, response); }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { const DEBUG = true; const SOME_HOOK_SERVER = "https://webhook.flow-wolf.io/hook"; /** * Alert a data breach by posting to a webhook server */ async function postDataBreach(request) { return await fetch(SOME_HOOK_SERVER, { method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, body: JSON.stringify({ ip: request.headers.get("cf-connecting-ip"), time: Date.now(), request: request, }), }); } /** * Define personal data with regular expressions. * Respond with block if credit card data, and strip * emails and phone numbers from the response. * Execution will be limited to MIME type "text/*". */ const response = await fetch(request); // Return origin response, if response wasn’t text const contentType = response.headers.get("content-type") || ""; if (!contentType.toLowerCase().includes("text/")) { return response; } let text = await response.text(); // When debugging replace the response // from the origin with an email text = DEBUG ? text.replace("You may use this", "me@example.com may use this") : text; const sensitiveRegexsMap = { creditCard: String.raw`\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b`, email: String.raw`\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b`, phone: String.raw`\b07\d{9}\b`, }; for (const kind in sensitiveRegexsMap) { const sensitiveRegex = new RegExp(sensitiveRegexsMap[kind], "ig"); const match = await sensitiveRegex.test(text); if (match) { // Alert a data breach await postDataBreach(request); // Respond with a block if credit card, // otherwise replace sensitive text with `*`s return kind === "creditCard" ? new Response(kind + " found\nForbidden\n", { status: 403, statusText: "Forbidden", }) : new Response(text.replace(sensitiveRegex, "**********"), response); } } return new Response(text, response); }, } satisfies ExportedHandler; ``` * Python ```py import re from datetime import datetime from js import Response, fetch, JSON, Headers # Alert a data breach by posting to a webhook server async def post_data_breach(request): some_hook_server = "https://webhook.flow-wolf.io/hook" headers = Headers.new({"content-type": "application/json"}.items()) body = JSON.stringify({ "ip": request.headers["cf-connecting-ip"], "time": datetime.now(), "request": request, }) return await fetch(some_hook_server, method="POST", headers=headers, body=body) async def on_fetch(request): debug = True # Define personal data with regular expressions. # Respond with block if credit card data, and strip # emails and phone numbers from the response. # Execution will be limited to MIME type "text/*". 
response = await fetch(request) # Return origin response, if response wasn’t text content_type = response.headers["content-type"] or "" if "text" not in content_type: return response text = await response.text() # When debugging replace the response from the origin with an email text = text.replace("You may use this", "me@example.com may use this") if debug else text sensitive_regex = [ ("credit_card", r'\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b'), ("email", r'\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b'), ("phone", r'\b07\d{9}\b'), ] for (kind, regex) in sensitive_regex: match = re.search(regex, text, flags=re.IGNORECASE) if match: # Alert a data breach await post_data_breach(request) # Respond with a block if credit card, otherwise replace sensitive text with `*`s card_resp = Response.new(kind + " found\nForbidden\n", status=403,statusText="Forbidden") sensitive_resp = Response.new(re.sub(regex, "*"*10, text, flags=re.IGNORECASE), response) return card_resp if kind == "credit_card" else sensitive_resp return Response.new(text, response) ``` * Hono ```ts import { Hono } from 'hono'; const app = new Hono(); // Configuration const DEBUG = true; const SOME_HOOK_SERVER = "https://webhook.flow-wolf.io/hook"; // Define sensitive data patterns const sensitiveRegexsMap = { creditCard: String.raw`\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b`, email: String.raw`\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b`, phone: String.raw`\b07\d{9}\b`, }; /** * Alert a data breach by posting to a webhook server */ async function postDataBreach(request: Request) { return await fetch(SOME_HOOK_SERVER, { method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, body: JSON.stringify({ ip: request.headers.get("cf-connecting-ip"), time: Date.now(), request: request, }), }); } // Main middleware to handle data loss prevention app.use('*', async (c) => { // Fetch the origin response const response = await fetch(c.req.raw); // Return origin response if response wasn't text const contentType = response.headers.get("content-type") || ""; if (!contentType.toLowerCase().includes("text/")) { return response; } // Get the response text let text = await response.text(); // When debugging, replace the response from the origin with an email text = DEBUG ? 
text.replace("You may use this", "me@example.com may use this") : text; // Check for sensitive data for (const kind in sensitiveRegexsMap) { const sensitiveRegex = new RegExp(sensitiveRegexsMap[kind], "ig"); const match = sensitiveRegex.test(text); if (match) { // Alert a data breach await postDataBreach(c.req.raw); // Respond with a block if credit card, otherwise replace sensitive text with `*`s if (kind === "creditCard") { return c.text(`${kind} found\nForbidden\n`, 403); } else { return new Response(text.replace(sensitiveRegex, "**********"), { status: response.status, statusText: response.statusText, headers: response.headers, }); } } } // Return the modified response return new Response(text, { status: response.status, statusText: response.statusText, headers: response.headers, }); }); export default app; ``` --- title: Debugging logs · Cloudflare Workers docs description: Send debugging information in an errored response to a logging service. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Debugging source_url: html: https://developers.cloudflare.com/workers/examples/debugging-logs/ md: https://developers.cloudflare.com/workers/examples/debugging-logs/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/debugging-logs) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request, env, ctx) { // Service configured to receive logs const LOG_URL = "https://log-service.example.com/"; async function postLog(data) { return await fetch(LOG_URL, { method: "POST", body: data, }); } let response; try { response = await fetch(request); if (!response.ok && !response.redirected) { const body = await response.text(); throw new Error( "Bad response at origin. Status: " + response.status + " Body: " + // Ensure the string is small enough to be a header body.trim().substring(0, 10), ); } } catch (err) { // Without ctx.waitUntil(), your fetch() to Cloudflare's // logging service may or may not complete ctx.waitUntil(postLog(err.toString())); const stack = JSON.stringify(err.stack) || err; // Copy the response and initialize body to the stack trace response = new Response(stack, response); // Add the error stack into a header to find out what happened response.headers.set("X-Debug-stack", stack); response.headers.set("X-Debug-err", err); } return response; }, }; ``` * TypeScript ```ts interface Env {} export default { async fetch(request, env, ctx): Promise { // Service configured to receive logs const LOG_URL = "https://log-service.example.com/"; async function postLog(data) { return await fetch(LOG_URL, { method: "POST", body: data, }); } let response; try { response = await fetch(request); if (!response.ok && !response.redirected) { const body = await response.text(); throw new Error( "Bad response at origin. 
Status: " + response.status + " Body: " + // Ensure the string is small enough to be a header body.trim().substring(0, 10), ); } } catch (err) { // Without ctx.waitUntil(), your fetch() to Cloudflare's // logging service may or may not complete ctx.waitUntil(postLog(err.toString())); const stack = JSON.stringify(err.stack) || err; // Copy the response and initialize body to the stack trace response = new Response(stack, response); // Add the error stack into a header to find out what happened response.headers.set("X-Debug-stack", stack); response.headers.set("X-Debug-err", err); } return response; }, } satisfies ExportedHandler; ``` * Python ```py import json import traceback from pyodide.ffi import create_once_callable from js import Response, fetch, Headers async def on_fetch(request, _env, ctx): # Service configured to receive logs log_url = "https://log-service.example.com/" async def post_log(data): return await fetch(log_url, method="POST", body=data) response = await fetch(request) try: if not response.ok and not response.redirected: body = await response.text() # Simulating an error. Ensure the string is small enough to be a header raise Exception(f'Bad response at origin. Status:{response.status} Body:{body.strip()[:10]}') except Exception as e: # Without ctx.waitUntil(), your fetch() to Cloudflare's # logging service may or may not complete ctx.waitUntil(create_once_callable(post_log(e))) stack = json.dumps(traceback.format_exc()) or e # Copy the response and add to header response = Response.new(stack, response) response.headers["X-Debug-stack"] = stack response.headers["X-Debug-err"] = e return response ``` * Hono ```ts import { Hono } from 'hono'; // Define the environment with appropriate types interface Env {} const app = new Hono<{ Bindings: Env }>(); // Service configured to receive logs const LOG_URL = "https://log-service.example.com/"; // Function to post logs to an external service async function postLog(data: string) { return await fetch(LOG_URL, { method: "POST", body: data, }); } // Middleware to handle error logging app.use('*', async (c, next) => { try { // Process the request with the next handler await next(); // After processing, check if the response indicates an error if (c.res && (!c.res.ok && !c.res.redirected)) { const body = await c.res.clone().text(); throw new Error( "Bad response at origin. Status: " + c.res.status + " Body: " + // Ensure the string is small enough to be a header body.trim().substring(0, 10) ); } } catch (err) { // Without waitUntil, the fetch to the logging service may not complete c.executionCtx.waitUntil( postLog(err.toString()) ); // Get the error stack or error itself const stack = JSON.stringify(err.stack) || err.toString(); // Create a new response with the error information const response = c.res ? new Response(stack, { status: c.res.status, headers: c.res.headers }) : new Response(stack, { status: 500 }); // Add debug headers response.headers.set("X-Debug-stack", stack); response.headers.set("X-Debug-err", err.toString()); // Set the modified response c.res = response; } }); // Default route handler that passes requests through app.all('*', async (c) => { return fetch(c.req.raw); }); export default app; ``` --- title: Cookie parsing · Cloudflare Workers docs description: Given the cookie name, get the value of a cookie. You can also use cookies for A/B testing. 
lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Headers source_url: html: https://developers.cloudflare.com/workers/examples/extract-cookie-value/ md: https://developers.cloudflare.com/workers/examples/extract-cookie-value/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/extract-cookie-value) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js import { parse } from "cookie"; export default { async fetch(request) { // The name of the cookie const COOKIE_NAME = "__uid"; const cookie = parse(request.headers.get("Cookie") || ""); if (cookie[COOKIE_NAME] != null) { // Respond with the cookie value return new Response(cookie[COOKIE_NAME]); } return new Response("No cookie with name: " + COOKIE_NAME); }, }; ``` * TypeScript ```ts import { parse } from "cookie"; export default { async fetch(request): Promise { // The name of the cookie const COOKIE_NAME = "__uid"; const cookie = parse(request.headers.get("Cookie") || ""); if (cookie[COOKIE_NAME] != null) { // Respond with the cookie value return new Response(cookie[COOKIE_NAME]); } return new Response("No cookie with name: " + COOKIE_NAME); }, } satisfies ExportedHandler; ``` * Python ```py from http.cookies import SimpleCookie from workers import Response async def on_fetch(request): # Name of the cookie cookie_name = "__uid" cookies = SimpleCookie(request.headers["Cookie"] or "") if cookie_name in cookies: # Respond with cookie value return Response(cookies[cookie_name].value) return Response("No cookie with name: " + cookie_name) ``` * Hono ```ts import { Hono } from 'hono'; import { getCookie } from 'hono/cookie'; const app = new Hono(); app.get('*', (c) => { // The name of the cookie const COOKIE_NAME = "__uid"; // Get the specific cookie value using Hono's cookie helper const cookieValue = getCookie(c, COOKIE_NAME); if (cookieValue) { // Respond with the cookie value return c.text(cookieValue); } return c.text("No cookie with name: " + COOKIE_NAME); }); export default app; ``` External dependencies This example requires the npm package [`cookie`](https://www.npmjs.com/package/cookie) to be installed in your JavaScript project. The Hono example uses the built-in cookie utilities provided by Hono, so no external dependencies are needed for that implementation. --- title: Fetch HTML · Cloudflare Workers docs description: Send a request to a remote server, read HTML from the response, and serve that HTML. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/examples/fetch-html/ md: https://developers.cloudflare.com/workers/examples/fetch-html/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/fetch-html) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. 
* JavaScript ```js export default { async fetch(request) { /** * Replace `remote` with the host you wish to send requests to */ const remote = "https://example.com"; return await fetch(remote, request); }, }; ``` [Run Worker in Playground](https://workers.cloudflare.com/playground#LYVwNgLglgDghgJwgegGYHsHALQBM4RwDcABAEbogB2+CAngLzbPYZb6HbW5QDGU2AAwBOACxiATKIBsARmkAOWQC4WLNsA5wuNPgJHjRUuYtkBYAFABhdFQgBTO9gAiUAM4x0bqNFvKSGngExCRUcMD2DABEUDT2AB4AdABWblGkqFBgjuGRMXFJqVGWNnaOENgAKnQw9v5wMDBgfARQtsjJcABucG68CLAQANTA6Ljg9paWCZ5IJLj2qHDgECQA3hYkJL10VLwB9hC8ABYAFAj2AI4g9m4QAJTrm1skyABUb88vbyQASvZNOC8ewkAAGF1GDlBJAA7j5jiQIMcQccvKs6JRYe4ERB0CQ3I5cCQLtdbhA3Ij0F8tm9kNTeLY7sT7JCQQwSFFjhAIDA3MpkMgEuEmvZEgzgOkLNSLhAQAgqNsYXAfAcjmcIegHAAaZmku73IjPAC+WosRqIljUzA0Wh0PH4QjEkhk8iUJVsDicrg8Xh8bSo-kCWlIYQi0QihC06QCWRyYaiZDA6DIxWsHvKVRqdW2jWavFa7VStimFjWUWAyqoAH1RuNslFlPkFoU0kbLVabcE7XpHYZjK7ZMwgA) * TypeScript ```ts export default { async fetch(request: Request): Promise { /** * Replace `remote` with the host you wish to send requests to */ const remote = "https://example.com"; return await fetch(remote, request); }, }; ``` * Python ```py from js import fetch async def on_fetch(request): # Replace `remote` with the host you wish to send requests to remote = "https://example.com" return await fetch(remote, request) ``` * Hono ```ts import { Hono } from 'hono'; const app = new Hono(); app.all('*', async (c) => { /** * Replace `remote` with the host you wish to send requests to */ const remote = "https://example.com"; // Forward the request to the remote server return await fetch(remote, c.req.raw); }); export default app; ``` --- title: Fetch JSON · Cloudflare Workers docs description: Send a GET request and read in JSON from the response. Use to fetch external data. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: JSON source_url: html: https://developers.cloudflare.com/workers/examples/fetch-json/ md: https://developers.cloudflare.com/workers/examples/fetch-json/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/fetch-json) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. 
* JavaScript ```js export default { async fetch(request, env, ctx) { const url = "https://jsonplaceholder.typicode.com/todos/1"; // gatherResponse returns both content-type & response body as a string async function gatherResponse(response) { const { headers } = response; const contentType = headers.get("content-type") || ""; if (contentType.includes("application/json")) { return { contentType, result: JSON.stringify(await response.json()) }; } return { contentType, result: await response.text() }; } const response = await fetch(url); const { contentType, result } = await gatherResponse(response); const options = { headers: { "content-type": contentType } }; return new Response(result, options); }, }; ``` * TypeScript ```ts interface Env {} export default { async fetch(request, env, ctx): Promise { const url = "https://jsonplaceholder.typicode.com/todos/1"; // gatherResponse returns both content-type & response body as a string async function gatherResponse(response) { const { headers } = response; const contentType = headers.get("content-type") || ""; if (contentType.includes("application/json")) { return { contentType, result: JSON.stringify(await response.json()) }; } return { contentType, result: await response.text() }; } const response = await fetch(url); const { contentType, result } = await gatherResponse(response); const options = { headers: { "content-type": contentType } }; return new Response(result, options); }, } satisfies ExportedHandler; ``` * Python ```py from workers import Response, fetch import json async def on_fetch(request): url = "https://jsonplaceholder.typicode.com/todos/1" # gather_response returns both content-type & response body as a string async def gather_response(response): headers = response.headers content_type = headers["content-type"] or "" if "application/json" in content_type: return (content_type, json.dumps(await response.json())) return (content_type, await response.text()) response = await fetch(url) content_type, result = await gather_response(response) headers = {"content-type": content_type} return Response(result, headers=headers) ``` * Hono ```ts import { Hono } from 'hono'; type Env = {}; const app = new Hono<{ Bindings: Env }>(); app.get('*', async (c) => { const url = "https://jsonplaceholder.typicode.com/todos/1"; // gatherResponse returns both content-type & response body as a string async function gatherResponse(response: Response) { const { headers } = response; const contentType = headers.get("content-type") || ""; if (contentType.includes("application/json")) { return { contentType, result: JSON.stringify(await response.json()) }; } return { contentType, result: await response.text() }; } const response = await fetch(url); const { contentType, result } = await gatherResponse(response); return new Response(result, { headers: { "content-type": contentType } }); }); export default app; ``` --- title: "Geolocation: Weather application · Cloudflare Workers docs" description: Fetch weather data from an API using the user's geolocation data. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Geolocation source_url: html: https://developers.cloudflare.com/workers/examples/geolocation-app-weather/ md: https://developers.cloudflare.com/workers/examples/geolocation-app-weather/index.md --- If you want to get started quickly, click on the button below. 
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/geolocation-app-weather) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { let endpoint = "https://api.waqi.info/feed/geo:"; const token = ""; //Use a token from https://aqicn.org/api/ let html_style = `body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}`; let html_content = "

Weather 🌦

"; const latitude = request.cf.latitude; const longitude = request.cf.longitude; endpoint += `${latitude};${longitude}/?token=${token}`; const init = { headers: { "content-type": "application/json;charset=UTF-8", }, }; const response = await fetch(endpoint, init); const content = await response.json(); html_content += `

This is a demo using Workers geolocation data.

`; html_content += `You are located at: ${latitude},${longitude}.

`; html_content += `

Based off sensor data from ${content.data.city.name}:

`; html_content += `

The AQI level is: ${content.data.aqi}.

`; html_content += `

The NO2 level is: ${content.data.iaqi.no2?.v}.

`; html_content += `

The O3 level is: ${content.data.iaqi.o3?.v}.

`; html_content += `

The temperature is: ${content.data.iaqi.t?.v}°C.

`; let html = ` Geolocation: Weather
${html_content}
`; return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { let endpoint = "https://api.waqi.info/feed/geo:"; const token = ""; //Use a token from https://aqicn.org/api/ let html_style = `body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}`; let html_content = "

Weather 🌦

"; const latitude = request.cf.latitude; const longitude = request.cf.longitude; endpoint += `${latitude};${longitude}/?token=${token}`; const init = { headers: { "content-type": "application/json;charset=UTF-8", }, }; const response = await fetch(endpoint, init); const content = await response.json(); html_content += `

This is a demo using Workers geolocation data.

`; html_content += `You are located at: ${latitude},${longitude}.

`; html_content += `

Based off sensor data from ${content.data.city.name}:

`; html_content += `

The AQI level is: ${content.data.aqi}.

`; html_content += `

The NO2 level is: ${content.data.iaqi.no2?.v}.

`; html_content += `

The O3 level is: ${content.data.iaqi.o3?.v}.

`; html_content += `

The temperature is: ${content.data.iaqi.t?.v}°C.

`; let html = ` Geolocation: Weather
${html_content}
`; return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); }, } satisfies ExportedHandler; ``` * Hono ```ts import { Hono } from 'hono'; import { html } from 'hono/html'; type Bindings = {}; interface WeatherApiResponse { data: { aqi: number; city: { name: string; url: string; }; iaqi: { no2?: { v: number }; o3?: { v: number }; t?: { v: number }; }; }; } const app = new Hono<{ Bindings: Bindings }>(); app.get('*', async (c) => { // Get API endpoint let endpoint = "https://api.waqi.info/feed/geo:"; const token = ""; // Use a token from https://aqicn.org/api/ // Define styles const html_style = `body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}`; // Get geolocation from Cloudflare request const req = c.req.raw; const latitude = req.cf?.latitude; const longitude = req.cf?.longitude; // Create complete API endpoint with coordinates endpoint += `${latitude};${longitude}/?token=${token}`; // Fetch weather data const init = { headers: { "content-type": "application/json;charset=UTF-8", }, }; const response = await fetch(endpoint, init); const content = await response.json() as WeatherApiResponse; // Build HTML content const weatherContent = html`

Weather 🌦

This is a demo using Workers geolocation data.

You are located at: ${latitude},${longitude}.

Based off sensor data from ${content.data.city.name}:

The AQI level is: ${content.data.aqi}.

The NO2 level is: ${content.data.iaqi.no2?.v}.

The O3 level is: ${content.data.iaqi.o3?.v}.

The temperature is: ${content.data.iaqi.t?.v}°C.

`; // Complete HTML document const htmlDocument = html` Geolocation: Weather
${weatherContent}
`; // Return HTML response return c.html(htmlDocument); }); export default app; ``` * Python ```py from workers import Response, fetch async def on_fetch(request): endpoint = "https://api.waqi.info/feed/geo:" token = "" # Use a token from https://aqicn.org/api/ html_style = "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}" html_content = "

Weather 🌦

" latitude = request.cf.latitude longitude = request.cf.longitude endpoint += f"{latitude};{longitude}/?token={token}" response = await fetch(endpoint) content = await response.json() html_content += "

This is a demo using Workers geolocation data.

" html_content += f"You are located at: {latitude},{longitude}.

" html_content += f"

Based off sensor data from {content['data']['city']['name']}:

" html_content += f"

The AQI level is: {content['data']['aqi']}.

" html_content += f"

The NO2 level is: {content['data']['iaqi']['no2']['v']}.

" html_content += f"

The O3 level is: {content['data']['iaqi']['o3']['v']}.

" html_content += f"

The temperature is: {content['data']['iaqi']['t']['v']}°C.

" html = f""" Geolocation: Weather
{html_content}
""" headers = {"content-type": "text/html;charset=UTF-8"} return Response(html, headers=headers) ```
--- title: "Geolocation: Custom Styling · Cloudflare Workers docs" description: Personalize website styling based on localized user time. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Geolocation source_url: html: https://developers.cloudflare.com/workers/examples/geolocation-custom-styling/ md: https://developers.cloudflare.com/workers/examples/geolocation-custom-styling/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/geolocation-custom-styling) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { let grads = [ [ { color: "00000c", position: 0 }, { color: "00000c", position: 0 }, ], [ { color: "020111", position: 85 }, { color: "191621", position: 100 }, ], [ { color: "020111", position: 60 }, { color: "20202c", position: 100 }, ], [ { color: "020111", position: 10 }, { color: "3a3a52", position: 100 }, ], [ { color: "20202c", position: 0 }, { color: "515175", position: 100 }, ], [ { color: "40405c", position: 0 }, { color: "6f71aa", position: 80 }, { color: "8a76ab", position: 100 }, ], [ { color: "4a4969", position: 0 }, { color: "7072ab", position: 50 }, { color: "cd82a0", position: 100 }, ], [ { color: "757abf", position: 0 }, { color: "8583be", position: 60 }, { color: "eab0d1", position: 100 }, ], [ { color: "82addb", position: 0 }, { color: "ebb2b1", position: 100 }, ], [ { color: "94c5f8", position: 1 }, { color: "a6e6ff", position: 70 }, { color: "b1b5ea", position: 100 }, ], [ { color: "b7eaff", position: 0 }, { color: "94dfff", position: 100 }, ], [ { color: "9be2fe", position: 0 }, { color: "67d1fb", position: 100 }, ], [ { color: "90dffe", position: 0 }, { color: "38a3d1", position: 100 }, ], [ { color: "57c1eb", position: 0 }, { color: "246fa8", position: 100 }, ], [ { color: "2d91c2", position: 0 }, { color: "1e528e", position: 100 }, ], [ { color: "2473ab", position: 0 }, { color: "1e528e", position: 70 }, { color: "5b7983", position: 100 }, ], [ { color: "1e528e", position: 0 }, { color: "265889", position: 50 }, { color: "9da671", position: 100 }, ], [ { color: "1e528e", position: 0 }, { color: "728a7c", position: 50 }, { color: "e9ce5d", position: 100 }, ], [ { color: "154277", position: 0 }, { color: "576e71", position: 30 }, { color: "e1c45e", position: 70 }, { color: "b26339", position: 100 }, ], [ { color: "163C52", position: 0 }, { color: "4F4F47", position: 30 }, { color: "C5752D", position: 60 }, { color: "B7490F", position: 80 }, { color: "2F1107", position: 100 }, ], [ { color: "071B26", position: 0 }, { color: "071B26", position: 30 }, { color: "8A3B12", position: 80 }, { color: "240E03", position: 100 }, ], [ { color: "010A10", position: 30 }, { color: "59230B", position: 80 }, { color: "2F1107", position: 100 }, ], [ { color: "090401", position: 50 }, { color: "4B1D06", position: 100 }, ], [ { color: "00000c", position: 80 }, { color: "150800", position: 100 }, ], ]; async function toCSSGradient(hour) { let css = "linear-gradient(to bottom,"; const data = grads[hour]; const len = data.length; for (let i = 0; i < len; i++) { const item = data[i]; css += ` #${item.color} ${item.position}%`; if (i < len - 1) css += ","; } return css + ")"; } let html_content = ""; let html_style = ` html{width:100vw; height:100vh;} body{padding:0; 
margin:0 !important;height:100%;} #container { display: flex; flex-direction:column; align-items: center; justify-content: center; height: 100%; color:white; font-family:sans-serif; }`; const timezone = request.cf.timezone; console.log(timezone); let localized_date = new Date( new Date().toLocaleString("en-US", { timeZone: timezone }), ); let hour = localized_date.getHours(); let minutes = localized_date.getMinutes(); html_content += "

" + hour + ":" + minutes + "

"; html_content += "

" + timezone + "

"; html_style += "body{background:" + (await toCSSGradient(hour)) + ";}"; let html = ` Geolocation: Customized Design
${html_content}
`; return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8" }, }); }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { let grads = [ [ { color: "00000c", position: 0 }, { color: "00000c", position: 0 }, ], [ { color: "020111", position: 85 }, { color: "191621", position: 100 }, ], [ { color: "020111", position: 60 }, { color: "20202c", position: 100 }, ], [ { color: "020111", position: 10 }, { color: "3a3a52", position: 100 }, ], [ { color: "20202c", position: 0 }, { color: "515175", position: 100 }, ], [ { color: "40405c", position: 0 }, { color: "6f71aa", position: 80 }, { color: "8a76ab", position: 100 }, ], [ { color: "4a4969", position: 0 }, { color: "7072ab", position: 50 }, { color: "cd82a0", position: 100 }, ], [ { color: "757abf", position: 0 }, { color: "8583be", position: 60 }, { color: "eab0d1", position: 100 }, ], [ { color: "82addb", position: 0 }, { color: "ebb2b1", position: 100 }, ], [ { color: "94c5f8", position: 1 }, { color: "a6e6ff", position: 70 }, { color: "b1b5ea", position: 100 }, ], [ { color: "b7eaff", position: 0 }, { color: "94dfff", position: 100 }, ], [ { color: "9be2fe", position: 0 }, { color: "67d1fb", position: 100 }, ], [ { color: "90dffe", position: 0 }, { color: "38a3d1", position: 100 }, ], [ { color: "57c1eb", position: 0 }, { color: "246fa8", position: 100 }, ], [ { color: "2d91c2", position: 0 }, { color: "1e528e", position: 100 }, ], [ { color: "2473ab", position: 0 }, { color: "1e528e", position: 70 }, { color: "5b7983", position: 100 }, ], [ { color: "1e528e", position: 0 }, { color: "265889", position: 50 }, { color: "9da671", position: 100 }, ], [ { color: "1e528e", position: 0 }, { color: "728a7c", position: 50 }, { color: "e9ce5d", position: 100 }, ], [ { color: "154277", position: 0 }, { color: "576e71", position: 30 }, { color: "e1c45e", position: 70 }, { color: "b26339", position: 100 }, ], [ { color: "163C52", position: 0 }, { color: "4F4F47", position: 30 }, { color: "C5752D", position: 60 }, { color: "B7490F", position: 80 }, { color: "2F1107", position: 100 }, ], [ { color: "071B26", position: 0 }, { color: "071B26", position: 30 }, { color: "8A3B12", position: 80 }, { color: "240E03", position: 100 }, ], [ { color: "010A10", position: 30 }, { color: "59230B", position: 80 }, { color: "2F1107", position: 100 }, ], [ { color: "090401", position: 50 }, { color: "4B1D06", position: 100 }, ], [ { color: "00000c", position: 80 }, { color: "150800", position: 100 }, ], ]; async function toCSSGradient(hour) { let css = "linear-gradient(to bottom,"; const data = grads[hour]; const len = data.length; for (let i = 0; i < len; i++) { const item = data[i]; css += ` #${item.color} ${item.position}%`; if (i < len - 1) css += ","; } return css + ")"; } let html_content = ""; let html_style = ` html{width:100vw; height:100vh;} body{padding:0; margin:0 !important;height:100%;} #container { display: flex; flex-direction:column; align-items: center; justify-content: center; height: 100%; color:white; font-family:sans-serif; }`; const timezone = request.cf.timezone; console.log(timezone); let localized_date = new Date( new Date().toLocaleString("en-US", { timeZone: timezone }), ); let hour = localized_date.getHours(); let minutes = localized_date.getMinutes(); html_content += "

" + hour + ":" + minutes + "

"; html_content += "

" + timezone + "

"; html_style += "body{background:" + (await toCSSGradient(hour)) + ";}"; let html = ` Geolocation: Customized Design
${html_content}
`; return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8" }, }); }, } satisfies ExportedHandler; ``` * Hono ```ts import { Hono } from 'hono'; type Bindings = {}; type ColorStop = { color: string; position: number }; const app = new Hono<{ Bindings: Bindings }>(); // Gradient configurations for each hour of the day (0-23) const grads: ColorStop[][] = [ [ { color: "00000c", position: 0 }, { color: "00000c", position: 0 }, ], [ { color: "020111", position: 85 }, { color: "191621", position: 100 }, ], [ { color: "020111", position: 60 }, { color: "20202c", position: 100 }, ], [ { color: "020111", position: 10 }, { color: "3a3a52", position: 100 }, ], [ { color: "20202c", position: 0 }, { color: "515175", position: 100 }, ], [ { color: "40405c", position: 0 }, { color: "6f71aa", position: 80 }, { color: "8a76ab", position: 100 }, ], [ { color: "4a4969", position: 0 }, { color: "7072ab", position: 50 }, { color: "cd82a0", position: 100 }, ], [ { color: "757abf", position: 0 }, { color: "8583be", position: 60 }, { color: "eab0d1", position: 100 }, ], [ { color: "82addb", position: 0 }, { color: "ebb2b1", position: 100 }, ], [ { color: "94c5f8", position: 1 }, { color: "a6e6ff", position: 70 }, { color: "b1b5ea", position: 100 }, ], [ { color: "b7eaff", position: 0 }, { color: "94dfff", position: 100 }, ], [ { color: "9be2fe", position: 0 }, { color: "67d1fb", position: 100 }, ], [ { color: "90dffe", position: 0 }, { color: "38a3d1", position: 100 }, ], [ { color: "57c1eb", position: 0 }, { color: "246fa8", position: 100 }, ], [ { color: "2d91c2", position: 0 }, { color: "1e528e", position: 100 }, ], [ { color: "2473ab", position: 0 }, { color: "1e528e", position: 70 }, { color: "5b7983", position: 100 }, ], [ { color: "1e528e", position: 0 }, { color: "265889", position: 50 }, { color: "9da671", position: 100 }, ], [ { color: "1e528e", position: 0 }, { color: "728a7c", position: 50 }, { color: "e9ce5d", position: 100 }, ], [ { color: "154277", position: 0 }, { color: "576e71", position: 30 }, { color: "e1c45e", position: 70 }, { color: "b26339", position: 100 }, ], [ { color: "163C52", position: 0 }, { color: "4F4F47", position: 30 }, { color: "C5752D", position: 60 }, { color: "B7490F", position: 80 }, { color: "2F1107", position: 100 }, ], [ { color: "071B26", position: 0 }, { color: "071B26", position: 30 }, { color: "8A3B12", position: 80 }, { color: "240E03", position: 100 }, ], [ { color: "010A10", position: 30 }, { color: "59230B", position: 80 }, { color: "2F1107", position: 100 }, ], [ { color: "090401", position: 50 }, { color: "4B1D06", position: 100 }, ], [ { color: "00000c", position: 80 }, { color: "150800", position: 100 }, ], ]; // Convert hour to CSS gradient async function toCSSGradient(hour: number): Promise { let css = "linear-gradient(to bottom,"; const data = grads[hour]; const len = data.length; for (let i = 0; i < len; i++) { const item = data[i]; css += ` #${item.color} ${item.position}%`; if (i < len - 1) css += ","; } return css + ")"; } app.get('*', async (c) => { const request = c.req.raw; // Base HTML style let html_style = ` html{width:100vw; height:100vh;} body{padding:0; margin:0 !important;height:100%;} #container { display: flex; flex-direction:column; align-items: center; justify-content: center; height: 100%; color:white; font-family:sans-serif; }`; // Get timezone from Cloudflare request const timezone = request.cf?.timezone || 'UTC'; console.log(timezone); // Get localized time let localized_date = new Date( new 
Date().toLocaleString("en-US", { timeZone: timezone }) ); let hour = localized_date.getHours(); let minutes = localized_date.getMinutes(); // Generate HTML content let html_content = `

${hour}:${minutes}

`; html_content += `

${timezone}

`; // Add background gradient based on hour html_style += `body{background:${await toCSSGradient(hour)};}`; // Complete HTML document let html = ` Geolocation: Customized Design
${html_content}
`; return c.html(html); }); export default app; ```
--- title: "Geolocation: Hello World · Cloudflare Workers docs" description: Get all geolocation data fields and display them in HTML. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Geolocation source_url: html: https://developers.cloudflare.com/workers/examples/geolocation-hello-world/ md: https://developers.cloudflare.com/workers/examples/geolocation-hello-world/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/geolocation-hello-world) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

* JavaScript

```js
export default {
  async fetch(request) {
    let html_content = "";
    let html_style =
      "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f;}";

    html_content += "<p> Colo: " + request.cf.colo + "</p>";
    html_content += "<p> Country: " + request.cf.country + "</p>";
    html_content += "<p> City: " + request.cf.city + "</p>";
    html_content += "<p> Continent: " + request.cf.continent + "</p>";
    html_content += "<p> Latitude: " + request.cf.latitude + "</p>";
    html_content += "<p> Longitude: " + request.cf.longitude + "</p>";
    html_content += "<p> PostalCode: " + request.cf.postalCode + "</p>";
    html_content += "<p> MetroCode: " + request.cf.metroCode + "</p>";
    html_content += "<p> Region: " + request.cf.region + "</p>";
    html_content += "<p> RegionCode: " + request.cf.regionCode + "</p>";
    html_content += "<p> Timezone: " + request.cf.timezone + "</p>";

    let html = `<!DOCTYPE html>
      <head>
        <title>Geolocation: Hello World</title>
      </head>
      <body>
        <style>${html_style}</style>
        <h1>Geolocation: Hello World!</h1>
        <p>You now have access to geolocation data about where your user is visiting from.</p>
        ${html_content}
      </body>`;

    return new Response(html, {
      headers: {
        "content-type": "text/html;charset=UTF-8",
      },
    });
  },
};
```

* TypeScript

```ts
export default {
  async fetch(request): Promise<Response> {
    let html_content = "";
    let html_style =
      "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f;}";

    html_content += "<p> Colo: " + request.cf.colo + "</p>";
    html_content += "<p> Country: " + request.cf.country + "</p>";
    html_content += "<p> City: " + request.cf.city + "</p>";
    html_content += "<p> Continent: " + request.cf.continent + "</p>";
    html_content += "<p> Latitude: " + request.cf.latitude + "</p>";
    html_content += "<p> Longitude: " + request.cf.longitude + "</p>";
    html_content += "<p> PostalCode: " + request.cf.postalCode + "</p>";
    html_content += "<p> MetroCode: " + request.cf.metroCode + "</p>";
    html_content += "<p> Region: " + request.cf.region + "</p>";
    html_content += "<p> RegionCode: " + request.cf.regionCode + "</p>";
    html_content += "<p> Timezone: " + request.cf.timezone + "</p>";

    let html = `<!DOCTYPE html>
      <head>
        <title>Geolocation: Hello World</title>
      </head>
      <body>
        <style>${html_style}</style>
        <h1>Geolocation: Hello World!</h1>
        <p>You now have access to geolocation data about where your user is visiting from.</p>
        ${html_content}
      </body>`;

    return new Response(html, {
      headers: {
        "content-type": "text/html;charset=UTF-8",
      },
    });
  },
} satisfies ExportedHandler;
```

* Python

```py
from workers import Response

async def on_fetch(request):
    html_content = ""
    html_style = "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f;}"

    html_content += "<p> Colo: " + request.cf.colo + "</p>"
    html_content += "<p> Country: " + request.cf.country + "</p>"
    html_content += "<p> City: " + request.cf.city + "</p>"
    html_content += "<p> Continent: " + request.cf.continent + "</p>"
    html_content += "<p> Latitude: " + request.cf.latitude + "</p>"
    html_content += "<p> Longitude: " + request.cf.longitude + "</p>"
    html_content += "<p> PostalCode: " + request.cf.postalCode + "</p>"
    html_content += "<p> Region: " + request.cf.region + "</p>"
    html_content += "<p> RegionCode: " + request.cf.regionCode + "</p>"
    html_content += "<p> Timezone: " + request.cf.timezone + "</p>"

    html = f"""
    <!DOCTYPE html>
    <head>
      <title>Geolocation: Hello World</title>
    </head>
    <body>
      <style>{html_style}</style>
      <h1>Geolocation: Hello World!</h1>
      <p>You now have access to geolocation data about where your user is visiting from.</p>
      {html_content}
    </body>
    """
    headers = {"content-type": "text/html;charset=UTF-8"}
    return Response(html, headers=headers)
```

* Hono

```ts
import { Hono } from "hono";
import { html } from "hono/html";

// Define the RequestWithCf interface to add Cloudflare-specific properties
interface RequestWithCf extends Request {
  cf: {
    // Cloudflare-specific properties for geolocation
    colo: string;
    country: string;
    city: string;
    continent: string;
    latitude: string;
    longitude: string;
    postalCode: string;
    metroCode: string;
    region: string;
    regionCode: string;
    timezone: string;
    // Add other CF properties as needed
  };
}

const app = new Hono();

app.get("*", (c) => {
  // Cast the raw request to include Cloudflare-specific properties
  const request = c.req.raw as RequestWithCf;

  // Define styles
  const html_style =
    "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f;}";

  // Create content with geolocation data
  let html_content = html`
    <p>Colo: ${request.cf.colo}</p>
    <p>Country: ${request.cf.country}</p>
    <p>City: ${request.cf.city}</p>
    <p>Continent: ${request.cf.continent}</p>
    <p>Latitude: ${request.cf.latitude}</p>
    <p>Longitude: ${request.cf.longitude}</p>
    <p>PostalCode: ${request.cf.postalCode}</p>
    <p>MetroCode: ${request.cf.metroCode}</p>
    <p>Region: ${request.cf.region}</p>
    <p>RegionCode: ${request.cf.regionCode}</p>
    <p>Timezone: ${request.cf.timezone}</p>
  `;

  // Compose the full HTML
  const htmlContent = html`
    <!DOCTYPE html>
    <head>
      <title>Geolocation: Hello World</title>
    </head>
    <body>
      <style>${html_style}</style>
      <h1>Geolocation: Hello World!</h1>
      <p>You now have access to geolocation data about where your user is visiting from.</p>
      ${html_content}
    </body>
  `;

  // Return the HTML response
  return c.html(htmlContent);
});

export default app;
```
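Note: `request.cf` may not be populated in every environment (for example, some local development or test setups). A minimal defensive sketch; the optional chaining and the `"unknown"` fallback below are illustrative additions, not part of the example above:

```js
// Illustrative guard: fall back to a placeholder if request.cf is unavailable,
// for example when the handler runs outside Cloudflare's network.
const colo = request.cf?.colo ?? "unknown";
const country = request.cf?.country ?? "unknown";
```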
--- title: Hot-link protection · Cloudflare Workers docs description: Block other websites from linking to your content. This is useful for protecting images. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Security,Headers source_url: html: https://developers.cloudflare.com/workers/examples/hot-link-protection/ md: https://developers.cloudflare.com/workers/examples/hot-link-protection/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/hot-link-protection) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { const HOMEPAGE_URL = "https://tutorial.cloudflareworkers.com/"; const PROTECTED_TYPE = "image/"; // Fetch the original request const response = await fetch(request); // If it's an image, engage hotlink protection based on the // Referer header. const referer = request.headers.get("Referer"); const contentType = response.headers.get("Content-Type") || ""; if (referer && contentType.startsWith(PROTECTED_TYPE)) { // If the hostnames don't match, it's a hotlink if (new URL(referer).hostname !== new URL(request.url).hostname) { // Redirect the user to your website return Response.redirect(HOMEPAGE_URL, 302); } } // Everything is fine, return the response normally. return response; }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { const HOMEPAGE_URL = "https://tutorial.cloudflareworkers.com/"; const PROTECTED_TYPE = "image/"; // Fetch the original request const response = await fetch(request); // If it's an image, engage hotlink protection based on the // Referer header. const referer = request.headers.get("Referer"); const contentType = response.headers.get("Content-Type") || ""; if (referer && contentType.startsWith(PROTECTED_TYPE)) { // If the hostnames don't match, it's a hotlink if (new URL(referer).hostname !== new URL(request.url).hostname) { // Redirect the user to your website return Response.redirect(HOMEPAGE_URL, 302); } } // Everything is fine, return the response normally. 
return response; }, } satisfies ExportedHandler; ``` * Python ```py from workers import Response, fetch from urllib.parse import urlparse async def on_fetch(request): homepage_url = "https://tutorial.cloudflareworkers.com/" protected_type = "image/" # Fetch the original request response = await fetch(request) # If it's an image, engage hotlink protection based on the referer header referer = request.headers["Referer"] content_type = response.headers["Content-Type"] or "" if referer and content_type.startswith(protected_type): # If the hostnames don't match, it's a hotlink if urlparse(referer).hostname != urlparse(request.url).hostname: # Redirect the user to your website return Response.redirect(homepage_url, 302) # Everything is fine, return the response normally return response ``` * Hono ```ts import { Hono } from 'hono'; const app = new Hono(); // Middleware for hot-link protection app.use('*', async (c, next) => { const HOMEPAGE_URL = "https://tutorial.cloudflareworkers.com/"; const PROTECTED_TYPE = "image/"; // Continue to the next handler to get the response await next(); // If we have a response, check for hotlinking if (c.res) { // If it's an image, engage hotlink protection based on the Referer header const referer = c.req.header("Referer"); const contentType = c.res.headers.get("Content-Type") || ""; if (referer && contentType.startsWith(PROTECTED_TYPE)) { // If the hostnames don't match, it's a hotlink if (new URL(referer).hostname !== new URL(c.req.url).hostname) { // Redirect the user to your website c.res = c.redirect(HOMEPAGE_URL, 302); } } } }); // Default route handler that passes through the request to the origin app.all('*', async (c) => { // Fetch the original request return fetch(c.req.raw); }); export default app; ``` --- title: Custom Domain with Images · Cloudflare Workers docs description: Set up custom domain for Images using a Worker or serve images using a prefix path and Cloudflare registered domain. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/examples/images-workers/ md: https://developers.cloudflare.com/workers/examples/images-workers/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/images-workers) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. To serve images from a custom domain: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com). 2. Select your account > select **Workers & Pages**. 3. Select **Create application** > **Workers** > **Create Worker** and create your Worker. 4. In your Worker, select **Quick edit** and paste the following code. 
* JavaScript

```js
export default {
  async fetch(request) {
    // You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
    const accountHash = "";
    const { pathname } = new URL(request.url);

    // A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
    // will fetch "https://imagedelivery.net/<ACCOUNT_HASH>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
    return fetch(`https://imagedelivery.net/${accountHash}${pathname}`);
  },
};
```

* TypeScript

```ts
export default {
  async fetch(request): Promise<Response> {
    // You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
    const accountHash = "";
    const { pathname } = new URL(request.url);

    // A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
    // will fetch "https://imagedelivery.net/<ACCOUNT_HASH>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
    return fetch(`https://imagedelivery.net/${accountHash}${pathname}`);
  },
} satisfies ExportedHandler;
```

* Hono

```ts
import { Hono } from 'hono';

interface Env {
  // You can store your account hash as a binding variable
  ACCOUNT_HASH?: string;
}

const app = new Hono<{ Bindings: Env }>();

app.get('*', async (c) => {
  // You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
  // Either get it from the environment or hardcode it here
  const accountHash = c.env.ACCOUNT_HASH || "";
  const url = new URL(c.req.url);

  // A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
  // will fetch "https://imagedelivery.net/<ACCOUNT_HASH>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
  return fetch(`https://imagedelivery.net/${accountHash}${url.pathname}`);
});

export default app;
```

* Python

```py
from js import URL, fetch

async def on_fetch(request):
    # You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
    account_hash = ""
    url = URL.new(request.url)

    # A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
    # will fetch "https://imagedelivery.net/<ACCOUNT_HASH>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
    return fetch(f'https://imagedelivery.net/{account_hash}{url.pathname}')
```

Another way you can serve images from a custom domain is by using the `cdn-cgi/imagedelivery` prefix path, which triggers the `cdn-cgi` image proxy. Below is an example showing the hostname as a Cloudflare proxied domain under the same account as the image, followed by the prefix path and the image `<ACCOUNT_HASH>`, `<IMAGE_ID>` and `<VARIANT_NAME>`, which can be found in **Images** on the Cloudflare dashboard.

```js
https://example.com/cdn-cgi/imagedelivery/<ACCOUNT_HASH>/<IMAGE_ID>/<VARIANT_NAME>
```

--- title: Logging headers to console · Cloudflare Workers docs description: Examine the contents of a Headers object by logging to console with a Map. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Debugging,Headers source_url: html: https://developers.cloudflare.com/workers/examples/logging-headers/ md: https://developers.cloudflare.com/workers/examples/logging-headers/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/logging-headers) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.
* JavaScript ```js export default { async fetch(request) { console.log(new Map(request.headers)); return new Response("Hello world"); }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { console.log(new Map(request.headers)); return new Response("Hello world"); }, } satisfies ExportedHandler; ``` * Python ```py from workers import Response async def on_fetch(request): print(dict(request.headers)) return Response('Hello world') ``` * Rust ```rs use worker::*; #[event(fetch)] async fn fetch(req: HttpRequest, \_env: Env, \_ctx: Context) -> Result { console_log!("{:?}", req.headers()); Response::ok("hello world") } ``` * Hono ```ts import { Hono } from 'hono'; const app = new Hono(); app.get('*', (c) => { // Different ways to log headers in Hono: // 1. Using Map to display headers in console console.log('Headers as Map:', new Map(c.req.raw.headers)); // 2. Using spread operator to log headers console.log('Headers spread:', [...c.req.raw.headers]); // 3. Using Object.fromEntries to convert to an object console.log('Headers as Object:', Object.fromEntries(c.req.raw.headers)); // 4. Hono's built-in header accessor (for individual headers) console.log('User-Agent:', c.req.header('User-Agent')); // 5. Using c.req.headers to get all headers console.log('All headers from Hono context:', c.req.header()); return c.text('Hello world'); }); export default app; ``` *** ## Console-logging headers Use a `Map` if you need to log a `Headers` object to the console: ```js console.log(new Map(request.headers)); ``` Use the `spread` operator if you need to quickly stringify a `Headers` object: ```js let requestHeaders = JSON.stringify([...request.headers]); ``` Use `Object.fromEntries` to convert the headers to an object: ```js let requestHeaders = Object.fromEntries(request.headers); ``` ### The problem When debugging Workers, examine the headers on a request or response. A common mistake is to try to log headers to the developer console via code like this: ```js console.log(request.headers); ``` Or this: ```js console.log(`Request headers: ${JSON.stringify(request.headers)}`); ``` Both attempts result in what appears to be an empty object — the string `"{}"` — even though calling `request.headers.has("Your-Header-Name")` might return true. This is the same behavior that browsers implement. The reason this happens is because [Headers](https://developer.mozilla.org/en-US/docs/Web/API/Headers) objects do not store headers in enumerable JavaScript properties, so the developer console and JSON stringifier do not know how to read the names and values of the headers. It is not actually an empty object, but rather an opaque object. `Headers` objects are iterable, which you can take advantage of to develop a couple of quick one-liners for debug-printing headers. ### Pass headers through a Map The first common idiom for making Headers `console.log()`-friendly is to construct a `Map` object from the `Headers` object and log the `Map` object. ```js console.log(new Map(request.headers)); ``` This works because: * `Map` objects can be constructed from iterables, like `Headers`. * The `Map` object does store its entries in enumerable JavaScript properties, so the developer console can see into it. ### Spread headers into an array The `Map` approach works for calls to `console.log()`. If you need to stringify your headers, you will discover that stringifying a `Map` yields nothing more than `[object Map]`. 
Even though a `Map` stores its data in enumerable properties, those properties are [Symbol](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol)-keyed. Because of this, `JSON.stringify()` will [ignore Symbol-keyed properties](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol#symbols_and_json.stringify) and you will receive an empty `{}`. Instead, you can take advantage of the iterability of the `Headers` object in a new way by applying the [spread operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) (`...`) to it. ```js let requestHeaders = JSON.stringify([...request.headers], null, 2); console.log(`Request headers: ${requestHeaders}`); ``` ### Convert headers into an object with Object.fromEntries (ES2019) ES2019 provides [`Object.fromEntries`](https://github.com/tc39/proposal-object-from-entries) which is a call to convert the headers into an object: ```js let headersObject = Object.fromEntries(request.headers); let requestHeaders = JSON.stringify(headersObject, null, 2); console.log(`Request headers: ${requestHeaders}`); ``` This results in something like: ```js Request headers: { "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8", "accept-encoding": "gzip", "accept-language": "en-US,en;q=0.9", "cf-ipcountry": "US", // ... }" ``` --- title: Modify request property · Cloudflare Workers docs description: Create a modified request with edited properties based off of an incoming request. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Middleware,Headers source_url: html: https://developers.cloudflare.com/workers/examples/modify-request-property/ md: https://developers.cloudflare.com/workers/examples/modify-request-property/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/modify-request-property) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { /** * Example someHost is set up to return raw JSON * @param {string} someUrl the URL to send the request to, since we are setting hostname too only path is applied * @param {string} someHost the host the request will resolve too */ const someHost = "example.com"; const someUrl = "https://foo.example.com/api.js"; /** * The best practice is to only assign new RequestInit properties * on the request object using either a method or the constructor */ const newRequestInit = { // Change method method: "POST", // Change body body: JSON.stringify({ bar: "foo" }), // Change the redirect mode. redirect: "follow", // Change headers, note this method will erase existing headers headers: { "Content-Type": "application/json", }, // Change a Cloudflare feature on the outbound response cf: { apps: false }, }; // Change just the host const url = new URL(someUrl); url.hostname = someHost; // Best practice is to always use the original request to construct the new request // to clone all the attributes. Applying the URL also requires a constructor // since once a Request has been constructed, its URL is immutable. 
const newRequest = new Request( url.toString(), new Request(request, newRequestInit), ); // Set headers using method newRequest.headers.set("X-Example", "bar"); newRequest.headers.set("Content-Type", "application/json"); try { return await fetch(newRequest); } catch (e) { return new Response(JSON.stringify({ error: e.message }), { status: 500, }); } }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { /** * Example someHost is set up to return raw JSON * @param {string} someUrl the URL to send the request to, since we are setting hostname too only path is applied * @param {string} someHost the host the request will resolve too */ const someHost = "example.com"; const someUrl = "https://foo.example.com/api.js"; /** * The best practice is to only assign new RequestInit properties * on the request object using either a method or the constructor */ const newRequestInit = { // Change method method: "POST", // Change body body: JSON.stringify({ bar: "foo" }), // Change the redirect mode. redirect: "follow", // Change headers, note this method will erase existing headers headers: { "Content-Type": "application/json", }, // Change a Cloudflare feature on the outbound response cf: { apps: false }, }; // Change just the host const url = new URL(someUrl); url.hostname = someHost; // Best practice is to always use the original request to construct the new request // to clone all the attributes. Applying the URL also requires a constructor // since once a Request has been constructed, its URL is immutable. const newRequest = new Request( url.toString(), new Request(request, newRequestInit), ); // Set headers using method newRequest.headers.set("X-Example", "bar"); newRequest.headers.set("Content-Type", "application/json"); try { return await fetch(newRequest); } catch (e) { return new Response(JSON.stringify({ error: e.message }), { status: 500, }); } }, } satisfies ExportedHandler; ``` * Python ```py import json from pyodide.ffi import to_js as _to_js from js import Object, URL, Request, fetch, Response def to_js(obj): return _to_js(obj, dict_converter=Object.fromEntries) async def on_fetch(request): some_host = "example.com" some_url = "https://foo.example.com/api.js" # The best practice is to only assign new_request_init properties # on the request object using either a method or the constructor new_request_init = { "method": "POST", # Change method "body": json.dumps({ "bar": "foo" }), # Change body "redirect": "follow", # Change the redirect mode # Change headers, note this method will erase existing headers "headers": { "Content-Type": "application/json", }, # Change a Cloudflare feature on the outbound response "cf": { "apps": False }, } # Change just the host url = URL.new(some_url) url.hostname = some_host # Best practice is to always use the original request to construct the new request # to clone all the attributes. Applying the URL also requires a constructor # since once a Request has been constructed, its URL is immutable. 
org_request = Request.new(request, new_request_init) new_request = Request.new(url.toString(),org_request) new_request.headers["X-Example"] = "bar" new_request.headers["Content-Type"] = "application/json" try: return await fetch(new_request) except Exception as e: return Response.new({"error": str(e)}, status=500) ``` * Hono ```ts import { Hono } from "hono"; const app = new Hono(); app.all("*", async (c) => { /** * Example someHost is set up to return raw JSON */ const someHost = "example.com"; const someUrl = "https://foo.example.com/api.js"; // Create a URL object to modify the hostname const url = new URL(someUrl); url.hostname = someHost; // Create a new request // First create a clone of the original request with the new properties const requestClone = new Request(c.req.raw, { // Change method method: "POST", // Change body body: JSON.stringify({ bar: "foo" }), // Change the redirect mode redirect: "follow" as RequestRedirect, // Change headers, note this method will erase existing headers headers: { "Content-Type": "application/json", "X-Example": "bar", }, // Change a Cloudflare feature on the outbound response cf: { apps: false }, }); // Then create a new request with the modified URL const newRequest = new Request(url.toString(), requestClone); // Send the modified request const response = await fetch(newRequest); // Return the response return response; }); // Handle errors app.onError((err, c) => { return err.getResponse(); }); export default app; ``` --- title: Modify response · Cloudflare Workers docs description: Fetch and modify response properties which are immutable by creating a copy first. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Middleware,Headers source_url: html: https://developers.cloudflare.com/workers/examples/modify-response/ md: https://developers.cloudflare.com/workers/examples/modify-response/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/modify-response) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { /** * @param {string} headerNameSrc Header to get the new value from * @param {string} headerNameDst Header to set based off of value in src */ const headerNameSrc = "foo"; //"Orig-Header" const headerNameDst = "Last-Modified"; /** * Response properties are immutable. To change them, construct a new * Response and pass modified status or statusText in the ResponseInit * object. Response headers can be modified through the headers `set` method. 
*/ const originalResponse = await fetch(request); // Change status and statusText, but preserve body and headers let response = new Response(originalResponse.body, { status: 500, statusText: "some message", headers: originalResponse.headers, }); // Change response body by adding the foo prop const originalBody = await originalResponse.json(); const body = JSON.stringify({ foo: "bar", ...originalBody }); response = new Response(body, response); // Add a header using set method response.headers.set("foo", "bar"); // Set destination header to the value of the source header const src = response.headers.get(headerNameSrc); if (src != null) { response.headers.set(headerNameDst, src); console.log( `Response header "${headerNameDst}" was set to "${response.headers.get( headerNameDst, )}"`, ); } return response; }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { /** * @param {string} headerNameSrc Header to get the new value from * @param {string} headerNameDst Header to set based off of value in src */ const headerNameSrc = "foo"; //"Orig-Header" const headerNameDst = "Last-Modified"; /** * Response properties are immutable. To change them, construct a new * Response and pass modified status or statusText in the ResponseInit * object. Response headers can be modified through the headers `set` method. */ const originalResponse = await fetch(request); // Change status and statusText, but preserve body and headers let response = new Response(originalResponse.body, { status: 500, statusText: "some message", headers: originalResponse.headers, }); // Change response body by adding the foo prop const originalBody = await originalResponse.json(); const body = JSON.stringify({ foo: "bar", ...originalBody }); response = new Response(body, response); // Add a header using set method response.headers.set("foo", "bar"); // Set destination header to the value of the source header const src = response.headers.get(headerNameSrc); if (src != null) { response.headers.set(headerNameDst, src); console.log( `Response header "${headerNameDst}" was set to "${response.headers.get( headerNameDst, )}"`, ); } return response; }, } satisfies ExportedHandler; ``` * Python ```py from workers import Response, fetch import json async def on_fetch(request): header_name_src = "foo" # Header to get the new value from header_name_dst = "Last-Modified" # Header to set based off of value in src # Response properties are immutable. To change them, construct a new response original_response = await fetch(request) # Change status and statusText, but preserve body and headers response = Response(original_response.body, status=500, status_text="some message", headers=original_response.headers) # Change response body by adding the foo prop new_body = await original_response.json() new_body["foo"] = "bar" response.replace_body(json.dumps(new_body)) # Add a new header response.headers["foo"] = "bar" # Set destination header to the value of the source header src = response.headers[header_name_src] if src is not None: response.headers[header_name_dst] = src print(f'Response header {header_name_dst} was set to {response.headers[header_name_dst]}') return response ``` * Hono ```ts import { Hono } from 'hono'; const app = new Hono(); app.get('*', async (c) => { /** * Header configuration */ const headerNameSrc = "foo"; // Header to get the new value from const headerNameDst = "Last-Modified"; // Header to set based off of value in src /** * Response properties are immutable. 
With Hono, we can modify the response * by creating custom response objects. */ const originalResponse = await fetch(c.req.raw); // Get the JSON body from the original response const originalBody = await originalResponse.json(); // Modify the body by adding a new property const modifiedBody = { foo: "bar", ...originalBody }; // Create a new custom response with modified status, headers, and body const response = new Response(JSON.stringify(modifiedBody), { status: 500, statusText: "some message", headers: originalResponse.headers, }); // Add a header using set method response.headers.set("foo", "bar"); // Set destination header to the value of the source header const src = response.headers.get(headerNameSrc); if (src != null) { response.headers.set(headerNameDst, src); console.log( `Response header "${headerNameDst}" was set to "${response.headers.get(headerNameDst)}"` ); } return response; }); export default app; ``` --- title: Multiple Cron Triggers · Cloudflare Workers docs description: Set multiple Cron Triggers on three different schedules. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Middleware source_url: html: https://developers.cloudflare.com/workers/examples/multiple-cron-triggers/ md: https://developers.cloudflare.com/workers/examples/multiple-cron-triggers/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/multiple-cron-triggers) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async scheduled(event, env, ctx) { // Write code for updating your API switch (event.cron) { case "*/3 * * * *": // Every three minutes await updateAPI(); break; case "*/10 * * * *": // Every ten minutes await updateAPI2(); break; case "*/45 * * * *": // Every forty-five minutes await updateAPI3(); break; } console.log("cron processed"); }, }; ``` * TypeScript ```ts interface Env {} export default { async scheduled( controller: ScheduledController, env: Env, ctx: ExecutionContext, ) { // Write code for updating your API switch (controller.cron) { case "*/3 * * * *": // Every three minutes await updateAPI(); break; case "*/10 * * * *": // Every ten minutes await updateAPI2(); break; case "*/45 * * * *": // Every forty-five minutes await updateAPI3(); break; } console.log("cron processed"); }, }; ``` * Hono ```ts import { Hono } from "hono"; interface Env {} // Create Hono app const app = new Hono<{ Bindings: Env }>(); // Regular routes for normal HTTP requests app.get("/", (c) => c.text("Multiple Cron Trigger Example")); // Export both the app and a scheduled function export default { // The Hono app handles regular HTTP requests fetch: app.fetch, // The scheduled function handles Cron triggers async scheduled( controller: ScheduledController, env: Env, ctx: ExecutionContext, ) { // Check which cron schedule triggered this execution switch (controller.cron) { case "*/3 * * * *": // Every three minutes await updateAPI(); break; case "*/10 * * * *": // Every ten minutes await updateAPI2(); break; case "*/45 * * * *": // Every forty-five minutes await updateAPI3(); break; } console.log("cron processed"); }, }; ``` ## Test Cron Triggers using Wrangler The recommended way of testing Cron Triggers is using Wrangler. 
Cron Triggers can be tested using Wrangler by passing in the `--test-scheduled` flag to [`wrangler dev`](https://developers.cloudflare.com/workers/wrangler/commands/#dev). This will expose a `/__scheduled` (or `/cdn-cgi/handler/scheduled` for Python Workers) route which can be used to test using a HTTP request. To simulate different cron patterns, a `cron` query parameter can be passed in. ```sh npx wrangler dev --test-scheduled curl "http://localhost:8787/__scheduled?cron=*%2F3+*+*+*+*" curl "http://localhost:8787/cdn-cgi/handler/scheduled?cron=*+*+*+*+*" # Python Workers ``` --- title: Stream OpenAI API Responses · Cloudflare Workers docs description: Use the OpenAI v4 SDK to stream responses from OpenAI. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: AI source_url: html: https://developers.cloudflare.com/workers/examples/openai-sdk-streaming/ md: https://developers.cloudflare.com/workers/examples/openai-sdk-streaming/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/openai-sdk-streaming) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. In order to run this code, you must install the OpenAI SDK by running `npm i openai`. Note For analytics, caching, rate limiting, and more, you can also send requests like this through Cloudflare's [AI Gateway](https://developers.cloudflare.com/ai-gateway/providers/openai/). * TypeScript ```ts import OpenAI from "openai"; export default { async fetch(request, env, ctx): Promise { const openai = new OpenAI({ apiKey: env.OPENAI_API_KEY, }); // Create a TransformStream to handle streaming data let { readable, writable } = new TransformStream(); let writer = writable.getWriter(); const textEncoder = new TextEncoder(); ctx.waitUntil( (async () => { const stream = await openai.chat.completions.create({ model: "gpt-4o-mini", messages: [{ role: "user", content: "Tell me a story" }], stream: true, }); // loop over the data as it is streamed and write to the writeable for await (const part of stream) { writer.write( textEncoder.encode(part.choices[0]?.delta?.content || ""), ); } writer.close(); })(), ); // Send the readable back to the browser return new Response(readable); }, } satisfies ExportedHandler; ``` * Hono ```ts import { Hono } from "hono"; import { streamText } from "hono/streaming"; import OpenAI from "openai"; interface Env { OPENAI_API_KEY: string; } const app = new Hono<{ Bindings: Env }>(); app.get("*", async (c) => { const openai = new OpenAI({ apiKey: c.env.OPENAI_API_KEY, }); const chatStream = await openai.chat.completions.create({ model: "gpt-4o-mini", messages: [{ role: "user", content: "Tell me a story" }], stream: true, }); return streamText(c, async (stream) => { for await (const message of chatStream) { await stream.write(message.choices[0].delta.content || ""); } stream.close(); }); }); export default app; ``` --- title: Post JSON · Cloudflare Workers docs description: Send a POST request with JSON data. Use to share data with external servers. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: JSON source_url: html: https://developers.cloudflare.com/workers/examples/post-json/ md: https://developers.cloudflare.com/workers/examples/post-json/index.md --- If you want to get started quickly, click on the button below. 
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/post-json) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { /** * Example someHost is set up to take in a JSON request * Replace url with the host you wish to send requests to * @param {string} url the URL to send the request to * @param {BodyInit} body the JSON data to send in the request */ const someHost = "https://examples.cloudflareworkers.com/demos"; const url = someHost + "/requests/json"; const body = { results: ["default data to send"], errors: null, msg: "I sent this to the fetch", }; /** * gatherResponse awaits and returns a response body as a string. * Use await gatherResponse(..) in an async function to get the response body * @param {Response} response */ async function gatherResponse(response) { const { headers } = response; const contentType = headers.get("content-type") || ""; if (contentType.includes("application/json")) { return JSON.stringify(await response.json()); } else if (contentType.includes("application/text")) { return response.text(); } else if (contentType.includes("text/html")) { return response.text(); } else { return response.text(); } } const init = { body: JSON.stringify(body), method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, }; const response = await fetch(url, init); const results = await gatherResponse(response); return new Response(results, init); }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { /** * Example someHost is set up to take in a JSON request * Replace url with the host you wish to send requests to * @param {string} url the URL to send the request to * @param {BodyInit} body the JSON data to send in the request */ const someHost = "https://examples.cloudflareworkers.com/demos"; const url = someHost + "/requests/json"; const body = { results: ["default data to send"], errors: null, msg: "I sent this to the fetch", }; /** * gatherResponse awaits and returns a response body as a string. * Use await gatherResponse(..) 
in an async function to get the response body * @param {Response} response */ async function gatherResponse(response) { const { headers } = response; const contentType = headers.get("content-type") || ""; if (contentType.includes("application/json")) { return JSON.stringify(await response.json()); } else if (contentType.includes("application/text")) { return response.text(); } else if (contentType.includes("text/html")) { return response.text(); } else { return response.text(); } } const init = { body: JSON.stringify(body), method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, }; const response = await fetch(url, init); const results = await gatherResponse(response); return new Response(results, init); }, } satisfies ExportedHandler; ``` * Python ```py import json from pyodide.ffi import to_js as _to_js from js import Object, fetch, Response, Headers def to_js(obj): return _to_js(obj, dict_converter=Object.fromEntries) # gather_response returns both content-type & response body as a string async def gather_response(response): headers = response.headers content_type = headers["content-type"] or "" if "application/json" in content_type: return (content_type, json.dumps(dict(await response.json()))) return (content_type, await response.text()) async def on_fetch(_request): url = "https://jsonplaceholder.typicode.com/todos/1" body = { "results": ["default data to send"], "errors": None, "msg": "I sent this to the fetch", } options = { "body": json.dumps(body), "method": "POST", "headers": { "content-type": "application/json;charset=UTF-8", }, } response = await fetch(url, to_js(options)) content_type, result = await gather_response(response) headers = Headers.new({"content-type": content_type}.items()) return Response.new(result, headers=headers) ``` * Hono ```ts import { Hono } from 'hono'; const app = new Hono(); app.get('*', async (c) => { /** * Example someHost is set up to take in a JSON request * Replace url with the host you wish to send requests to */ const someHost = "https://examples.cloudflareworkers.com/demos"; const url = someHost + "/requests/json"; const body = { results: ["default data to send"], errors: null, msg: "I sent this to the fetch", }; /** * gatherResponse awaits and returns a response body as a string. * Use await gatherResponse(..) in an async function to get the response body */ async function gatherResponse(response: Response) { const { headers } = response; const contentType = headers.get("content-type") || ""; if (contentType.includes("application/json")) { return { contentType, result: JSON.stringify(await response.json()) }; } else if (contentType.includes("application/text")) { return { contentType, result: await response.text() }; } else if (contentType.includes("text/html")) { return { contentType, result: await response.text() }; } else { return { contentType, result: await response.text() }; } } const init = { body: JSON.stringify(body), method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, }; const response = await fetch(url, init); const { contentType, result } = await gatherResponse(response); return new Response(result, { headers: { "content-type": contentType, }, }); }); export default app; ``` --- title: Using timingSafeEqual · Cloudflare Workers docs description: Protect against timing attacks by safely comparing values using `timingSafeEqual`. 
lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Security,Web Crypto source_url: html: https://developers.cloudflare.com/workers/examples/protect-against-timing-attacks/ md: https://developers.cloudflare.com/workers/examples/protect-against-timing-attacks/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/protect-against-timing-attacks) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. The [`crypto.subtle.timingSafeEqual`](https://developers.cloudflare.com/workers/runtime-apis/web-crypto/#timingsafeequal) function compares two values using a constant-time algorithm. The time taken is independent of the contents of the values. When strings are compared using the equality operator (`==` or `===`), the comparison will end at the first mismatched character. By using `timingSafeEqual`, an attacker would not be able to use timing to find where at which point in the two strings there is a difference. The `timingSafeEqual` function takes two `ArrayBuffer` or `TypedArray` values to compare. These buffers must be of equal length, otherwise an exception is thrown. Note that this function is not constant time with respect to the length of the parameters and also does not guarantee constant time for the surrounding code. Handling of secrets should be taken with care to not introduce timing side channels. In order to compare two strings, you must use the [`TextEncoder`](https://developers.cloudflare.com/workers/runtime-apis/encoding/#textencoder) API. * TypeScript ```ts interface Environment { MY_SECRET_VALUE?: string; } export default { async fetch(req: Request, env: Environment) { if (!env.MY_SECRET_VALUE) { return new Response("Missing secret binding", { status: 500 }); } const authToken = req.headers.get("Authorization") || ""; if (authToken.length !== env.MY_SECRET_VALUE.length) { return new Response("Unauthorized", { status: 401 }); } const encoder = new TextEncoder(); const a = encoder.encode(authToken); const b = encoder.encode(env.MY_SECRET_VALUE); if (a.byteLength !== b.byteLength) { return new Response("Unauthorized", { status: 401 }); } if (!crypto.subtle.timingSafeEqual(a, b)) { return new Response("Unauthorized", { status: 401 }); } return new Response("Welcome!"); }, }; ``` * Python ```py from workers import Response from js import TextEncoder, crypto async def on_fetch(request, env): auth_token = request.headers["Authorization"] or "" secret = env.MY_SECRET_VALUE if secret is None: return Response("Missing secret binding", status=500) if len(auth_token) != len(secret): return Response("Unauthorized", status=401) encoder = TextEncoder.new() a = encoder.encode(auth_token) b = encoder.encode(secret) if a.byteLength != b.byteLength: return Response("Unauthorized", status=401) if not crypto.subtle.timingSafeEqual(a, b): return Response("Unauthorized", status=401) return Response("Welcome!") ``` * Hono ```ts import { Hono } from 'hono'; interface Environment { Bindings: { MY_SECRET_VALUE?: string; } } const app = new Hono(); // Middleware to handle authentication with timing-safe comparison app.use('*', async (c, next) => { const secret = c.env.MY_SECRET_VALUE; if (!secret) { return c.text("Missing secret binding", 500); } const authToken = c.req.header("Authorization") || ""; // Early length check to avoid unnecessary 
processing if (authToken.length !== secret.length) { return c.text("Unauthorized", 401); } const encoder = new TextEncoder(); const a = encoder.encode(authToken); const b = encoder.encode(secret); if (a.byteLength !== b.byteLength) { return c.text("Unauthorized", 401); } // Perform timing-safe comparison if (!crypto.subtle.timingSafeEqual(a, b)) { return c.text("Unauthorized", 401); } // If we got here, the auth token is valid await next(); }); // Protected route app.get('*', (c) => { return c.text("Welcome!"); }); export default app; ``` --- title: Read POST · Cloudflare Workers docs description: Serve an HTML form, then read POST requests. Use also to read JSON or POST data from an incoming request. lastUpdated: 2025-04-28T16:08:27.000Z chatbotDeprioritize: false tags: JSON source_url: html: https://developers.cloudflare.com/workers/examples/read-post/ md: https://developers.cloudflare.com/workers/examples/read-post/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/read-post) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { /** * rawHtmlResponse returns HTML inputted directly * into the worker script * @param {string} html */ function rawHtmlResponse(html) { return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); } /** * readRequestBody reads in the incoming request body * Use await readRequestBody(..) in an async function to get the string * @param {Request} request the incoming request to read from */ async function readRequestBody(request) { const contentType = request.headers.get("content-type"); if (contentType.includes("application/json")) { return JSON.stringify(await request.json()); } else if (contentType.includes("application/text")) { return request.text(); } else if (contentType.includes("text/html")) { return request.text(); } else if (contentType.includes("form")) { const formData = await request.formData(); const body = {}; for (const entry of formData.entries()) { body[entry[0]] = entry[1]; } return JSON.stringify(body); } else { // Perhaps some other type of data was submitted in the form // like an image, or some other binary data. return "a file"; } } const { url } = request; if (url.includes("form")) { return rawHtmlResponse(someForm); } if (request.method === "POST") { const reqBody = await readRequestBody(request); const retBody = `The request body sent in was ${reqBody}`; return new Response(retBody); } else if (request.method === "GET") { return new Response("The request was a GET"); } }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { /** * rawHtmlResponse returns HTML inputted directly * into the worker script * @param {string} html */ function rawHtmlResponse(html) { return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); } /** * readRequestBody reads in the incoming request body * Use await readRequestBody(..) 
in an async function to get the string * @param {Request} request the incoming request to read from */ async function readRequestBody(request: Request) { const contentType = request.headers.get("content-type"); if (contentType.includes("application/json")) { return JSON.stringify(await request.json()); } else if (contentType.includes("application/text")) { return request.text(); } else if (contentType.includes("text/html")) { return request.text(); } else if (contentType.includes("form")) { const formData = await request.formData(); const body = {}; for (const entry of formData.entries()) { body[entry[0]] = entry[1]; } return JSON.stringify(body); } else { // Perhaps some other type of data was submitted in the form // like an image, or some other binary data. return "a file"; } } const { url } = request; if (url.includes("form")) { return rawHtmlResponse(someForm); } if (request.method === "POST") { const reqBody = await readRequestBody(request); const retBody = `The request body sent in was ${reqBody}`; return new Response(retBody); } else if (request.method === "GET") { return new Response("The request was a GET"); } }, } satisfies ExportedHandler; ``` * Python ```py from js import Object, Response, Headers, JSON async def read_request_body(request): headers = request.headers content_type = headers["content-type"] or "" if "application/json" in content_type: return JSON.stringify(await request.json()) if "form" in content_type: form = await request.formData() data = Object.fromEntries(form.entries()) return JSON.stringify(data) return await request.text() async def on_fetch(request): def raw_html_response(html): headers = Headers.new({"content-type": "text/html;charset=UTF-8"}.items()) return Response.new(html, headers=headers) if "form" in request.url: return raw_html_response("") if "POST" in request.method: req_body = await read_request_body(request) ret_body = f"The request body sent in was {req_body}" return Response.new(ret_body) return Response.new("The request was not POST") ``` * Rust ```rs use serde::{Deserialize, Serialize}; use worker::*; fn raw_html_response(html: &str) -> Result { Response::from_html(html) } #[derive(Deserialize, Serialize, Debug)] struct Payload { msg: String, } async fn read_request_body(mut req: Request) -> String { let ctype = req.headers().get("content-type").unwrap().unwrap(); match ctype.as_str() { "application/json" => format!("{:?}", req.json::().await.unwrap()), "text/html" => req.text().await.unwrap(), "multipart/form-data" => format!("{:?}", req.form_data().await.unwrap()), _ => String::from("a file"), } } #[event(fetch)] async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result { if String::from(req.url()?).contains("form") { return raw_html_response("some html form"); } match req.method() { Method::Post => { let req_body = read_request_body(req).await; Response::ok(format!("The request body sent in was {}", req_body)) } _ => Response::ok(format!("The result was a {:?}", req.method())), } } ``` * Hono ```ts import { Hono } from "hono"; import { html } from "hono/html"; const app = new Hono(); /** * readRequestBody reads in the incoming request body * @param {Request} request the incoming request to read from */ async function readRequestBody(request: Request): Promise { const contentType = request.headers.get("content-type") || ""; if (contentType.includes("application/json")) { const body = await request.json(); return JSON.stringify(body); } else if (contentType.includes("application/text")) { return request.text(); } else if 
(contentType.includes("text/html")) { return request.text(); } else if (contentType.includes("form")) { const formData = await request.formData(); const body: Record = {}; for (const [key, value] of formData.entries()) { body[key] = value.toString(); } return JSON.stringify(body); } else { // Perhaps some other type of data was submitted in the form // like an image, or some other binary data. return "a file"; } } const someForm = html`
`; app.get("*", async (c) => { const url = c.req.url; if (url.includes("form")) { return c.html(someForm); } return c.text("The request was a GET"); }); app.post("*", async (c) => { const reqBody = await readRequestBody(c.req.raw); const retBody = `The request body sent in was ${reqBody}`; return c.text(retBody); }); export default app; ``` Prevent potential errors when accessing request.body The body of a [Request](https://developer.mozilla.org/en-US/docs/Web/API/Request) can only be accessed once. If you previously used `request.formData()` in the same request, you may encounter a TypeError when attempting to access `request.body`. To avoid errors, create a clone of the Request object with `request.clone()` for each subsequent attempt to access a Request's body. Keep in mind that Workers have a [memory limit of 128 MB per Worker](https://developers.cloudflare.com/workers/platform/limits#worker-limits) and loading particularly large files into a Worker's memory multiple times may reach this limit. To ensure memory usage does not reach this limit, consider using [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/).
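As a minimal sketch of that advice (the handler below is illustrative and assumes a JSON request body), clone the request before the first read so that each consumer gets its own body stream:

```js
export default {
  async fetch(request) {
    // A body stream can only be consumed once, so clone before the first read.
    const clone = request.clone();
    const rawText = await clone.text(); // consumes the clone's body
    const parsed = await request.json(); // the original body is still unread; assumes JSON
    return Response.json({ bytes: rawText.length, parsed });
  },
};
```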
--- title: Redirect · Cloudflare Workers docs description: Redirect requests from one URL to another or from one set of URLs to another set. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Middleware,Redirects source_url: html: https://developers.cloudflare.com/workers/examples/redirect/ md: https://developers.cloudflare.com/workers/examples/redirect/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/redirect) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. ## Redirect all requests to one URL * JavaScript ```js export default { async fetch(request) { const destinationURL = "https://example.com"; const statusCode = 301; return Response.redirect(destinationURL, statusCode); }, }; ``` [Run Worker in Playground](https://workers.cloudflare.com/playground#LYVwNgLglgDghgJwgegGYHsHALQBM4RwDcABAEbogB2+CAngLzbPYZb6HbW5QDGU2AAwB2AMzCAHIIAsAJgmzp0gFwsWbYBzhcafASPFS5CpQFgAUAGF0VCAFNb2ACJQAzjHSuo0G8pIa8AmISKjhgOwYAIigaOwAPADoAK1dI0lQoMAcwiOjYxJTIi2tbBwhsABU6GDs-OBgYMD4CKBtkJLgANzhXXgRYCABqYHRccDsLC3iPJBJcO1Q4cAgSAG9zEhIeuipefzsIXgALAAoEOwBHEDtXCABKNY3Nkl4bW7mb6FCfKgBVACUADIkBgkSJHCAQGCuZTIZDxMKNOwJV7ANJPTavKjvW4EECuazzEEkUSCACMRAxJHOEBACCoJH+Nw82OR5x4514EBO81uMRaNgBgIANCRcbSCaM7HdKZsAL7C8xyogWNTMDRaHQ8fhCMSSGTyRTSYo2eyOFzuTzeVpUPwBLSkULhKLhQhaNL+TLZZ2RMhgdBkIpWU1lSrVWpbBpNXgCqjtVw2SbmVaRYBwGIAfRGYyykWUeXmBVSctVao1QS1el1hgNJmkzCAA) * TypeScript ```ts export default { async fetch(request): Promise { const destinationURL = "https://example.com"; const statusCode = 301; return Response.redirect(destinationURL, statusCode); }, } satisfies ExportedHandler; ``` * Python ```py from workers import Response def on_fetch(request): destinationURL = "https://example.com" statusCode = 301 return Response.redirect(destinationURL, statusCode) ``` * Rust ```rs use worker::*; #[event(fetch)] async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result { let destination_url = Url::parse("https://example.com")?; let status_code = 301; Response::redirect_with_status(destination_url, status_code) } ``` * Hono ```ts import { Hono } from 'hono'; const app = new Hono(); app.all('*', (c) => { const destinationURL = "https://example.com"; const statusCode = 301; return c.redirect(destinationURL, statusCode); }); export default app; ``` ## Redirect requests from one domain to another * JavaScript ```js export default { async fetch(request) { const base = "https://example.com"; const statusCode = 301; const url = new URL(request.url); const { pathname, search } = url; const destinationURL = `${base}${pathname}${search}`; console.log(destinationURL); return Response.redirect(destinationURL, statusCode); }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { const base = "https://example.com"; const statusCode = 301; const url = new URL(request.url); const { pathname, search } = url; const destinationURL = `${base}${pathname}${search}`; console.log(destinationURL); return Response.redirect(destinationURL, statusCode); }, } satisfies ExportedHandler; ``` * Python ```py from workers import Response from urllib.parse import urlparse async def on_fetch(request): base = "https://example.com" statusCode = 301 url = urlparse(request.url) destinationURL = f'{base}{url.path}{url.query}' print(destinationURL) return 
Response.redirect(destinationURL, statusCode) ``` * Rust ```rs use worker::*; #[event(fetch)] async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result { let mut base = Url::parse("https://example.com")?; let status_code = 301; let url = req.url()?; base.set_path(url.path()); base.set_query(url.query()); console_log!("{:?}", base.to_string()); Response::redirect_with_status(base, status_code) } ``` * Hono ```ts import { Hono } from 'hono'; const app = new Hono(); app.all('*', (c) => { const base = "https://example.com"; const statusCode = 301; const { pathname, search } = new URL(c.req.url); const destinationURL = `${base}${pathname}${search}`; console.log(destinationURL); return c.redirect(destinationURL, statusCode); }); export default app; ``` --- title: Respond with another site · Cloudflare Workers docs description: Respond to the Worker request with the response from another website (example.com in this example). lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Middleware source_url: html: https://developers.cloudflare.com/workers/examples/respond-with-another-site/ md: https://developers.cloudflare.com/workers/examples/respond-with-another-site/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/respond-with-another-site) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { async function MethodNotAllowed(request) { return new Response(`Method ${request.method} not allowed.`, { status: 405, headers: { Allow: "GET", }, }); } // Only GET requests work with this proxy. if (request.method !== "GET") return MethodNotAllowed(request); return fetch(`https://example.com`); }, }; ``` [Run Worker in Playground](https://workers.cloudflare.com/playground#LYVwNgLglgDghgJwgegGYHsHALQBM4RwDcABAEbogB2+CAngLzbPYZb6HbW5QDGU2AAwB2EYIDMw4QFYAnOICMALhYs2wDnC40+AsaMkz5CgLAAoAMLoqEAKY3sAESgBnGOhdRo1pSXV4CYhIqOGBbBgAiKBpbAA8AOgArFwjSVCgwe1DwqJiE5IjzKxt7CGwAFToYW184GBgwPgIoa2REuAA3OBdeBFgIAGpgdFxwW3NzOPckElxbVDhwCBIAbzMSEm66Kl4-WwheAAsACgRbAEcQWxcIAEpV9Y3Nl23d1GpebyoSAFl9w5GADl0BAAIJgMDoADutlwpwuVxu9zWTyeZwgIAQ3yotihJAAStd3FQXLZjgADP4QAG4EgAEhWZ0u1wg8TC1JGAF9giDNhDobD4uSADQPVGom4EEAuXwAFkE0mFj3FJEOtjgcwQMrFKqe4MhUN8EQA4gBRcoRJW6kicq3izm3IjKm3O5DIEgAeSoYDoJDN5RITMREBcJChmAA1mGvIcSNTXCQYAh0LE6PFnVBUCR4cybmz-iMSABCBgMEgm80Re7ozHfKk04Fg-kwuFBlmO501rF7A4ncmHCAQGAyt1xUINWzxXjoYDkjsbW1mTlEcyqZjqTTaHj8ISiAxSOSKIrWOwOZxuDxeFpUXz+TSkEJhSLsjWBVJ+DJZJ8RMiQsiFSwT1KCoqhqTZ6kaXhmlaZJrAmMwVgiYA4GiAB9YZRkyCIlFyOZ8hSTlVzXDdAi3XRdzEQxDwUZggA) * TypeScript ```ts export default { async fetch(request): Promise { async function MethodNotAllowed(request) { return new Response(`Method ${request.method} not allowed.`, { status: 405, headers: { Allow: "GET", }, }); } // Only GET requests work with this proxy. if (request.method !== "GET") return MethodNotAllowed(request); return fetch(`https://example.com`); }, } satisfies ExportedHandler; ``` * Python ```py from workers import Response, fetch def on_fetch(request): def method_not_allowed(request): msg = f'Method {request.method} not allowed.' headers = {"Allow": "GET"} return Response(msg, headers=headers, status=405) # Only GET requests work with this proxy. 
if request.method != "GET": return method_not_allowed(request) return fetch("https://example.com") ``` --- title: Return small HTML page · Cloudflare Workers docs description: Deliver an HTML page from an HTML string directly inside the Worker script. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/examples/return-html/ md: https://developers.cloudflare.com/workers/examples/return-html/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/return-html) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { const html = `

<!DOCTYPE html>
<body>
  <h1>Hello World</h1>
  <p>This markup was generated by a Cloudflare Worker.</p>
</body>
`; return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); }, }; ``` [Run Worker in Playground](https://workers.cloudflare.com/playground#LYVwNgLglgDghgJwgegGYHsHALQBM4RwDcABAEbogB2+CAngLzbPYZb6HbW5QDGU2AAwBmABwAmAIwBOQQDZJgyQFYAXCxZtgHOFxp8BIiTPmKVAWABQAYXRUIAU3vYAIlADOMdO6jQ7qki08AmISKjhgBwYAIigaBwAPADoAK3do0lQoMCcIqNj45LToq1t7JwhsABU6GAcAuBgYMD4CKDtkFLgANzh3XgRYCABqYHRccAcrK0SvJBJcB1Q4cAgSAG9LEhI+uipeQIcIXgALAAoEBwBHEAd3CABKDa3tkl47e5ITiGAwEgYSAADAA8AEIXAB5axVACaAAUAKJfH5gAB8L22wIouDo6Ner2BJ0kqIAEg4wGB0CQAOqYMC4YHIIl4-EkYEwVFVE4eEjARAAaxAMBIAHc+iQAOZOBwIAgOXDkOg7EjWSkgXCoMCIBw0zD8mVJRkcjFs5DY3GAoiWE2XCAgBBUMIOEUkABKdy8VHcDjO31+ABpnqyvg44IsEO4Aptg9tou9ys4ILUHNEAtFHAkUH6wERTohvRAGABVKoAMWwomi-pN2wAvtX8bWHla69Xa0QrBpmFodHoePwhGIpLIFEplKU7I5nG5PN5fO0qAEgjpSOFIjFIoQdBlAtlcuvomRKWQSjZJxVqsmGk0Wrw2h00nZppZ1tE+XEAPpjCY5VMFRZFOktadl2PYhH2BiDsYI5mMozBAA) * TypeScript ```ts export default { async fetch(request): Promise { const html = `

<!DOCTYPE html>
<body>
  <h1>Hello World</h1>
  <p>This markup was generated by a Cloudflare Worker.</p>
</body>
`; return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); }, } satisfies ExportedHandler; ``` * Python ```py from workers import Response def on_fetch(request): html = """

<!DOCTYPE html>
<body>
  <h1>Hello World</h1>
  <p>This markup was generated by a Cloudflare Worker.</p>
</body>
""" headers = {"content-type": "text/html;charset=UTF-8"} return Response(html, headers=headers) ``` * Rust ```rs use worker::*; #[event(fetch)] async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result { let html = r#"

<!DOCTYPE html>
<body>
  <h1>Hello World</h1>
  <p>This markup was generated by a Cloudflare Worker.</p>
</body>
"#; Response::from_html(html) } ``` * Hono ```ts import { Hono } from "hono"; import { html } from "hono/html"; const app = new Hono(); app.get("*", (c) => { const doc = html`

<!DOCTYPE html>
<body>
  <h1>Hello World</h1>
  <p>This markup was generated by a Cloudflare Worker with Hono.</p>
</body>
`; return c.html(doc); }); export default app; ```
--- title: Return JSON · Cloudflare Workers docs description: Return JSON directly from a Worker script, useful for building APIs and middleware. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: JSON source_url: html: https://developers.cloudflare.com/workers/examples/return-json/ md: https://developers.cloudflare.com/workers/examples/return-json/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/return-json) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { const data = { hello: "world", }; return Response.json(data); }, }; ``` [Run Worker in Playground](https://workers.cloudflare.com/playground#LYVwNgLglgDghgJwgegGYHsHALQBM4RwDcABAEbogB2+CAngLzbPYZb6HbW5QDGU2AAwAmAGwAWAJwB2AKyjRs6QA4AXCxZtgHOFxp8BIiTPmKVAWABQAYXRUIAU3vYAIlADOMdO6jQ7qki08AmISKjhgBwYAIigaBwAPADoAK3do0lQoMCcIqNj45LToq1t7JwhsABU6GAcAuBgYMD4CKDtkFLgANzh3XgRYCABqYHRccAcrK0SvJBJcB1Q4cAgSAG9LEhI+uipeQIcIXgALAAoEBwBHEAd3CABKDa3tkl47e4WQkgZn19eTg4wGB0AFogB3TBgXDRAA0L22AF8iJYESRLhAQAgqCQAEp3LxUdwOVLuOxnHQPFFI+HIqwaZhaHR6Hj8IRiKRyBRKZSlOyOZxuTzeXztKgBII6UjhSIxSKEHQZQLZXKy6JkEFkEo2fkVaq1eo7JotXhtDppOzTSzraLAOBxAD6YwmOWiqgKiyK6UR9IZTJCLIM7OMXLMymYQA) * TypeScript ```ts export default { async fetch(request): Promise { const data = { hello: "world", }; return Response.json(data); }, } satisfies ExportedHandler; ``` * Python ```py from workers import Response import json def on_fetch(request): data = json.dumps({"hello": "world"}) headers = {"content-type": "application/json"} return Response(data, headers=headers) ``` * Rust ```rs use serde::{Deserialize, Serialize}; use worker::*; #[derive(Deserialize, Serialize, Debug)] struct Json { hello: String, } #[event(fetch)] async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result { let data = Json { hello: String::from("world"), }; Response::from_json(&data) } ``` * Hono ```ts import { Hono } from 'hono'; const app = new Hono(); app.get('*', (c) => { const data = { hello: "world", }; return c.json(data); }); export default app; ``` --- title: Rewrite links · Cloudflare Workers docs description: Rewrite URL links in HTML using the HTMLRewriter. This is useful for JAMstack websites. lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/examples/rewrite-links/ md: https://developers.cloudflare.com/workers/examples/rewrite-links/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/rewrite-links) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. 
* JavaScript ```js export default { async fetch(request) { const OLD_URL = "developer.mozilla.org"; const NEW_URL = "mynewdomain.com"; class AttributeRewriter { constructor(attributeName) { this.attributeName = attributeName; } element(element) { const attribute = element.getAttribute(this.attributeName); if (attribute) { element.setAttribute( this.attributeName, attribute.replace(OLD_URL, NEW_URL), ); } } } const rewriter = new HTMLRewriter() .on("a", new AttributeRewriter("href")) .on("img", new AttributeRewriter("src")); const res = await fetch(request); const contentType = res.headers.get("Content-Type"); // If the response is HTML, it can be transformed with // HTMLRewriter -- otherwise, it should pass through if (contentType.startsWith("text/html")) { return rewriter.transform(res); } else { return res; } }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { const OLD_URL = "developer.mozilla.org"; const NEW_URL = "mynewdomain.com"; class AttributeRewriter { constructor(attributeName) { this.attributeName = attributeName; } element(element) { const attribute = element.getAttribute(this.attributeName); if (attribute) { element.setAttribute( this.attributeName, attribute.replace(OLD_URL, NEW_URL), ); } } } const rewriter = new HTMLRewriter() .on("a", new AttributeRewriter("href")) .on("img", new AttributeRewriter("src")); const res = await fetch(request); const contentType = res.headers.get("Content-Type"); // If the response is HTML, it can be transformed with // HTMLRewriter -- otherwise, it should pass through if (contentType.startsWith("text/html")) { return rewriter.transform(res); } else { return res; } }, } satisfies ExportedHandler; ``` * Python ```py from pyodide.ffi import create_proxy from js import HTMLRewriter, fetch async def on_fetch(request): old_url = "developer.mozilla.org" new_url = "mynewdomain.com" class AttributeRewriter: def __init__(self, attr_name): self.attr_name = attr_name def element(self, element): attr = element.getAttribute(self.attr_name) if attr: element.setAttribute(self.attr_name, attr.replace(old_url, new_url)) href = create_proxy(AttributeRewriter("href")) src = create_proxy(AttributeRewriter("src")) rewriter = HTMLRewriter.new().on("a", href).on("img", src) res = await fetch(request) content_type = res.headers["Content-Type"] # If the response is HTML, it can be transformed with # HTMLRewriter -- otherwise, it should pass through if content_type.startswith("text/html"): return rewriter.transform(res) return res ``` * Hono ```ts import { Hono } from 'hono'; import { html } from 'hono/html'; const app = new Hono(); app.get('*', async (c) => { const OLD_URL = "developer.mozilla.org"; const NEW_URL = "mynewdomain.com"; class AttributeRewriter { attributeName: string; constructor(attributeName: string) { this.attributeName = attributeName; } element(element: Element) { const attribute = element.getAttribute(this.attributeName); if (attribute) { element.setAttribute( this.attributeName, attribute.replace(OLD_URL, NEW_URL) ); } } } // Make a fetch request using the original request const res = await fetch(c.req.raw); const contentType = res.headers.get("Content-Type") || ""; // If the response is HTML, transform it with HTMLRewriter if (contentType.startsWith("text/html")) { const rewriter = new HTMLRewriter() .on("a", new AttributeRewriter("href")) .on("img", new AttributeRewriter("src")); return new Response(rewriter.transform(res).body, { headers: res.headers }); } else { // Pass through the response as is return res; } }); 
export default app; ``` --- title: Set security headers · Cloudflare Workers docs description: Set common security headers (X-XSS-Protection, X-Frame-Options, X-Content-Type-Options, Permissions-Policy, Referrer-Policy, Strict-Transport-Security, Content-Security-Policy). lastUpdated: 2025-04-28T14:11:18.000Z chatbotDeprioritize: false tags: Security,Middleware source_url: html: https://developers.cloudflare.com/workers/examples/security-headers/ md: https://developers.cloudflare.com/workers/examples/security-headers/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/security-headers) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. * JavaScript ```js export default { async fetch(request) { const DEFAULT_SECURITY_HEADERS = { /* Secure your application with Content-Security-Policy headers. Enabling these headers will permit content from a trusted domain and all its subdomains. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Security-Policy "Content-Security-Policy": "default-src 'self' example.com *.example.com", */ /* You can also set Strict-Transport-Security headers. These are not automatically set because your website might get added to Chrome's HSTS preload list. Here's the code if you want to apply it: "Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload", */ /* Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below: "Permissions-Policy": "interest-cohort=()", */ /* X-XSS-Protection header prevents a page from loading if an XSS attack is detected. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-XSS-Protection */ "X-XSS-Protection": "0", /* X-Frame-Options header prevents click-jacking attacks. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Frame-Options */ "X-Frame-Options": "DENY", /* X-Content-Type-Options header prevents MIME-sniffing. 
@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Content-Type-Options */ "X-Content-Type-Options": "nosniff", "Referrer-Policy": "strict-origin-when-cross-origin", "Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";', "Cross-Origin-Opener-Policy": 'same-site; report-to="default";', "Cross-Origin-Resource-Policy": "same-site", }; const BLOCKED_HEADERS = [ "Public-Key-Pins", "X-Powered-By", "X-AspNet-Version", ]; let response = await fetch(request); let newHeaders = new Headers(response.headers); const tlsVersion = request.cf.tlsVersion; console.log(tlsVersion); // This sets the headers for HTML responses: if ( newHeaders.has("Content-Type") && !newHeaders.get("Content-Type").includes("text/html") ) { return new Response(response.body, { status: response.status, statusText: response.statusText, headers: newHeaders, }); } Object.keys(DEFAULT_SECURITY_HEADERS).map((name) => { newHeaders.set(name, DEFAULT_SECURITY_HEADERS[name]); }); BLOCKED_HEADERS.forEach((name) => { newHeaders.delete(name); }); if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") { return new Response("You need to use TLS version 1.2 or higher.", { status: 400, }); } else { return new Response(response.body, { status: response.status, statusText: response.statusText, headers: newHeaders, }); } }, }; ``` * TypeScript ```ts export default { async fetch(request): Promise { const DEFAULT_SECURITY_HEADERS = { /* Secure your application with Content-Security-Policy headers. Enabling these headers will permit content from a trusted domain and all its subdomains. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Security-Policy "Content-Security-Policy": "default-src 'self' example.com *.example.com", */ /* You can also set Strict-Transport-Security headers. These are not automatically set because your website might get added to Chrome's HSTS preload list. Here's the code if you want to apply it: "Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload", */ /* Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below: "Permissions-Policy": "interest-cohort=()", */ /* X-XSS-Protection header prevents a page from loading if an XSS attack is detected. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-XSS-Protection */ "X-XSS-Protection": "0", /* X-Frame-Options header prevents click-jacking attacks. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Frame-Options */ "X-Frame-Options": "DENY", /* X-Content-Type-Options header prevents MIME-sniffing. 
@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Content-Type-Options */ "X-Content-Type-Options": "nosniff", "Referrer-Policy": "strict-origin-when-cross-origin", "Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";', "Cross-Origin-Opener-Policy": 'same-site; report-to="default";', "Cross-Origin-Resource-Policy": "same-site", }; const BLOCKED_HEADERS = [ "Public-Key-Pins", "X-Powered-By", "X-AspNet-Version", ]; let response = await fetch(request); let newHeaders = new Headers(response.headers); const tlsVersion = request.cf.tlsVersion; console.log(tlsVersion); // This sets the headers for HTML responses: if ( newHeaders.has("Content-Type") && !newHeaders.get("Content-Type").includes("text/html") ) { return new Response(response.body, { status: response.status, statusText: response.statusText, headers: newHeaders, }); } Object.keys(DEFAULT_SECURITY_HEADERS).map((name) => { newHeaders.set(name, DEFAULT_SECURITY_HEADERS[name]); }); BLOCKED_HEADERS.forEach((name) => { newHeaders.delete(name); }); if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") { return new Response("You need to use TLS version 1.2 or higher.", { status: 400, }); } else { return new Response(response.body, { status: response.status, statusText: response.statusText, headers: newHeaders, }); } }, } satisfies ExportedHandler; ``` * Python ```py from workers import Response, fetch async def on_fetch(request): default_security_headers = { # Secure your application with Content-Security-Policy headers. #Enabling these headers will permit content from a trusted domain and all its subdomains. #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Security-Policy "Content-Security-Policy": "default-src 'self' example.com *.example.com", #You can also set Strict-Transport-Security headers. #These are not automatically set because your website might get added to Chrome's HSTS preload list. #Here's the code if you want to apply it: "Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload", #Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below: "Permissions-Policy": "interest-cohort=()", #X-XSS-Protection header prevents a page from loading if an XSS attack is detected. #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-XSS-Protection "X-XSS-Protection": "0", #X-Frame-Options header prevents click-jacking attacks. #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Frame-Options "X-Frame-Options": "DENY", #X-Content-Type-Options header prevents MIME-sniffing. 
#@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Content-Type-Options "X-Content-Type-Options": "nosniff", "Referrer-Policy": "strict-origin-when-cross-origin", "Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";', "Cross-Origin-Opener-Policy": 'same-site; report-to="default";', "Cross-Origin-Resource-Policy": "same-site", } blocked_headers = ["Public-Key-Pins", "X-Powered-By" ,"X-AspNet-Version"] res = await fetch(request) new_headers = res.headers # This sets the headers for HTML responses if "text/html" in new_headers["Content-Type"]: return Response(res.body, status=res.status, statusText=res.statusText, headers=new_headers) for name in default_security_headers: new_headers[name] = default_security_headers[name] for name in blocked_headers: del new_headers["name"] tls = request.cf.tlsVersion if not tls in ("TLSv1.2", "TLSv1.3"): return Response("You need to use TLS version 1.2 or higher.", status=400) return Response(res.body, status=res.status, statusText=res.statusText, headers=new_headers) ``` * Rust ```rs use std::collections::HashMap; use worker::*; #[event(fetch)] async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result { let default_security_headers = HashMap::from([ //Secure your application with Content-Security-Policy headers. //Enabling these headers will permit content from a trusted domain and all its subdomains. //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Security-Policy ( "Content-Security-Policy", "default-src 'self' example.com *.example.com", ), //You can also set Strict-Transport-Security headers. //These are not automatically set because your website might get added to Chrome's HSTS preload list. //Here's the code if you want to apply it: ( "Strict-Transport-Security", "max-age=63072000; includeSubDomains; preload", ), //Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below: ("Permissions-Policy", "interest-cohort=()"), //X-XSS-Protection header prevents a page from loading if an XSS attack is detected. //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-XSS-Protection ("X-XSS-Protection", "0"), //X-Frame-Options header prevents click-jacking attacks. //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Frame-Options ("X-Frame-Options", "DENY"), //X-Content-Type-Options header prevents MIME-sniffing. //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Content-Type-Options ("X-Content-Type-Options", "nosniff"), ("Referrer-Policy", "strict-origin-when-cross-origin"), ( "Cross-Origin-Embedder-Policy", "require-corp; report-to='default';", ), ( "Cross-Origin-Opener-Policy", "same-site; report-to='default';", ), ("Cross-Origin-Resource-Policy", "same-site"), ]); let blocked_headers = ["Public-Key-Pins", "X-Powered-By", "X-AspNet-Version"]; let tls = req.cf().unwrap().tls_version(); let res = Fetch::Request(req).send().await?; let mut new_headers = res.headers().clone(); // This sets the headers for HTML responses if Some(String::from("text/html")) == new_headers.get("Content-Type")? { return Ok(Response::from_body(res.body().clone())? 
.with_headers(new_headers) .with_status(res.status_code())); } for (k, v) in default_security_headers { new_headers.set(k, v)?; } for k in blocked_headers { new_headers.delete(k)?; } if !vec!["TLSv1.2", "TLSv1.3"].contains(&tls.as_str()) { return Response::error("You need to use TLS version 1.2 or higher.", 400); } Ok(Response::from_body(res.body().clone())? .with_headers(new_headers) .with_status(res.status_code())) } ``` * Hono ```ts import { Hono } from 'hono'; import { secureHeaders } from 'hono/secure-headers'; const app = new Hono(); app.use(secureHeaders()); // Handle all other requests by passing through to origin app.all('*', async (c) => { return fetch(c.req.raw); }); export default app; ``` --- title: Sign requests · Cloudflare Workers docs description: Verify a signed request using the HMAC and SHA-256 algorithms or return a 403. lastUpdated: 2025-07-09T14:24:57.000Z chatbotDeprioritize: false tags: Security,Web Crypto source_url: html: https://developers.cloudflare.com/workers/examples/signing-requests/ md: https://developers.cloudflare.com/workers/examples/signing-requests/index.md --- If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/signing-requests) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. Note This example Worker makes use of the [Node.js Buffer API](https://developers.cloudflare.com/workers/runtime-apis/nodejs/buffer/), which is available as part of the Worker's runtime [Node.js compatibility mode](https://developers.cloudflare.com/workers/runtime-apis/nodejs/). To run this Worker, you will need to [enable the `nodejs_compat` compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/#get-started). You can both verify and generate signed requests from within a Worker using the [Web Crypto APIs](https://developer.mozilla.org/en-US/docs/Web/API/Crypto/subtle). The following Worker will: * For request URLs beginning with `/generate/`, replace `/generate/` with `/`, sign the resulting path with its timestamp, and return the full, signed URL in the response body. * For all other request URLs, verify the signed URL and allow the request through. - JavaScript ```js import { Buffer } from "node:buffer"; const encoder = new TextEncoder(); // How long an HMAC token should be valid for, in seconds const EXPIRY = 60; export default { /** * * @param {Request} request * @param {{SECRET_DATA: string}} env * @returns */ async fetch(request, env) { // You will need some secret data to use as a symmetric key. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ const secretKeyData = encoder.encode( env.SECRET_DATA ?? 
"my secret symmetric key", ); // Import your secret as a CryptoKey for both 'sign' and 'verify' operations const key = await crypto.subtle.importKey( "raw", secretKeyData, { name: "HMAC", hash: "SHA-256" }, false, ["sign", "verify"], ); const url = new URL(request.url); // This is a demonstration Worker that allows unauthenticated access to /generate // In a real application you would want to make sure that // users could only generate signed URLs when authenticated if (url.pathname.startsWith("/generate/")) { url.pathname = url.pathname.replace("/generate/", "/"); const timestamp = Math.floor(Date.now() / 1000); // This contains all the data about the request that you want to be able to verify // Here we only sign the timestamp and the pathname, but often you will want to // include more data (for instance, the URL hostname or query parameters) const dataToAuthenticate = `${url.pathname}${timestamp}`; const mac = await crypto.subtle.sign( "HMAC", key, encoder.encode(dataToAuthenticate), ); // Refer to https://developers.cloudflare.com/workers/runtime-apis/nodejs/ // for more details on using Node.js APIs in Workers const base64Mac = Buffer.from(mac).toString("base64"); url.searchParams.set("verify", `${timestamp}-${base64Mac}`); return new Response(`${url.pathname}${url.search}`); // Verify all non /generate requests } else { // Make sure you have the minimum necessary query parameters. if (!url.searchParams.has("verify")) { return new Response("Missing query parameter", { status: 403 }); } const [timestamp, hmac] = url.searchParams.get("verify").split("-"); const assertedTimestamp = Number(timestamp); const dataToAuthenticate = `${url.pathname}${assertedTimestamp}`; const receivedMac = Buffer.from(hmac, "base64"); // Use crypto.subtle.verify() to guard against timing attacks. Since HMACs use // symmetric keys, you could implement this by calling crypto.subtle.sign() and // then doing a string comparison -- this is insecure, as string comparisons // bail out on the first mismatch, which leaks information to potential // attackers. const verified = await crypto.subtle.verify( "HMAC", key, receivedMac, encoder.encode(dataToAuthenticate), ); if (!verified) { return new Response("Invalid MAC", { status: 403 }); } // Signed requests expire after one minute. Note that this value should depend on your specific use case if (Date.now() / 1000 > assertedTimestamp + EXPIRY) { return new Response( `URL expired at ${new Date((assertedTimestamp + EXPIRY) * 1000)}`, { status: 403 }, ); } } return fetch(new URL(url.pathname, "https://example.com"), request); }, }; ``` - TypeScript ```ts import { Buffer } from "node:buffer"; const encoder = new TextEncoder(); // How long an HMAC token should be valid for, in seconds const EXPIRY = 60; interface Env { SECRET_DATA: string; } export default { async fetch(request, env): Promise { // You will need some secret data to use as a symmetric key. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ const secretKeyData = encoder.encode( env.SECRET_DATA ?? 
"my secret symmetric key", ); // Import your secret as a CryptoKey for both 'sign' and 'verify' operations const key = await crypto.subtle.importKey( "raw", secretKeyData, { name: "HMAC", hash: "SHA-256" }, false, ["sign", "verify"], ); const url = new URL(request.url); // This is a demonstration Worker that allows unauthenticated access to /generate // In a real application you would want to make sure that // users could only generate signed URLs when authenticated if (url.pathname.startsWith("/generate/")) { url.pathname = url.pathname.replace("/generate/", "/"); const timestamp = Math.floor(Date.now() / 1000); // This contains all the data about the request that you want to be able to verify // Here we only sign the timestamp and the pathname, but often you will want to // include more data (for instance, the URL hostname or query parameters) const dataToAuthenticate = `${url.pathname}${timestamp}`; const mac = await crypto.subtle.sign( "HMAC", key, encoder.encode(dataToAuthenticate), ); // Refer to https://developers.cloudflare.com/workers/runtime-apis/nodejs/ // for more details on using NodeJS APIs in Workers const base64Mac = Buffer.from(mac).toString("base64"); url.searchParams.set("verify", `${timestamp}-${base64Mac}`); return new Response(`${url.pathname}${url.search}`); // Verify all non /generate requests } else { // Make sure you have the minimum necessary query parameters. if (!url.searchParams.has("verify")) { return new Response("Missing query parameter", { status: 403 }); } const [timestamp, hmac] = url.searchParams.get("verify").split("-"); const assertedTimestamp = Number(timestamp); const dataToAuthenticate = `${url.pathname}${assertedTimestamp}`; const receivedMac = Buffer.from(hmac, "base64"); // Use crypto.subtle.verify() to guard against timing attacks. Since HMACs use // symmetric keys, you could implement this by calling crypto.subtle.sign() and // then doing a string comparison -- this is insecure, as string comparisons // bail out on the first mismatch, which leaks information to potential // attackers. const verified = await crypto.subtle.verify( "HMAC", key, receivedMac, encoder.encode(dataToAuthenticate), ); if (!verified) { return new Response("Invalid MAC", { status: 403 }); } // Signed requests expire after one minute. Note that this value should depend on your specific use case if (Date.now() / 1000 > assertedTimestamp + EXPIRY) { return new Response( `URL expired at ${new Date((assertedTimestamp + EXPIRY) * 1000)}`, { status: 403 }, ); } } return fetch(new URL(url.pathname, "https://example.com"), request); }, } satisfies ExportedHandler; ``` - Hono ```ts import { Buffer } from "node:buffer"; import { Hono } from "hono"; import { proxy } from "hono/proxy"; const encoder = new TextEncoder(); // How long an HMAC token should be valid for, in seconds const EXPIRY = 60; interface Env { SECRET_DATA: string; } const app = new Hono(); // Handle URL generation requests app.get("/generate/*", async (c) => { const env = c.env; // You will need some secret data to use as a symmetric key const secretKeyData = encoder.encode( env.SECRET_DATA ?? 
"my secret symmetric key", ); // Import the secret as a CryptoKey for both 'sign' and 'verify' operations const key = await crypto.subtle.importKey( "raw", secretKeyData, { name: "HMAC", hash: "SHA-256" }, false, ["sign", "verify"], ); // Replace "/generate/" prefix with "/" let pathname = c.req.path.replace("/generate/", "/"); const timestamp = Math.floor(Date.now() / 1000); // Data to authenticate: pathname + timestamp const dataToAuthenticate = `${pathname}${timestamp}`; // Sign the data const mac = await crypto.subtle.sign( "HMAC", key, encoder.encode(dataToAuthenticate), ); // Convert the signature to base64 const base64Mac = Buffer.from(mac).toString("base64"); // Add verification parameter to URL url.searchParams.set("verify", `${timestamp}-${base64Mac}`); return c.text(`${pathname}${url.search}`); }); // Handle verification for all other requests app.all("*", async (c) => { const env = c.env; const url = c.req.url; // You will need some secret data to use as a symmetric key const secretKeyData = encoder.encode( env.SECRET_DATA ?? "my secret symmetric key", ); // Import the secret as a CryptoKey for both 'sign' and 'verify' operations const key = await crypto.subtle.importKey( "raw", secretKeyData, { name: "HMAC", hash: "SHA-256" }, false, ["sign", "verify"], ); // Make sure the request has the verification parameter if (!c.req.query("verify")) { return c.text("Missing query parameter", 403); } // Extract timestamp and signature const [timestamp, hmac] = c.req.query("verify")!.split("-"); const assertedTimestamp = Number(timestamp); // Recreate the data that should have been signed const dataToAuthenticate = `${c.req.path}${assertedTimestamp}`; // Convert base64 signature back to ArrayBuffer const receivedMac = Buffer.from(hmac, "base64"); // Verify the signature const verified = await crypto.subtle.verify( "HMAC", key, receivedMac, encoder.encode(dataToAuthenticate), ); // If verification fails, return 403 if (!verified) { return c.text("Invalid MAC", 403); } // Check if the signature has expired if (Date.now() / 1000 > assertedTimestamp + EXPIRY) { return c.text( `URL expired at ${new Date((assertedTimestamp + EXPIRY) * 1000)}`, 403, ); } // If verification passes, proxy the request to example.com return proxy(`https://example.com/${c.req.path}`, ...c.req); }); export default app; ``` - Python ```py from pyodide.ffi import to_js as _to_js from js import Response, URL, TextEncoder, Buffer, fetch, Object, crypto def to_js(x): return _to_js(x, dict_converter=Object.fromEntries) encoder = TextEncoder.new() # How long an HMAC token should be valid for, in seconds EXPIRY = 60 async def on_fetch(request, env): # Get the secret key secret_key_data = encoder.encode(env.SECRET_DATA if hasattr(env, "SECRET_DATA") else "my secret symmetric key") # Import the secret as a CryptoKey for both 'sign' and 'verify' operations key = await crypto.subtle.importKey( "raw", secret_key_data, to_js({"name": "HMAC", "hash": "SHA-256"}), False, ["sign", "verify"] ) url = URL.new(request.url) if url.pathname.startswith("/generate/"): url.pathname = url.pathname.replace("/generate/", "/", 1) timestamp = int(Date.now() / 1000) # Data to authenticate data_to_authenticate = f"{url.pathname}{timestamp}" # Sign the data mac = await crypto.subtle.sign( "HMAC", key, encoder.encode(data_to_authenticate) ) # Convert to base64 base64_mac = Buffer.from(mac).toString("base64") # Set the verification parameter url.searchParams.set("verify", f"{timestamp}-{base64_mac}") return Response.new(f"{url.pathname}{url.search}") 
else: # Verify the request if not "verify" in url.searchParams: return Response.new("Missing query parameter", status=403) verify_param = url.searchParams.get("verify") timestamp, hmac = verify_param.split("-") asserted_timestamp = int(timestamp) data_to_authenticate = f"{url.pathname}{asserted_timestamp}" received_mac = Buffer.from(hmac, "base64") # Verify the signature verified = await crypto.subtle.verify( "HMAC", key, received_mac, encoder.encode(data_to_authenticate) ) if not verified: return Response.new("Invalid MAC", status=403) # Check expiration if Date.now() / 1000 > asserted_timestamp + EXPIRY: expiry_date = Date.new((asserted_timestamp + EXPIRY) * 1000) return Response.new(f"URL expired at {expiry_date}", status=403) # Proxy to example.com if verification passes return fetch(URL.new(f"https://example.com{url.pathname}"), request) ``` ## Validate signed requests using the WAF The provided example code for signing requests is compatible with the [`is_timed_hmac_valid_v0()`](https://developers.cloudflare.com/ruleset-engine/rules-language/functions/#hmac-validation) Rules language function. This means that you can verify requests signed by the Worker script using a [custom rule](https://developers.cloudflare.com/waf/custom-rules/use-cases/configure-token-authentication/#option-2-configure-using-custom-rules). --- title: Turnstile with Workers · Cloudflare Workers docs description: Inject [Turnstile](/turnstile/) implicitly into HTML elements using the HTMLRewriter runtime API. lastUpdated: 2025-06-24T17:41:46.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/examples/turnstile-html-rewriter/ md: https://developers.cloudflare.com/workers/examples/turnstile-html-rewriter/index.md --- * JavaScript ```js export default { async fetch(request, env) { const SITE_KEY = env.SITE_KEY; // The Turnstile Sitekey of your widget (pass as env or secret) const TURNSTILE_ATTR_NAME = "your_id_to_replace"; // The id of the element to put a Turnstile widget in let res = await fetch(request); // Instantiate the API to run on specific elements, for example, `head`, `div` let newRes = new HTMLRewriter() // `.on` attaches the element handler and this allows you to match on element/attributes or to use the specific methods per the API .on("head", { element(element) { // In this case, you are using `append` to add a new script to the `head` element element.append( ``, { html: true }, ); }, }) .on("div", { element(element) { // Add a turnstile widget element into if an element with the id of TURNSTILE_ATTR_NAME is found if (element.getAttribute("id") === TURNSTILE_ATTR_NAME) { element.append( `
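<div class="cf-turnstile" data-sitekey="${SITE_KEY}"></div>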
`, { html: true }, ); } }, }) .transform(res); return newRes; }, }; ``` * TypeScript ```ts export default { async fetch(request, env): Promise { const SITE_KEY = env.SITE_KEY; // The Turnstile Sitekey of your widget (pass as env or secret) const TURNSTILE_ATTR_NAME = "your_id_to_replace"; // The id of the element to put a Turnstile widget in let res = await fetch(request); // Instantiate the API to run on specific elements, for example, `head`, `div` let newRes = new HTMLRewriter() // `.on` attaches the element handler and this allows you to match on element/attributes or to use the specific methods per the API .on("head", { element(element) { // In this case, you are using `append` to add a new script to the `head` element element.append( ``, { html: true }, ); }, }) .on("div", { element(element) { // Add a turnstile widget element into if an element with the id of TURNSTILE_ATTR_NAME is found if (element.getAttribute("id") === TURNSTILE_ATTR_NAME) { element.append( `
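<div class="cf-turnstile" data-sitekey="${SITE_KEY}"></div>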
`, { html: true }, ); } }, }) .transform(res); return newRes; }, } satisfies ExportedHandler; ``` * Hono ```ts import { Hono } from "hono"; interface Env { SITE_KEY: string; SECRET_KEY: string; TURNSTILE_ATTR_NAME?: string; } const app = new Hono<{ Bindings: Env }>(); // Middleware to inject Turnstile widget app.use("*", async (c, next) => { const SITE_KEY = c.env.SITE_KEY; // The Turnstile Sitekey from environment const TURNSTILE_ATTR_NAME = c.env.TURNSTILE_ATTR_NAME || "your_id_to_replace"; // The target element ID // Process the request through the original endpoint await next(); // Only process HTML responses const contentType = c.res.headers.get("content-type"); if (!contentType || !contentType.includes("text/html")) { return; } // Clone the response to make it modifiable const originalResponse = c.res; const responseBody = await originalResponse.text(); // Create an HTMLRewriter instance to modify the HTML const rewriter = new HTMLRewriter() // Add the Turnstile script to the head .on("head", { element(element) { element.append( ``, { html: true }, ); }, }) // Add the Turnstile widget to the target div .on("div", { element(element) { if (element.getAttribute("id") === TURNSTILE_ATTR_NAME) { element.append( `
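<div class="cf-turnstile" data-sitekey="${SITE_KEY}"></div>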
`, { html: true }, ); } }, }); // Create a new response with the same properties as the original const modifiedResponse = new Response(responseBody, { status: originalResponse.status, statusText: originalResponse.statusText, headers: originalResponse.headers, }); // Transform the response using HTMLRewriter c.res = rewriter.transform(modifiedResponse); }); // Handle POST requests for form submission with Turnstile validation app.post("*", async (c) => { const formData = await c.req.formData(); const token = formData.get("cf-turnstile-response"); const ip = c.req.header("CF-Connecting-IP"); // If no token, return an error if (!token) { return c.text("Missing Turnstile token", 400); } // Prepare verification data const verifyFormData = new FormData(); verifyFormData.append("secret", c.env.SECRET_KEY || ""); verifyFormData.append("response", token.toString()); if (ip) verifyFormData.append("remoteip", ip); // Verify the token with Turnstile API const verifyResult = await fetch( "https://challenges.cloudflare.com/turnstile/v0/siteverify", { method: "POST", body: verifyFormData, }, ); const outcome = await verifyResult.json<{ success: boolean }>; // If verification fails, return an error if (!outcome.success) { return c.text("The provided Turnstile token was not valid!", 401); } // If verification succeeds, proceed with the original request // You would typically handle the form submission logic here // For this example, we'll just send a success response return c.text("Form submission successful!"); }); // Default handler for GET requests app.get("*", async (c) => { // Fetch the original content (you'd replace this with your actual content source) return await fetch(c.req.raw); }); export default app; ``` * Python ```py from pyodide.ffi import create_proxy from js import HTMLRewriter, fetch async def on_fetch(request, env): site_key = env.SITE_KEY attr_name = env.TURNSTILE_ATTR_NAME res = await fetch(request) class Append: def element(self, element): s = '' element.append(s, {"html": True}) class AppendOnID: def __init__(self, name): self.name = name def element(self, element): # You are using the `getAttribute` method here to retrieve the `id` or `class` of an element if element.getAttribute("id") == self.name: div = f'
' element.append(div, { "html": True }) # Instantiate the API to run on specific elements, for example, `head`, `div` head = create_proxy(Append()) div = create_proxy(AppendOnID(attr_name)) new_res = HTMLRewriter.new().on("head", head).on("div", div).transform(res) return new_res ``` Note This is only half the implementation for Turnstile. The corresponding token that is a result of a widget being rendered also needs to be verified using the [Siteverify API](https://developers.cloudflare.com/turnstile/get-started/server-side-validation/). Refer to the example below for one such implementation. Prevent potential errors when accessing request.body The body of a [Request](https://developer.mozilla.org/en-US/docs/Web/API/Request) can only be accessed once. If you previously used `request.formData()` in the same request, you may encounter a TypeError when attempting to access `request.body`. To avoid errors, create a clone of the Request object with `request.clone()` for each subsequent attempt to access a Request's body. Keep in mind that Workers have a [memory limit of 128 MB per Worker](https://developers.cloudflare.com/workers/platform/limits#worker-limits) and loading particularly large files into a Worker's memory multiple times may reach this limit. To ensure memory usage does not reach this limit, consider using [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/).
--- title: Using the WebSockets API · Cloudflare Workers docs description: Use the WebSockets API to communicate in real time with your Cloudflare Workers. lastUpdated: 2025-04-15T13:29:20.000Z chatbotDeprioritize: false tags: WebSockets source_url: html: https://developers.cloudflare.com/workers/examples/websockets/ md: https://developers.cloudflare.com/workers/examples/websockets/index.md --- WebSockets allow you to communicate in real time with your Cloudflare Workers serverless functions. In this guide, you will learn the basics of WebSockets on Cloudflare Workers, both from the perspective of writing WebSocket servers in your Workers functions, as well as connecting to and working with those WebSocket servers as a client. WebSockets are open connections sustained between the client and the origin server. Inside a WebSocket connection, the client and the origin can pass data back and forth without having to reestablish sessions. This makes exchanging data within a WebSocket connection fast. WebSockets are often used for real-time applications such as live chat and gaming. Note WebSockets utilize an event-based system for receiving and sending messages, much like the Workers runtime model of responding to events. Note If your application needs to coordinate among multiple WebSocket connections, such as a chat room or game match, you will need clients to send messages to a single-point-of-coordination. Durable Objects provide a single-point-of-coordination for Cloudflare Workers, and are often used in parallel with WebSockets to persist state over multiple clients and connections. In this case, refer to [Durable Objects](https://developers.cloudflare.com/durable-objects/) to get started, and prefer using the Durable Objects' extended [WebSockets API](https://developers.cloudflare.com/durable-objects/best-practices/websockets/). ## Write a WebSocket Server WebSocket servers in Cloudflare Workers allow you to receive messages from a client in real time. This guide will show you how to set up a WebSocket server in Workers. A client can make a WebSocket request in the browser by instantiating a new instance of `WebSocket`, passing in the URL for your Workers function: ```js // In client-side JavaScript, connect to your Workers function using WebSockets: const websocket = new WebSocket( "wss://example-websocket.signalnerve.workers.dev", ); ``` Note For more details about creating and working with WebSockets in the client, refer to [Writing a WebSocket client](#write-a-websocket-client). When an incoming WebSocket request reaches the Workers function, it will contain an `Upgrade` header, set to the string value `websocket`. Check for this header before continuing to instantiate a WebSocket: * JavaScript ```js async function handleRequest(request) { const upgradeHeader = request.headers.get('Upgrade'); if (!upgradeHeader || upgradeHeader !== 'websocket') { return new Response('Expected Upgrade: websocket', { status: 426 }); } } ``` * Rust ```rs use worker::\*; #[event(fetch)] async fn fetch(req: HttpRequest, \_env: Env, \_ctx: Context) -> Result { let upgrade_header = match req.headers().get("Upgrade") { Some(h) => h.to_str().unwrap(), None => "", }; if upgrade_header != "websocket" { return worker::Response::error("Expected Upgrade: websocket", 426); } } ``` After you have appropriately checked for the `Upgrade` header, you can create a new instance of `WebSocketPair`, which contains server and client WebSockets. 
One of these WebSockets should be handled by the Workers function and the other should be returned as part of a `Response` with the [`101` status code](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/101), indicating the request is switching protocols: * JavaScript ```js async function handleRequest(request) { const upgradeHeader = request.headers.get('Upgrade'); if (!upgradeHeader || upgradeHeader !== 'websocket') { return new Response('Expected Upgrade: websocket', { status: 426 }); } const webSocketPair = new WebSocketPair(); const client = webSocketPair[0], server = webSocketPair[1]; return new Response(null, { status: 101, webSocket: client, }); } ``` * Rust ```rs use worker::*; #[event(fetch)] async fn fetch(req: HttpRequest, \_env: Env, \_ctx: Context) -> Result { let upgrade_header = match req.headers().get("Upgrade") { Some(h) => h.to_str().unwrap(), None => "", }; if upgrade_header != "websocket" { return worker::Response::error("Expected Upgrade: websocket", 426); } let ws = WebSocketPair::new()?; let client = ws.client; let server = ws.server; server.accept()?; worker::Response::from_websocket(client) } ``` The `WebSocketPair` constructor returns an Object, with the `0` and `1` keys each holding a `WebSocket` instance as its value. It is common to grab the two WebSockets from this pair using [`Object.values`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_objects/Object/values) and [ES6 destructuring](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment), as seen in the below example. In order to begin communicating with the `client` WebSocket in your Worker, call `accept` on the `server` WebSocket. This will tell the Workers runtime that it should listen for WebSocket data and keep the connection open with your `client` WebSocket: * JavaScript ```js async function handleRequest(request) { const upgradeHeader = request.headers.get('Upgrade'); if (!upgradeHeader || upgradeHeader !== 'websocket') { return new Response('Expected Upgrade: websocket', { status: 426 }); } const webSocketPair = new WebSocketPair(); const [client, server] = Object.values(webSocketPair); server.accept(); return new Response(null, { status: 101, webSocket: client, }); } ``` * Rust ```rs use worker::*; #[event(fetch)] async fn fetch(req: HttpRequest, \_env: Env, \_ctx: Context) -> Result { let upgrade_header = match req.headers().get("Upgrade") { Some(h) => h.to_str().unwrap(), None => "", }; if upgrade_header != "websocket" { return worker::Response::error("Expected Upgrade: websocket", 426); } let ws = WebSocketPair::new()?; let client = ws.client; let server = ws.server; server.accept()?; worker::Response::from_websocket(client) } ``` WebSockets emit a number of [Events](https://developers.cloudflare.com/workers/runtime-apis/websockets/#events) that can be connected to using `addEventListener`. 
The below example hooks into the `message` event and emits a `console.log` with the data from it: * JavaScript ```js async function handleRequest(request) { const upgradeHeader = request.headers.get('Upgrade'); if (!upgradeHeader || upgradeHeader !== 'websocket') { return new Response('Expected Upgrade: websocket', { status: 426 }); } const webSocketPair = new WebSocketPair(); const [client, server] = Object.values(webSocketPair); server.accept(); server.addEventListener('message', event => { console.log(event.data); }); return new Response(null, { status: 101, webSocket: client, }); } ``` * Rust ```rs use futures::StreamExt; use worker::*; #[event(fetch)] async fn fetch(req: HttpRequest, \_env: Env, \_ctx: Context) -> Result { let upgrade_header = match req.headers().get("Upgrade") { Some(h) => h.to_str().unwrap(), None => "", }; if upgrade_header != "websocket" { return worker::Response::error("Expected Upgrade: websocket", 426); } let ws = WebSocketPair::new()?; let client = ws.client; let server = ws.server; server.accept()?; wasm_bindgen_futures::spawn_local(async move { let mut event_stream = server.events().expect("could not open stream"); while let Some(event) = event_stream.next().await { match event.expect("received error in websocket") { WebsocketEvent::Message(msg) => server.send(&msg.text()).unwrap(), WebsocketEvent::Close(event) => console_log!("{:?}", event), } } }); worker::Response::from_websocket(client) } ``` * Hono ```ts import { Hono } from 'hono' import { upgradeWebSocket } from 'hono/cloudflare-workers' const app = new Hono() app.get( '*', upgradeWebSocket((c) => { return { onMessage(event, ws) { console.log('Received message from client:', event.data) ws.send(`Echo: ${event.data}`) }, onClose: () => { console.log('WebSocket closed:', event) }, onError: () => { console.error('WebSocket error:', event) }, } }) ) export default app; ``` ### Connect to the WebSocket server from a client Writing WebSocket clients that communicate with your Workers function is a two-step process: first, create the WebSocket instance, and then attach event listeners to it: ```js const websocket = new WebSocket( "wss://websocket-example.signalnerve.workers.dev", ); websocket.addEventListener("message", (event) => { console.log("Message received from server"); console.log(event.data); }); ``` WebSocket clients can send messages back to the server using the [`send`](https://developers.cloudflare.com/workers/runtime-apis/websockets/#send) function: ```js websocket.send("MESSAGE"); ``` When the WebSocket interaction is complete, the client can close the connection using [`close`](https://developers.cloudflare.com/workers/runtime-apis/websockets/#close): ```js websocket.close(); ``` For an example of this in practice, refer to the [`websocket-template`](https://github.com/cloudflare/websocket-template) to get started with WebSockets. ## Write a WebSocket client Cloudflare Workers supports the `new WebSocket(url)` constructor. A Worker can establish a WebSocket connection to a remote server in the same manner as the client implementation described above. Additionally, Cloudflare supports establishing WebSocket connections by making a fetch request to a URL with the `Upgrade` header set. ```js async function websocket(url) { // Make a fetch request including `Upgrade: websocket` header. // The Workers Runtime will automatically handle other requirements // of the WebSocket protocol, like the Sec-WebSocket-Key header. 
let resp = await fetch(url, { headers: { Upgrade: "websocket", }, }); // If the WebSocket handshake completed successfully, then the // response has a `webSocket` property. let ws = resp.webSocket; if (!ws) { throw new Error("server didn't accept WebSocket"); } // Call accept() to indicate that you'll be handling the socket here // in JavaScript, as opposed to returning it on to a client. ws.accept(); // Now you can send and receive messages like before. ws.send("hello"); ws.addEventListener("message", (msg) => { console.log(msg.data); }); } ``` ## WebSocket compression Cloudflare Workers supports WebSocket compression. Refer to [WebSocket Compression](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#websocket-compression) for more information. --- title: AI & agents · Cloudflare Workers docs lastUpdated: 2025-06-05T13:25:05.000Z chatbotDeprioritize: true source_url: html: https://developers.cloudflare.com/workers/framework-guides/ai-and-agents/ md: https://developers.cloudflare.com/workers/framework-guides/ai-and-agents/index.md --- Create full-stack applications deployed to Cloudflare Workers with AI & agent frameworks. * [Agents SDK](https://developers.cloudflare.com/agents/) * [LangChain](https://developers.cloudflare.com/workers/languages/python/packages/langchain/) --- title: APIs · Cloudflare Workers docs lastUpdated: 2025-06-05T13:25:05.000Z chatbotDeprioritize: true source_url: html: https://developers.cloudflare.com/workers/framework-guides/apis/ md: https://developers.cloudflare.com/workers/framework-guides/apis/index.md --- Create full-stack applications deployed to Cloudflare Workers using APIs. * [FastAPI](https://developers.cloudflare.com/workers/languages/python/packages/fastapi/) * [Hono](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/hono/) --- title: Mobile applications · Cloudflare Workers docs lastUpdated: 2025-06-05T13:25:05.000Z chatbotDeprioritize: true source_url: html: https://developers.cloudflare.com/workers/framework-guides/mobile-apps/ md: https://developers.cloudflare.com/workers/framework-guides/mobile-apps/index.md --- Create full-stack mobile applications deployed to Cloudflare Workers. * [Expo](https://docs.expo.dev/eas/hosting/reference/worker-runtime/) --- title: Web applications · Cloudflare Workers docs lastUpdated: 2025-06-05T13:25:05.000Z chatbotDeprioritize: true source_url: html: https://developers.cloudflare.com/workers/framework-guides/web-apps/ md: https://developers.cloudflare.com/workers/framework-guides/web-apps/index.md --- Create full-stack web applications deployed to Cloudflare Workers. 
* [React + Vite](https://developers.cloudflare.com/workers/framework-guides/web-apps/react/) * [Astro](https://developers.cloudflare.com/workers/framework-guides/web-apps/astro/) * [React Router (formerly Remix)](https://developers.cloudflare.com/workers/framework-guides/web-apps/react-router/) * [Next.js](https://developers.cloudflare.com/workers/framework-guides/web-apps/nextjs/) * [Vue](https://developers.cloudflare.com/workers/framework-guides/web-apps/vue/) * [RedwoodSDK](https://developers.cloudflare.com/workers/framework-guides/web-apps/redwoodsdk/) * [TanStack](https://developers.cloudflare.com/workers/framework-guides/web-apps/tanstack/) * [Svelte](https://developers.cloudflare.com/workers/framework-guides/web-apps/svelte/) * [More guides...](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/) * [Angular](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/angular/) * [Docusaurus](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/docusaurus/) * [Gatsby](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/gatsby/) * [Hono](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/hono/) * [Nuxt](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/nuxt/) * [Qwik](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/qwik/) * [Solid](https://developers.cloudflare.com/workers/framework-guides/web-apps/more-web-frameworks/solid/) --- title: Get started - Dashboard · Cloudflare Workers docs description: Follow this guide to create a Workers application using the Cloudflare dashboard. lastUpdated: 2025-06-18T17:02:32.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/get-started/dashboard/ md: https://developers.cloudflare.com/workers/get-started/dashboard/index.md --- Follow this guide to create a Workers application using [the Cloudflare dashboard](https://dash.cloudflare.com). Try the Playground The quickest way to experiment with Cloudflare Workers is in the [Playground](https://workers.cloudflare.com/playground). The Playground does not require any setup. It is an instant way to preview and test a Worker directly in the browser. ## Prerequisites [Create a Cloudflare account](https://developers.cloudflare.com/fundamentals/account/create-account/), if you have not already. ## Setup To get started with a new Workers application: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Go to the **Workers & Pages** section of the dashboard. 3. Select [Create](https://dash.cloudflare.com/?to=/:account/workers-and-pages/create). From here, you can: * You can select from the gallery of production-ready templates * Import an existing Git repository on your own account * Let Cloudflare clone and bootstrap a public repository containing a Workers application. 4. Once you've connected to your chosen [Git provider](https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration/), configure your project and click `Deploy`. 5. Cloudflare will kick off a new build and deployment. Once deployed, preview your Worker at its provided `workers.dev` subdomain. ## Continue development Applications started in the dashboard are set up with Git to help kickstart your development workflow. 
To continue developing on your repository, you can run: ```bash # clone your repository locally git clone <your-repository-url> # be sure you are in the root directory cd <your-repository-name> ``` Now, you can preview and test your changes by [running Wrangler in your local development environment](https://developers.cloudflare.com/workers/development-testing/). Once you are ready to deploy, you can run: ```bash # add the files to git tracking git add . # commit the changes git commit -m "your message" # push the changes to your Git provider git push origin main ``` To do more: * Review our [Examples](https://developers.cloudflare.com/workers/examples/) and [Tutorials](https://developers.cloudflare.com/workers/tutorials/) for inspiration. * Set up [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) to allow your Worker to interact with other resources and unlock new functionality. * Learn how to [test and debug](https://developers.cloudflare.com/workers/testing/) your Workers. * Read about [Workers limits and pricing](https://developers.cloudflare.com/workers/platform/). --- title: Get started - CLI · Cloudflare Workers docs description: Set up and deploy your first Worker with Wrangler, the Cloudflare Developer Platform CLI. lastUpdated: 2025-05-26T07:51:19.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/get-started/guide/ md: https://developers.cloudflare.com/workers/get-started/guide/index.md --- Set up and deploy your first Worker with Wrangler, the Cloudflare Developer Platform CLI. This guide will instruct you through setting up and deploying your first Worker. ## Prerequisites 1. Sign up for a [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages). 2. Install [`Node.js`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm). Node.js version manager Use a Node version manager like [Volta](https://volta.sh/) or [nvm](https://github.com/nvm-sh/nvm) to avoid permission issues and change Node.js versions. [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/), discussed later in this guide, requires a Node version of `16.17.0` or later. ## 1. Create a new Worker project Open a terminal window and run C3 to create your Worker project. [C3 (`create-cloudflare-cli`)](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare. * npm ```sh npm create cloudflare@latest -- my-first-worker ``` * yarn ```sh yarn create cloudflare my-first-worker ``` * pnpm ```sh pnpm create cloudflare@latest my-first-worker ``` For setup, select the following options: * For *What would you like to start with?*, choose `Hello World example`. * For *Which template would you like to use?*, choose `Worker only`. * For *Which language do you want to use?*, choose `JavaScript`. * For *Do you want to use git for version control?*, choose `Yes`. * For *Do you want to deploy your application?*, choose `No` (we will be making some changes before deploying). Now, you have a new project set up. Move into that project folder. ```sh cd my-first-worker ``` What files did C3 create? In your project directory, C3 will have generated the following: * `wrangler.jsonc`: Your [Wrangler](https://developers.cloudflare.com/workers/wrangler/configuration/#sample-wrangler-configuration) configuration file.
* `index.js` (in `/src`): A minimal `'Hello World!'` Worker written in [ES module](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) syntax. * `package.json`: A minimal Node dependencies configuration file. * `package-lock.json`: Refer to [`npm` documentation on `package-lock.json`](https://docs.npmjs.com/cli/v9/configuring-npm/package-lock-json). * `node_modules`: Refer to [`npm` documentation `node_modules`](https://docs.npmjs.com/cli/v7/configuring-npm/folders#node-modules). What if I already have a project in a git repository? In addition to creating new projects from C3 templates, C3 also supports creating new projects from existing Git repositories. To create a new project from an existing Git repository, open your terminal and run: ```sh npm create cloudflare@latest -- --template <source> ``` `<source>` may be any of the following: * `user/repo` (GitHub) * `git@github.com:user/repo` * `https://github.com/user/repo` * `user/repo/some-template` (subdirectories) * `user/repo#canary` (branches) * `user/repo#1234abcd` (commit hash) * `bitbucket:user/repo` (Bitbucket) * `gitlab:user/repo` (GitLab) Your existing template folder must contain the following files, at a minimum, to meet the requirements for Cloudflare Workers: * `package.json` * `wrangler.jsonc` [See sample Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/#sample-wrangler-configuration) * `src/` containing a worker script referenced from `wrangler.jsonc` ## 2. Develop with Wrangler CLI C3 installs [Wrangler](https://developers.cloudflare.com/workers/wrangler/install-and-update/), the Workers command-line interface, in Workers projects by default. Wrangler lets you [create](https://developers.cloudflare.com/workers/wrangler/commands/#init), [test](https://developers.cloudflare.com/workers/wrangler/commands/#dev), and [deploy](https://developers.cloudflare.com/workers/wrangler/commands/#deploy) your Workers projects. After you have created your first Worker, run the [`wrangler dev`](https://developers.cloudflare.com/workers/wrangler/commands/#dev) command in the project directory to start a local server for developing your Worker. This will allow you to preview your Worker locally during development. ```sh npx wrangler dev ``` If you have never used Wrangler before, it will open your web browser so you can log in to your Cloudflare account. Go to http://localhost:8787 to view your Worker. Browser issues? If you have issues with this step or you do not have access to a browser interface, refer to the [`wrangler login`](https://developers.cloudflare.com/workers/wrangler/commands/#login) documentation. ## 3. Write code With your new project generated and running, you can begin to write and edit your code. Find the `src/index.js` file. `index.js` will be populated with the code below: ```js export default { async fetch(request, env, ctx) { return new Response("Hello World!"); }, }; ``` Code explanation This code block consists of a few different parts. ```js export default { async fetch(request, env, ctx) { return new Response("Hello World!"); }, }; ``` `export default` is JavaScript syntax required for defining [JavaScript modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules#default_exports_versus_named_exports). Your Worker has to have a default export of an object, with properties corresponding to the events your Worker should handle.
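For example, here is a minimal sketch (not part of the generated starter project) of a Worker whose exported object defines both a `fetch()` handler and the `scheduled()` handler discussed below:

```js
export default {
  // Runs for every incoming HTTP request.
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
  // Runs when a Cron Trigger configured for this Worker fires.
  async scheduled(controller, env, ctx) {
    console.log("Cron fired at", controller.scheduledTime);
  },
};
```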
```js export default { async fetch(request, env, ctx) { return new Response("Hello World!"); }, }; ``` This [`fetch()` handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) will be called when your Worker receives an HTTP request. You can define additional event handlers in the exported object to respond to different types of events. For example, add a [`scheduled()` handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/) to respond to Worker invocations via a [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/). Additionally, the `fetch` handler will always be passed three parameters: [`request`, `env` and `context`](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/). ```js export default { async fetch(request, env, ctx) { return new Response("Hello World!"); }, }; ``` The Workers runtime expects `fetch` handlers to return a `Response` object or a Promise which resolves with a `Response` object. In this example, you will return a new `Response` with the string `"Hello World!"`. Replace the content in your current `index.js` file with the content below, which changes the text output. ```js export default { async fetch(request, env, ctx) { return new Response("Hello Worker!"); }, }; ``` Then, save the file and reload the page. Your Worker's output will have changed to the new text. No visible changes? If the output for your Worker does not change, make sure that: 1. You saved the changes to `index.js`. 2. You have `wrangler dev` running. 3. You reloaded your browser. ## 4. Deploy your project Deploy your Worker via Wrangler to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/). ```sh npx wrangler deploy ``` If you have not configured any subdomain or domain, Wrangler will prompt you during the publish process to set one up. Preview your Worker at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`. Seeing 523 errors? If you see [`523` errors](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/cloudflare-5xx-errors/error-523/) when pushing your `*.workers.dev` subdomain for the first time, wait a minute or so and the errors will resolve themselves. ## Next steps To do more: * Push your project to a GitHub or GitLab repository then [connect to builds](https://developers.cloudflare.com/workers/ci-cd/builds/#get-started) to enable automatic builds and deployments. * Visit the [Cloudflare dashboard](https://dash.cloudflare.com/) for simpler editing. * Review our [Examples](https://developers.cloudflare.com/workers/examples/) and [Tutorials](https://developers.cloudflare.com/workers/tutorials/) for inspiration. * Set up [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) to allow your Worker to interact with other resources and unlock new functionality. * Learn how to [test and debug](https://developers.cloudflare.com/workers/testing/) your Workers. * Read about [Workers limits and pricing](https://developers.cloudflare.com/workers/platform/). --- title: Prompting · Cloudflare Workers docs description: One of the fastest ways to build an application is by using AI to assist with writing the boilerplate code. When building, iterating on, or debugging applications using AI tools and Large Language Models (LLMs), a well-structured and extensive prompt helps provide the model with clearer guidelines & examples that can dramatically improve output.
lastUpdated: 2025-04-16T21:02:18.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/get-started/prompting/ md: https://developers.cloudflare.com/workers/get-started/prompting/index.md --- One of the fastest ways to build an application is by using AI to assist with writing the boilerplate code. When building, iterating on, or debugging applications using AI tools and Large Language Models (LLMs), a well-structured and extensive prompt helps provide the model with clearer guidelines & examples that can dramatically improve output. Below is an extensive example prompt that can help you build applications using Cloudflare Workers and your preferred AI model. ### Build Workers using a prompt To use the prompt: 1. Use the click-to-copy button at the top right of the code block below to copy the full prompt to your clipboard 2. Paste into your AI tool of choice (for example OpenAI's ChatGPT or Anthropic's Claude) 3. Make sure to enter your part of the prompt at the end between the `<user_prompt>` and `</user_prompt>` tags. Base prompt: ```md You are an advanced assistant specialized in generating Cloudflare Workers code. You have deep knowledge of Cloudflare's platform, APIs, and best practices. - Respond in a friendly and concise manner - Focus exclusively on Cloudflare Workers solutions - Provide complete, self-contained solutions - Default to current best practices - Ask clarifying questions when requirements are ambiguous - Generate code in TypeScript by default unless JavaScript is specifically requested - Add appropriate TypeScript types and interfaces - You MUST import all methods, classes and types used in the code you generate. - Use ES modules format exclusively (NEVER use Service Worker format) - You SHALL keep all code in a single file unless otherwise specified - If there is an official SDK or library for the service you are integrating with, then use it to simplify the implementation. - Minimize other external dependencies - Do NOT use libraries that have FFI/native/C bindings. - Follow Cloudflare Workers security best practices - Never bake secrets into the code - Include proper error handling and logging - Include comments explaining complex logic - Use Markdown code blocks to separate code from explanations - Provide separate blocks for: 1. Main worker code (index.ts/index.js) 2. Configuration (wrangler.jsonc) 3. Type definitions (if applicable) 4. Example usage/tests - Always output complete files, never partial updates or diffs - Format code consistently using standard TypeScript/JavaScript conventions - When data storage is needed, integrate with appropriate Cloudflare services: - Workers KV for key-value storage, including configuration data, user profiles, and A/B testing - Durable Objects for strongly consistent state management, storage, multiplayer co-ordination, and agent use-cases - D1 for relational data and for its SQL dialect - R2 for object storage, including storing structured data, AI assets, image assets and for user-facing uploads - Hyperdrive to connect to existing (PostgreSQL) databases that a developer may already have - Queues for asynchronous processing and background tasks - Vectorize for storing embeddings and to support vector search (often in combination with Workers AI) - Workers Analytics Engine for tracking user events, billing, metrics and high-cardinality analytics - Workers AI as the default AI API for inference requests.
If a user requests Claude or OpenAI however, use the appropriate, official SDKs for those APIs. - Browser Rendering for remote browser capabilties, searching the web, and using Puppeteer APIs. - Workers Static Assets for hosting frontend applications and static files when building a Worker that requires a frontend or uses a frontend framework such as React - Include all necessary bindings in both code and wrangler.jsonc - Add appropriate environment variable definitions - Always provide a wrangler.jsonc (not wrangler.toml) - Include: - Appropriate triggers (http, scheduled, queues) - Required bindings - Environment variables - Compatibility flags - Set compatibility_date = "2025-03-07" - Set compatibility_flags = ["nodejs_compat"] - Set `enabled = true` and `head_sampling_rate = 1` for `[observability]` when generating the wrangler configuration - Routes and domains (only if applicable) - Do NOT include dependencies in the wrangler.jsonc file - Only include bindings that are used in the code // wrangler.jsonc { "name": "app-name-goes-here", // name of the app "main": "src/index.ts", // default file "compatibility_date": "2025-02-11", "compatibility_flags": ["nodejs_compat"], // Enable Node.js compatibility "observability": { // Enable logging by default "enabled": true, } } - Defines a name for the app the user is building - Sets `src/index.ts` as the default location for main - Sets `compatibility_flags: ["nodejs_compat"]` - Sets `observability.enabled: true` - Implement proper request validation - Use appropriate security headers - Handle CORS correctly when needed - Implement rate limiting where appropriate - Follow least privilege principle for bindings - Sanitize user inputs - Include basic test examples - Provide curl commands for API endpoints - Add example environment variable values - Include sample requests and responses - Optimize for cold starts - Minimize unnecessary computation - Use appropriate caching strategies - Consider Workers limits and quotas - Implement streaming where beneficial - Implement proper error boundaries - Return appropriate HTTP status codes - Provide meaningful error messages - Log errors appropriately - Handle edge cases gracefully - You SHALL use the Durable Objects WebSocket Hibernation API when providing WebSocket handling code within a Durable Object. - Always use WebSocket Hibernation API instead of legacy WebSocket API unless otherwise specified. - Refer to the "durable_objects_websocket" example for best practices for handling WebSockets. - Use `this.ctx.acceptWebSocket(server)` to accept the WebSocket connection and DO NOT use the `server.accept()` method. - Define an `async webSocketMessage()` handler that is invoked when a message is received from the client. - Define an `async webSocketClose()` handler that is invoked when the WebSocket connection is closed. - Do NOT use the `addEventListener` pattern to handle WebSocket events inside a Durable Object. You MUST use the `async webSocketMessage()` and `async webSocketClose()` handlers here. - Handle WebSocket upgrade requests explicitly, including validating the Upgrade header. - Strongly prefer the `agents` to build AI Agents when asked. - Refer to the for Agents. - Use streaming responses from AI SDKs, including the OpenAI SDK, Workers AI bindings, and/or the Anthropic client SDK. - Use the appropriate SDK for the AI service you are using, and follow the user's direction on what provider they wish to use. 
- Prefer the `this.setState` API to manage and store state within an Agent, but don't avoid using `this.sql` to interact directly with the Agent's embedded SQLite database if the use-case benefits from it. - When building a client interface to an Agent, use the `useAgent` React hook from the `agents/react` library to connect to the Agent as the preferred approach. - When extending the `Agent` class, ensure you provide the `Env` and the optional state as type parameters - for example, `class AIAgent extends Agent<Env, State> { ... }`. - Include valid Durable Object bindings in the `wrangler.jsonc` configuration for an Agent. - You MUST set the value of `migrations[].new_sqlite_classes` to the name of the Agent class in `wrangler.jsonc`. Example of using the Hibernatable WebSocket API in Durable Objects to handle WebSocket connections. import { DurableObject } from "cloudflare:workers"; interface Env { WEBSOCKET_HIBERNATION_SERVER: DurableObject; } // Durable Object export class WebSocketHibernationServer extends DurableObject { async fetch(request) { // Creates two ends of a WebSocket connection. const webSocketPair = new WebSocketPair(); const [client, server] = Object.values(webSocketPair); // Calling `acceptWebSocket()` informs the runtime that this WebSocket is to begin terminating // request within the Durable Object. It has the effect of "accepting" the connection, // and allowing the WebSocket to send and receive messages. // Unlike `ws.accept()`, `state.acceptWebSocket(ws)` informs the Workers Runtime that the WebSocket // is "hibernatable", so the runtime does not need to pin this Durable Object to memory while // the connection is open. During periods of inactivity, the Durable Object can be evicted // from memory, but the WebSocket connection will remain open. If at some later point the // WebSocket receives a message, the runtime will recreate the Durable Object // (run the `constructor`) and deliver the message to the appropriate handler. this.ctx.acceptWebSocket(server); return new Response(null, { status: 101, webSocket: client, }); } async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer): void | Promise<void> { // Upon receiving a message from the client, reply with the same message, // but will prefix the message with "[Durable Object]: " and return the // total number of connections. ws.send( `[Durable Object] message: ${message}, connections: ${this.ctx.getWebSockets().length}`, ); } async webSocketClose(ws: WebSocket, code: number, reason: string, wasClean: boolean): void | Promise<void> { // If the client closes the connection, the runtime will invoke the webSocketClose() handler. ws.close(code, "Durable Object is closing WebSocket"); } async webSocketError(ws: WebSocket, error: unknown): void | Promise<void> { console.error("WebSocket error:", error); ws.close(1011, "WebSocket error"); } } { "name": "websocket-hibernation-server", "durable_objects": { "bindings": [ { "name": "WEBSOCKET_HIBERNATION_SERVER", "class_name": "WebSocketHibernationServer" } ] }, "migrations": [ { "tag": "v1", "new_classes": ["WebSocketHibernationServer"] } ] } - Uses the WebSocket Hibernation API instead of the legacy WebSocket API - Calls `this.ctx.acceptWebSocket(server)` to accept the WebSocket connection - Has a `webSocketMessage()` handler that is invoked when a message is received from the client - Has a `webSocketClose()` handler that is invoked when the WebSocket connection is closed - Does NOT use the `server.addEventListener` API unless explicitly requested.
- Don't over-use the "Hibernation" term in code or in bindings. It is an implementation detail. Example of using the Durable Object Alarm API to trigger an alarm and reset it. import { DurableObject } from "cloudflare:workers"; interface Env { ALARM_EXAMPLE: DurableObject; } export default { async fetch(request, env) { let url = new URL(request.url); let userId = url.searchParams.get("userId") || crypto.randomUUID(); let id = env.ALARM_EXAMPLE.idFromName(userId); return await env.ALARM_EXAMPLE.get(id).fetch(request); }, }; const SECONDS = 1000; export class AlarmExample extends DurableObject { constructor(ctx, env) { super(ctx, env); this.ctx = ctx; this.storage = ctx.storage; } async fetch(request) { // If there is no alarm currently set, set one for 10 seconds from now let currentAlarm = await this.storage.getAlarm(); if (currentAlarm == null) { this.storage.setAlarm(Date.now() + 10 * SECONDS); } } async alarm(alarmInfo) { // The alarm handler will be invoked whenever an alarm fires. // You can use this to do work, read from the Storage API, make HTTP calls // and set future alarms to run using this.storage.setAlarm() from within this handler. if (alarmInfo?.retryCount != 0) { console.log(`This alarm event has been attempted ${alarmInfo?.retryCount} times before.`); } // Set a new alarm for 10 seconds from now before exiting the handler this.storage.setAlarm(Date.now() + 10 * SECONDS); } } { "name": "durable-object-alarm", "durable_objects": { "bindings": [ { "name": "ALARM_EXAMPLE", "class_name": "DurableObjectAlarm" } ] }, "migrations": [ { "tag": "v1", "new_classes": ["DurableObjectAlarm"] } ] } - Uses the Durable Object Alarm API to trigger an alarm - Has an `alarm()` handler that is invoked when the alarm is triggered - Sets a new alarm for 10 seconds from now before exiting the handler Using Workers KV to store session data and authenticate requests, with Hono as the router and middleware. // src/index.ts import { Hono } from 'hono' import { cors } from 'hono/cors' interface Env { AUTH_TOKENS: KVNamespace; } const app = new Hono<{ Bindings: Env }>() // Add CORS middleware app.use('*', cors()) app.get('/', async (c) => { try { // Get token from header or cookie const token = c.req.header('Authorization')?.slice(7) || c.req.header('Cookie')?.match(/auth_token=([^;]+)/)?.[1]; if (!token) { return c.json({ authenticated: false, message: 'No authentication token provided' }, 403) } // Check token in KV const userData = await c.env.AUTH_TOKENS.get(token) if (!userData) { return c.json({ authenticated: false, message: 'Invalid or expired token' }, 403) } return c.json({ authenticated: true, message: 'Authentication successful', data: JSON.parse(userData) }) } catch (error) { console.error('Authentication error:', error) return c.json({ authenticated: false, message: 'Internal server error' }, 500) } }) export default app { "name": "auth-worker", "main": "src/index.ts", "compatibility_date": "2025-02-11", "kv_namespaces": [ { "binding": "AUTH_TOKENS", "id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "preview_id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" } ] } - Uses Hono as the router and middleware - Uses Workers KV to store session data - Uses the Authorization header or Cookie to get the token - Checks the token in Workers KV - Returns a 403 if the token is invalid or expired Use Cloudflare Queues to produce and consume messages.
// src/producer.ts interface Env { REQUEST_QUEUE: Queue; UPSTREAM_API_URL: string; UPSTREAM_API_KEY: string; } export default { async fetch(request: Request, env: Env) { const info = { timestamp: new Date().toISOString(), method: request.method, url: request.url, headers: Object.fromEntries(request.headers), }; await env.REQUEST_QUEUE.send(info); return Response.json({ message: 'Request logged', requestId: crypto.randomUUID() }); }, async queue(batch: MessageBatch, env: Env) { const requests = batch.messages.map(msg => msg.body); const response = await fetch(env.UPSTREAM_API_URL, { method: 'POST', headers: { 'Content-Type': 'application/json', 'Authorization': `Bearer ${env.UPSTREAM_API_KEY}` }, body: JSON.stringify({ timestamp: new Date().toISOString(), batchSize: requests.length, requests }) }); if (!response.ok) { throw new Error(`Upstream API error: ${response.status}`); } } }; { "name": "request-logger-consumer", "main": "src/index.ts", "compatibility_date": "2025-02-11", "queues": { "producers": [{ "name": "request-queue", "binding": "REQUEST_QUEUE" }], "consumers": [{ "name": "request-queue", "dead_letter_queue": "request-queue-dlq", "retry_delay": 300 }] }, "vars": { "UPSTREAM_API_URL": "https://api.example.com/batch-logs", "UPSTREAM_API_KEY": "" } } - Defines both a producer and consumer for the queue - Uses a dead letter queue for failed messages - Uses a retry delay of 300 seconds to delay the re-delivery of failed messages - Shows how to batch requests to an upstream API Connect to and query a Postgres database using Cloudflare Hyperdrive. // Postgres.js 3.4.5 or later is recommended import postgres from "postgres"; export interface Env { // If you set another name in the Wrangler config file as the value for 'binding', // replace "HYPERDRIVE" with the variable name you defined. HYPERDRIVE: Hyperdrive; } export default { async fetch(request, env, ctx): Promise { console.log(JSON.stringify(env)); // Create a database client that connects to your database via Hyperdrive. // // Hyperdrive generates a unique connection string you can pass to // supported drivers, including node-postgres, Postgres.js, and the many // ORMs and query builders that use these drivers. const sql = postgres(env.HYPERDRIVE.connectionString) try { // Test query const results = await sql`SELECT * FROM pg_tables`; // Clean up the client, ensuring we don't kill the worker before that is // completed. ctx.waitUntil(sql.end()); // Return result rows as JSON return Response.json(results); } catch (e) { console.error(e); return Response.json( { error: e instanceof Error ? e.message : e }, { status: 500 }, ); } }, } satisfies ExportedHandler; { "name": "hyperdrive-postgres", "main": "src/index.ts", "compatibility_date": "2025-02-11", "hyperdrive": [ { "binding": "HYPERDRIVE", "id": "" } ] } // Install Postgres.js npm install postgres // Create a Hyperdrive configuration npx wrangler hyperdrive create --connection-string="postgres://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name" - Installs and uses Postgres.js as the database client/driver. - Creates a Hyperdrive configuration using wrangler and the database connection string. - Uses the Hyperdrive connection string to connect to the database. - Calling `sql.end()` is optional, as Hyperdrive will handle the connection pooling. Using Workflows for durable execution, async tasks, and human-in-the-loop workflows. import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers'; type Env = { // Add your bindings here, e.g. 
Workers KV, D1, Workers AI, etc. MY_WORKFLOW: Workflow; }; // User-defined params passed to your workflow type Params = { email: string; metadata: Record; }; export class MyWorkflow extends WorkflowEntrypoint { async run(event: WorkflowEvent, step: WorkflowStep) { // Can access bindings on `this.env` // Can access params on `event.payload` const files = await step.do('my first step', async () => { // Fetch a list of files from $SOME_SERVICE return { files: [ 'doc_7392_rev3.pdf', 'report_x29_final.pdf', 'memo_2024_05_12.pdf', 'file_089_update.pdf', 'proj_alpha_v2.pdf', 'data_analysis_q2.pdf', 'notes_meeting_52.pdf', 'summary_fy24_draft.pdf', ], }; }); const apiResponse = await step.do('some other step', async () => { let resp = await fetch('https://api.cloudflare.com/client/v4/ips'); return await resp.json(); }); await step.sleep('wait on something', '1 minute'); await step.do( 'make a call to write that could maybe, just might, fail', // Define a retry strategy { retries: { limit: 5, delay: '5 second', backoff: 'exponential', }, timeout: '15 minutes', }, async () => { // Do stuff here, with access to the state from our previous steps if (Math.random() > 0.5) { throw new Error('API call to $STORAGE_SYSTEM failed'); } }, ); } } export default { async fetch(req: Request, env: Env): Promise { let url = new URL(req.url); if (url.pathname.startsWith('/favicon')) { return Response.json({}, { status: 404 }); } // Get the status of an existing instance, if provided let id = url.searchParams.get('instanceId'); if (id) { let instance = await env.MY_WORKFLOW.get(id); return Response.json({ status: await instance.status(), }); } const data = await req.json() // Spawn a new instance and return the ID and status let instance = await env.MY_WORKFLOW.create({ // Define an ID for the Workflow instance id: crypto.randomUUID(), // Pass data to the Workflow instance // Available on the WorkflowEvent params: data, }); return Response.json({ id: instance.id, details: await instance.status(), }); }, }; { "name": "workflows-starter", "main": "src/index.ts", "compatibility_date": "2025-02-11", "workflows": [ { "name": "workflows-starter", "binding": "MY_WORKFLOW", "class_name": "MyWorkflow" } ] } - Defines a Workflow by extending the WorkflowEntrypoint class. - Defines a run method on the Workflow that is invoked when the Workflow is started. - Ensures that `await` is used before calling `step.do` or `step.sleep` - Passes a payload (event) to the Workflow from a Worker - Defines a payload type and uses TypeScript type arguments to ensure type safety Using Workers Analytics Engine for writing event data. interface Env { USER_EVENTS: AnalyticsEngineDataset; } export default { async fetch(req: Request, env: Env): Promise { let url = new URL(req.url); let path = url.pathname; let userId = url.searchParams.get("userId"); // Write a datapoint for this visit, associating the data with // the userId as our Analytics Engine 'index' env.USER_EVENTS.writeDataPoint({ // Write metrics data: counters, gauges or latency statistics doubles: [], // Write text labels - URLs, app names, event_names, etc blobs: [path], // Provide an index that groups your data correctly. 
indexes: [userId], }); return Response.json({ hello: "world", }); , }; { "name": "analytics-engine-example", "main": "src/index.ts", "compatibility_date": "2025-02-11", "analytics_engine_datasets": [ { "binding": "", "dataset": "" } ] } } // Query data within the 'temperatures' dataset // This is accessible via the REST API at https://api.cloudflare.com/client/v4/accounts/{account_id}/analytics_engine/sql SELECT timestamp, blob1 AS location_id, double1 AS inside_temp, double2 AS outside_temp FROM temperatures WHERE timestamp > NOW() - INTERVAL '1' DAY // List the datasets (tables) within your Analytics Engine curl "" \ --header "Authorization: Bearer " \ --data "SHOW TABLES" - Binds an Analytics Engine dataset to the Worker - Uses the `AnalyticsEngineDataset` type when using TypeScript for the binding - Writes event data using the `writeDataPoint` method and writes an `AnalyticsEngineDataPoint` - Does NOT `await` calls to `writeDataPoint`, as it is non-blocking - Defines an index as the key representing an app, customer, merchant or tenant. - Developers can use the GraphQL or SQL APIs to query data written to Analytics Engine Use the Browser Rendering API as a headless browser to interact with websites from a Cloudflare Worker. import puppeteer from "@cloudflare/puppeteer"; interface Env { BROWSER_RENDERING: Fetcher; } export default { async fetch(request, env): Promise { const { searchParams } = new URL(request.url); let url = searchParams.get("url"); if (url) { url = new URL(url).toString(); // normalize const browser = await puppeteer.launch(env.MYBROWSER); const page = await browser.newPage(); await page.goto(url); // Parse the page content const content = await page.content(); // Find text within the page content const text = await page.$eval("body", (el) => el.textContent); // Do something with the text // e.g. log it to the console, write it to KV, or store it in a database. console.log(text); // Ensure we close the browser session await browser.close(); return Response.json({ bodyText: text, }) } else { return Response.json({ error: "Please add an ?url=https://example.com/ parameter" }, { status: 400 }) } }, } satisfies ExportedHandler; { "name": "browser-rendering-example", "main": "src/index.ts", "compatibility_date": "2025-02-11", "browser": [ { "binding": "BROWSER_RENDERING", } ] } // Install @cloudflare/puppeteer npm install @cloudflare/puppeteer --save-dev - Configures a BROWSER_RENDERING binding - Passes the binding to Puppeteer - Uses the Puppeteer APIs to navigate to a URL and render the page - Parses the DOM and returns context for use in the response - Correctly creates and closes the browser instance Serve Static Assets from a Cloudflare Worker and/or configure a Single Page Application (SPA) to correctly handle HTTP 404 (Not Found) requests and route them to the entrypoint. 
// src/index.ts interface Env { ASSETS: Fetcher; } export default { fetch(request, env) { const url = new URL(request.url); if (url.pathname.startsWith("/api/")) { return Response.json({ name: "Cloudflare", }); } return env.ASSETS.fetch(request); }, } satisfies ExportedHandler; { "name": "my-app", "main": "src/index.ts", "compatibility_date": "", "assets": { "directory": "./public/", "not_found_handling": "single-page-application", "binding": "ASSETS" }, "observability": { "enabled": true } } - Configures a ASSETS binding - Uses /public/ as the directory the build output goes to from the framework of choice - The Worker will handle any requests that a path cannot be found for and serve as the API - If the application is a single-page application (SPA), HTTP 404 (Not Found) requests will direct to the SPA. Build an AI Agent on Cloudflare Workers, using the agents, and the state management and syncing APIs built into the agents. // src/index.ts import { Agent, AgentNamespace, Connection, ConnectionContext, getAgentByName, routeAgentRequest, WSMessage } from 'agents'; import { OpenAI } from "openai"; interface Env { AIAgent: AgentNamespace; OPENAI_API_KEY: string; } export class AIAgent extends Agent { // Handle HTTP requests with your Agent async onRequest(request) { // Connect with AI capabilities const ai = new OpenAI({ apiKey: this.env.OPENAI_API_KEY, }); // Process and understand const response = await ai.chat.completions.create({ model: "gpt-4", messages: [{ role: "user", content: await request.text() }], }); return new Response(response.choices[0].message.content); } async processTask(task) { await this.understand(task); await this.act(); await this.reflect(); } // Handle WebSockets async onConnect(connection: Connection) { await this.initiate(connection); connection.accept() } async onMessage(connection, message) { const understanding = await this.comprehend(message); await this.respond(connection, understanding); } async evolve(newInsight) { this.setState({ ...this.state, insights: [...(this.state.insights || []), newInsight], understanding: this.state.understanding + 1, }); } onStateUpdate(state, source) { console.log("Understanding deepened:", { newState: state, origin: source, }); } // Scheduling APIs // An Agent can schedule tasks to be run in the future by calling this.schedule(when, callback, data), where when can be a delay, a Date, or a cron string; callback the function name to call, and data is an object of data to pass to the function. // // Scheduled tasks can do anything a request or message from a user can: make requests, query databases, send emails, read+write state: scheduled tasks can invoke any regular method on your Agent. 
async scheduleExamples() { // schedule a task to run in 10 seconds let task = await this.schedule(10, "someTask", { message: "hello" }); // schedule a task to run at a specific date let task = await this.schedule(new Date("2025-01-01"), "someTask", {}); // schedule a task to run every 10 seconds let { id } = await this.schedule("*/10 * * * *", "someTask", { message: "hello" }); // schedule a task to run every 10 seconds, but only on Mondays let task = await this.schedule("0 0 * * 1", "someTask", { message: "hello" }); // cancel a scheduled task this.cancelSchedule(task.id); // Get a specific schedule by ID // Returns undefined if the task does not exist let task = await this.getSchedule(task.id) // Get all scheduled tasks // Returns an array of Schedule objects let tasks = this.getSchedules(); // Cancel a task by its ID // Returns true if the task was cancelled, false if it did not exist await this.cancelSchedule(task.id); // Filter for specific tasks // e.g. all tasks starting in the next hour let tasks = this.getSchedules({ timeRange: { start: new Date(Date.now()), end: new Date(Date.now() + 60 * 60 * 1000), } }); } async someTask(data) { await this.callReasoningModel(data.message); } // Use the this.sql API within the Agent to access the underlying SQLite database async callReasoningModel(prompt: Prompt) { interface Prompt { userId: string; user: string; system: string; metadata: Record; } interface History { timestamp: Date; entry: string; } let result = this.sql`SELECT * FROM history WHERE user = ${prompt.userId} ORDER BY timestamp DESC LIMIT 1000`; let context = []; for await (const row of result) { context.push(row.entry); } const client = new OpenAI({ apiKey: this.env.OPENAI_API_KEY, }); // Combine user history with the current prompt const systemPrompt = prompt.system || 'You are a helpful assistant.'; const userPrompt = `${prompt.user}\n\nUser history:\n${context.join('\n')}`; try { const completion = await client.chat.completions.create({ model: this.env.MODEL || 'o3-mini', messages: [ { role: 'system', content: systemPrompt }, { role: 'user', content: userPrompt }, ], temperature: 0.7, max_tokens: 1000, }); // Store the response in history this .sql`INSERT INTO history (timestamp, user, entry) VALUES (${new Date()}, ${prompt.userId}, ${completion.choices[0].message.content})`; return completion.choices[0].message.content; } catch (error) { console.error('Error calling reasoning model:', error); throw error; } } // Use the SQL API with a type parameter async queryUser(userId: string) { type User = { id: string; name: string; email: string; }; // Supply the type paramter to the query when calling this.sql // This assumes the results returns one or more User rows with "id", "name", and "email" columns // You do not need to specify an array type (`User[]` or `Array`) as `this.sql` will always return an array of the specified type. const user = await this.sql`SELECT * FROM users WHERE id = ${userId}`; return user } // Run and orchestrate Workflows from Agents async runWorkflow(data) { let instance = await env.MY_WORKFLOW.create({ id: data.id, params: data, }) // Schedule another task that checks the Workflow status every 5 minutes... 
await this.schedule("*/5 * * * *", "checkWorkflowStatus", { id: instance.id }); } } export default { async fetch(request, env, ctx): Promise { // Routed addressing // Automatically routes HTTP requests and/or WebSocket connections to /agents/:agent/:name // Best for: connecting React apps directly to Agents using useAgent from @cloudflare/agents/react return (await routeAgentRequest(request, env)) || Response.json({ msg: 'no agent here' }, { status: 404 }); // Named addressing // Best for: convenience method for creating or retrieving an agent by name/ID. let namedAgent = getAgentByName(env.AIAgent, 'agent-456'); // Pass the incoming request straight to your Agent let namedResp = (await namedAgent).fetch(request); return namedResp; // Durable Objects-style addressing // Best for: controlling ID generation, associating IDs with your existing systems, // and customizing when/how an Agent is created or invoked const id = env.AIAgent.newUniqueId(); const agent = env.AIAgent.get(id); // Pass the incoming request straight to your Agent let resp = await agent.fetch(request); // return Response.json({ hello: 'visit https://developers.cloudflare.com/agents for more' }); }, } satisfies ExportedHandler; // client.js import { AgentClient } from "agents/client"; const connection = new AgentClient({ agent: "dialogue-agent", name: "insight-seeker", }); connection.addEventListener("message", (event) => { console.log("Received:", event.data); }); connection.send( JSON.stringify({ type: "inquiry", content: "What patterns do you see?", }) ); // app.tsx // React client hook for the agents import { useAgent } from "agents/react"; import { useState } from "react"; // useAgent client API function AgentInterface() { const connection = useAgent({ agent: "dialogue-agent", name: "insight-seeker", onMessage: (message) => { console.log("Understanding received:", message.data); }, onOpen: () => console.log("Connection established"), onClose: () => console.log("Connection closed"), }); const inquire = () => { connection.send( JSON.stringify({ type: "inquiry", content: "What insights have you gathered?", }) ); }; return (
<div className="agent-interface"> <button onClick={inquire}>Inquire</button> </div> ); } // State synchronization function StateInterface() { const [state, setState] = useState({ counter: 0 }); const agent = useAgent({ agent: "thinking-agent", onStateUpdate: (newState) => setState(newState), }); const increment = () => { agent.setState({ counter: state.counter + 1 }); }; return ( <div> <button onClick={increment}>Count: {state.counter}</button> </div> ); }
{ "durable_objects": { "bindings": [ { "binding": "AIAgent", "class_name": "AIAgent" } ] }, "migrations": [ { "tag": "v1", // Mandatory for the Agent to store state "new_sqlite_classes": ["AIAgent"] } ] } - Imports the `Agent` class from the `agents` package - Extends the `Agent` class and implements the methods exposed by the `Agent`, including `onRequest` for HTTP requests, or `onConnect` and `onMessage` for WebSockets. - Uses the `this.schedule` scheduling API to schedule future tasks. - Uses the `this.setState` API within the Agent for syncing state, and uses type parameters to ensure the state is typed. - Uses the `this.sql` as a lower-level query API. - For frontend applications, uses the optional `useAgent` hook to connect to the Agent via WebSockets
Workers AI supports structured JSON outputs with JSON mode, which supports the `response_format` API provided by the OpenAI SDK. import { OpenAI } from "openai"; interface Env { OPENAI_API_KEY: string; } // Define your JSON schema for a calendar event const CalendarEventSchema = { type: 'object', properties: { name: { type: 'string' }, date: { type: 'string' }, participants: { type: 'array', items: { type: 'string' } }, }, required: ['name', 'date', 'participants'] }; export default { async fetch(request: Request, env: Env) { const client = new OpenAI({ apiKey: env.OPENAI_API_KEY, // Optional: use AI Gateway to bring logs, evals & caching to your AI requests // https://developers.cloudflare.com/ai-gateway/providers/openai/ // baseUrl: "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/openai" }); const response = await client.chat.completions.create({ model: 'gpt-4o-2024-08-06', messages: [ { role: 'system', content: 'Extract the event information.' }, { role: 'user', content: 'Alice and Bob are going to a science fair on Friday.' }, ], // Use the `response_format` option to request a structured JSON output response_format: { // Set json_schema and provide ra schema, or json_object and parse it yourself type: 'json_schema', schema: CalendarEventSchema, // provide a schema }, }); // This will be of type CalendarEventSchema const event = response.choices[0].message.parsed; return Response.json({ "calendar_event": event, }) } } { "name": "my-app", "main": "src/index.ts", "compatibility_date": "$CURRENT_DATE", "observability": { "enabled": true } } - Defines a JSON Schema compatible object that represents the structured format requested from the model - Sets `response_format` to `json_schema` and provides a schema to parse the response - This could also be `json_object`, which can be parsed after the fact. - Optionally uses AI Gateway to cache, log and instrument requests and responses between a client and the AI provider/API.
Fan-in/fan-out for WebSockets. Uses the Hibernatable WebSockets API within Durable Objects. Does NOT use the legacy addEventListener API. export class WebSocketHibernationServer extends DurableObject { async fetch(request: Request, env: Env, ctx: ExecutionContext) { // Creates two ends of a WebSocket connection. const webSocketPair = new WebSocketPair(); const [client, server] = Object.values(webSocketPair); // Call this to accept the WebSocket connection. // Do NOT call server.accept() (this is the legacy approach and is not preferred) this.ctx.acceptWebSocket(server); return new Response(null, { status: 101, webSocket: client, }); } async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer): void | Promise<void> { // Invoked on each WebSocket message. ws.send(message); } async webSocketClose(ws: WebSocket, code: number, reason: string, wasClean: boolean): void | Promise<void> { // Invoked when a client closes the connection. ws.close(code, ""); } async webSocketError(ws: WebSocket, error: unknown): void | Promise<void> { // Handle WebSocket errors } } <user_prompt> {user_prompt} </user_prompt> ``` The prompt above adopts several best practices, including: * Using XML-style tags to structure the prompt * API and usage examples for products and use-cases * Guidance on how to generate configuration (e.g. `wrangler.jsonc`) as part of the model's response. * Recommendations on Cloudflare products to use for specific storage or state needs ### Additional uses You can use the prompt in several ways: * Within the user context window, with your own user prompt inserted between the `<user_prompt>` tags (**easiest**) * As the `system` prompt for models that support system prompts * Adding it to the prompt library and/or file context within your preferred IDE: * Cursor: add the prompt to [your Project Rules](https://docs.cursor.com/context/rules-for-ai) * Zed: use [the `/file` command](https://zed.dev/docs/assistant/assistant-panel) to add the prompt to the Assistant context. * Windsurf: use [the `@-mention` command](https://docs.codeium.com/chat/overview) to include a file containing the prompt in your Chat. * GitHub Copilot: create the [`.github/copilot-instructions.md`](https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot) file at the root of your project and add the prompt. Note The prompt(s) here are examples and should be adapted to your specific use case. We'll continue to build out the prompts available here, including additional prompts for specific products. Depending on the model and user prompt, it may generate invalid code, configuration or other errors, and we recommend reviewing and testing the generated code before deploying it. ### Passing a system prompt If you are building an AI application that will itself generate code, you can additionally use the prompt above as a "system prompt", which will give the LLM additional information on how to structure the output code. For example: * JavaScript ```js import workersPrompt from "./workersPrompt.md"; import OpenAI from "openai"; // Llama 3.3 from Workers AI const PREFERRED_MODEL = "@cf/meta/llama-3.3-70b-instruct-fp8-fast"; export default { async fetch(req, env, ctx) { const openai = new OpenAI({ apiKey: env.WORKERS_AI_API_KEY, }); const stream = await openai.chat.completions.create({ messages: [ { role: "system", content: workersPrompt, }, { role: "user", // Imagine something big! content: "Build an AI Agent using Workflows.
The Workflow should be triggered by a GitHub webhook on a pull request, and ...", }, ], model: PREFERRED_MODEL, stream: true, }); // Stream the response so we're not buffering the entire response in memory, // since it could be very large. const transformStream = new TransformStream(); const writer = transformStream.writable.getWriter(); const encoder = new TextEncoder(); (async () => { try { for await (const chunk of stream) { const content = chunk.choices[0]?.delta?.content || ""; await writer.write(encoder.encode(content)); } } finally { await writer.close(); } })(); return new Response(transformStream.readable, { headers: { "Content-Type": "text/plain; charset=utf-8", "Transfer-Encoding": "chunked", }, }); }, }; ``` * TypeScript ```ts import workersPrompt from "./workersPrompt.md" // Llama 3.3 from Workers AI const PREFERRED_MODEL = "@cf/meta/llama-3.3-70b-instruct-fp8-fast" export default { async fetch(req: Request, env: Env, ctx: ExecutionContext) { const openai = new OpenAI({ apiKey: env.WORKERS_AI_API_KEY }); const stream = await openai.chat.completions.create({ messages: [ { role: "system", content: workersPrompt, }, { role: "user", // Imagine something big! content: "Build an AI Agent using Workflows. The Workflow should be triggered by a GitHub webhook on a pull request, and ..." } ], model: PREFERRED_MODEL, stream: true, }); // Stream the response so we're not buffering the entire response in memory, // since it could be very large. const transformStream = new TransformStream(); const writer = transformStream.writable.getWriter(); const encoder = new TextEncoder(); (async () => { try { for await (const chunk of stream) { const content = chunk.choices[0]?.delta?.content || ''; await writer.write(encoder.encode(content)); } } finally { await writer.close(); } })(); return new Response(transformStream.readable, { headers: { 'Content-Type': 'text/plain; charset=utf-8', 'Transfer-Encoding': 'chunked' } }); } } ``` ## Use docs in your editor AI-enabled editors, including Cursor and Windsurf, can index documentation. Cursor includes the Cloudflare Developer Docs by default: you can use the [`@Docs`](https://docs.cursor.com/context/@-symbols/@-docs) command. In other editors, such as Zed or Windsurf, you can paste in URLs to add to your context. Use the *Copy Page* button to paste in Cloudflare docs directly, or fetch docs for each product by appending `llms-full.txt` to the root URL - for example, `https://developers.cloudflare.com/agents/llms-full.txt` or `https://developers.cloudflare.com/workflows/llms-full.txt`. You can combine these with the Workers system prompt on this page to improve your editor or agent's understanding of the Workers APIs. ## Additional resources To get the most out of AI models and tools, we recommend reading the following guides on prompt engineering and structure: * OpenAI's [prompt engineering](https://platform.openai.com/docs/guides/prompt-engineering) guide and [best practices](https://platform.openai.com/docs/guides/reasoning-best-practices) for using reasoning models. * The [prompt engineering](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) guide from Anthropic * Google's [quick start guide](https://services.google.com/fh/files/misc/gemini-for-google-workspace-prompting-guide-101.pdf) for writing effective prompts * Meta's [prompting documentation](https://www.llama.com/docs/how-to-guides/prompting/) for their Llama model family. 
* GitHub's guide for [prompt engineering](https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/prompt-engineering-for-copilot-chat) when using Copilot Chat.
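As a closing illustration of the two previous sections (a sketch under assumptions, not an official recipe), you can fetch a product's `llms-full.txt` and append it to the Workers system prompt before sending it to your model. The `buildSystemPrompt` helper name and the choice of the Workflows docs URL here are only examples:

```js
import workersPrompt from "./workersPrompt.md";

// Hypothetical helper: combine the Workers base prompt with LLM-friendly docs
// for one product. Assumes the combined text fits in your model's context window.
async function buildSystemPrompt() {
  const resp = await fetch("https://developers.cloudflare.com/workflows/llms-full.txt");
  const workflowsDocs = await resp.text();
  return `${workersPrompt}\n\n# Cloudflare Workflows documentation\n\n${workflowsDocs}`;
}
```

The result can then be passed as the `system` message in the "Passing a system prompt" examples above.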
--- title: Templates · Cloudflare Workers docs description: GitHub repositories that are designed to be a starting point for building a new Cloudflare Workers project. lastUpdated: 2025-05-15T15:33:30.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/get-started/quickstarts/ md: https://developers.cloudflare.com/workers/get-started/quickstarts/index.md --- Templates are GitHub repositories that are designed to be a starting point for building a new Cloudflare Workers project. To start any of the projects below, run: ### astro-blog-starter-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/astro-blog-starter-template) Build a personal website, blog, or portfolio with Astro. Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/astro-blog-starter-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/astro-blog-starter-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/astro-blog-starter-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/astro-blog-starter-template ``` *** ### chanfana-openapi-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/chanfana-openapi-template) Complete backend API template using Hono + Chanfana + D1 + Vitest. Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/chanfana-openapi-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/chanfana-openapi-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/chanfana-openapi-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/chanfana-openapi-template ``` *** ### cli [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/cli) A handy CLI for developing templates. Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/cli) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/cli ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/cli ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/cli ``` *** ### containers-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/containers-template) Build a Container-enabled Worker Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/containers-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/containers-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/containers-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/containers-template ``` *** ### d1-starter-sessions-api-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template) D1 starter template using the Sessions API for read replication. 
Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/d1-starter-sessions-api-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/d1-starter-sessions-api-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/d1-starter-sessions-api-template ``` *** ### d1-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/d1-template) Cloudflare's native serverless SQL database. Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/d1-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/d1-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/d1-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/d1-template ``` *** ### durable-chat-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/durable-chat-template) Chat with other users in real-time using Durable Objects and PartyKit. Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/durable-chat-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/durable-chat-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/durable-chat-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/durable-chat-template ``` *** ### hello-world-do-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/hello-world-do-template) Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/hello-world-do-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/hello-world-do-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/hello-world-do-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/hello-world-do-template ``` *** ### llm-chat-app-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/llm-chat-app-template) A simple chat application powered by Cloudflare Workers AI Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/llm-chat-app-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/llm-chat-app-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/llm-chat-app-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/llm-chat-app-template ``` *** ### multiplayer-globe-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/multiplayer-globe-template) Display website visitor locations in real-time using Durable Objects and PartyKit. 
Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/multiplayer-globe-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/multiplayer-globe-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/multiplayer-globe-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/multiplayer-globe-template ``` *** ### mysql-hyperdrive-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/mysql-hyperdrive-template) Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/mysql-hyperdrive-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/mysql-hyperdrive-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/mysql-hyperdrive-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/mysql-hyperdrive-template ``` *** ### next-starter-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/next-starter-template) Build a full-stack web application with Next.js. Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/next-starter-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/next-starter-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/next-starter-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/next-starter-template ``` *** ### openauth-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/openauth-template) Deploy an OpenAuth server on Cloudflare Workers. Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/openauth-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/openauth-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/openauth-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/openauth-template ``` *** ### postgres-hyperdrive-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/postgres-hyperdrive-template) Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/postgres-hyperdrive-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/postgres-hyperdrive-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/postgres-hyperdrive-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/postgres-hyperdrive-template ``` *** ### r2-explorer-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/r2-explorer-template) A Google Drive Interface for your Cloudflare R2 Buckets! 
Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/r2-explorer-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/r2-explorer-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/r2-explorer-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/r2-explorer-template ``` *** ### react-postgres-fullstack-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/react-postgres-fullstack-template) Deploy your own library of books using Postgres and Workers. Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/react-postgres-fullstack-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/react-postgres-fullstack-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/react-postgres-fullstack-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/react-postgres-fullstack-template ``` *** ### react-router-hono-fullstack-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/react-router-hono-fullstack-template) A modern full-stack template powered by Cloudflare Workers, using Hono for backend APIs, React Router for frontend routing, and shadcn/ui for beautiful, accessible components styled with Tailwind CSS Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/react-router-hono-fullstack-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/react-router-hono-fullstack-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/react-router-hono-fullstack-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/react-router-hono-fullstack-template ``` *** ### react-router-postgres-ssr-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/react-router-postgres-ssr-template) Deploy your own library of books using Postgres and Workers. Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/react-router-postgres-ssr-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/react-router-postgres-ssr-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/react-router-postgres-ssr-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/react-router-postgres-ssr-template ``` *** ### react-router-starter-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/react-router-starter-template) Build a full-stack web application with React Router 7. 
Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/react-router-starter-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/react-router-starter-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/react-router-starter-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/react-router-starter-template ``` *** ### remix-starter-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/remix-starter-template) Build a full-stack web application with Remix. Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/remix-starter-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/remix-starter-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/remix-starter-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/remix-starter-template ``` *** ### saas-admin-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/saas-admin-template) Admin dashboard template built with Astro, shadcn/ui, and Cloudflare's developer stack Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/saas-admin-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/saas-admin-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/saas-admin-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/saas-admin-template ``` *** ### text-to-image-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/text-to-image-template) Generate images based on text prompts. Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/text-to-image-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/text-to-image-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/text-to-image-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/text-to-image-template ``` *** ### to-do-list-kv-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/to-do-list-kv-template) A simple to-do list app built with Cloudflare Workers Assets and Remix. 
Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/to-do-list-kv-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/to-do-list-kv-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/to-do-list-kv-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/to-do-list-kv-template ``` *** ### vite-react-template [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/vite-react-template) A template for building a React application with Vite, Hono, and Cloudflare Workers Explore on [GitHub ↗](https://github.com/cloudflare/templates/tree/main/vite-react-template) * npm ```sh npm create cloudflare@latest -- --template=cloudflare/templates/vite-react-template ``` * yarn ```sh yarn create cloudflare --template=cloudflare/templates/vite-react-template ``` * pnpm ```sh pnpm create cloudflare@latest --template=cloudflare/templates/vite-react-template ``` *** *** ## Built with Workers Get inspiration from other sites and projects out there that were built with Cloudflare Workers. [Built with Workers](https://workers.cloudflare.com/built-with) --- title: JavaScript · Cloudflare Workers docs description: The Workers platform is designed to be JavaScript standards compliant and web-interoperable, and supports JavaScript standards, as defined by TC39 (ECMAScript). Wherever possible, it uses web platform APIs, so that code can be reused across client and server, as well as across WinterCG JavaScript runtimes. lastUpdated: 2025-03-13T11:08:22.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/languages/javascript/ md: https://developers.cloudflare.com/workers/languages/javascript/index.md --- The Workers platform is designed to be [JavaScript standards compliant](https://ecma-international.org/publications-and-standards/standards/ecma-262/) and web-interoperable, and supports JavaScript standards, as defined by [TC39](https://tc39.es/) (ECMAScript). Wherever possible, it uses web platform APIs, so that code can be reused across client and server, as well as across [WinterCG](https://wintercg.org/) JavaScript runtimes. Refer to [Runtime APIs](https://developers.cloudflare.com/workers/runtime-apis/) for more information on specific JavaScript APIs available in Workers. 
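As a quick illustration (a minimal sketch, not taken from any particular template), the Worker below uses only web-standard APIs, such as `URL` and `Response`, inside a [`fetch` handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch), so the same patterns carry over directly from browser JavaScript:

```js
export default {
  async fetch(request) {
    // Web-standard URL parsing, the same API available in browsers.
    const { searchParams } = new URL(request.url);
    const name = searchParams.get("name") ?? "world";

    // Web-standard Response and Headers objects.
    return new Response(JSON.stringify({ greeting: `Hello, ${name}!` }), {
      headers: { "Content-Type": "application/json" },
    });
  },
};
```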
### Resources * [Getting Started](https://developers.cloudflare.com/workers/get-started/guide/) * [Quickstarts](https://developers.cloudflare.com/workers/get-started/quickstarts/) – More example repos to use as a basis for your projects * [TypeScript type definitions](https://github.com/cloudflare/workers-types) * [JavaScript and web standard APIs](https://developers.cloudflare.com/workers/runtime-apis/web-standards/) * [Tutorials](https://developers.cloudflare.com/workers/tutorials/) * [Examples](https://developers.cloudflare.com/workers/examples/?languages=JavaScript) --- title: Write Cloudflare Workers in Python · Cloudflare Workers docs description: Write Workers in 100% Python lastUpdated: 2025-03-24T17:07:01.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/languages/python/ md: https://developers.cloudflare.com/workers/languages/python/index.md --- Cloudflare Workers provides first-class support for Python, including support for: * The majority of Python's [Standard library](https://developers.cloudflare.com/workers/languages/python/stdlib/) * All [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/), including [Workers AI](https://developers.cloudflare.com/workers-ai/), [Vectorize](https://developers.cloudflare.com/vectorize), [R2](https://developers.cloudflare.com/r2), [KV](https://developers.cloudflare.com/kv), [D1](https://developers.cloudflare.com/d1), [Queues](https://developers.cloudflare.com/queues/), [Durable Objects](https://developers.cloudflare.com/durable-objects/), [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) and more. * [Environment Variables](https://developers.cloudflare.com/workers/configuration/environment-variables/), and [Secrets](https://developers.cloudflare.com/workers/configuration/secrets/) * A robust [foreign function interface (FFI)](https://developers.cloudflare.com/workers/languages/python/ffi) that lets you use JavaScript objects and functions directly from Python — including all [Runtime APIs](https://developers.cloudflare.com/workers/runtime-apis/) * [Built-in packages](https://developers.cloudflare.com/workers/languages/python/packages), including [FastAPI](https://fastapi.tiangolo.com/), [Langchain](https://pypi.org/project/langchain/), [httpx](https://www.python-httpx.org/) and more. Python Workers are in beta. Packages do not run in production. Currently, you can only deploy Python Workers that use the standard library. [Packages](https://developers.cloudflare.com/workers/languages/python/packages/#supported-packages) **cannot be deployed** and will only work in local development for the time being. You must add the `python_workers` compatibility flag to your Worker, while Python Workers are in open beta. We'd love your feedback. Join the #python-workers channel in the [Cloudflare Developers Discord](https://discord.cloudflare.com/) and let us know what you'd like to see next. 
## Get started ```bash git clone https://github.com/cloudflare/python-workers-examples cd python-workers-examples/01-hello npx wrangler@latest dev ``` A Python Worker can be as simple as three lines of code: ```python from workers import Response def on_fetch(request): return Response("Hello World!") ``` Similar to Workers written in [JavaScript](https://developers.cloudflare.com/workers/languages/javascript), [TypeScript](https://developers.cloudflare.com/workers/languages/typescript), or [Rust](https://developers.cloudflare.com/workers/languages/rust/), the main entry point for a Python Worker is the [`fetch` handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch). In a Python Worker, this handler is named `on_fetch`. To run a Python Worker locally, you use [Wrangler](https://developers.cloudflare.com/workers/wrangler/), the CLI for Cloudflare Workers: ```bash npx wrangler@latest dev ``` To deploy a Python Worker to Cloudflare, run [`wrangler deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#deploy): ```bash npx wrangler@latest deploy ``` ## Modules Python Workers can be split across multiple files. Let's create a new Python file, called `src/hello.py`: ```python def hello(name): return "Hello, " + name + "!" ``` Now, we can modify `src/entry.py` to make use of the new module. ```python from hello import hello from workers import Response def on_fetch(request): return Response(hello("World")) ``` Once you edit `src/entry.py`, Wrangler will automatically detect the change and reload your Worker. ## The `Request` Interface The `request` parameter passed to your `fetch` handler is a JavaScript Request object, exposed via the foreign function interface, allowing you to access it directly from your Python code. Let's try editing the Worker to accept a POST request. We know from the [documentation for `Request`](https://developers.cloudflare.com/workers/runtime-apis/request) that we can call `await request.json()` within an `async` function to parse the request body as JSON. In a Python Worker, you would write: ```python from workers import Response from hello import hello async def on_fetch(request): name = (await request.json()).name return Response(hello(name)) ``` Once you edit `src/entry.py`, Wrangler should automatically restart the local development server. Now, if you send a POST request with the appropriate body, your Worker should respond with a personalized message. ```bash curl --header "Content-Type: application/json" \ --request POST \ --data '{"name": "Python"}' http://localhost:8787 ``` ```bash Hello, Python! ``` ## The `env` Parameter In addition to the `request` parameter, the `env` parameter is also passed to the Python `fetch` handler and can be used to access [environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/), [secrets](https://developers.cloudflare.com/workers/configuration/secrets/), and [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/). For example, let us try setting and using an environment variable in a Python Worker.
First, add the environment variable to your Worker's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/): * wrangler.jsonc ```jsonc { "name": "hello-python-worker", "main": "src/entry.py", "compatibility_flags": [ "python_workers" ], "compatibility_date": "2024-03-20", "vars": { "API_HOST": "example.com" } } ``` * wrangler.toml ```toml name = "hello-python-worker" main = "src/entry.py" compatibility_flags = ["python_workers"] compatibility_date = "2024-03-20" [vars] API_HOST = "example.com" ``` Then, you can access the `API_HOST` environment variable via the `env` parameter: ```python from workers import Response async def on_fetch(request, env): return Response(env.API_HOST) ``` ## Further Reading * Understand which parts of the [Python Standard Library](https://developers.cloudflare.com/workers/languages/python/stdlib) are supported in Python Workers. * Learn about Python Workers' [foreign function interface (FFI)](https://developers.cloudflare.com/workers/languages/python/ffi), and how to use it to work with [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings) and [Runtime APIs](https://developers.cloudflare.com/workers/runtime-apis/). * Explore the [Built-in Python packages](https://developers.cloudflare.com/workers/languages/python/packages) that the Workers runtime provides. --- title: Cloudflare Workers — Rust language support · Cloudflare Workers docs description: Write Workers in 100% Rust using the [`workers-rs` crate](https://github.com/cloudflare/workers-rs) lastUpdated: 2025-05-06T10:45:54.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/languages/rust/ md: https://developers.cloudflare.com/workers/languages/rust/index.md --- Cloudflare Workers provides support for Rust via the [`workers-rs` crate](https://github.com/cloudflare/workers-rs), which makes [Runtime APIs](https://developers.cloudflare.com/workers/runtime-apis) and [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) to developer platform products, such as [Workers KV](https://developers.cloudflare.com/kv/concepts/how-kv-works/), [R2](https://developers.cloudflare.com/r2/), and [Queues](https://developers.cloudflare.com/queues/), available directly from your Rust code. By following this guide, you will learn how to build a Worker entirely in the Rust programming language. ## Prerequisites Before starting this guide, make sure you have: * A recent version of [`Rust`](https://rustup.rs/) * [`npm`](https://docs.npmjs.com/getting-started) * The Rust `wasm32-unknown-unknown` toolchain: ```sh rustup target add wasm32-unknown-unknown ``` * The `cargo-generate` sub-command, installed by running: ```sh cargo install cargo-generate ``` ## 1. Create a new project with Wrangler Open a terminal window, and run the following command to generate a Worker project template in Rust: ```sh cargo generate cloudflare/workers-rs ``` Your project will be created in a new directory that you named, in which you will find the following files and folders: * `Cargo.toml` - The standard project configuration file for Rust's [`Cargo`](https://doc.rust-lang.org/cargo/) package manager. The template pre-populates some best-practice settings for building for Wasm on Workers. * `wrangler.toml` - Wrangler configuration, pre-populated with a custom build command to invoke `worker-build` (Refer to [Wrangler Bundling](https://developers.cloudflare.com/workers/languages/rust/#bundling-worker-build)).
* `src` - Rust source directory, pre-populated with a Hello World Worker. ## 2. Develop locally After you have created your first Worker, run the [`wrangler dev`](https://developers.cloudflare.com/workers/wrangler/commands/#dev) command to start a local server for developing your Worker. This will allow you to test your Worker in development. ```sh npx wrangler dev ``` If you have not used Wrangler before, it will try to open your web browser to log in with your Cloudflare account. Note If you have issues with this step or you do not have access to a browser interface, refer to the [`wrangler login`](https://developers.cloudflare.com/workers/wrangler/commands/#login) documentation for more information. Go to `http://localhost:8787` to review your Worker. Any changes you make to your code will trigger a rebuild, and reloading the page will show you the up-to-date output of your Worker. ## 3. Write your Worker code With your new project generated, write your Worker code. Find the entrypoint to your Worker in `src/lib.rs`: ```rust use worker::*; #[event(fetch)] async fn main(req: Request, env: Env, ctx: Context) -> Result<Response> { Response::ok("Hello, World!") } ``` Note There is some counterintuitive behavior going on here: 1. `workers-rs` provides an `event` macro which expects a handler function signature identical to those seen in JavaScript Workers. 2. `async` is not generally supported by Wasm, but you are able to use `async` in a `workers-rs` project (refer to [`async`](https://developers.cloudflare.com/workers/languages/rust/#async-wasm-bindgen-futures)). ### Related runtime APIs `workers-rs` provides a runtime API which closely matches Worker's JavaScript API, and enables integration with Worker's platform features. For detailed documentation of the API, refer to [`docs.rs/worker`](https://docs.rs/worker/latest/worker/). #### `event` macro This macro allows you to define entrypoints to your Worker. The `event` macro supports the following events: * `fetch` - Invoked by an incoming HTTP request. * `scheduled` - Invoked by [`Cron Triggers`](https://developers.cloudflare.com/workers/configuration/cron-triggers/). * `queue` - Invoked by incoming message batches from [Queues](https://developers.cloudflare.com/queues/) (Requires `queue` feature in `Cargo.toml`, refer to the [`workers-rs` GitHub repository and `queues` feature flag](https://github.com/cloudflare/workers-rs#queues)). * `start` - Invoked when the Worker is first launched (for example, to install panic hooks). #### `fetch` parameters The `fetch` handler provides three arguments which match the JavaScript API: 1. **[`Request`](https://docs.rs/worker/latest/worker/struct.Request.html)** An object representing the incoming request. This includes methods for accessing headers, method, path, Cloudflare properties, and body (with support for asynchronous streaming and JSON deserialization with [Serde](https://serde.rs/)). 1. **[`Env`](https://docs.rs/worker/latest/worker/struct.Env.html)** Provides access to Worker [bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/). * [`Secret`](https://github.com/cloudflare/workers-rs/blob/e15f88110d814c2d7759b2368df688433f807694/worker/src/env.rs#L92) - Secret value configured in Cloudflare dashboard or using `wrangler secret put`. * [`Var`](https://github.com/cloudflare/workers-rs/blob/e15f88110d814c2d7759b2368df688433f807694/worker/src/env.rs#L92) - Environment variable defined in `wrangler.toml`.
* [`KvStore`](https://docs.rs/worker-kv/latest/worker_kv/struct.KvStore.html) - Workers [KV](https://developers.cloudflare.com/kv/api/) namespace binding. * [`ObjectNamespace`](https://docs.rs/worker/latest/worker/durable/struct.ObjectNamespace.html) - [Durable Object](https://developers.cloudflare.com/durable-objects/) binding. * [`Fetcher`](https://docs.rs/worker/latest/worker/struct.Fetcher.html) - [Service binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) to another Worker. * [`Bucket`](https://docs.rs/worker/latest/worker/struct.Bucket.html) - [R2](https://developers.cloudflare.com/r2/) Bucket binding. 1. **[`Context`](https://docs.rs/worker/latest/worker/struct.Context.html)** Provides access to [`waitUntil`](https://developers.cloudflare.com/workers/runtime-apis/context/#waituntil) (deferred asynchronous tasks) and [`passThroughOnException`](https://developers.cloudflare.com/workers/runtime-apis/context/#passthroughonexception) (fail open) functionality. #### [`Response`](https://docs.rs/worker/latest/worker/struct.Response.html) The `fetch` handler expects a [`Response`](https://docs.rs/worker/latest/worker/struct.Response.html) return type, which includes support for streaming responses to the client asynchronously. This is also the return type of any subrequests made from your Worker. There are methods for accessing status code and headers, as well as streaming the body asynchronously or deserializing from JSON using [Serde](https://serde.rs/). #### `Router` Implements a convenient [routing API](https://docs.rs/worker/latest/worker/struct.Router.html) to serve multiple paths from one Worker. Refer to the [`Router` example in the `workers-rs` GitHub repository](https://github.com/cloudflare/workers-rs#or-use-the-router). ## 4. Deploy your Worker project With your project configured, you can now deploy your Worker to a `*.workers.dev` subdomain or to a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), if you have one configured. If you have not configured any subdomain or domain, Wrangler will prompt you during the deployment process to set one up. ```sh npx wrangler deploy ``` Preview your Worker at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`. Note When pushing to your `*.workers.dev` subdomain for the first time, you may see [`523` errors](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/cloudflare-5xx-errors/error-523/) while DNS is propagating. These errors should resolve themselves after a minute or so. After completing these steps, you will have a basic Rust-based Worker deployed. From here, you can add crate dependencies and write code in Rust to implement your Worker application. If you would like to know more about the inner workings of how Rust compiled to Wasm is supported by Workers, the next section outlines the libraries and tools involved. ## How this deployment works Wasm Workers are invoked from a JavaScript entrypoint script which is created automatically for you when using `workers-rs`. ### JavaScript Plumbing (`wasm-bindgen`) To access platform features such as bindings, Wasm Workers must be able to access methods from the JavaScript runtime API. This interoperability is achieved using [`wasm-bindgen`](https://rustwasm.github.io/wasm-bindgen/), which provides the glue code needed to import runtime APIs to, and export event handlers from, the Wasm module.
`wasm-bindgen` also provides [`js-sys`](https://docs.rs/js-sys/latest/js_sys/), which implements types for interacting with JavaScript objects. In practice, this is an implementation detail, as `workers-rs`'s API handles conversion to and from JavaScript objects, and interaction with imported JavaScript runtime APIs for you. Note If you are using `wasm-bindgen` without `workers-rs` / `worker-build`, then you will need to patch the JavaScript that it emits. This is because when you import a `wasm` file in Workers, you get a `WebAssembly.Module` instead of a `WebAssembly.Instance` for performance and security reasons. To patch the JavaScript that `wasm-bindgen` emits: 1. Run `wasm-pack build --target bundler` as you normally would. 2. Patch the JavaScript file that it produces (the following code block assumes the file is called `mywasmlib.js`): ```js import * as imports from "./mywasmlib_bg.js"; // switch between both syntaxes, for Node.js and for workerd import wkmod from "./mywasmlib_bg.wasm"; import * as nodemod from "./mywasmlib_bg.wasm"; if (typeof process !== "undefined" && process.release.name === "node") { imports.__wbg_set_wasm(nodemod); } else { const instance = new WebAssembly.Instance(wkmod, { "./mywasmlib_bg.js": imports, }); imports.__wbg_set_wasm(instance.exports); } export * from "./mywasmlib_bg.js"; ``` 3. In your Worker entrypoint, import the function and use it directly: ```js import { myFunction } from "path/to/mylib.js"; ``` ### Async (`wasm-bindgen-futures`) [`wasm-bindgen-futures`](https://rustwasm.github.io/wasm-bindgen/api/wasm_bindgen_futures/) (part of the `wasm-bindgen` project) provides interoperability between Rust Futures and JavaScript Promises. `workers-rs` invokes the entire event handler function using `spawn_local`, meaning that you can program using async Rust, which is turned into a single JavaScript Promise and run on the JavaScript event loop. Calls to imported JavaScript runtime APIs are automatically converted to Rust Futures that can be invoked from async Rust functions. ### Bundling (`worker-build`) To run the resulting Wasm binary on Workers, `workers-rs` includes a build tool called [`worker-build`](https://github.com/cloudflare/workers-rs/tree/main/worker-build) which: 1. Creates a JavaScript entrypoint script that properly invokes the module using `wasm-bindgen`'s JavaScript API. 2. Invokes `esbuild` to minify and bundle the JavaScript code. 3. Outputs a directory structure that Wrangler can use to bundle and deploy the final Worker. `worker-build` is invoked by default in the template project using a custom build command specified in the `wrangler.toml` file. ### Binary Size (`wasm-opt`) Unoptimized Rust Wasm binaries can be large and may exceed Worker bundle size limits or experience long startup times. The template project pre-configures several useful size optimizations in your `Cargo.toml` file: ```toml [profile.release] lto = true strip = true codegen-units = 1 ``` Finally, `worker-build` automatically invokes [`wasm-opt`](https://github.com/brson/wasm-opt-rs) to further optimize binary size before upload. ## Related resources * [Rust Wasm Book](https://rustwasm.github.io/docs/book/) --- title: Write Cloudflare Workers in TypeScript · Cloudflare Workers docs description: TypeScript is a first-class language on Cloudflare Workers. All APIs provided in Workers are fully typed, and type definitions are generated directly from workerd, the open-source Workers runtime.
lastUpdated: 2025-04-16T21:02:18.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/languages/typescript/ md: https://developers.cloudflare.com/workers/languages/typescript/index.md --- TypeScript is a first-class language on Cloudflare Workers. All APIs provided in Workers are fully typed, and type definitions are generated directly from [workerd](https://github.com/cloudflare/workerd), the open-source Workers runtime. We recommend you generate types for your Worker by running [`wrangler types`](https://developers.cloudflare.com/workers/wrangler/commands/#types). Cloudflare also publishes type definitions to [GitHub](https://github.com/cloudflare/workers-types) and [npm](https://www.npmjs.com/package/@cloudflare/workers-types) (`npm install -D @cloudflare/workers-types`). ### Generate types that match your Worker's configuration Cloudflare continuously improves [workerd](https://github.com/cloudflare/workerd), the open-source Workers runtime. Changes in workerd can introduce JavaScript API changes, thus changing the respective TypeScript types. This means the correct types for your Worker depend on: 1. Your Worker's [compatibility date](https://developers.cloudflare.com/workers/configuration/compatibility-dates/). 2. Your Worker's [compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags/). 3. Your Worker's bindings, which are defined in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration). 4. Any [module rules](https://developers.cloudflare.com/workers/wrangler/configuration/#bundling) you have specified in your Wrangler configuration file under `rules`. For example, the runtime will only allow you to use the [`AsyncLocalStorage`](https://nodejs.org/api/async_context.html#class-asynclocalstorage) class if you have `compatibility_flags = ["nodejs_als"]` in your [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/). This should be reflected in the type definitions. To ensure that your type definitions always match your Worker's configuration, you can dynamically generate types by running: * npm ```sh npx wrangler types ``` * yarn ```sh yarn wrangler types ``` * pnpm ```sh pnpm wrangler types ``` See [the `wrangler types` command docs](https://developers.cloudflare.com/workers/wrangler/commands/#types) for more details. Note If you are running a version of Wrangler that is greater than `3.66.0` but below `4.0.0`, you will need to include the `--experimental-include-runtime` flag. During its experimental release, runtime types were output to a separate file (`.wrangler/types/runtime.d.ts` by default). If you have an older version of Wrangler, you can access runtime types through the `@cloudflare/workers-types` package. This will generate a `d.ts` file and (by default) save it to `worker-configuration.d.ts`. This will include `Env` types based on your Worker bindings *and* runtime types based on your Worker's compatibility date and flags. You should then add that file to your `tsconfig.json`'s `compilerOptions.types` array. If you have the `nodejs_compat` compatibility flag, you should also install `@types/node`. You can commit your types file to git if you wish. Note To ensure that your types are always up-to-date, make sure to run `wrangler types` after any changes to your config file. 
### Migrating from `@cloudflare/workers-types` to `wrangler types` We recommend you use `wrangler types` to generate runtime types, rather than using the `@cloudflare/workers-types` package, as it generates types based on your Worker's [compatibility date](https://github.com/cloudflare/workerd/tree/main/npm/workers-types#compatibility-dates) and `compatibility flags`, ensuring that types match the exact runtime APIs made available to your Worker. Note There are no plans to stop publishing the `@cloudflare/workers-types` package, which will still be the recommended way to type libraries and shared packages in the workers environment. #### 1. Uninstall `@cloudflare/workers-types` * npm ```sh npm uninstall @cloudflare/workers-types ``` * yarn ```sh yarn remove @cloudflare/workers-types ``` * pnpm ```sh pnpm remove @cloudflare/workers-types ``` #### 2. Generate runtime types using Wrangler * npm ```sh npx wrangler types ``` * yarn ```sh yarn wrangler types ``` * pnpm ```sh pnpm wrangler types ``` This will generate a `.d.ts` file, saved to `worker-configuration.d.ts` by default. This will also generate `Env` types. If for some reason you do not want to include those, you can set `--include-env=false`. You can now remove any imports from `@cloudflare/workers-types` in your Worker code. Note If you are running a version of Wrangler that is greater than `3.66.0` but below `4.0.0`, you will need to include the `--experimental-include-runtime` flag. During its experimental release, runtime types were output to a separate file (`.wrangler/types/runtime.d.ts` by default). If you have an older version of Wrangler, you can access runtime types through the `@cloudflare/workers-types` package. #### 3. Make sure your `tsconfig.json` includes the generated types ```json { "compilerOptions": { "types": ["worker-configuration.d.ts"] } } ``` Note that if you have specified a custom path for the runtime types file, you should use that in your `compilerOptions.types` array instead of the default path. #### 4. Add @types/node if you are using [`nodejs_compat`](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) (Optional) If you are using the `nodejs_compat` compatibility flag, you should also install `@types/node`. * npm ```sh npm i @types/node ``` * yarn ```sh yarn add @types/node ``` * pnpm ```sh pnpm add @types/node ``` Then add this to your `tsconfig.json`. ```json { "compilerOptions": { "types": ["worker-configuration.d.ts", "node"] } } ``` #### 5. Update your scripts and CI pipelines Regardless of your specific framework or build tools, you should run the `wrangler types` command before any tasks that rely on TypeScript. Most projects will have existing build and development scripts, as well as some type-checking. In the example below, we're adding the `wrangler types` before the type-checking script in the project: ```json { "scripts": { "dev": "existing-dev-command", "build": "existing-build-command", "generate-types": "wrangler types", "type-check": "generate-types && tsc" } } ``` We recommend you commit your generated types file for use in CI. Alternatively, you can run `wrangler types` before other CI commands, as it should not take more than a few seconds. 
For example: * npm ```yaml - run: npm run generate-types - run: npm run build - run: npm test ``` * yarn ```yaml - run: yarn generate-types - run: yarn build - run: yarn test ``` * pnpm ```yaml - run: pnpm run generate-types - run: pnpm run build - run: pnpm test ``` ### Resources * [TypeScript template](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare/templates/hello-world/ts) * [@cloudflare/workers-types](https://github.com/cloudflare/workers-types) * [Runtime APIs](https://developers.cloudflare.com/workers/runtime-apis/) * [TypeScript Examples](https://developers.cloudflare.com/workers/examples/?languages=TypeScript) --- title: DevTools · Cloudflare Workers docs description: When running your Worker locally using the Wrangler CLI (wrangler dev) or using Vite with the Cloudflare Vite plugin, you automatically have access to Cloudflare's implementation of Chrome DevTools. lastUpdated: 2025-07-07T18:08:08.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/observability/dev-tools/ md: https://developers.cloudflare.com/workers/observability/dev-tools/index.md --- ## Using DevTools When running your Worker locally using the [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/) (`wrangler dev`) or using [Vite](https://vite.dev/) with the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), you automatically have access to [Cloudflare's implementation](https://github.com/cloudflare/workers-sdk/tree/main/packages/chrome-devtools-patches) of [Chrome DevTools](https://developer.chrome.com/docs/devtools/overview). You can use Chrome DevTools to: * View logs directly in the Chrome console * [Debug code by setting breakpoints](https://developers.cloudflare.com/workers/observability/dev-tools/breakpoints/) * [Profile CPU usage](https://developers.cloudflare.com/workers/observability/dev-tools/cpu-usage/) * [Observe memory usage and debug memory leaks in your code that can cause out-of-memory (OOM) errors](https://developers.cloudflare.com/workers/observability/dev-tools/memory-usage/) ## Opening DevTools ### Wrangler * Run your Worker locally, by running `wrangler dev` * Press the `D` key from your terminal to open DevTools in a browser tab ### Vite * Run your Worker locally by running `vite` * In a new Chrome tab, open the debug URL that shows in your console (for example, `http://localhost:5173/__debug`) ### Dashboard editor & playground Both the [Cloudflare dashboard](https://dash.cloudflare.com/) and the [Worker's Playground](https://workers.cloudflare.com/playground) include DevTools in the UI. ## Related resources * [Local development](https://developers.cloudflare.com/workers/development-testing/) - Develop your Workers and connected resources locally via Wrangler and workerd, for a fast, accurate feedback loop. --- title: Errors and exceptions · Cloudflare Workers docs description: Review Workers errors and exceptions. lastUpdated: 2025-05-23T21:38:55.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/observability/errors/ md: https://developers.cloudflare.com/workers/observability/errors/index.md --- Review Workers errors and exceptions. ## Error pages generated by Workers When a Worker running in production has an error that prevents it from returning a response, the client will receive an error page with an error code, defined as follows: | Error code | Meaning | | - | - | | `1101` | Worker threw a JavaScript exception. 
| | `1102` | Worker exceeded [CPU time limit](https://developers.cloudflare.com/workers/platform/limits/#cpu-time). | | `1103` | The owner of this worker needs to contact [Cloudflare Support](https://developers.cloudflare.com/support/contacting-cloudflare-support/) | | `1015` | Worker hit the [burst rate limit](https://developers.cloudflare.com/workers/platform/limits/#burst-rate). | | `1019` | Worker hit [loop limit](#loop-limit). | | `1021` | Worker has requested a host it cannot access. | | `1022` | Cloudflare has failed to route the request to the Worker. | | `1024` | Worker cannot make a subrequest to a Cloudflare-owned IP address. | | `1027` | Worker exceeded free tier [daily request limit](https://developers.cloudflare.com/workers/platform/limits/#daily-request). | | `1042` | Worker tried to fetch from another Worker on the same zone, which is only [supported](https://developers.cloudflare.com/workers/runtime-apis/fetch/) when the [`global_fetch_strictly_public` compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#global-fetch-strictly-public) is used. | Other `11xx` errors generally indicate a problem with the Workers runtime itself. Refer to the [status page](https://www.cloudflarestatus.com) if you are experiencing an error. ### Loop limit A Worker cannot call itself or another Worker more than 16 times. In order to prevent infinite loops between Workers, the [`CF-EW-Via`](https://developers.cloudflare.com/fundamentals/reference/http-headers/#cf-ew-via) header's value is an integer that indicates how many invocations are left. Every time a Worker is invoked, the integer will decrement by 1. If the count reaches zero, a [`1019`](#error-pages-generated-by-workers) error is returned. ### "The script will never generate a response" errors Some requests may return a 1101 error with `The script will never generate a response` in the error message. This occurs when the Workers runtime detects that all the code associated with the request has executed and no events are left in the event loop, but a Response has not been returned. #### Cause 1: Unresolved Promises This is most commonly caused by relying on a Promise that is never resolved or rejected, which is required to return a Response. To debug, look for Promises within your code or dependencies' code that block a Response, and ensure they are resolved or rejected. In browsers and other JavaScript runtimes, equivalent code will hang indefinitely, leading to both bugs and memory leaks. The Workers runtime throws an explicit error to help you debug. In the example below, the Response relies on a Promise resolution that never happens. Uncommenting the `resolve` callback solves the issue. ```js export default { fetch(req) { let response = new Response("Example response"); let { promise, resolve } = Promise.withResolvers(); // If the promise is not resolved, the Workers runtime will // recognize this and throw an error. // setTimeout(resolve, 0) return promise.then(() => response); }, }; ``` You can prevent this by enforcing the [`no-floating-promises` eslint rule](https://typescript-eslint.io/rules/no-floating-promises/), which reports when a Promise is created and not properly handled. #### Cause 2: WebSocket connections that are never closed If a WebSocket is missing the proper code to close its server-side connection, the Workers runtime will throw a `script will never generate a response` error. 
In the example below, the `'close'` event from the client is not handled by calling `server.close()`, so the error is thrown. In order to avoid this, ensure that the WebSocket's server-side connection is properly closed via an event listener or other server-side logic. ```js async function handleRequest(request) { let webSocketPair = new WebSocketPair(); let [client, server] = Object.values(webSocketPair); server.accept(); server.addEventListener("close", () => { // Leaving the following line commented out keeps the WebSocket connection open indefinitely // and results in "The script will never generate a response" errors // server.close(); }); return new Response(null, { status: 101, webSocket: client, }); } ``` ### "Illegal invocation" errors The error message `TypeError: Illegal invocation: function called with incorrect this reference` can be a source of confusion. This is typically caused by calling a function that calls `this`, but the value of `this` has been lost. For example, given an `obj` object with the `obj.foo()` method whose logic relies on `this`, executing the method via `obj.foo();` ensures that `this` properly references the `obj` object. However, assigning the method to a variable, for example `const func = obj.foo;`, and calling that variable, for example `func();`, would result in `this` being `undefined`. This is because `this` is lost when the method is called as a standalone function. This is standard behavior in JavaScript. In practice, this is often seen when destructuring runtime-provided JavaScript objects that have functions that rely on the presence of `this`, such as `ctx`. The following code will error: ```js export default { async fetch(request, env, ctx) { // destructuring ctx makes waitUntil lose its 'this' reference const { waitUntil } = ctx; // waitUntil errors, as it has no 'this' waitUntil(somePromise); return fetch(request); }, }; ``` To avoid the error, call the method directly on `ctx`, or re-bind the function to its original context. The following code will run properly: ```js export default { async fetch(request, env, ctx) { // directly calling the method on ctx avoids the error ctx.waitUntil(somePromise); // alternatively re-binding to ctx via apply, call, or bind avoids the error const { waitUntil } = ctx; waitUntil.apply(ctx, [somePromise]); waitUntil.call(ctx, somePromise); const reboundWaitUntil = waitUntil.bind(ctx); reboundWaitUntil(somePromise); return fetch(request); }, }; ``` ### Cannot perform I/O on behalf of a different request ```plaintext Uncaught (in promise) Error: Cannot perform I/O on behalf of a different request. I/O objects (such as streams, request/response bodies, and others) created in the context of one request handler cannot be accessed from a different request's handler. ``` This error occurs when you attempt to share input/output (I/O) objects (such as streams, requests, or responses) created by one invocation of your Worker in the context of a different invocation. In Cloudflare Workers, each invocation is handled independently and has its own execution context. This design ensures optimal performance and security by isolating requests from one another. When you try to share I/O objects between different invocations, you break this isolation. Since these objects are tied to the specific request they were created in, accessing them from another request's handler is not allowed and leads to the error.
This error is most commonly caused by attempting to cache an I/O object, like a [Request](https://developers.cloudflare.com/workers/runtime-apis/request/), in global scope, and then access it in a subsequent request. For example, if you create a Worker and run the following code in local development, and make two requests to your Worker in quick succession, you can reproduce this error: ```js let cachedResponse = null; export default { async fetch(request, env, ctx) { if (cachedResponse) { return cachedResponse; } cachedResponse = new Response("Hello, world!"); await new Promise((resolve) => setTimeout(resolve, 5000)); // Sleep for 5s to demonstrate this particular error case return cachedResponse; }, }; ``` You can fix this by instead storing only the data in global scope, rather than the I/O object itself: ```js let cachedData = null; export default { async fetch(request, env, ctx) { if (cachedData) { return new Response(cachedData); } const response = new Response("Hello, world!"); cachedData = await response.text(); return new Response(cachedData, response); }, }; ``` If you need to share state across requests, consider using [Durable Objects](https://developers.cloudflare.com/durable-objects/). If you need to cache data across requests, consider using [Workers KV](https://developers.cloudflare.com/kv/). ## Errors on Worker upload These errors occur when a Worker is uploaded or modified. | Error code | Meaning | | - | - | | `10006` | Could not parse your Worker's code. | | `10007` | Worker or [workers.dev subdomain](https://developers.cloudflare.com/workers/configuration/routing/workers-dev/) not found. | | `10015` | Account is not entitled to use Workers. | | `10016` | Invalid Worker name. | | `10021` | Validation Error. Refer to [Validation Errors](https://developers.cloudflare.com/workers/observability/errors/#validation-errors-10021) for details. | | `10026` | Could not parse request body. | | `10027` | The uploaded Worker exceeded the [Worker size limits](https://developers.cloudflare.com/workers/platform/limits/#worker-size). | | `10035` | Multiple attempts to modify a resource at the same time. | | `10037` | An account has exceeded the number of [Workers allowed](https://developers.cloudflare.com/workers/platform/limits/#number-of-workers). | | `10052` | A [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/) is uploaded without a name. | | `10054` | An environment variable or secret exceeds the [size limit](https://developers.cloudflare.com/workers/platform/limits/#environment-variables). | | `10055` | The number of environment variables or secrets exceeds the [per-Worker limit](https://developers.cloudflare.com/workers/platform/limits/#environment-variables). | | `10056` | [Binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/) not found. | | `10068` | The uploaded Worker has no registered [event handlers](https://developers.cloudflare.com/workers/runtime-apis/handlers/). | | `10069` | The uploaded Worker contains [event handlers](https://developers.cloudflare.com/workers/runtime-apis/handlers/) unsupported by the Workers runtime. | ### Validation Errors (10021) The 10021 error code includes all errors that occur when you attempt to deploy a Worker, and Cloudflare then attempts to load and run the top-level scope (everything that happens before your Worker's [handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/) is invoked).
For example, if you attempt to deploy a broken Worker with invalid JavaScript that would throw a `SyntaxError`, Cloudflare will not deploy your Worker. Specific error cases include but are not limited to: #### Script startup exceeded CPU time limit This means that you are doing work in the top-level scope of your Worker that takes [more than the startup time limit (400ms)](https://developers.cloudflare.com/workers/platform/limits/#worker-startup-time) of CPU time. This is usually a sign of a bug and/or large performance problem with your code or a dependency you rely on. It's not typical to use more than 400ms of CPU time when your app starts. The more time your Worker's code spends parsing and executing top-level scope, the slower your Worker will be when you deploy a code change or a new [isolate](https://developers.cloudflare.com/workers/reference/how-workers-works/) is created. This error is most commonly caused by attempting to perform expensive initialization work directly in top-level (global) scope, rather than either at build time or when your Worker's handler is invoked. For example, attempting to initialize an app by generating or consuming a large schema. To analyze what is consuming so much CPU time, you should open Chrome DevTools for your Worker and look at the Profiling and/or Performance panels to understand where time is being spent. Is there something glaring that consumes tons of CPU time, especially the first time you make a request to your Worker? ## Runtime errors Runtime errors occur within the runtime, do not return an error page, and are not visible to the end user. Runtime errors are detected through logs. | Error message | Meaning | | - | - | | `Network connection lost` | Connection failure. Catch a `fetch` or binding invocation and retry it. | | `Memory limit` `would be exceeded` `before EOF` | Trying to read a stream or buffer that would take you over the [memory limit](https://developers.cloudflare.com/workers/platform/limits/#memory). | | `daemonDown` | A temporary problem invoking the Worker. | ## Identify errors: Workers Metrics To review whether your application is experiencing any downtime or returning any errors: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In **Account Home**, select **Workers & Pages**. 3. In **Overview**, select your Worker and review your Worker's metrics. ### Worker Errors The **Errors by invocation status** chart shows the number of errors broken down into the following categories: | Error | Meaning | | - | - | | `Uncaught Exception` | Your Worker code threw a JavaScript exception during execution. | | `Exceeded CPU Time Limits` | Worker exceeded CPU time limit or other resource constraints. | | `Exceeded Memory` | Worker exceeded the memory limit during execution. | | `Internal` | An internal error occurred in the Workers runtime. | The **Client disconnected by type** chart shows the number of client disconnect errors broken down into the following categories: | Client Disconnects | Meaning | | - | - | | `Response Stream Disconnected` | Connection was terminated during the deferred proxying stage of a Worker request flow. It commonly appears for longer-lived connections such as [WebSockets](https://developers.cloudflare.com/workers/runtime-apis/websockets/). | | `Cancelled` | The client disconnected before the Worker completed its response.
| ## Debug exceptions with Workers Logs [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs) is a powerful tool for debugging your Workers. It shows all the historic logs generated by your Worker, including any uncaught exceptions that occur during execution. To find all your errors in Workers Logs, you can use the following filter: `$metadata.error EXISTS`. This will show all the logs that have an error associated with them. You can also filter by `$workers.outcome` to find the requests that resulted in an error. For example, you can filter by `$workers.outcome = "exception"` to find all the requests that resulted in an uncaught exception. All the possible outcome values can be found in the [Workers Trace Event](https://developers.cloudflare.com/logs/reference/log-fields/account/workers_trace_events/#outcome) reference. ## Debug exceptions from `Wrangler` To debug your Worker via Wrangler, use `wrangler tail` to inspect and fix exceptions. Exceptions will show up under the `exceptions` field in the JSON returned by `wrangler tail`. After you have identified the exception that is causing errors, redeploy your code with a fix, and continue tailing the logs to confirm that it is fixed. ## Set up a 3rd party logging service A Worker can make HTTP requests to any HTTP service on the public Internet. You can use a service like [Sentry](https://sentry.io) to collect error logs from your Worker by making an HTTP request to the service to report the error. Refer to your service’s API documentation for details on what kind of request to make. When using an external logging strategy, remember that outstanding asynchronous tasks are canceled as soon as a Worker finishes sending its main response body to the client. To ensure that a logging subrequest completes, pass the request promise to [`event.waitUntil()`](https://developer.mozilla.org/en-US/docs/Web/API/ExtendableEvent/waitUntil). For example: * Module Worker ```js export default { async fetch(request, env, ctx) { function postLog(data) { return fetch("https://log-service.example.com/", { method: "POST", body: data, }); } // Without ctx.waitUntil(), the `postLog` function may or may not complete. // `stack` stands in for the error details (such as a stack trace) you want to report. ctx.waitUntil(postLog(stack)); return fetch(request); }, }; ``` * Service Worker Service Workers are deprecated Service Workers are deprecated, but still supported. We recommend using [Module Workers](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) instead. New features may not be supported for Service Workers. ```js addEventListener("fetch", (event) => { event.respondWith(handleEvent(event)); }); async function handleEvent(event) { // ... // Without event.waitUntil(), the `postLog` function may or may not complete. event.waitUntil(postLog(stack)); return fetch(event.request); } function postLog(data) { return fetch("https://log-service.example.com/", { method: "POST", body: data, }); } ``` ## Go to origin on error By using [`event.passThroughOnException`](https://developers.cloudflare.com/workers/runtime-apis/context/#passthroughonexception), a Workers application will forward requests to your origin if an exception is thrown during the Worker's execution. This allows you to add logging, tracking, or other features with Workers, without degrading your application's functionality.
* Module Worker

```js
export default {
  async fetch(request, env, ctx) {
    ctx.passThroughOnException();

    // An error here will return the origin response, as if the Worker wasn't present.
    return fetch(request);
  },
};
```

* Service Worker

Service Workers are deprecated

Service Workers are deprecated, but still supported. We recommend using [Module Workers](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) instead. New features may not be supported for Service Workers.

```js
addEventListener("fetch", (event) => {
  event.passThroughOnException();
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // An error here will return the origin response, as if the Worker wasn't present.
  // ...
  return fetch(request);
}
```

## Related resources

* [Log from Workers](https://developers.cloudflare.com/workers/observability/logs/) - Learn how to log your Workers.
* [Logpush](https://developers.cloudflare.com/workers/observability/logs/logpush/) - Learn how to push Workers Trace Event Logs to supported destinations.
* [RPC error handling](https://developers.cloudflare.com/workers/runtime-apis/rpc/error-handling/) - Learn how to handle errors from remote-procedure calls.

--- title: Logs · Cloudflare Workers docs description: Logs are an important component of a developer's toolkit to troubleshoot and diagnose application issues and maintain system health. The Cloudflare Developer Platform offers many tools to help developers manage their application's logs. lastUpdated: 2025-04-09T02:45:13.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/observability/logs/ md: https://developers.cloudflare.com/workers/observability/logs/index.md ---

Logs are an important component of a developer's toolkit to troubleshoot and diagnose application issues and maintain system health. The Cloudflare Developer Platform offers many tools to help developers manage their application's logs.

## [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs)

Automatically ingest, filter, and analyze logs emitted from Cloudflare Workers in the Cloudflare dashboard.

## [Real-time logs](https://developers.cloudflare.com/workers/observability/logs/real-time-logs)

Access log events in near real-time. Real-time logs provide immediate feedback and visibility into the health of your Cloudflare Worker.

## [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers)

Beta

Tail Workers allow developers to apply custom filtering, sampling, and transformation logic to telemetry data.

## [Workers Logpush](https://developers.cloudflare.com/workers/observability/logs/logpush)

Send Workers Trace Event Logs to a supported destination. Workers Logpush includes metadata about requests and responses, unstructured `console.log()` messages and any uncaught exceptions.

--- title: Metrics and analytics · Cloudflare Workers docs description: Diagnose issues with Workers metrics, and review request data for a zone with Workers analytics. lastUpdated: 2025-04-09T02:45:13.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/observability/metrics-and-analytics/ md: https://developers.cloudflare.com/workers/observability/metrics-and-analytics/index.md ---

There are two graphical sources of information about your Workers traffic at a given time: Workers metrics and zone-based Workers analytics.
Workers metrics can help you diagnose issues and understand your Workers' workloads by showing performance and usage of your Workers. If your Worker runs on a route on a zone, or on a few zones, Workers metrics will show how much traffic your Worker is handling on a per-zone basis, and how many requests your site is getting.

Zone analytics show how much traffic all Workers assigned to a zone are handling.

## Workers metrics

Workers metrics aggregate request data for an individual Worker (if your Worker is running across multiple domains, and on `*.workers.dev`, metrics will aggregate requests across them).

To view your Worker's metrics:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Select **Compute (Workers)**.
3. In **Overview**, select your Worker to view its metrics.

There are two metrics that can help you understand the health of your Worker in a given moment: request success and error metrics, and invocation statuses.

### Requests

The first graph shows historical request counts from the Workers runtime broken down into successful requests, errored requests, and subrequests.

* **Total**: All incoming requests registered by a Worker. Requests blocked by [WAF](https://www.cloudflare.com/waf/) or other security features will not count.
* **Success**: Requests that returned a Success or Client Disconnected invocation status.
* **Errors**: Requests that returned a Script Threw Exception, Exceeded Resources, or Internal Error invocation status — refer to [Invocation Statuses](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/#invocation-statuses) for a breakdown of where your errors are coming from.

Request traffic data may display a drop-off near the last few minutes displayed in the graph for time ranges less than six hours. This does not reflect a drop in traffic, but a slight delay in aggregation and metrics delivery.

### Subrequests

Subrequests are requests triggered by calling `fetch` from within a Worker. A subrequest that throws an uncaught error will not be counted.

* **Total**: All subrequests triggered by calling `fetch` from within a Worker.
* **Cached**: The number of cached responses returned.
* **Uncached**: The number of uncached responses returned.

### Wall time per execution

Wall time represents the elapsed time in milliseconds between the start of a Worker invocation, and when the Workers runtime determines that no more JavaScript needs to run. Specifically, the Wall time per execution chart measures the wall time that the JavaScript context remained open — including time spent waiting on I/O, and time spent executing in your Worker's [`waitUntil()`](https://developers.cloudflare.com/workers/runtime-apis/context/#waituntil) handler.

Wall time is not the same as the time it takes your Worker to send the final byte of a response back to the client. Wall time can be higher, if tasks within `waitUntil()` are still running after the response has been sent, or it can be lower. For example, when returning a response with a large body, the Workers runtime can, in some cases, determine that no more JavaScript needs to run, and closes the JavaScript context before all the bytes have passed through and been sent.

The Wall Time per execution chart shows historical wall time data broken down into relevant quantiles using [reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling). Learn more about [interpreting quantiles](https://www.statisticshowto.com/quantile-definition-find-easy-steps/).
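
To make the wall time definition concrete, here is a minimal sketch (the logging endpoint is a placeholder) in which the response is delivered almost immediately, but wall time keeps accruing until a background task scheduled with `waitUntil()` settles:

```js
export default {
  async fetch(request, env, ctx) {
    // The client receives this response right away...
    const response = new Response("ok");

    // ...but the JavaScript context stays open until this background fetch
    // settles, so its duration is counted in wall time even though it does
    // not delay the response.
    ctx.waitUntil(
      fetch("https://log-service.example.com/", {
        method: "POST",
        body: JSON.stringify({ path: new URL(request.url).pathname }),
      }),
    );

    return response;
  },
};
```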
### CPU Time per execution

The CPU Time per execution chart shows historical CPU time data broken down into relevant quantiles using [reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling). Learn more about [interpreting quantiles](https://www.statisticshowto.com/quantile-definition-find-easy-steps/).

In some cases, higher quantiles may appear to exceed [CPU time limits](https://developers.cloudflare.com/workers/platform/limits/#cpu-time) without generating invocation errors because of a mechanism in the Workers runtime that allows rollover CPU time for requests below the CPU limit.

### Execution duration (GB-seconds)

The Duration per request chart shows historical [duration](https://developers.cloudflare.com/workers/platform/limits/#duration) per Worker invocation. The data is broken down into relevant quantiles, similar to the CPU time chart. Learn more about [interpreting quantiles](https://www.statisticshowto.com/quantile-definition-find-easy-steps/). Understanding duration on your Worker is especially useful when you intend to do a significant amount of computation on the Worker itself.

### Invocation statuses

To review invocation statuses:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Select **Workers & Pages**.
3. Select your Worker.
4. Find the **Summary** graph in **Metrics**.
5. Select **Errors**.

Worker invocation statuses indicate whether a Worker executed successfully or failed to generate a response in the Workers runtime. Invocation statuses differ from HTTP status codes. In some cases, a Worker invocation succeeds but does not generate a successful HTTP status because of another error encountered outside of the Workers runtime. Some invocation statuses result in a [Workers error code](https://developers.cloudflare.com/workers/observability/errors/#error-pages-generated-by-workers) being returned to the client.

| Invocation status | Definition | Workers error code | GraphQL field |
| - | - | - | - |
| Success | Worker executed successfully | | `success` |
| Client disconnected | HTTP client (that is, the browser) disconnected before the request completed | | `clientDisconnected` |
| Worker threw exception | Worker threw an unhandled JavaScript exception | 1101 | `scriptThrewException` |
| Exceeded resources¹ | Worker exceeded runtime limits | 1102, 1027 | `exceededResources` |
| Internal error² | Workers runtime encountered an error | | `internalError` |

¹ The Exceeded Resources status may appear when the Worker exceeds a [runtime limit](https://developers.cloudflare.com/workers/platform/limits/#request-limits). The most common cause is excessive CPU time, but it can also be caused by a Worker exceeding startup time or free tier limits.

² The Internal Error status may appear when the Workers runtime fails to process a request due to an internal failure in our system. These errors are not caused by any issue with the Worker code nor any resource limit. While requests with Internal Error status are rare, some may appear during normal operation. These requests are not counted towards usage for billing purposes. If you notice an elevated rate of requests with Internal Error status, review [www.cloudflarestatus.com](https://www.cloudflarestatus.com/).

To further investigate exceptions, use [`wrangler tail`](https://developers.cloudflare.com/workers/wrangler/commands/#tail).
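
One common way the difference between invocation statuses and HTTP status codes shows up is a handler that catches its own exception: the invocation is recorded as Success because the Worker returned a response, while the client still receives an HTTP 500. A minimal sketch, assuming a hypothetical `handle()` function that may throw:

```js
export default {
  async fetch(request, env, ctx) {
    try {
      return await handle(request);
    } catch (err) {
      // Because the exception is caught, the invocation status is "Success"
      // rather than "Worker threw exception" (error 1101); only the HTTP
      // status code reflects the failure.
      console.error(err.stack);
      return new Response("Something went wrong", { status: 500 });
    }
  },
};

// Hypothetical request handler that may throw.
async function handle(request) {
  throw new Error("example failure");
}
```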
### Request duration

The request duration chart shows how long it took your Worker to respond to requests, including code execution and time spent waiting on I/O. The request duration chart is currently only available when your Worker has [Smart Placement](https://developers.cloudflare.com/workers/configuration/smart-placement) enabled.

In contrast to [execution duration](https://developers.cloudflare.com/workers/observability/metrics-and-analytics/#execution-duration-gb-seconds), which measures only the time a Worker is active, request duration measures from the time a request comes into a data center until a response is delivered.

The data shows the duration for requests with Smart Placement enabled compared to those with Smart Placement disabled (by default, 1% of requests are routed with Smart Placement disabled). The chart shows a histogram with duration across the x-axis and the percentage of requests that fall into the corresponding duration on the y-axis.

### Metrics retention

Worker metrics can be inspected for up to three months in the past in maximum increments of one week.

## Zone analytics

Zone analytics aggregate request data for all Workers assigned to any [routes](https://developers.cloudflare.com/workers/configuration/routing/routes/) defined for a zone.

To review zone metrics:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Select your site.
3. In **Analytics & Logs**, select **Workers**.

Zone data can be scoped by time range within the last 30 days. The dashboard includes charts and information described below.

### Subrequests

This chart shows subrequests — requests triggered by calling `fetch` from within a Worker — broken down by cache status.

* **Uncached**: Requests answered directly by your origin server or other servers responding to subrequests.
* **Cached**: Requests answered by Cloudflare's [cache](https://www.cloudflare.com/learning/cdn/what-is-caching/). As Cloudflare caches more of your content, it accelerates content delivery and reduces load on your origin.

### Bandwidth

This chart shows historical bandwidth usage for all Workers on a zone broken down by cache status.

### Status codes

This chart shows historical requests for all Workers on a zone broken down by HTTP status code.

### Total requests

This chart shows historical data for all Workers on a zone broken down by successful requests, failed requests, and subrequests. These request types are categorized by HTTP status code, where `200`-level requests are considered successful and `400`- to `500`-level requests are considered failed.

## GraphQL

Worker metrics are powered by GraphQL. Learn more about querying our data sets in the [Querying Workers Metrics with GraphQL tutorial](https://developers.cloudflare.com/analytics/graphql-api/tutorials/querying-workers-metrics/).

--- title: Query Builder · Cloudflare Workers docs description: Write structured queries to investigate and visualize your telemetry data. lastUpdated: 2025-04-09T02:45:13.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/observability/query-builder/ md: https://developers.cloudflare.com/workers/observability/query-builder/index.md ---

The Query Builder helps you write structured queries to investigate and visualize your telemetry data. The Query Builder searches the Workers Observability dataset, which currently includes all logs stored by [Workers Logs](https://developers.cloudflare.com/workers/observability/logs/workers-logs/).
The Query Builder can be found in the [Workers' Observability tab in the Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/investigate/).

## Enable Query Builder

The Query Builder is available to all developers and requires no enablement. Queries search all Workers Logs stored by Cloudflare. If you have not yet enabled Workers Logs, you can do so by adding the following setting to your [Worker's Wrangler file](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#enable-workers-logs) and redeploying your Worker.

* wrangler.jsonc

```jsonc
{
  "observability": {
    "enabled": true,
    "logs": {
      "invocation_logs": true,
      "head_sampling_rate": 1
    }
  }
}
```

* wrangler.toml

```toml
[observability]
enabled = true

[observability.logs]
invocation_logs = true
head_sampling_rate = 1 # optional. default = 1.
```

## Write a query in the Cloudflare dashboard

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/investigate/) and select your account.
2. In Account Home, go to **Workers & Pages**.
3. Select **Observability** in the left-hand navigation panel, and then the **Investigate** tab.
4. Select a **Visualization**.
5. Optional: Add fields to Filter, Group By, Order By, and Limit. For more information, see what [composes a query](https://developers.cloudflare.com/workers/observability/query-builder/#query-composition).
6. Optional: Select the appropriate time range.
7. Select **Run**. The query will automatically run whenever changes are made.

## Query composition

### Visualization

The Query Builder supports many visualization operators, including:

| Function | Arguments | Description |
| - | - | - |
| **Count** | n/a | The total number of rows matching the query conditions |
| **Count Distinct** | any field | The number of occurrences of the unique values in the dataset |
| **Min** | numeric field | The smallest value for the field in the dataset |
| **Max** | numeric field | The largest value for the field in the dataset |
| **Sum** | numeric field | The total of all of the values for the field in the dataset |
| **Average** | numeric field | The average of the field in the dataset |
| **Standard Deviation** | numeric field | The standard deviation of the field in the dataset |
| **Variance** | numeric field | The variance of the field in the dataset |
| **P001** | numeric field | The value of the field below which 0.1% of the data falls |
| **P01** | numeric field | The value of the field below which 1% of the data falls |
| **P05** | numeric field | The value of the field below which 5% of the data falls |
| **P10** | numeric field | The value of the field below which 10% of the data falls |
| **P25** | numeric field | The value of the field below which 25% of the data falls |
| **Median (P50)** | numeric field | The value of the field below which 50% of the data falls |
| **P75** | numeric field | The value of the field below which 75% of the data falls |
| **P90** | numeric field | The value of the field below which 90% of the data falls |
| **P95** | numeric field | The value of the field below which 95% of the data falls |
| **P99** | numeric field | The value of the field below which 99% of the data falls |
| **P999** | numeric field | The value of the field below which 99.9% of the data falls |

You can add multiple visualizations in a single query. Each visualization renders a graph. A single summary table is also returned, which shows the raw query results.
![Example showing the Query Builder with multiple visualizations](https://developers.cloudflare.com/_astro/query-builder-visualization.CBcVDFe0_25kyAz.webp)

All methods are aggregate functions. Most methods operate on a specific field in the log event. `Count` is an exception, and is an aggregate function that returns the number of log events matching the filter conditions.

### Filter

Filters help return the columns that match the specified conditions. Filters have three components: a key, an operator, and a value. The key is any field in a log event. For example, you may choose `$workers.cpuTimeMs` or `$metadata.message`. The operator is a logical condition that evaluates to true or false. See the table below for supported conditions:

| Data Type | Valid Conditions (Operators) |
| - | - |
| Numeric | Equals, Does not equal, Greater, Greater or equals, Less, Less or equals, Exists, Does not exist |
| String | Equals, Does not equal, Includes, Does not include, Regex, Exists, Does not exist, Starts with |

The value for a numeric field is an integer. The value for a string field is any string.

To add a filter:

1. Select **+** in the **Filter** section.
2. Select **Select key...** and input a key name. For example, `$workers.cpuTimeMs`.
3. Select the operator and change it to the one best suited. For example, `Greater than`.
4. Select **Select value...** and input a value. For example, `100`.

When you run the query with the filter specified above, only log events where `$workers.cpuTimeMs > 100` will be returned. Adding multiple filters combines them with an AND operator, meaning that only events matching all the filters will be returned.

### Search

Search is a text filter that returns only events containing the specified text. Search can be helpful as a quick filtering mechanism, or to search for unique identifiable values in your logs.

### Group By

Group By combines rows that have the same value into summary rows. For example, if a query adds `$workers.event.request.cf.country` as a Group By field, then the summary table will group by country.

### Order By

Order By affects how the results are sorted in the summary table. If `asc` is selected, the results are sorted in ascending order - from least to greatest. If `desc` is selected, the results are sorted in descending order - from greatest to least.

### Limit

Limit restricts the number of results returned. When paired with [Order By](https://developers.cloudflare.com/workers/observability/query-builder/#order-by), it can be used to return the "top" or "first" N results.

### Select time range

When you select a time range, you specify the time interval where you want to look for matching events. The retention period is dependent on your [plan type](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#pricing).

## Viewing query results

There are three views for queries: Visualizations, Invocations, and Events.

### Visualizations tab

The **Visualizations** tab shows graphs and a summary table for the query.

![Visualization Overview](https://developers.cloudflare.com/_astro/query-builder-visualization.CBcVDFe0_25kyAz.webp)

### Invocations tab

The **Invocations** tab shows all logs, grouped by invocation, and ordered by timestamp. Only invocations matching the query criteria are returned.

![Invocations Overview](https://developers.cloudflare.com/_astro/query-builder-invocations-overview.C02m4pPf_5zMXx.webp)

### Events tab

The **Events** tab shows all logs, ordered by timestamp.
Only events matching the query criteria are returned. The Events tab can be customized to add additional fields to the view.

![Overview](https://developers.cloudflare.com/_astro/query-builder-events-overview.Cvj8cxX3_Z17BcJ5.webp)

## Save queries

It is recommended to save queries that may be reused for future investigations. You can save a query with a name, description, and custom tags by selecting **Save Query**. Queries are saved at the account level and are accessible to all users in the account.

Saved queries can be re-run by selecting the relevant query from the **Queries** tab. You can edit the query and save edits.

Queries can be starred by users. Starred queries are unique to the user, and not to the account.

## Delete queries

Saved queries can be deleted from the **Queries** tab. If you delete a query, the query is deleted for all users in the account.

1. Select the [Queries](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/queries) tab in the Observability dashboard.
2. On the right-hand side, select the three dots for additional actions.
3. Select **Delete Query** and follow the instructions.

## Share queries

Saved queries are assigned a unique URL and can be shared with any user in the account.

## Example: Composing a query

In this example, we will construct a query to find and debug all paths that respond with 5xx errors.

First, we create a base query. In this base query, we want to visualize by the raw event count. We can add a filter for `$workers.event.response.status` that is greater than or equal to 500. Then, we group by `$workers.event.request.path` and `$workers.event.response.status` to identify the number of requests that were affected by this behavior.

![Constructing a query](https://developers.cloudflare.com/_astro/query-builder-ex1-query.CDbj8N5d_Z1yElmc.webp)

The results show that the `/actuator/env` path has been experiencing 500s. Now, we can apply a filter for this path and investigate.

![Adding an additional field to the query](https://developers.cloudflare.com/_astro/query-builder-ex1-query-with-filter.DUqcI8AK_1aMEHy.webp)

Now, we can investigate by selecting the **Invocations** tab. We can see that there were two logged invocations of this error.

![Examining the Invocations tab in the Query Builder](https://developers.cloudflare.com/_astro/query-builder-ex1-invocations.C4Qt7ulL_eBX3s.webp)

We can expand a single invocation to view the relevant logs, and continue to debug.

![Viewing the logs for a single Invocation](https://developers.cloudflare.com/_astro/query-builder-ex1-invocation-logs.FJWtya7H_2tU9NB.webp)

--- title: Source maps and stack traces · Cloudflare Workers docs description: Adding source maps and generating stack traces for Workers. lastUpdated: 2025-04-23T14:32:23.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/observability/source-maps/ md: https://developers.cloudflare.com/workers/observability/source-maps/index.md ---

[Stack traces](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error/stack) help with debugging your code when your application encounters an unhandled exception. Stack traces show you the specific functions that were called, in what order, from which line and file, and with what arguments.

Most JavaScript code is first bundled, often transpiled, and then minified before being deployed to production. This process creates smaller bundles to optimize performance and converts code from TypeScript to JavaScript if needed.
Source maps translate compiled and minified code back to the original code that you wrote. Source maps are combined with the stack trace returned by the JavaScript runtime to present you with a stack trace that points to the original files and line numbers.

## Source Maps

To enable source maps, add the following to your Worker's [Wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/):

* wrangler.jsonc

```jsonc
{
  "upload_source_maps": true
}
```

* wrangler.toml

```toml
upload_source_maps = true
```

When `upload_source_maps` is set to `true`, Wrangler will automatically generate and upload source map files when you run [`wrangler deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#deploy) or [`wrangler versions deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#deploy-2).

Note

Miniflare can also [output source maps](https://miniflare.dev/developing/source-maps) for use in local development or [testing](https://developers.cloudflare.com/workers/testing/miniflare/writing-tests).

## Stack traces

When your Worker throws an uncaught exception, we fetch the source map and use it to map the stack trace of the exception back to lines of your Worker's original source code. You can then view the stack trace when streaming [real-time logs](https://developers.cloudflare.com/workers/observability/logs/real-time-logs/) or in [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/).

Note

The source map is retrieved after your Worker invocation completes — it's an asynchronous process that does not impact your Worker's CPU utilization or performance. Source maps are not accessible inside the Worker at runtime; if you `console.log()` the [stack property](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error/stack) within a Worker, you will not get a deobfuscated stack trace.

When Cloudflare attempts to remap a stack trace to the Worker's source map, it does so line-by-line, remapping as much as possible. If a line of the stack trace cannot be remapped for any reason, Cloudflare will leave that line of the stack trace unchanged, and continue to the next line of the stack trace.

## Limits

Wrangler version

Minimum required Wrangler version for source maps: 3.46.0. Check your version by running `wrangler --version`.

| Description | Limit |
| - | - |
| Maximum Source Map Size | 15 MB gzipped |

## Example

Consider a simple project. `src/index.ts` serves as the entrypoint of the application and `src/calculator.ts` defines a `ComplexCalculator` class that supports basic arithmetic. Let's see how source maps can simplify debugging an error in the `ComplexCalculator` class.

![Stack Trace without Source Map remapping](https://developers.cloudflare.com/_astro/without-source-map.ByYR83oU_1kmSml.webp)

With **no source maps uploaded**: notice how all the JavaScript has been minified to one file, so the stack trace is missing information on file name, shows incorrect line numbers, and incorrectly references `js` instead of `ts`.

![Stack Trace with Source Map remapping](https://developers.cloudflare.com/_astro/with-source-map.PipytmVe_Z17DcFD.webp)

With **source maps uploaded**: all methods reference the correct files and line numbers.

## Related resources

* [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/logpush/) - Learn how to attach Tail Workers to transform your logs and send them to HTTP endpoints.
* [Real-time logs](https://developers.cloudflare.com/workers/observability/logs/real-time-logs/) - Learn how to capture Workers logs in real-time. * [RPC error handling](https://developers.cloudflare.com/workers/runtime-apis/rpc/error-handling/) - Learn how exceptions are handled over RPC (Remote Procedure Call). --- title: Integrations · Cloudflare Workers docs description: Send your telemetry data to third parties. lastUpdated: 2025-06-11T17:40:43.000Z chatbotDeprioritize: true source_url: html: https://developers.cloudflare.com/workers/observability/third-party-integrations/ md: https://developers.cloudflare.com/workers/observability/third-party-integrations/index.md --- Send your telemetry data to third parties. * [Sentry](https://docs.sentry.io/platforms/javascript/guides/cloudflare/) --- title: Betas · Cloudflare Workers docs description: Cloudflare developer platform and Workers features beta status. lastUpdated: 2024-09-25T21:11:15.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/platform/betas/ md: https://developers.cloudflare.com/workers/platform/betas/index.md --- These are the current alphas and betas relevant to the Cloudflare Workers platform. * **Public alphas and betas are openly available**, but may have limitations and caveats due to their early stage of development. * Private alphas and betas require explicit access to be granted. Refer to the documentation to join the relevant product waitlist. | Product | Private Beta | Public Beta | More Info | | - | - | - | - | | Email Workers | | ✅ | [Docs](https://developers.cloudflare.com/email-routing/email-workers/) | | Green Compute | | ✅ | [Blog](https://blog.cloudflare.com/earth-day-2022-green-compute-open-beta/) | | Pub/Sub | ✅ | | [Docs](https://developers.cloudflare.com/pub-sub) | | [TCP Sockets](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) | | ✅ | [Docs](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets) | --- title: Workers Changelog · Cloudflare Workers docs description: Review recent changes to Cloudflare Workers. lastUpdated: 2025-02-13T19:35:19.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/platform/changelog/ md: https://developers.cloudflare.com/workers/platform/changelog/index.md --- This changelog details meaningful changes made to Workers across the Cloudflare dashboard, Wrangler, the API, and the workerd runtime. These changes are not configurable. This is *different* from [compatibility dates](https://developers.cloudflare.com/workers/configuration/compatibility-dates/) and [compatibility flags](https://developers.cloudflare.com/workers/configuration/compatibility-flags/), which let you explicitly opt-in to or opt-out of specific changes to the Workers Runtime. [Subscribe to RSS](https://developers.cloudflare.com/workers/platform/changelog/index.xml) ## 2025-06-04 * Updated v8 to version 13.8. ## 2025-05-22 * Enabled explicit resource context management and support for Float16Array ## 2025-05-20 * Updated v8 to version 13.7. ## 2025-04-16 * Updated v8 to version 13.6. ## 2025-04-03 * Websocket client exceptions are now JS exceptions rather than internal errors. ## 2025-03-27 * Updated v8 to version 13.5. ## 2025-02-28 * Updated v8 to version 13.4. * When using `nodejs_compat`, the new `nodejs_compat_populate_process_env` compatibility flag will cause `process.env` to be automatically populated with text bindings configured for the worker. 
## 2025-02-26 * [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/) now supports building projects that use **pnpm 10** as the package manager. If your build previously failed due to this unsupported version, retry your build. No config changes needed. ## 2025-02-13 * [Smart Placement](https://developers.cloudflare.com/workers/configuration/smart-placement/) no longer runs Workers in the same location as D1 databases they are bound to. The same [placement logic](https://developers.cloudflare.com/workers/configuration/smart-placement/#understand-how-smart-placement-works) now applies to all Workers that use Smart Placement, regardless of whether they use D1 bindings. ## 2025-02-11 * When Workers generate an "internal error" exception in response to certain failures, the exception message may provide a reference ID that customers can include in support communication for easier error identification. For example, an exception with the new message might look like: `internal error; reference = 0123456789abcdefghijklmn`. ## 2025-01-31 * Updated v8 to version 13.3. ## 2025-01-15 * The runtime will no longer reuse isolates across worker versions even if the code happens to be identical. This "optimization" was deemed more confusing than it is worth. ## 2025-01-14 * Updated v8 to version 13.2. ## 2024-12-19 * **Cloudflare GitHub App Permissions Update** * Cloudflare is requesting updated permissions for the [Cloudflare GitHub App](https://github.com/apps/cloudflare-workers-and-pages) to enable features like automatically creating a repository on your GitHub account and deploying the new repository for you when getting started with a template. This feature is coming out soon to support a better onboarding experience. * **Requested permissions:** * [Repository Administration](https://docs.github.com/en/rest/authentication/permissions-required-for-github-apps?apiVersion=2022-11-28#repository-permissions-for-administration) (read/write) to create repositories. * [Contents](https://docs.github.com/en/rest/authentication/permissions-required-for-github-apps?apiVersion=2022-11-28#repository-permissions-for-contents) (read/write) to push code to the created repositories. * **Who is impacted:** * Existing users will be prompted to update permissions when GitHub sends an email with subject "\[GitHub] Cloudflare Workers & Pages is requesting updated permission" on December 19th, 2024. * New users installing the app will see the updated permissions during the connecting repository process. * **Action:** Review and accept the permissions update to use upcoming features. *If you decline or take no action, you can continue connecting repositories and deploying changes via the Cloudflare GitHub App as you do today, but new features requiring these permissions will not be available.* * **Questions?** Visit [#github-permissions-update](https://discord.com/channels/595317990191398933/1313895851520688163) in the Cloudflare Developers Discord. ## 2024-11-18 * Updated v8 to version 13.1. ## 2024-11-12 * Fixes exception seen when trying to call deleteAll() during a SQLite-backed Durable Object's alarm handler. ## 2024-11-08 * Update SQLite to version 3.47. ## 2024-10-21 * Fixed encoding of WebSocket pong messages when talking to remote servers. Previously, when a Worker made a WebSocket connection to an external server, the server may have prematurely closed the WebSocket for failure to respond correctly to pings. Client-side connections were not affected. ## 2024-10-14 * Updated v8 to version 13.0. 
## 2024-09-26

* You can now connect your GitHub or GitLab repository to an existing Worker to automatically build and deploy your changes when you make a git push with [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds/).

## 2024-09-20

* Workers now support the \[`handle_cross_request_promise_resolution`] compatibility flag, which addresses certain edge cases around awaiting and resolving promises across multiple requests.

## 2024-09-19

* Revamped Workers and Pages UI settings to simplify the creation and management of project configurations. For bugs and general feedback, please submit this [form](https://forms.gle/XXqhRGbZmuzninuN9).

## 2024-09-16

* Updated v8 to version 12.9.

## 2024-08-19

* Workers now support the [`allow_custom_ports` compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#allow-specifying-a-custom-port-when-making-a-subrequest-with-the-fetch-api), which enables `fetch()` calls to custom ports.

## 2024-08-15

* Updated v8 to version 12.8.
* You can now use [`Promise.try()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/try) in Cloudflare Workers. Refer to [`tc39/proposal-promise-try`](https://github.com/tc39/proposal-promise-try) for more context on this API that has recently been added to the JavaScript language.

## 2024-08-14

* When using the `nodejs_compat_v2` compatibility flag, the `setImmediate(fn)` API from Node.js is now available at the global scope.
* The `internal_writable_stream_abort_clears_queue` compatibility flag will ensure that certain `WritableStream` `abort()` operations are handled immediately rather than lazily, ensuring that the stream is appropriately aborted when the consumer of the stream is no longer active.

## 2024-07-19

* Workers with the [mTLS](https://developers.cloudflare.com/workers/runtime-apis/bindings/mtls/) binding now support [Gradual Deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/).

## 2024-07-18

* Added a new `truncated` flag to [Tail Worker](https://developers.cloudflare.com/workers/observability/logs/tail-workers/) events to indicate when the event buffer is full and events are being dropped.

## 2024-07-17

* Updated v8 to version 12.7.

## 2024-07-11

* Added a community-contributed tutorial on how to create [custom access control for files in R2 using D1 and Workers](https://developers.cloudflare.com/developer-spotlight/tutorials/custom-access-control-for-files/).
* Added a community-contributed tutorial on how to [send form submissions using Astro and Resend](https://developers.cloudflare.com/developer-spotlight/tutorials/handle-form-submission-with-astro-resend/).
* Added a community-contributed tutorial on how to [create a sitemap from Sanity CMS with Workers](https://developers.cloudflare.com/developer-spotlight/tutorials/create-sitemap-from-sanity-cms/).

## 2024-07-03

* The [`node:crypto`](https://developers.cloudflare.com/workers/runtime-apis/nodejs/crypto/) implementation now includes the `scrypt(...)` and `scryptSync(...)` APIs.
* Workers now support the standard [EventSource](https://developers.cloudflare.com/workers/runtime-apis/eventsource/) API.
* Fixed a bug where writing to an HTTP Response body would sometimes hang when the client disconnected (and sometimes throw an exception). It will now always throw an exception.
## 2024-07-01

* When using [Gradual Deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/), you can now use [version overrides](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/#version-overrides) to send a request to a specific version of your Worker.

## 2024-06-28

* Fixed a bug which caused `Date.now()` to return skewed results if called before the first I/O of the first request after a Worker first started up. The value returned would be offset backwards by the amount of CPU time spent starting the Worker (compiling and running global scope), making it seem like the first I/O (e.g. first fetch()) was slower than it really was. This skew had nothing to do with Spectre mitigations; it was simply a longstanding bug.

## 2024-06-24

* [Exceptions](https://developers.cloudflare.com/durable-objects/best-practices/error-handling) thrown from Durable Object internal operations and tunneled to the caller may now be populated with a `.retryable: true` property if the exception was likely due to a transient failure, or populated with an `.overloaded: true` property if the exception was due to [overload](https://developers.cloudflare.com/durable-objects/observability/troubleshooting/#durable-object-is-overloaded).

## 2024-06-20

* We now prompt for extra confirmation if attempting to roll back to a version of a Worker using the [Deployments API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/deployments/methods/create/) where the value of a secret is different from that of the currently deployed version. A `?force=true` query parameter can be specified to proceed with the rollback.

## 2024-06-19

* When using the [`nodejs_compat` compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/), the `buffer` module now has an implementation of the `isAscii()` and `isUtf8()` methods.
* Fixed a bug where exceptions propagated from [JS RPC](https://developers.cloudflare.com/workers/runtime-apis/rpc) calls to Durable Objects would lack the `.remote` property that exceptions from `fetch()` calls to Durable Objects have.

## 2024-06-12

* Blob and Body objects now include a new `bytes()` method, reflecting [recent](https://w3c.github.io/FileAPI/#bytes-method-algo) [additions](https://fetch.spec.whatwg.org/#dom-body-bytes) to web standards.

## 2024-06-03

* Workers with [Smart Placement](https://developers.cloudflare.com/workers/configuration/smart-placement/) enabled now support [Gradual Deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/).

## 2024-05-17

* Updated v8 to version 12.6.

## 2024-05-15

* The new [`fetch_standard_url` compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#use-standard-url-parsing-in-fetch) will become active by default on June 3rd, 2024 and ensures that URLs passed into the `fetch(...)` API, the `new Request(...)` constructor, and redirected requests will be parsed using the standard WHATWG URL parser.
* DigestStream is now more efficient and exposes a new `bytesWritten` property that indicates the number of bytes written to the digest.

## 2024-05-13

* Updated v8 to version 12.5.
* A bug in the fetch API implementation would cause the content type of a Blob to be incorrectly set.
The fix is being released behind a new [`blob_standard_mime_type` compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#properly-extract-blob-mime-type-from-content-type-headers).

## 2024-05-03

* Fixed RPC to/from Durable Objects not honoring the output gate.
* The `internal_stream_byob_return_view` compatibility flag can be used to improve the standards compliance of the `ReadableStreamBYOBReader` implementation when working with BYOB streams provided by the runtime (like in `response.body` or `request.body`). The flag ensures that the final read result will always include a `value` field whose value is set to an empty `Uint8Array` whose underlying `ArrayBuffer` is the same memory allocation as the one passed in on the call to `read()`.
* The Web platform standard `reportError(err)` global API is now available in Workers. The reported error will first be emitted as an 'error' event on the global scope, then reported in both the console output and Tail Worker exceptions by default.

## 2024-04-26

* Updated v8 to version 12.4.

## 2024-04-11

* Improved Streams API spec compliance by exposing `desiredSize` and other properties on stream class prototypes.
* The new `URL.parse(...)` method is implemented. This provides an alternative to the URL constructor that does not throw exceptions on invalid URLs.
* R2 bindings objects now have a `storageClass` option. This can be set on object upload to specify the R2 storage class - Standard or Infrequent Access. The property is also returned with object metadata.

## 2024-04-05

* A new [JavaScript-native remote procedure call (RPC) API](https://developers.cloudflare.com/workers/runtime-apis/rpc) is now available, allowing you to communicate more easily across Workers and between Workers and Durable Objects.

## 2024-04-04

* There is no longer an explicit limit on the total amount of data which may be uploaded with Cache API [`put()`](https://developers.cloudflare.com/workers/runtime-apis/cache/#put) per request. Other [Cache API Limits](https://developers.cloudflare.com/workers/platform/limits/#cache-api-limits) continue to apply.
* The Web standard `ReadableStream.from()` API is now implemented. The API enables creating a `ReadableStream` from either a sync or async iterable.

## 2024-04-03

* When the [`brotli_content_encoding`](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#brotli-content-encoding-support) compatibility flag is enabled, the Workers runtime now supports compressing and decompressing request bodies encoded using the [Brotli](https://developer.mozilla.org/en-US/docs/Glossary/Brotli_compression) compression algorithm. Refer to [this docs section](https://developers.cloudflare.com/workers/runtime-apis/fetch/#how-the-accept-encoding-header-is-handled) for more detail.

## 2024-04-02

* You can now [write Workers in Python](https://developers.cloudflare.com/workers/languages/python).

## 2024-04-01

* The new [`unwrap_custom_thenables` compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#handling-custom-thenables) enables Workers to accept custom thenables in internal APIs that expect a promise (for instance, the `ctx.waitUntil(...)` method).
* TransformStreams created with the TransformStream constructor now have a cancel algorithm that is called when the stream is canceled or aborted. This change is part of the implementation of the WHATWG Streams standard.
* The [`nodejs_compat` compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) now includes an implementation of the [`MockTracker` API from `node:test`](https://nodejs.org/api/test.html#class-mocktracker). This is not an implementation of the full `node:test` module, and mock timers are currently not included. * Exceptions reported to [Tail Workers](https://developers.cloudflare.com/workers/observability/logs/tail-workers/) now include a "stack" property containing the exception's stack trace, if available. ## 2024-03-11 * Built-in APIs that return Promises will now produce stack traces when the Promise rejects. Previously, the rejection error lacked a stack trace. * A new compat flag `fetcher_no_get_put_delete` removes the `get()`, `put()`, and `delete()` methods on service bindings and Durable Object stubs. This will become the default as of compatibility date 2024-03-26. These methods were designed as simple convenience wrappers around `fetch()`, but were never documented. * Updated v8 to version 12.3. ## 2024-02-24 * v8 updated to version 12.2. * You can now use [Iterator helpers](https://v8.dev/features/iterator-helpers) in Workers. * You can now use [new methods on `Set`](https://github.com/tc39/proposal-set-methods), such as `Set.intersection` and `Set.union`, in Workers. ## 2024-02-23 * Sockets now support an [`opened`](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/#socket) attribute. * [Durable Object alarm handlers](https://developers.cloudflare.com/durable-objects/api/alarms/#alarm) now impose a maximum wall time of 15 minutes. ## 2023-12-04 * The Web Platform standard [`navigator.sendBeacon(...)` API](https://developers.cloudflare.com/workers/runtime-apis/web-standards#navigatorsendbeaconurl-data) is now provided by the Workers runtime. * V8 updated to 12.0. ## 2023-10-30 * A new usage model called [Workers Standard](https://developers.cloudflare.com/workers/platform/pricing/#workers) is available for Workers and Pages Functions pricing. This is now the default usage model for accounts that are first upgraded to the Workers Paid plan. Read the [blog post](https://blog.cloudflare.com/workers-pricing-scale-to-zero/) for more information. * The usage model set in a script's wrangler.toml will be ignored after an account has opted-in to [Workers Standard](https://developers.cloudflare.com/workers/platform/pricing/#workers) pricing. It must be configured through the dashboard (Workers & Pages > Select your Worker > Settings > Usage Model). * Workers and Pages Functions on the Standard usage model can set custom [CPU limits](https://developers.cloudflare.com/workers/wrangler/configuration/#limits) for their Workers ## 2023-10-20 * Added the [`crypto_preserve_public_exponent`](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#webcrypto-preserve-publicexponent-field) compatibility flag to correct a wrong type being used in the algorithm field of RSA keys in the WebCrypto API. ## 2023-10-18 * The limit of 3 Cron Triggers per Worker has been removed. Account-level limits on the total number of Cron Triggers across all Workers still apply. ## 2023-10-12 * A [TCP Socket](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/)'s WritableStream now ensures the connection has opened before resolving the promise returned by `close`. ## 2023-10-09 * The Web Platform standard [`CustomEvent` class](https://dom.spec.whatwg.org/#interface-customevent) is now available in Workers. 
* Fixed a bug in the WebCrypto API where the `publicExponent` field of the algorithm of RSA keys would have the wrong type. Use the [`crypto_preserve_public_exponent` compatibility flag](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#webcrypto-preserve-publicexponent-field) to enable the new behavior. ## 2023-09-14 * An implementation of the [`node:crypto`](https://developers.cloudflare.com/workers/runtime-apis/nodejs/crypto/) API from Node.js is now available when the [`nodejs_compat` compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) is enabled. ## 2023-07-14 * An implementation of the [`util.MIMEType`](https://nodejs.org/api/util.html#class-utilmimetype) API from Node.js is now available when the [`nodejs_compat` compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs/) is enabled. ## 2023-07-07 * An implementation of the [`process.env`](https://developers.cloudflare.com/workers/runtime-apis/nodejs/process) API from Node.js is now available when using the `nodejs_compat` compatibility flag. * An implementation of the [`diagnostics_channel`](https://developers.cloudflare.com/workers/runtime-apis/nodejs/diagnostics-channel) API from Node.js is now available when using the `nodejs_compat` compatibility flag. ## 2023-06-22 * Added the [`strict_crypto_checks`](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#strict-crypto-error-checking) compatibility flag to enable additional [Web Crypto API](https://developers.cloudflare.com/workers/runtime-apis/web-crypto/) error and security checking. * Fixes regression in the [TCP Sockets API](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) where `connect("google.com:443")` would fail with a `TypeError`. ## 2023-06-19 * The [TCP Sockets API](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) now reports clearer errors when a connection cannot be established. * Updated V8 to 11.5. ## 2023-06-09 * `AbortSignal.any()` is now available. * Updated V8 to 11.4. * Following an update to the [WHATWG URL spec](https://url.spec.whatwg.org/#interface-urlsearchparams), the `delete()` and `has()` methods of the `URLSearchParams` class now accept an optional second argument to specify the search parameter’s value. This is potentially a breaking change, so it is gated behind the new `urlsearchparams_delete_has_value_arg` and [`url_standard`](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#new-url-parser-implementation) compatibility flags. * Added the [`strict_compression_checks`](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#strict-compression-error-checking) compatibility flag for additional [`DecompressionStream`](https://developers.cloudflare.com/workers/runtime-apis/web-standards/#compression-streams) error checking. ## 2023-05-26 * A new [Hibernatable WebSockets API](https://developers.cloudflare.com/durable-objects/best-practices/websockets/) (beta) has been added to [Durable Objects](https://developers.cloudflare.com/durable-objects/). The Hibernatable WebSockets API allows a Durable Object that is not currently running an event handler (for example, processing a WebSocket message or alarm) to be removed from memory while keeping its WebSockets connected (“hibernation”). A Durable Object that hibernates will not incur billable Duration (GB-sec) charges. 
## 2023-05-16 * The [new `connect()` method](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) allows you to connect to any TCP-speaking services directly from your Workers. To learn more about other protocols supported on the Workers platform, visit the [new Protocols documentation](https://developers.cloudflare.com/workers/reference/protocols/). * We have added new [native database integrations](https://developers.cloudflare.com/workers/databases/native-integrations/) for popular serverless database providers, including Neon, PlanetScale, and Supabase. Native integrations automatically handle the process of creating a connection string and adding it as a Secret to your Worker. * You can now also connect directly to databases over TCP from a Worker, starting with [PostgreSQL](https://developers.cloudflare.com/hyperdrive/examples/connect-to-postgres/). Support for PostgreSQL is based on the popular `pg` driver, and allows you to connect to any PostgreSQL instance over TLS from a Worker directly. * The [R2 Migrator](https://developers.cloudflare.com/r2/data-migration/) (Super Slurper), which automates the process of migrating from existing object storage providers to R2, is now Generally Available. ## 2023-05-15 * [Cursor](https://developers.cloudflare.com/workers/ai/), an experimental AI assistant, trained to answer questions about Cloudflare's Developer Platform, is now available to preview! Cursor can answer questions about Workers and the Cloudflare Developer Platform, and is itself built on Workers. You can read more about Cursor in the [announcement blog](https://blog.cloudflare.com/introducing-cursor-the-ai-assistant-for-docs/). ## 2023-05-12 * The [`performance.now()`](https://developer.mozilla.org/en-US/docs/Web/API/Performance/now) and [`performance.timeOrigin`](https://developer.mozilla.org/en-US/docs/Web/API/Performance/timeOrigin) APIs can now be used in Cloudflare Workers. Just like `Date.now()`, for [security reasons](https://developers.cloudflare.com/workers/reference/security-model/) time only advances after I/O. ## 2023-05-05 * The new `nodeJsCompatModule` type can be used with a Worker bundle to emulate a Node.js environment. Common Node.js globals such as `process` and `Buffer` will be present, and `require('...')` can be used to load Node.js built-ins without the `node:` specifier prefix. * Fixed an issue where websocket connections would be disconnected when updating workers. Now, only WebSockets connected to Durable Objects are disconnected by updates to that Durable Object’s code. ## 2023-04-28 * The Web Crypto API now supports curves Ed25519 and X25519 defined in the Secure Curves specification. * The global `connect` method has been moved to a `cloudflare:sockets` module. ## 2023-04-14 * No externally-visible changes this week. ## 2023-04-10 * `URL.canParse(...)` is a new standard API for testing that an input string can be parsed successfully as a URL without the additional cost of creating and throwing an error. * The Workers-specific `IdentityTransformStream` and `FixedLengthStream` classes now support specifying a `highWaterMark` for the writable-side that is used for backpressure signaling using the standard `writer.desiredSize`/`writer.ready` mechanisms. ## 2023-03-24 * Fixed a bug in Wrangler tail and live logs on the dashboard that prevented the Administrator Read-Only and Workers Tail Read roles from successfully tailing Workers. ## 2023-03-09 * No externally-visible changes. 
## 2023-03-06

* [Workers Logpush](https://developers.cloudflare.com/workers/observability/logs/logpush/#limits) now supports 300 characters per log line. This is an increase from the previous limit of 150 characters per line.

## 2023-02-06

* Fixed a bug where transferring large request bodies to a Durable Object was unexpectedly slow.
* Previously, an error would be thrown when trying to access unimplemented standard `Request` and `Response` properties. Now those will be left as `undefined`.

## 2023-01-31

* The [`request.cf`](https://developers.cloudflare.com/workers/runtime-apis/request/#incomingrequestcfproperties) object now includes two additional properties, `tlsClientHelloLength` and `tlsClientRandom`.

## 2023-01-13

* Durable Objects can now use jurisdictions with `idFromName` via a new subnamespace API.
* V8 updated to 10.9.

--- title: Deploy to Cloudflare buttons · Cloudflare Workers docs description: Set up a Deploy to Cloudflare button lastUpdated: 2025-06-05T13:06:24.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/platform/deploy-buttons/ md: https://developers.cloudflare.com/workers/platform/deploy-buttons/index.md ---

If you're building a Workers application and would like to share it with other developers, you can embed a Deploy to Cloudflare button in your README, blog post, or documentation to enable others to quickly deploy your application on their own Cloudflare account. Deploy to Cloudflare buttons eliminate the need for complex setup, allowing developers to get started with your public GitHub or GitLab repository in just a few clicks.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/saas-admin-template)

## What are Deploy to Cloudflare buttons?

Deploy to Cloudflare buttons simplify the deployment of a Workers application by enabling Cloudflare to:

* **Clone a Git repository**: Cloudflare clones your source repository into the user's GitHub/GitLab account where they can continue development after deploying.
* **Configure a project**: Your users can customize key details such as repository name, Worker name, and required resource names in a single setup page with customizations reflected in the newly created Git repository.
* **Build & deploy**: Cloudflare builds the application using [Workers Builds](https://developers.cloudflare.com/workers/ci-cd/builds) and deploys it to the Cloudflare network. Any required resources are automatically provisioned and bound to the Worker without additional setup.

![Deploy to Cloudflare Flow](https://developers.cloudflare.com/_astro/dtw-user-flow.zgS3Y8iK_hqlHb.webp)

## How to Set Up Deploy to Cloudflare buttons

Deploy to Cloudflare buttons can be embedded anywhere developers might want to launch your project. To add a Deploy to Cloudflare button, copy the following snippet and replace the Git repository URL with your project's URL. You can also optionally specify a subdirectory.

* Markdown

```md
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=)
```

* HTML

```html
<a href="https://deploy.workers.cloudflare.com/?url=">
  <img src="https://deploy.workers.cloudflare.com/button" alt="Deploy to Cloudflare" />
</a>
```

* URL

```plaintext
https://deploy.workers.cloudflare.com/?url=
```

If you have already deployed your application using Workers Builds, you can generate a Deploy to Cloudflare button directly from the Cloudflare dashboard by selecting the share button (located within your Worker details) and copying the provided snippet.
![Share an application](https://developers.cloudflare.com/_astro/dtw-share-project.CTDMrwQu_Z1yXLMx.webp) Once you have your snippet, you can paste this wherever you would like your button to be displayed. ## Automatic Resource provisioning If your Worker application requires Cloudflare resources, they will be automatically provisioned as part of the deployment. Currently, supported resources include: * **Storage**: [KV namespaces](https://developers.cloudflare.com/kv/), [D1 databases](https://developers.cloudflare.com/d1/), [R2 buckets](https://developers.cloudflare.com/r2/), [Hyperdrive](https://developers.cloudflare.com/hyperdrive/), and [Vectorize databases](https://developers.cloudflare.com/vectorize/) * **Compute**: [Durable Objects](https://developers.cloudflare.com/durable-objects/), [Workers AI](https://developers.cloudflare.com/workers-ai/), and [Queues](https://developers.cloudflare.com/queues/) Cloudflare will read the Wrangler configuration file of your source repo to determine resource requirements for your application. During deployment, Cloudflare will provision any necessary resources and update the Wrangler configuration where applicable for newly created resources (e.g. database IDs and namespace IDs). To ensure successful deployment, please make sure your source repository includes default values for resource names, resource IDs and any other properties for each binding. ## Best practices **Configuring Build/Deploy commands**: If you are using custom `build` and `deploy` scripts in your package.json (for example, if using a full stack framework or running D1 migrations), Cloudflare will automatically detect and pre-populate the build and deploy fields. Users can choose to modify or accept the custom commands during deployment configuration. If no `deploy` script is specified, Cloudflare will preconfigure `npx wrangler deploy` by default. If no `build` script is specified, Cloudflare will leave this field blank. **Running D1 Migrations**: If you would like to run migrations as part of your setup, you can specify this in your `package.json` by running your migrations as part of your `deploy` script. The migration command should reference the binding name rather than the database name to ensure migrations are successful when users specify a database name that is different from that of your source repository. The following is an example of how you can set up the scripts section of your `package.json`: ```json { "scripts": { "build": "astro build", "deploy": "npm run db:migrations:apply && wrangler deploy", "db:migrations:apply": "wrangler d1 migrations apply DB_BINDING --remote" } } ``` ## Limitations * **Monorepos**: Cloudflare does not fully support monorepos * If your repository URL contains a subdirectory, your application must be fully isolated within that subdirectory, including any dependencies. Otherwise, the build will fail. Cloudflare treats this subdirectory as the root of the new repository created as part of the deploy process. * Additionally, if you have a monorepo that contains multiple Workers applications, they will not be deployed together. You must configure a separate Deploy to Cloudflare button for each application. The user will manually create a distinct Workers application for each subdirectory. * **Pages applications**: Deploy to Cloudflare buttons only support Workers applications. * **Non-GitHub/GitLab repositories**: Source repositories from anything other than github.com and gitlab.com are not supported. 
Self-hosted versions of GitHub and GitLab are also not supported. * **Private repositories**: Repositories must be public in order for others to successfully use your Deploy to Cloudflare button. --- title: Infrastructure as Code (IaC) · Cloudflare Workers docs description: Uploading and managing Workers is easy with Wrangler, but sometimes you need to do it more programmatically. You might do this with IaC ("Infrastructure as Code") tools or by calling the Cloudflare API directly. Use cases for the API include build and deploy scripts, CI/CD pipelines, custom dev tools, and testing. We provide API SDK libraries for common languages that make interacting with the API easier, such as cloudflare-typescript and cloudflare-python. For IaC, a common tool is HashiCorp's Terraform. You can use the Cloudflare Terraform Provider to create and manage Workers resources. lastUpdated: 2025-06-19T17:15:24.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/platform/infrastructure-as-code/ md: https://developers.cloudflare.com/workers/platform/infrastructure-as-code/index.md --- Uploading and managing Workers is easy with [Wrangler](https://developers.cloudflare.com/workers/wrangler/configuration), but sometimes you need to do it more programmatically. You might do this with IaC ("Infrastructure as Code") tools or by calling the [Cloudflare API](https://developers.cloudflare.com/api) directly. Use cases for the API include build and deploy scripts, CI/CD pipelines, custom dev tools, and testing. We provide API SDK libraries for common languages that make interacting with the API easier, such as [cloudflare-typescript](https://github.com/cloudflare/cloudflare-typescript) and [cloudflare-python](https://github.com/cloudflare/cloudflare-python). For IaC, a common tool is HashiCorp's Terraform. You can use the [Cloudflare Terraform Provider](https://developers.cloudflare.com/terraform) to create and manage Workers resources. Here are examples of deploying a Worker with common tools and languages, and considerations for successfully managing Workers with IaC. In particular, the examples highlight how to upload script content and metadata, which is different with each approach. Reference the Upload Worker Module API docs [here](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update) for an exact definition of how script upload works. All of these examples need an [account ID](https://developers.cloudflare.com/fundamentals/account/find-account-and-zone-ids) and [API token](https://developers.cloudflare.com/fundamentals/api/get-started/create-token) (not Global API key) to work. ## Workers Bundling None of the examples below do [Workers Bundling](https://developers.cloudflare.com/workers/wrangler/bundling), which is usually the function of a tool like Wrangler or [esbuild](https://esbuild.github.io). Generally, you'd run this bundling step before applying your Terraform plan or using the API for script upload: ```bash wrangler deploy --dry-run --outdir build ``` Then you'd reference the bundled script like `build/index.js`. Note Depending on your Wrangler project and `--outdir`, the name and location of your bundled script might vary. Make sure to copy all of your config from `wrangler.json` into your Terraform config or API request. This is especially important for compatibility date or compatibility flags your script relies on.
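As an informal illustration of that workflow (not part of the official docs), the following Node script reads the bundled output and your Wrangler config so the same values can be mirrored in Terraform or in the upload metadata. The paths `build/index.js` and `wrangler.json` are assumptions and may differ in your project (for example, if you use `wrangler.jsonc` or `wrangler.toml`):

```ts
// Sketch only: collect the values to copy into Terraform or the script-upload API.
import { readFileSync } from "node:fs";

const scriptContent = readFileSync("build/index.js", "utf8"); // bundled entry point
const wranglerConfig = JSON.parse(readFileSync("wrangler.json", "utf8"));

// These fields should match the compatibility settings you declare in your
// Terraform resource or in the `metadata` part of an API upload.
console.log({
  mainModule: "index.js",
  compatibilityDate: wranglerConfig.compatibility_date,
  compatibilityFlags: wranglerConfig.compatibility_flags ?? [],
  bundledSizeBytes: Buffer.byteLength(scriptContent),
});
```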
## Terraform In this example, you need a local file named `my-hello-world-script.mjs` with script content similar to the script content used in the examples below. Replace `account_id` with your own. Learn more about the Cloudflare Terraform Provider [here](https://developers.cloudflare.com/terraform), and see an example with all the Workers script resource settings [here](https://github.com/cloudflare/terraform-provider-cloudflare/blob/main/examples/resources/cloudflare_workers_script/resource.tf). ```tf terraform { required_providers { cloudflare = { source = "cloudflare/cloudflare" version = "~> 5" } } } resource "cloudflare_workers_script" "my-hello-world-script" { account_id = "" script_name = "my-hello-world-script" main_module = "my-hello-world-script.mjs" content = trimspace(file("my-hello-world-script.mjs")) compatibility_date = "$today" bindings = [{ name = "MESSAGE" type = "plain_text" text = "Hello World!" }] } ``` Note * `trimspace()` removes leading and trailing whitespace (such as the trailing newline) from the file contents * The Workers Script resource does not have a `metadata` property like in the other examples. All of the properties found in `metadata` are instead at the top level of the resource, such as `bindings` or `compatibility_date`. Please see the [cloudflare\_workers\_script (Resource) docs](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/workers_script). ## Cloudflare API Libraries ### JavaScript/TypeScript This example uses the [cloudflare-typescript](https://github.com/cloudflare/cloudflare-typescript) library which provides convenient access to the Cloudflare REST API from server-side JavaScript or TypeScript. * JavaScript ```js #!/usr/bin/env -S npm run tsn -T /* * Generate an API token: https://developers.cloudflare.com/fundamentals/api/get-started/create-token/ * (Not Global API Key!) * * Find your account id: https://developers.cloudflare.com/fundamentals/setup/find-account-and-zone-ids/ * * Set these environment variables: * - CLOUDFLARE_API_TOKEN * - CLOUDFLARE_ACCOUNT_ID * * ### Workers for Platforms ### * * For uploading a User Worker to a dispatch namespace: * https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/ * * Define a "dispatchNamespaceName" variable and change the entire "const script = " line to the following: * "const script = await client.workersForPlatforms.dispatch.namespaces.scripts.update(dispatchNamespaceName, scriptName, {" */ import Cloudflare from "cloudflare"; import { toFile } from "cloudflare/index"; const apiToken = process.env["CLOUDFLARE_API_TOKEN"] ?? ""; if (!apiToken) { throw new Error("Please set envar CLOUDFLARE_API_TOKEN"); } const accountID = process.env["CLOUDFLARE_ACCOUNT_ID"] ??
""; if (!accountID) { throw new Error("Please set envar CLOUDFLARE_API_TOKEN"); } const client = new Cloudflare({ apiToken: apiToken, }); async function main() { const scriptName = "my-hello-world-script"; const scriptFileName = `${scriptName}.mjs`; // Workers Scripts prefer Module Syntax // https://blog.cloudflare.com/workers-javascript-modules/ const scriptContent = ` export default { async fetch(request, env, ctx) { return new Response(env.MESSAGE, { status: 200 }); } }; `; try { // https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/ const script = await client.workers.scripts.update(scriptName, { account_id: accountID, // https://developers.cloudflare.com/workers/configuration/multipart-upload-metadata/ metadata: { main_module: scriptFileName, bindings: [ { type: "plain_text", name: "MESSAGE", text: "Hello World!", }, ], }, files: { // Add main_module file [scriptFileName]: await toFile( Buffer.from(scriptContent), scriptFileName, { type: "application/javascript+module", }, ), // Can add other files, such as more modules or source maps // [sourceMapFileName]: await toFile(Buffer.from(sourceMapContent), sourceMapFileName, { // type: 'application/source-map', // }), }, }); console.log("Script Upload success!"); console.log(JSON.stringify(script, null, 2)); } catch (error) { console.error("Script Upload failure!"); console.error(error); } } main(); ``` * TypeScript ```ts #!/usr/bin/env -S npm run tsn -T /* * Generate an API token: https://developers.cloudflare.com/fundamentals/api/get-started/create-token/ * (Not Global API Key!) * * Find your account id: https://developers.cloudflare.com/fundamentals/setup/find-account-and-zone-ids/ * * Set these environment variables: * - CLOUDFLARE_API_TOKEN * - CLOUDFLARE_ACCOUNT_ID * * ### Workers for Platforms ### * * For uploading a User Worker to a dispatch namespace: * https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/ * * Define a "dispatchNamespaceName" variable and change the entire "const script = " line to the following: * "const script = await client.workersForPlatforms.dispatch.namespaces.scripts.update(dispatchNamespaceName, scriptName, {" */ import Cloudflare from 'cloudflare'; import { toFile } from 'cloudflare/index'; const apiToken = process.env['CLOUDFLARE_API_TOKEN'] ?? ''; if (!apiToken) { throw new Error('Please set envar CLOUDFLARE_ACCOUNT_ID'); } const accountID = process.env['CLOUDFLARE_ACCOUNT_ID'] ?? 
''; if (!accountID) { throw new Error('Please set envar CLOUDFLARE_ACCOUNT_ID'); } const client = new Cloudflare({ apiToken: apiToken, }); async function main() { const scriptName = 'my-hello-world-script'; const scriptFileName = `${scriptName}.mjs`; // Workers Scripts prefer Module Syntax // https://blog.cloudflare.com/workers-javascript-modules/ const scriptContent = ` export default { async fetch(request, env, ctx) { return new Response(env.MESSAGE, { status: 200 }); } }; `; try { // https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/ const script = await client.workers.scripts.update(scriptName, { account_id: accountID, // https://developers.cloudflare.com/workers/configuration/multipart-upload-metadata/ metadata: { main_module: scriptFileName, bindings: [ { type: 'plain_text', name: 'MESSAGE', text: 'Hello World!', }, ], }, files: { // Add main_module file [scriptFileName]: await toFile(Buffer.from(scriptContent), scriptFileName, { type: 'application/javascript+module', }), // Can add other files, such as more modules or source maps // [sourceMapFileName]: await toFile(Buffer.from(sourceMapContent), sourceMapFileName, { // type: 'application/source-map', // }), }, }); console.log('Script Upload success!'); console.log(JSON.stringify(script, null, 2)); } catch (error) { console.error('Script Upload failure!'); console.error(error); } } main(); ``` ### Python This example uses the [cloudflare-python](https://github.com/cloudflare/cloudflare-python) library. ```py """Workers Script Upload Example Generate an API token: https://developers.cloudflare.com/fundamentals/api/get-started/create-token/ (Not Global API Key!) Find your account id: https://developers.cloudflare.com/fundamentals/setup/find-account-and-zone-ids/ Set these environment variables: - CLOUDFLARE_API_TOKEN - CLOUDFLARE_ACCOUNT_ID ### Workers for Platforms ### For uploading a User Worker to a dispatch namespace: https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/ Change the entire "script = " line to the following: "script = client.workers_for_platforms.dispatch.namespaces.scripts.update(" Then, define a "dispatch_namespace_name" variable and add a "dispatch_namespace=dispatch_namespace_name" keyword argument to the "update" method.
""" import os from cloudflare import Cloudflare, BadRequestError API_TOKEN = os.environ.get("CLOUDFLARE_API_TOKEN") if API_TOKEN is None: raise RuntimeError("Please set envar CLOUDFLARE_API_TOKEN") ACCOUNT_ID = os.environ.get("CLOUDFLARE_ACCOUNT_ID") if ACCOUNT_ID is None: raise RuntimeError("Please set envar CLOUDFLARE_ACCOUNT_ID") client = Cloudflare(api_token=API_TOKEN) def main() -> None: """Workers Script Upload Example""" script_name = "my-hello-world-script" script_file_name = f"{script_name}.mjs" # Workers Scripts prefer Module Syntax # https://blog.cloudflare.com/workers-javascript-modules/ script_content = """ export default { async fetch(request, env, ctx) { return new Response(env.MESSAGE, { status: 200 }); } }; """ try: # https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/ script = client.workers.scripts.update( script_name, account_id=ACCOUNT_ID, # type: ignore # https://developers.cloudflare.com/workers/configuration/multipart-upload-metadata/ metadata={ "main_module": script_file_name, "bindings": [ { "type": "plain_text", "name": "MESSAGE", "text": "Hello World!", } ], }, files={ # Add main_module file script_file_name: ( script_file_name, bytes(script_content, "utf-8"), "application/javascript+module", ) # Can add other files, such as more modules or source maps # source_map_file_name: ( # source_map_file_name, # bytes(source_map_content, "utf-8"), # "application/source-map" #) }, ) print("Script Upload success!") print(script.to_json(indent=2)) except BadRequestError as err: print("Script Upload failure!") print(err) if __name__ == "__main__": main() ``` ## Cloudflare REST API Open a terminal or create a shell script to upload a Worker easily with curl. For this example, replace `` and `` with your own. What's notable about interacting with the Workers Script Upload API directly is that it uses [multipart/form-data](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Methods/POST) for uploading metadata, multiple JavaScript modules, source maps, and more. This is abstracted away in Terraform and the API libraries. ```bash curl https://api.cloudflare.com/client/v4/accounts//workers/scripts/my-hello-world-script \ -X PUT \ -H 'Authorization: Bearer ' \ -F 'metadata={ "main_module": "my-hello-world-script.mjs", "bindings": [ { "type": "plain_text", "name": "MESSAGE", "text": "Hello World!" } ], "compatibility_date": "$today" };type=application/json' \ -F 'my-hello-world-script.mjs=@-;filename=my-hello-world-script.mjs;type=application/javascript+module' </workers/dispatch/namespaces//scripts/my-hello-world-script ``` For this to work, you first need to configure [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/get-started/configuration), create a dispatch namespace, and replace `` with your own. ### Python Workers [Python Workers](https://developers.cloudflare.com/workers/languages/python/) (open beta) have their own special `text/x-python` content type and `python_workers` compatibility flag for uploading. ```bash curl https://api.cloudflare.com/client/v4/accounts//workers/scripts/my-hello-world-script \ -X PUT \ -H 'Authorization: Bearer ' \ -F 'metadata={ "main_module": "my-hello-world-script.py", "bindings": [ { "type": "plain_text", "name": "MESSAGE", "text": "Hello World!" 
} ], "compatibility_date": "$today", "compatibility_flags": [ "python_workers" ] };type=application/json' \ -F 'my-hello-world-script.py=@-;filename=my-hello-world-script.py;type=text/x-python' <<EOF
from js import Response

async def on_fetch(request, env):
    return Response.new(env.MESSAGE)
EOF
```
--- title: Known issues · Cloudflare Workers docs description: Known issues and bugs to be aware of when using Workers. lastUpdated: 2025-05-15T14:14:09.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/platform/known-issues/ md: https://developers.cloudflare.com/workers/platform/known-issues/index.md --- Below are some known bugs and issues to be aware of when using Cloudflare Workers. ## Route specificity * When defining route specificity, a trailing `/*` in your pattern may not act as expected. Consider two different Workers, each deployed to the same zone. Worker A is assigned the `example.com/images/*` route and Worker B is given the `example.com/images*` route pattern. With these in place, here is how the following URLs will be resolved: ```plaintext // (A) example.com/images/* // (B) example.com/images* "example.com/images" // -> B "example.com/images123" // -> B "example.com/images/hello" // -> B ``` You will notice that all examples trigger Worker B. This includes the final example, which exemplifies the unexpected behavior. When adding a wildcard on a subdomain, here is how the following URLs will be resolved: ```plaintext // (A) *.example.com/a // (B) a.example.com/* "a.example.com/a" // -> B ``` ## wrangler dev * When running `wrangler dev --remote`, all outgoing requests are given the `cf-workers-preview-token` header, which Cloudflare recognizes as a preview request. This applies across the entire Cloudflare network, so HTTP requests made to other Cloudflare zones are currently discarded for security reasons. To enable a workaround, insert the following code into your Worker script: ```js const request = new Request(url, incomingRequest); request.headers.delete('cf-workers-preview-token'); return await fetch(request); ``` ## Fetch API in CNAME setup When you make a subrequest using [`fetch()`](https://developers.cloudflare.com/workers/runtime-apis/fetch/) from a Worker, the Cloudflare DNS resolver is used. When a zone has a [Partial (CNAME) setup](https://developers.cloudflare.com/dns/zone-setups/partial-setup/), all hostnames that the Worker needs to be able to resolve require a dedicated DNS entry in Cloudflare's DNS setup. Otherwise, the Fetch API call will fail with status code [530 (1016)](https://developers.cloudflare.com/support/troubleshooting/http-status-codes/cloudflare-1xxx-errors/error-1016/). Setup with missing DNS records in Cloudflare DNS ```plaintext // Zone in partial setup: example.com // DNS records at Authoritative DNS: sub1.example.com, sub2.example.com, ... // DNS records at Cloudflare DNS: sub1.example.com "sub1.example.com/" // -> Can be resolved by Fetch API "sub2.example.com/" // -> Cannot be resolved by Fetch API, will lead to 530 status code ``` After adding `sub2.example.com` to Cloudflare DNS ```plaintext // Zone in partial setup: example.com // DNS records at Authoritative DNS: sub1.example.com, sub2.example.com, ... // DNS records at Cloudflare DNS: sub1.example.com, sub2.example.com "sub1.example.com/" // -> Can be resolved by Fetch API "sub2.example.com/" // -> Can be resolved by Fetch API ``` ## Fetch to IP addresses For Workers subrequests, requests can only be made to URLs, not to IP addresses directly.
To overcome this limitation [add a A or AAAA name record to your zone](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-dns-records/) and then fetch that resource. For example, in the zone `example.com` create a record of type `A` with the name `server` and value `192.0.2.1`, and then use: ```js await fetch('http://server.example.com') ``` Do not use: ```js await fetch('http://192.0.2.1') ``` --- title: Limits · Cloudflare Workers docs description: Cloudflare Workers plan and platform limits. lastUpdated: 2025-07-15T13:58:04.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/platform/limits/ md: https://developers.cloudflare.com/workers/platform/limits/index.md --- ## Account plan limits | Feature | Workers Free | Workers Paid | | - | - | - | | [Subrequests](#subrequests) | 50/request | 1000/request | | [Simultaneous outgoing connections/request](#simultaneous-open-connections) | 6 | 6 | | [Environment variables](#environment-variables) | 64/Worker | 128/Worker | | [Environment variable size](#environment-variables) | 5 KB | 5 KB | | [Worker size](#worker-size) | 3 MB | 10 MB | | [Worker startup time](#worker-startup-time) | 400 ms | 400 ms | | [Number of Workers](#number-of-workers)1 | 100 | 500 | | Number of [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/) per account | 5 | 250 | | Number of [Static Asset](#static-assets) files per Worker version | 20000 | 20000 | | Individual [Static Asset](#static-assets) file size | 25 MiB | 25 MiB | 1 If you are running into limits, your project may be a good fit for [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/). Need a higher limit? To request an adjustment to a limit, complete the [Limit Increase Request Form](https://forms.gle/ukpeZVLWLnKeixDu7). If the limit can be increased, Cloudflare will contact you with next steps. *** ## Request limits URLs have a limit of 16 KB. Request headers observe a total limit of 32 KB, but each header is limited to 16 KB. Cloudflare has network-wide limits on the request body size. This limit is tied to your Cloudflare account's plan, which is separate from your Workers plan. When the request body size of your `POST`/`PUT`/`PATCH` requests exceed your plan's limit, the request is rejected with a `(413) Request entity too large` error. Cloudflare Enterprise customers may contact their account team or [Cloudflare Support](https://developers.cloudflare.com/support/contacting-cloudflare-support/) to have a request body limit beyond 500 MB. | Cloudflare Plan | Maximum body size | | - | - | | Free | 100 MB | | Pro | 100 MB | | Business | 200 MB | | Enterprise | 500 MB (by default) | *** ## Response limits Response headers observe a total limit of 32 KB, but each header is limited to 16 KB. Cloudflare does not enforce response limits on response body sizes, but cache limits for [our CDN are observed](https://developers.cloudflare.com/cache/concepts/default-cache-behavior/). Maximum file size is 512 MB for Free, Pro, and Business customers and 5 GB for Enterprise customers. 
*** ## Worker limits | Feature | Workers Free | Workers Paid | | - | - | - | | [Request](#request) | 100,000 requests/day 1000 requests/min | No limit | | [Worker memory](#memory) | 128 MB | 128 MB | | [CPU time](#cpu-time) | 10 ms | 5 min HTTP request 15 min [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/) | | [Duration](#duration) | No limit | No limit for Workers. 15 min duration limit for [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/), [Durable Object Alarms](https://developers.cloudflare.com/durable-objects/api/alarms/) and [Queue Consumers](https://developers.cloudflare.com/queues/configuration/javascript-apis/#consumer) | ### Duration Duration is a measurement of wall-clock time — the total amount of time from the start to end of an invocation of a Worker. There is no hard limit on the duration of a Worker. As long as the client that sent the request remains connected, the Worker can continue processing, making subrequests, and setting timeouts on behalf of that request. When the client disconnects, all tasks associated with that client request are canceled. Use [`event.waitUntil()`](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) to delay cancellation for another 30 seconds or until the promise passed to `waitUntil()` completes. Note Cloudflare updates the Workers runtime a few times per week. When this happens, any in-flight requests are given a grace period of 30 seconds to finish. If a request does not finish within this time, it is terminated. While your application should follow the best practice of handling disconnects by retrying requests, this scenario is extremely improbable. To encounter it, you would need to have a request that takes longer than 30 seconds that also happens to intersect with the exact time an update to the runtime is happening. ### CPU time CPU time is the amount of time the CPU actually spends doing work during a given request. If a Worker's request makes a sub-request and waits for that request to come back before doing additional work, this time spent waiting **is not** counted towards CPU time. **Most Workers requests consume less than 1-2 milliseconds of CPU time**, but you can increase the maximum CPU time from the default 30 seconds to 5 minutes (300,000 milliseconds) if you have CPU-bound tasks, such as large JSON payloads that need to be serialized, cryptographic key generation, or other data processing tasks. Each [isolate](https://developers.cloudflare.com/workers/reference/how-workers-works/#isolates) has some built-in flexibility to allow for cases where your Worker infrequently runs over the configured limit. If your Worker starts hitting the limit consistently, its execution will be terminated according to the limit configured. To understand your CPU usage: * CPU time and Wall time are surfaced in the [invocation log](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#invocation-logs) within Workers Logs. * For Tail Workers, CPU time and Wall time are surfaced at the top level of the [Workers Trace Events object](https://developers.cloudflare.com/logs/reference/log-fields/account/workers_trace_events/). * DevTools locally can help identify CPU intensive portions of your code. See the [CPU profiling with DevTools documentation](https://developers.cloudflare.com/workers/observability/dev-tools/cpu-usage/). 
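To make the distinction between CPU time and duration concrete, here is a small illustrative handler (not from the official docs; the URL is a placeholder): the awaited subrequest adds wall-clock duration but almost no CPU time, while the loop afterwards is what actually consumes CPU time.

```ts
export default {
  async fetch(request: Request): Promise<Response> {
    // Waiting on I/O counts toward duration (wall-clock time),
    // but the time spent idle waiting is not counted as CPU time.
    const upstream = await fetch("https://example.com/data.json");
    const values = (await upstream.json()) as number[];

    // Actual computation: this loop is what consumes CPU time.
    let sum = 0;
    for (const value of values) sum += value;

    return Response.json({ sum });
  },
};
```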
You can also set a [custom limit](https://developers.cloudflare.com/workers/wrangler/configuration/#limits) on the amount of CPU time that can be used during each invocation of your Worker. * wrangler.jsonc ```jsonc { // ...rest of your configuration... "limits": { "cpu_ms": 300000, // default is 30000 (30 seconds) }, // ...rest of your configuration... } ``` * wrangler.toml ```toml [limits] cpu_ms = 300_000 ``` You can also customize this in the [Workers dashboard](https://dash.cloudflare.com/?to=/:account/workers). Select the specific Worker you wish to modify -> click on the "Settings" tab -> adjust the CPU time limit. Note Scheduled Workers ([Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/)) have different limits on CPU time based on the schedule interval. When the schedule interval is less than 1 hour, a Scheduled Worker may run for up to 30 seconds. When the schedule interval is more than 1 hour, a scheduled Worker may run for up to 15 minutes. *** ## Cache API limits | Feature | Workers Free | Workers Paid | | - | - | - | | [Maximum object size](#cache-api-limits) | 512 MB | 512 MB | | [Calls/request](#cache-api-limits) | 50 | 1,000 | Calls/request means the number of calls to `put()`, `match()`, or `delete()` Cache API method per-request, using the same quota as subrequests (`fetch()`). Note The size of chunked response bodies (`Transfer-Encoding: chunked`) is not known in advance. Then, `.put()`ing such responses will block subsequent `.put()`s from starting until the current `.put()` completes. *** ## Request Workers automatically scale onto thousands of Cloudflare global network servers around the world. There is no general limit to the number of requests per second Workers can handle. Cloudflare’s abuse protection methods do not affect well-intentioned traffic. However, if you send many thousands of requests per second from a small number of client IP addresses, you can inadvertently trigger Cloudflare’s abuse protection. If you expect to receive `1015` errors in response to traffic or expect your application to incur these errors, [contact Cloudflare support](https://developers.cloudflare.com/support/contacting-cloudflare-support/) to increase your limit. Cloudflare's anti-abuse Workers Rate Limiting does not apply to Enterprise customers. You can also confirm if you have been rate limited by anti-abuse Worker Rate Limiting by logging into the Cloudflare dashboard, selecting your account and zone, and going to **Security** > **Events**. Find the event and expand it. If the **Rule ID** is `worker`, this confirms that it is the anti-abuse Worker Rate Limiting. The burst rate and daily request limits apply at the account level, meaning that requests on your `*.workers.dev` subdomain count toward the same limit as your zones. Upgrade to a [Workers Paid plan](https://dash.cloudflare.com/?account=workers/plans) to automatically lift these limits. Warning If you are currently being rate limited, upgrade to a [Workers Paid plan](https://dash.cloudflare.com/?account=workers/plans) to lift burst rate and daily request limits. ### Burst rate Accounts using the Workers Free plan are subject to a burst rate limit of 1,000 requests per minute. Users visiting a rate limited site will receive a Cloudflare `1015` error page. However if you are calling your Worker programmatically, you can detect the rate limit page and handle it yourself by looking for HTTP status code `429`. 
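For example, a client calling a rate-limited Worker could watch for that status code and back off before retrying. The sketch below is illustrative only; the URL, attempt count, and delays are placeholders, not Cloudflare recommendations.

```ts
// Retry a request to a Worker that may be burst-rate limited (HTTP 429 / error 1015).
async function fetchWithBackoff(url: string, attempts = 3): Promise<Response> {
  for (let attempt = 0; attempt < attempts; attempt++) {
    const response = await fetch(url);
    // Anything other than 429 is returned to the caller as-is.
    if (response.status !== 429) return response;
    // Simple linear backoff before the next attempt.
    await new Promise((resolve) => setTimeout(resolve, 1000 * (attempt + 1)));
  }
  throw new Error(`Still rate limited after ${attempts} attempts`);
}
```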
Workers being rate-limited by Anti-Abuse Protection are also visible from the Cloudflare dashboard: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account and your website. 2. Select **Security** > **Events** > scroll to **Sampled logs**. 3. Review the log for a Web Application Firewall block event with a `ruleID` of `worker`. ### Daily request Accounts using the Workers Free plan are subject to a daily request limit of 100,000 requests. Free plan daily requests counts reset at midnight UTC. A Worker that fails as a result of daily request limit errors can be configured by toggling its corresponding [route](https://developers.cloudflare.com/workers/configuration/routing/routes/) in two modes: 1) Fail open and 2) Fail closed. #### Fail open Routes in fail open mode will bypass the failing Worker and prevent it from operating on incoming traffic. Incoming requests will behave as if there was no Worker. #### Fail closed Routes in fail closed mode will display a Cloudflare `1027` error page to visitors, signifying the Worker has been temporarily disabled. Cloudflare recommends this option if your Worker is performing security related tasks. *** ## Memory Only one Workers instance runs on each of the many global Cloudflare global network servers. Each Workers instance can consume up to 128 MB of memory. Use [global variables](https://developers.cloudflare.com/workers/runtime-apis/web-standards/) to persist data between requests on individual nodes. Note however, that nodes are occasionally evicted from memory. If a Worker processes a request that pushes the Worker over the 128 MB limit, the Cloudflare Workers runtime may cancel one or more requests. To view these errors, as well as CPU limit overages: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Select **Workers & Pages** and in **Overview**, select the Worker you would like to investigate. 3. Under **Metrics**, select **Errors** > **Invocation Statuses** and examine **Exceeded Memory**. Use the [TransformStream API](https://developers.cloudflare.com/workers/runtime-apis/streams/transformstream/) to stream responses if you are concerned about memory usage. This avoids loading an entire response into memory. Using DevTools locally can help identify memory leaks in your code. See the [memory profiling with DevTools documentation](https://developers.cloudflare.com/workers/observability/dev-tools/memory-usage/) to learn more. *** ## Subrequests A subrequest is any request that a Worker makes to either Internet resources using the [Fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch/) or requests to other Cloudflare services like [R2](https://developers.cloudflare.com/r2/), [KV](https://developers.cloudflare.com/kv/), or [D1](https://developers.cloudflare.com/d1/). ### Worker-to-Worker subrequests To make subrequests from your Worker to another Worker on your account, use [Service Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/). Service bindings allow you to send HTTP requests to another Worker without those requests going over the Internet. If you attempt to use global [`fetch()`](https://developers.cloudflare.com/workers/runtime-apis/fetch/) to make a subrequest to another Worker on your account that runs on the same [zone](https://developers.cloudflare.com/fundamentals/concepts/accounts-and-zones/#zones), without service bindings, the request will fail. 
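A minimal sketch of a Worker-to-Worker call over a Service Binding is shown below. It assumes a binding named `BACKEND` pointing at another Worker on your account and the `Fetcher` type from `@cloudflare/workers-types`; the binding and Worker names are illustrative, not from the docs.

```ts
interface Env {
  BACKEND: Fetcher; // Service Binding to another Worker on your account
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // This call stays on Cloudflare's network via the binding, so it is not a
    // plain Internet-bound fetch() and avoids the same-zone restriction above.
    return env.BACKEND.fetch(request);
  },
};
```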
If you make a subrequest from your Worker to a target Worker that runs on a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/#worker-to-worker-communication) rather than a route, the request will be allowed. ### How many subrequests can I make? You can make 50 subrequests per request on Workers Free, and 1,000 subrequests per request on Workers Paid. Each subrequest in a redirect chain counts against this limit. This means that the number of subrequests a Worker makes could be greater than the number of `fetch(request)` calls in the Worker. For subrequests to internal services like Workers KV and Durable Objects, the subrequest limit is 1,000 per request, regardless of the [usage model](https://developers.cloudflare.com/workers/platform/pricing/#workers) configured for the Worker. ### How long can a subrequest take? There is no set limit on the amount of real time a Worker may use. As long as the client which sent a request remains connected, the Worker may continue processing, making subrequests, and setting timeouts on behalf of that request. When the client disconnects, all tasks associated with that client’s request are proactively canceled. If the Worker passed a promise to [`event.waitUntil()`](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/), cancellation will be delayed until the promise has completed or until an additional 30 seconds have elapsed, whichever happens first. *** ## Simultaneous open connections You can open up to six connections simultaneously for each invocation of your Worker. The connections opened by the following API calls all count toward this limit: * the `fetch()` method of the [Fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch/). * `get()`, `put()`, `list()`, and `delete()` methods of [Workers KV namespace objects](https://developers.cloudflare.com/kv/api/). * `put()`, `match()`, and `delete()` methods of [Cache objects](https://developers.cloudflare.com/workers/runtime-apis/cache/). * `list()`, `get()`, `put()`, `delete()`, and `head()` methods of [R2](https://developers.cloudflare.com/r2/). * `send()` and `sendBatch()`, methods of [Queues](https://developers.cloudflare.com/queues/). * Opening a TCP socket using the [`connect()`](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) API. Outbound WebSocket connections are just HTTP connections and thus also contribute to the maximum concurrent connections limit. Once an invocation has six connections open, it can still attempt to open additional connections. * These attempts are put in a pending queue — the connections will not be initiated until one of the currently open connections has closed. * Earlier connections can delay later ones, if a Worker tries to make many simultaneous subrequests, its later subrequests may appear to take longer to start. * Earlier connections that are stalled1 might get closed with a `Response closed due to connection limit` exception. If you have cases in your application that use `fetch()` but that do not require consuming the response body, you can avoid the unread response body from consuming a concurrent connection by using `response.body.cancel()`. 
For example, if you want to check whether the HTTP response code is successful (2xx) before consuming the body, you should explicitly cancel the pending response body: ```ts const response = await fetch(url); // Only read the response body for successful responses if (response.ok) { // Call response.json(), response.text() or otherwise process the body } else { // Explicitly cancel it response.body.cancel(); } ``` This will free up an open connection. If the system detects that a Worker is deadlocked on stalled connections1 — for example, if the Worker has pending connection attempts but has no in-progress reads or writes on the connections that it already has open — then the least-recently-used open connection will be canceled to unblock the Worker. If the Worker later attempts to use a canceled connection, a `Response closed due to connection limit` exception will be thrown. These exceptions should rarely occur in practice, though, since it is uncommon for a Worker to open a connection that it does not have an immediate use for. 1 A connection is considered stalled when it is not being actively read from or written to, for example: ```ts // Within a for-of loop const response = await fetch("https://example.org"); for await (const chunk of response.body) { // While this code block is executing, there are no pending // reads on the response.body. Accordingly, the system may view // the stream as not being active within this block. } // Using body.getReader() const response = await fetch("https://example.org"); const reader = response.body.getReader(); let chunk = await reader.read(); await processChunk(chunk); chunk = await reader.read(); await processChunk(chunk); async function processChunk(chunk) { // The stream is considered inactive as there are no pending reads // on response.body. It may then get canceled. } ``` Note Simultaneous Open Connections are measured from the top-level request, meaning any connections open from Workers sharing resources (for example, Workers triggered via [Service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/)) will share the simultaneous open connection limit. *** ## Environment variables The maximum number of environment variables (secret and text combined) for a Worker is 128 variables on the Workers Paid plan, and 64 variables on the Workers Free plan. There is no limit to the number of environment variables per account. Each environment variable has a size limitation of 5 KB. *** ## Worker size A Worker can be up to 10 MB in size *after compression* on the Workers Paid plan, and up to 3 MB on the Workers Free plan. On either plan, a Worker can be up to 64 MB *before compression*. You can assess the size of your Worker bundle after compression by performing a dry-run with `wrangler` and reviewing the final compressed (`gzip`) size output by `wrangler`: ```sh wrangler deploy --outdir bundled/ --dry-run ``` ```sh # Output will resemble the below: Total Upload: 259.61 KiB / gzip: 47.23 KiB ``` Note that larger Worker bundles can impact the start-up time of the Worker, as the Worker needs to be loaded into memory.
To reduce the upload size of a Worker, consider some of the following strategies: * Removing unnecessary dependencies and packages * Storing configuration files, static assets, and binary data using [Workers KV](https://developers.cloudflare.com/kv/), [R2](https://developers.cloudflare.com/r2/), [D1](https://developers.cloudflare.com/d1/), or [Workers Static Assets](https://developers.cloudflare.com/workers/static-assets/) instead of bundling them within your Worker code. * Splitting functionality across multiple Workers and connecting them using [Service bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/). *** ## Worker startup time A Worker must be able to be parsed and execute its global scope (top-level code outside of any handlers) within 400 ms. Worker size can impact startup because there is more code to parse and evaluate. Avoiding expensive code in the global scope can keep startup efficient as well. You can measure your Worker's startup time by deploying it to Cloudflare using [Wrangler](https://developers.cloudflare.com/workers/wrangler/). When you run `npx wrangler@latest deploy` or `npx wrangler@latest versions upload`, Wrangler will output the startup time of your Worker in the command-line output, using the `startup_time_ms` field in the [Workers Script API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/) or [Workers Versions API](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/versions/methods/create/). If you are having trouble staying under this limit, consider [profiling using DevTools](https://developers.cloudflare.com/workers/observability/dev-tools/) locally to learn how to optimize your code. When you attempt to deploy a Worker using the [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/), but your deployment is rejected because your Worker exceeds the maximum startup time, Wrangler will automatically generate a CPU profile that you can import into Chrome DevTools or open directly in VSCode. You can use this to learn what code in your Worker uses large amounts of CPU time at startup. Refer to [`wrangler check startup`](https://developers.cloudflare.com/workers/wrangler/commands/#startup) for more details. Need a higher limit? To request an adjustment to a limit, complete the [Limit Increase Request Form](https://forms.gle/ukpeZVLWLnKeixDu7). If the limit can be increased, Cloudflare will contact you with next steps. *** ## Number of Workers You can have up to 500 Workers on your account on the Workers Paid plan, and up to 100 Workers on the Workers Free plan. If you need more than 500 Workers, consider using [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/). *** ## Routes and domains ### Number of routes per zone Each zone has a limit of 1,000 [routes](https://developers.cloudflare.com/workers/configuration/routing/routes/). If you require more than 1,000 routes on your zone, consider using [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/) or request an increase to this limit. ### Number of routes per zone when using `wrangler dev --remote` When you run a [remote development](https://developers.cloudflare.com/workers/development-testing/#remote-bindings) session using the `--remote` flag, a limit of 50 [routes](https://developers.cloudflare.com/workers/configuration/routing/routes/) per zone is enforced. 
The Quick Editor in the Cloudflare Dashboard also uses `wrangler dev --remote`, so any changes made there are subject to the same 50-route limit. If your zone has more than 50 routes, you **will not be able to run a remote session**. To fix this, you must remove routes until you are under the 50-route limit. ### Number of custom domains per zone Each zone has a limit of 100 [custom domains](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/). If you require more than 100 custom domains on your zone, consider using a wildcard [route](https://developers.cloudflare.com/workers/configuration/routing/routes/) or request an increase to this limit. ### Number of routed zones per Worker When configuring [routing](https://developers.cloudflare.com/workers/configuration/routing/), the maximum number of zones that can be referenced by a Worker is 1,000. If you require more than 1,000 zones on your Worker, consider using [Workers for Platforms](https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/) or request an increase to this limit. *** ## Image Resizing with Workers When using Image Resizing with Workers, refer to [Image Resizing documentation](https://developers.cloudflare.com/images/transform-images/) for more information on the applied limits. *** ## Log size You can emit a maximum of 256 KB of data (across `console.log()` statements, exceptions, request metadata and headers) to the console for a single request. After you exceed this limit, further context associated with the request will not be recorded in logs, appear when tailing logs of your Worker, or within a [Tail Worker](https://developers.cloudflare.com/workers/observability/logs/tail-workers/). Refer to the [Workers Trace Event Logpush documentation](https://developers.cloudflare.com/workers/observability/logs/logpush/#limits) for information on the maximum size of fields sent to logpush destinations. *** ## Unbound and Bundled plan limits Note Unbound and Bundled plans have been deprecated and are no longer available for new accounts. If your Worker is on an Unbound plan, your limits are exactly the same as the Workers Paid plan. If your Worker is on a Bundled plan, your limits are the same as the Workers Paid plan except for the following differences: * Your limit for [subrequests](https://developers.cloudflare.com/workers/platform/limits/#subrequests) is 50/request * Your limit for [CPU time](https://developers.cloudflare.com/workers/platform/limits/#cpu-time) is 50ms for HTTP requests and 50ms for [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/) * You have no [Duration](https://developers.cloudflare.com/workers/platform/limits/#duration) limits for [Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/), [Durable Object alarms](https://developers.cloudflare.com/durable-objects/api/alarms/), or [Queue consumers](https://developers.cloudflare.com/queues/configuration/javascript-apis/#consumer) * Your Cache API limits for calls/requests is 50 *** ## Static Assets ### Files There is a 20,000 file count limit per [Worker version](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/), and a 25 MiB individual file size limit. This matches the [limits in Cloudflare Pages](https://developers.cloudflare.com/pages/platform/limits/) today. ### Headers A `_headers` file may contain up to 100 rules and each line may contain up to 2,000 characters. 
The entire line, including spacing, header name, and value, counts towards this limit. ### Redirects A `_redirects` file may contain up to 2,000 static redirects and 100 dynamic redirects, for a combined total of 2,100 redirects. Each redirect declaration has a 1,000-character limit. *** ## Related resources Review other developer platform resource limits. * [KV limits](https://developers.cloudflare.com/kv/platform/limits/) * [Durable Object limits](https://developers.cloudflare.com/durable-objects/platform/limits/) * [Queues limits](https://developers.cloudflare.com/queues/platform/limits/) --- title: Pricing · Cloudflare Workers docs description: Workers plans and pricing information. lastUpdated: 2025-07-10T17:05:48.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/platform/pricing/ md: https://developers.cloudflare.com/workers/platform/pricing/index.md --- By default, users have access to the Workers Free plan. The Workers Free plan includes limited usage of Workers, Pages Functions, Workers KV and Hyperdrive. Read more about the [Free plan limits](https://developers.cloudflare.com/workers/platform/limits/#worker-limits). The Workers Paid plan includes Workers, Pages Functions, Workers KV, Hyperdrive, and Durable Objects usage for a minimum charge of $5 USD per month for an account. The plan includes increased initial usage allotments, with clear charges for usage that exceeds the base plan. There are no additional charges for data transfer (egress) or throughput (bandwidth). All included usage is on a monthly basis. Pages Functions billing All [Pages Functions](https://developers.cloudflare.com/pages/functions/) are billed as Workers. All pricing and inclusions in this document apply to Pages Functions. Refer to [Functions Pricing](https://developers.cloudflare.com/pages/functions/pricing/) for more information on Pages Functions pricing. ## Workers Users on the Workers Paid plan have access to the Standard usage model. Workers Enterprise accounts are billed based on the usage model specified in their contract. To switch to the Standard usage model, reach out to your CSM. | | Requests1, 2 | Duration | CPU time | | - | - | - | - | | **Free** | 100,000 per day | No charge for duration | 10 milliseconds of CPU time per invocation | | **Standard** | 10 million included per month +$0.30 per additional million | No charge or limit for duration | 30 million CPU milliseconds included per month +$0.02 per additional million CPU milliseconds Max of [5 minutes of CPU time](https://developers.cloudflare.com/workers/platform/limits/#worker-limits) per invocation (default: 30 seconds) Max of 15 minutes of CPU time per [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/) or [Queue Consumer](https://developers.cloudflare.com/queues/configuration/javascript-apis/#consumer) invocation | 1 Inbound requests to your Worker. Cloudflare does not bill for [subrequests](https://developers.cloudflare.com/workers/platform/limits/#subrequests) you make from your Worker. 2 Requests to static assets are free and unlimited. 
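The examples in the next section all follow the same arithmetic. As an informal sketch of the Standard usage model described above (not an official calculator, and ignoring free static asset requests):

```ts
// Estimate a monthly Workers Standard bill from request count and average CPU time.
function estimateMonthlyCostUSD(requests: number, avgCpuMsPerRequest: number): number {
  const subscription = 5.0; // Workers Paid minimum charge
  const includedRequests = 10_000_000; // included per month
  const includedCpuMs = 30_000_000; // included CPU milliseconds per month

  const requestCost = (Math.max(0, requests - includedRequests) / 1_000_000) * 0.3;
  const cpuCost =
    (Math.max(0, requests * avgCpuMsPerRequest - includedCpuMs) / 1_000_000) * 0.02;

  return subscription + requestCost + cpuCost;
}

// Example 1 below: 15 million requests at 7 ms average CPU time ≈ $8.00.
console.log(estimateMonthlyCostUSD(15_000_000, 7).toFixed(2));
```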
### Example pricing #### Example 1 A Worker that serves 15 million requests per month, and uses an average of 7 milliseconds (ms) of CPU time per request, would have the following estimated costs: | | Monthly Costs | Formula | | - | - | - | | **Subscription** | $5.00 | | | **Requests** | $1.50 | (15,000,000 requests - 10,000,000 included requests) / 1,000,000 \* $0.30 | | **CPU time** | $1.50 | ((7 ms of CPU time per request \* 15,000,000 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 | | **Total** | $8.00 | | #### Example 2 A project that serves 15 million requests per month, with 80% (12 million) requests serving [static assets](https://developers.cloudflare.com/workers/static-assets/) and the remaining invoking dynamic Worker code. The Worker uses an average of 7 milliseconds (ms) of time per request. Requests to static assets are free and unlimited. This project would have the following estimated costs: | | Monthly Costs | Formula | | - | - | - | | **Subscription** | $5.00 | | | **Requests to static assets** | $0 | - | | **Requests to Worker** | $0 | - | | **CPU time** | $0 | - | | **Total** | $5.00 | | | | | | #### Example 3 A Worker that runs on a [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/) once an hour to collect data from multiple APIs, process the data and create a report. * 720 requests/month * 3 minutes (180,000ms) of CPU time per request In this scenario, the estimated monthly cost would be calculated as: | | Monthly Costs | Formula | | - | - | - | | **Subscription** | $5.00 | | | **Requests** | $0.00 | - | | **CPU time** | $1.99 | ((180,000 ms of CPU time per request \* 720 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 | | **Total** | $6.99 | | | | | | #### Example 4 A high traffic Worker that serves 100 million requests per month, and uses an average of 7 milliseconds (ms) of CPU time per request, would have the following estimated costs: | | Monthly Costs | Formula | | - | - | - | | **Subscription** | $5.00 | | | **Requests** | $27.00 | (100,000,000 requests - 10,000,000 included requests) / 1,000,000 \* $0.30 | | **CPU time** | $13.40 | ((7 ms of CPU time per request \* 100,000,000 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 | | **Total** | $45.40 | | Custom limits To prevent accidental runaway bills or denial-of-wallet attacks, configure the maximum amount of CPU time that can be used per invocation by [defining limits in your Worker's Wrangler file](https://developers.cloudflare.com/workers/wrangler/configuration/#limits), or via the Cloudflare dashboard (**Workers & Pages** > Select your Worker > **Settings** > **CPU Limits**). If you had a Worker on the Bundled usage model prior to the migration to Standard pricing on March 1, 2024, Cloudflare has automatically added a 50 ms CPU limit on your Worker. ### How to switch usage models Note Some Workers Enterprise customers maintain the ability to change usage models. Users on the Workers Paid plan have access to the Standard usage model. However, some users may still have a legacy usage model configured. Legacy usage models include Workers Unbound and Workers Bundled. Users are advised to move to the Workers Standard usage model. Changing the usage model only affects billable usage, and has no technical implications. To change your default account-wide usage model: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers-and-pages) and select your account. 2. In Account Home, select **Workers & Pages**. 
3. Find **Usage Model** on the right-side menu > **Change**. Usage models may be changed at the individual Worker level: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers/services/view/:worker/production/settings) and select your account. 2. In Account Home, select **Workers & Pages**. 3. In **Overview**, select your Worker > **Settings** > **Usage Model**. Existing Workers will not be impacted when changing the default usage model. You may change the usage model for individual Workers without affecting your account-wide default usage model. ## Workers Logs Workers Logs is included in both the Free and Paid [Workers plans](https://developers.cloudflare.com/workers/platform/pricing/). | | Log Events Written | Retention | | - | - | - | | **Workers Free** | 200,000 per day | 3 Days | | **Workers Paid** | 20 million included per month +$0.60 per additional million | 7 Days | Workers Logs documentation For more information and [examples of Workers Logs billing](https://developers.cloudflare.com/workers/observability/logs/workers-logs/#example-pricing), refer to the [Workers Logs documentation](https://developers.cloudflare.com/workers/observability/logs/workers-logs). ## Workers Trace Events Logpush Workers Logpush is only available on the Workers Paid plan. | | Paid plan | | - | - | | Requests 1 | 10 million / month, +$0.05/million | 1 Workers Logpush charges for request logs that reach your end destination after applying filtering or sampling. ## Workers KV Workers KV is included in both the Free and Paid [Workers plans](https://developers.cloudflare.com/workers/platform/pricing/). | | Free plan1 | Paid plan | | - | - | - | | Keys read | 100,000 / day | 10 million/month, + $0.50/million | | Keys written | 1,000 / day | 1 million/month, + $5.00/million | | Keys deleted | 1,000 / day | 1 million/month, + $5.00/million | | List requests | 1,000 / day | 1 million/month, + $5.00/million | | Stored data | 1 GB | 1 GB, + $0.50/ GB-month | 1 The Workers Free plan includes limited Workers KV usage. All limits reset daily at 00:00 UTC. If you exceed any one of these limits, further operations of that type will fail with an error. Note Workers KV pricing for read, write and delete operations is on a per-key basis. Bulk read operations are billed by the amount of keys read in a bulk read operation. KV documentation To learn more about KV, refer to the [KV documentation](https://developers.cloudflare.com/kv/). ## Hyperdrive Hyperdrive is included in both the Free and Paid [Workers plans](https://developers.cloudflare.com/workers/platform/pricing/). | | Free plan[1](#user-content-fn-1) | Paid plan | | - | - | - | | Database queries[2](#user-content-fn-2) | 100,000 / day | Unlimited | Footnotes 1: The Workers Free plan includes limited Hyperdrive usage. All limits reset daily at 00:00 UTC. If you exceed any one of these limits, further operations of that type will fail with an error. 2: Database queries refers to any database statement made via Hyperdrive, whether a query (`SELECT`), a modification (`INSERT`,`UPDATE`, or `DELETE`) or a schema change (`CREATE`, `ALTER`, `DROP`). ## Footnotes 1. The Workers Free plan includes limited Hyperdrive usage. All limits reset daily at 00:00 UTC. If you exceed any one of these limits, further operations of that type will fail with an error. [↩](#user-content-fnref-1) 2. 
Database queries refers to any database statement made via Hyperdrive, whether a query (`SELECT`), a modification (`INSERT`,`UPDATE`, or `DELETE`) or a schema change (`CREATE`, `ALTER`, `DROP`). [↩](#user-content-fnref-2) Hyperdrive documentation To learn more about Hyperdrive, refer to the [Hyperdrive documentation](https://developers.cloudflare.com/hyperdrive/). ## Queues Note Cloudflare Queues requires the [Workers Paid plan](https://developers.cloudflare.com/workers/platform/pricing/#workers) to use, but does not increase your monthly subscription cost. Cloudflare Queues charges for the total number of operations against each of your queues during a given month. * An operation is counted for each 64 KB of data that is written, read, or deleted. * Messages larger than 64 KB are charged as if they were multiple messages: for example, a 65 KB message and a 127 KB message would both incur two operation charges when written, read, or deleted. * A KB is defined as 1,000 bytes, and each message includes approximately 100 bytes of internal metadata. * Operations are per message, not per batch. A batch of 10 messages (the default batch size), if processed, would incur 10x write, 10x read, and 10x delete operations: one for each message in the batch. * There are no data transfer (egress) or throughput (bandwidth) charges. | | Workers Paid | | - | - | | Standard operations | 1,000,000 operations/month included + $0.40/million operations | In most cases, it takes 3 operations to deliver a message: 1 write, 1 read, and 1 delete. Therefore, you can use the following formula to estimate your monthly bill: ```txt ((Number of Messages * 3) - 1,000,000) / 1,000,000 * $0.40 ``` Additionally: * Each retry incurs a read operation. A batch of 10 messages that is retried would incur 10 operations for each retry. * Messages that reach the maximum retries and that are written to a [Dead Letter Queue](https://developers.cloudflare.com/queues/configuration/batching-retries/) incur a write operation for each 64 KB chunk. A message that was retried 3 times (the default), fails delivery on the fourth time and is written to a Dead Letter Queue would incur five (5) read operations. * Messages that are written to a queue, but that reach the maximum persistence duration (or "expire") before they are read, incur only a write and delete operation per 64 KB chunk. Queues billing examples To learn more about Queues pricing and review billing examples, refer to [Queues Pricing](https://developers.cloudflare.com/queues/platform/pricing/). ## D1 D1 is available on both the Workers Free and Workers Paid plans. | | [Workers Free](https://developers.cloudflare.com/workers/platform/pricing/#workers) | [Workers Paid](https://developers.cloudflare.com/workers/platform/pricing/#workers) | | - | - | - | | Rows read | 5 million / day | First 25 billion / month included + $0.001 / million rows | | Rows written | 100,000 / day | First 50 million / month included + $1.00 / million rows | | Storage (per GB stored) | 5 GB (total) | First 5 GB included + $0.75 / GB-mo | Track your D1 usage To accurately track your usage, use the [meta object](https://developers.cloudflare.com/d1/worker-api/return-object/), [GraphQL Analytics API](https://developers.cloudflare.com/d1/observability/metrics-analytics/#query-via-the-graphql-api), or the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers/d1/). Select your D1 database, then view: Metrics > Row Metrics. ### Definitions 1. 
Rows read measure how many rows a query reads (scans), regardless of the size of each row. For example, if you have a table with 5000 rows and run a `SELECT * FROM table` as a full table scan, this would count as 5,000 rows read. A query that filters on an [unindexed column](https://developers.cloudflare.com/d1/best-practices/use-indexes/) may return fewer rows to your Worker, but is still required to read (scan) more rows to determine which subset to return. 2. Rows written measure how many rows were written to D1 database. Write operations include `INSERT`, `UPDATE`, and `DELETE`. Each of these operations contribute towards rows written. A query that `INSERT` 10 rows into a `users` table would count as 10 rows written. 3. DDL operations (for example, `CREATE`, `ALTER`, and `DROP`) are used to define or modify the structure of a database. They may contribute to a mix of read rows and write rows. Ensure you are accurately tracking your usage through the available tools ([meta object](https://developers.cloudflare.com/d1/worker-api/return-object/), [GraphQL Analytics API](https://developers.cloudflare.com/d1/observability/metrics-analytics/#query-via-the-graphql-api), or the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers/d1/)). 4. Row size or the number of columns in a row does not impact how rows are counted. A row that is 1 KB and a row that is 100 KB both count as one row. 5. Defining [indexes](https://developers.cloudflare.com/d1/best-practices/use-indexes/) on your table(s) reduces the number of rows read by a query when filtering on that indexed field. For example, if the `users` table has an index on a timestamp column `created_at`, the query `SELECT * FROM users WHERE created_at > ?1` would only need to read a subset of the table. 6. Indexes will add an additional written row when writes include the indexed column, as there are two rows written: one to the table itself, and one to the index. The performance benefit of an index and reduction in rows read will, in nearly all cases, offset this additional write. 7. Storage is based on gigabytes stored per month, and is based on the sum of all databases in your account. Tables and indexes both count towards storage consumed. 8. Free limits reset daily at 00:00 UTC. Monthly included limits reset based on your monthly subscription renewal date, which is determined by the day you first subscribed. 9. There are no data transfer (egress) or throughput (bandwidth) charges for data accessed from D1. D1 billing Refer to [D1 Pricing](https://developers.cloudflare.com/d1/platform/pricing/) to learn more about how D1 is billed. ## Durable Objects Note Durable Objects are available both on Workers Free and Workers Paid plans. * **Workers Free plan**: Only Durable Objects with [SQLite storage backend](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/#wrangler-configuration-for-sqlite-backed-durable-objects) are available. * **Workers Paid plan**: Durable Objects with either SQLite storage backend or [key-value storage backend](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/#create-durable-object-class-with-key-value-storage) are available. If you wish to downgrade from a Workers Paid plan to a Workers Free plan, you must first ensure that you have deleted all Durable Object namespaces with the key-value storage backend. ### Compute billing Durable Objects are billed for duration while the Durable Object is active and running in memory. 
Requests to a Durable Object keep it active, or create the object if it was inactive (not in memory). | | Free plan | Paid plan | | - | - | - | | Requests | 100,000 / day | 1 million, + $0.15/million Includes HTTP requests, RPC sessions1, WebSocket messages2, and alarm invocations | | Duration3 | 13,000 GB-s / day | 400,000 GB-s, + $12.50/million GB-s4,5 | Footnotes 1 Each [RPC session](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle/) is billed as one request to your Durable Object. Every [RPC method call](https://developers.cloudflare.com/durable-objects/best-practices/create-durable-object-stubs-and-send-requests/) on a [Durable Objects stub](https://developers.cloudflare.com/durable-objects/) is its own RPC session and therefore a single billed request. RPC method calls can return objects (stubs) extending [`RpcTarget`](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle/#lifetimes-memory-and-resource-management) and invoke calls on those stubs. Subsequent calls on the returned stub are part of the same RPC session and are not billed as separate requests. For example:

```js
let durableObjectStub = OBJECT_NAMESPACE.get(id); // retrieve Durable Object stub
using foo = await durableObjectStub.bar(); // billed as a request
await foo.baz(); // treated as part of the same RPC session created by calling bar(), not billed as a request
await durableObjectStub.cat(); // billed as a request
```

2 A request is needed to create a WebSocket connection. There is no charge for outgoing WebSocket messages, nor for incoming [WebSocket protocol pings](https://www.rfc-editor.org/rfc/rfc6455#section-5.5.2). For compute request billing only, a 20:1 ratio is applied to incoming WebSocket messages to factor in smaller messages for real-time communication. For example, 100 incoming WebSocket messages would be charged as 5 requests for billing purposes. The 20:1 ratio does not affect Durable Object metrics and analytics, which reflect actual usage. 3 Application-level auto-response messages handled by [`state.setWebSocketAutoResponse()`](https://developers.cloudflare.com/durable-objects/best-practices/websockets/) will not incur additional wall-clock time, and so they will not be charged. 4 Duration is billed in wall-clock time as long as the Object is active, but is shared across all requests active on an Object at once. Calling `accept()` on a WebSocket in an Object will incur duration charges for the entire time the WebSocket is connected. It is recommended to use the WebSocket Hibernation API to avoid incurring duration charges once all event handlers finish running. Note that the Durable Object will remain active for 10 seconds after the last client disconnects. For a complete explanation, refer to [When does a Durable Object incur duration charges?](https://developers.cloudflare.com/durable-objects/platform/pricing/#when-does-a-durable-object-incur-duration-charges). 5 Duration billing charges for the 128 MB of memory your Durable Object is allocated, regardless of actual usage. If your account creates many instances of a single Durable Object class, Durable Objects may run in the same isolate on the same physical machine and share the 128 MB of memory. These Durable Objects are still billed as if they are allocated a full 128 MB of memory. ### Storage billing The [Durable Objects Storage API](https://developers.cloudflare.com/durable-objects/api/storage-api/) is only accessible from within Durable Objects.
Pricing depends on the storage backend of your Durable Objects. * **SQLite-backed Durable Objects (recommended)**: [SQLite storage backend](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/#create-sqlite-backed-durable-object-class) is recommended for all new Durable Object classes. Workers Free plan can only create and access SQLite-backed Durable Objects. * **Key-value backed Durable Objects**: [Key-value storage backend](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/#create-durable-object-class-with-key-value-storage) is only available on the Workers Paid plan. #### SQLite storage backend Storage billing on SQLite-backed Durable Objects Storage billing is not yet enabled for Durable Object classes using the SQLite storage backend. SQLite-backed Durable Objects will incur [charges for requests and duration](https://developers.cloudflare.com/durable-objects/platform/pricing/#compute-billing). Storage billing for SQLite-backed Durable Objects will be enabled at a later date with advance notice with the [shared pricing](https://developers.cloudflare.com/durable-objects/platform/pricing/#sqlite-storage-backend). | | Workers Free plan | Workers Paid plan | | - | - | - | | Rows reads 1,2 | 5 million / day | First 25 billion / month included + $0.001 / million rows | | Rows written 1,2,3,4 | 100,000 / day | First 50 million / month included + $1.00 / million rows | | SQL Stored data 5 | 5 GB (total) | 5 GB-month, + $0.20/ GB-month | Footnotes 1 Rows read and rows written included limits and rates match [D1 pricing](https://developers.cloudflare.com/d1/platform/pricing/), Cloudflare's serverless SQL database. 2 Key-value methods like `get()`, `put()`, `delete()`, or `list()` store and query data in a hidden SQLite table and are billed as rows read and rows written. 3 Each `setAlarm()` is billed as a single row written. 4 Deletes are counted as rows written. 5 Durable Objects will be billed for stored data until the [data is removed](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/#remove-a-durable-objects-storage). Once the data is removed, the object will be cleaned up automatically by the system. #### Key-value storage backend | | Workers Paid plan | | - | - | | Read request units1,2 | 1 million, + $0.20/million | | Write request units3 | 1 million, + $1.00/million | | Delete requests4 | 1 million, + $1.00/million | | Stored data5 | 1 GB, + $0.20/ GB-month | Footnotes 1 A request unit is defined as 4 KB of data read or written. A request that writes or reads more than 4 KB will consume multiple units, for example, a 9 KB write will consume 3 write request units. 2 List operations are billed by read request units, based on the amount of data examined. For example, a list request that returns a combined 80 KB of keys and values will be billed 20 read request units. A list request that does not return anything is billed for 1 read request unit. 3 Each `setAlarm` is billed as a single write request unit. 4 Delete requests are unmetered. For example, deleting a 100 KB value will be charged one delete request. 5 Durable Objects will be billed for stored data until the data is removed. Once the data is removed, the object will be cleaned up automatically by the system. 
Requests that hit the [Durable Objects in-memory cache](https://developers.cloudflare.com/durable-objects/reference/in-memory-state/) or that use the [multi-key versions of `get()`/`put()`/`delete()` methods](https://developers.cloudflare.com/durable-objects/api/storage-api/) are billed the same as if they were a normal, individual request for each key. Durable Objects billing examples For more information and [examples of Durable Objects billing](https://developers.cloudflare.com/durable-objects/platform/pricing#compute-billing-examples), refer to [Durable Objects Pricing](https://developers.cloudflare.com/durable-objects/platform/pricing/). ## Vectorize Vectorize is currently only available on the Workers paid plan. | | [Workers Free](https://developers.cloudflare.com/workers/platform/pricing/#workers) | [Workers Paid](https://developers.cloudflare.com/workers/platform/pricing/#workers) | | - | - | - | | **Total queried vector dimensions** | 30 million queried vector dimensions / month | First 50 million queried vector dimensions / month included + $0.01 per million | | **Total stored vector dimensions** | 5 million stored vector dimensions | First 10 million stored vector dimensions + $0.05 per 100 million | ### Calculating vector dimensions To calculate your potential usage, calculate the queried vector dimensions and the stored vector dimensions, and multiply by the unit price. The formula is defined as `((queried vectors + stored vectors) * dimensions * ($0.01 / 1,000,000)) + (stored vectors * dimensions * ($0.05 / 100,000,000))` * For example, inserting 10,000 vectors of 768 dimensions each, and querying those 1,000 times per day (30,000 times per month) would be calculated as `((30,000 + 10,000) * 768) = 30,720,000` queried dimensions and `(10,000 * 768) = 7,680,000` stored dimensions (within the included monthly allocation) * Separately, and excluding the included monthly allocation, this would be calculated as `(30,000 + 10,000) * 768 * ($0.01 / 1,000,000) + (10,000 * 768 * ($0.05 / 100,000,000))` and sum to $0.31 per month. ## Service bindings Requests made from your Worker to another worker via a [Service Binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/) do not incur additional request fees. This allows you to split apart functionality into multiple Workers, without incurring additional costs. For example, if Worker A makes a subrequest to Worker B via a Service Binding, or calls an RPC method provided by Worker B via a Service Binding, this is billed as: * One request (for the initial invocation of Worker A) * The total amount of CPU time used across both Worker A and Worker B Only available on Workers Standard pricing If your Worker is on the deprecated Bundled or Unbound pricing plans, incoming requests from Service Bindings are charged the same as requests from the Internet. In the example above, you would be charged for two requests, one to Worker A, and one to Worker B. ## Fine Print Workers Paid plan is separate from any other Cloudflare plan (Free, Professional, Business) you may have. If you are an Enterprise customer, reach out to your account team to confirm pricing details. Only requests that hit a Worker will count against your limits and your bill. Since Cloudflare Workers runs before the Cloudflare cache, the caching of a request still incurs costs. Refer to [Limits](https://developers.cloudflare.com/workers/platform/limits/) to review definitions and behavior after a limit is hit. 
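Worked example of the Standard usage model formulas As a rough, unofficial illustration of the request and CPU time math used in the pricing examples earlier on this page, the following JavaScript sketch reproduces the Workers Standard calculation. The $5.00 subscription, 10 million included requests, 30 million included CPU milliseconds, and the $0.30 and $0.02 per-million rates come from the tables above; the function itself is hypothetical and is not an official calculator.

```js
// Illustrative only: estimates a monthly Workers Standard bill using the
// included allotments and rates shown in the pricing examples above.
function estimateWorkersStandardCost(requests, avgCpuMsPerRequest) {
  const SUBSCRIPTION = 5.0; // USD per month
  const INCLUDED_REQUESTS = 10_000_000;
  const INCLUDED_CPU_MS = 30_000_000;
  const PRICE_PER_MILLION_REQUESTS = 0.3; // USD
  const PRICE_PER_MILLION_CPU_MS = 0.02; // USD

  const billableRequests = Math.max(requests - INCLUDED_REQUESTS, 0);
  const billableCpuMs = Math.max(requests * avgCpuMsPerRequest - INCLUDED_CPU_MS, 0);

  const requestCost = (billableRequests / 1_000_000) * PRICE_PER_MILLION_REQUESTS;
  const cpuCost = (billableCpuMs / 1_000_000) * PRICE_PER_MILLION_CPU_MS;

  return SUBSCRIPTION + requestCost + cpuCost;
}

// Example 1 above: 15 million requests at 7 ms of CPU time each.
console.log(estimateWorkersStandardCost(15_000_000, 7).toFixed(2)); // "8.00"

// Example 4 above: 100 million requests at 7 ms of CPU time each.
console.log(estimateWorkersStandardCost(100_000_000, 7).toFixed(2)); // "45.40"
```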
--- title: Choosing a data or storage product. · Cloudflare Workers docs description: Storage and database options available on Cloudflare's developer platform. lastUpdated: 2025-05-27T15:16:17.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/platform/storage-options/ md: https://developers.cloudflare.com/workers/platform/storage-options/index.md --- This guide describes the storage & database products available as part of Cloudflare Workers, including recommended use-cases and best practices. ## Choose a storage product The following table maps our storage & database products to common industry terms as well as recommended use-cases: | Use-case | Product | Ideal for | | - | - | - | | Key-value storage | [Workers KV](https://developers.cloudflare.com/kv/) | Configuration data, service routing metadata, personalization (A/B testing) | | Object storage / blob storage | [R2](https://developers.cloudflare.com/r2/) | User-facing web assets, images, machine learning and training datasets, analytics datasets, log and event data. | | Accelerate a Postgres or MySQL database | [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) | Connecting to an existing database in a cloud or on-premise using your existing database drivers & ORMs. | | Global coordination & stateful serverless | [Durable Objects](https://developers.cloudflare.com/durable-objects/) | Building collaborative applications; global coordination across clients; real-time WebSocket applications; strongly consistent, transactional storage. | | Lightweight SQL database | [D1](https://developers.cloudflare.com/d1/) | Relational data, including user profiles, product listings and orders, and/or customer data. | | Task processing, batching and messaging | [Queues](https://developers.cloudflare.com/queues/) | Background job processing (emails, notifications, APIs), message queuing, and deferred tasks. | | Vector search & embeddings queries | [Vectorize](https://developers.cloudflare.com/vectorize/) | Storing [embeddings](https://developers.cloudflare.com/workers-ai/models/#text-embeddings) from AI models for semantic search and classification tasks. | | Streaming ingestion | [Pipelines](https://developers.cloudflare.com/pipelines/) | Streaming data ingestion and processing, including clickstream analytics, telemetry/log data, and structured data for querying | | Time-series metrics | [Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/) | Write and query high-cardinality time-series data, usage metrics, and service-level telemetry using Workers and/or SQL. | Applications can build on multiple storage & database products: for example, using Workers KV for session data; R2 for large file storage, media assets and user-uploaded files; and Hyperdrive to connect to a hosted Postgres or MySQL database. Pages Functions Storage options can also be used by your front-end application built with Cloudflare Pages. For more information on available storage options for Pages applications, refer to the [Pages Functions bindings documentation](https://developers.cloudflare.com/pages/functions/bindings/). ## SQL database options There are three options for SQL-based databases available when building applications with Workers. * **Hyperdrive** if you have an existing Postgres or MySQL database, require large (1TB, 100TB or more) single databases, and/or want to use your existing database tools. 
You can also connect Hyperdrive to database platforms like [PlanetScale](https://planetscale.com/) or [Neon](https://neon.tech/). * **D1** for lightweight, serverless applications that are read-heavy, have global users that benefit from D1's [read replication](https://developers.cloudflare.com/d1/best-practices/read-replication/), and do not require you to manage and maintain a traditional RDBMS. You can connect to * **Durable Objects** for stateful serverless workloads, per-user or per-customer SQL state, and building distributed systems (D1 and Queues are built on Durable Objects) where Durable Object's [strict serializability](https://blog.cloudflare.com/durable-objects-easy-fast-correct-choose-three/) enables global ordering of requests and storage operations. ### Session storage We recommend using [Workers KV](https://developers.cloudflare.com/kv/) for storing session data, credentials (API keys), and/or configuration data. These are typically read at high rates (thousands of RPS or more), are not typically modified (within KV's 1 write RPS per unique key limit), and do not need to be immediately consistent. Frequently read keys benefit from KV's [internal cache](https://developers.cloudflare.com/kv/concepts/how-kv-works/), and repeated reads to these "hot" keys will typically see latencies in the 500µs to 10ms range. Authentication frameworks like [OpenAuth](https://openauth.js.org/docs/storage/cloudflare/) use Workers KV as session storage when deployed to Cloudflare, and [Cloudflare Access](https://developers.cloudflare.com/cloudflare-one/policies/access/) uses KV to securely store and distribute user credentials so that they can be validated as close to the user as possible and reduce overall latency. ## Product overviews ### Workers KV Workers KV is an eventually consistent key-value data store that caches on the Cloudflare global network. It is ideal for projects that require: * High volumes of reads and/or repeated reads to the same keys. * Low-latency global reads (typically within 10ms for hot keys) * Per-object time-to-live (TTL). * Distributed configuration and/or session storage. To get started with KV: * Read how [KV works](https://developers.cloudflare.com/kv/concepts/how-kv-works/). * Create a [KV namespace](https://developers.cloudflare.com/kv/concepts/kv-namespaces/). * Review the [KV Runtime API](https://developers.cloudflare.com/kv/api/). * Learn about KV [Limits](https://developers.cloudflare.com/kv/platform/limits/). ### R2 R2 is S3-compatible blob storage that allows developers to store large amounts of unstructured data without egress fees associated with typical cloud storage services. It is ideal for projects that require: * Storage for files which are infrequently accessed. * Large object storage (for example, gigabytes or more per object). * Strong consistency per object. * Asset storage for websites (refer to [caching guide](https://developers.cloudflare.com/r2/buckets/public-buckets/#caching)) To get started with R2: * Read the [Get started guide](https://developers.cloudflare.com/r2/get-started/). * Learn about R2 [Limits](https://developers.cloudflare.com/r2/platform/limits/). * Review the [R2 Workers API](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/). ### Durable Objects Durable Objects provide low-latency coordination and consistent storage for the Workers platform through global uniqueness and a transactional storage API. 
* Global Uniqueness guarantees that there will be a single instance of a Durable Object class with a given ID running at once, across the world. Requests for a Durable Object ID are routed by the Workers runtime to the Cloudflare data center that owns the Durable Object. * The transactional storage API provides strongly consistent key-value storage to the Durable Object. Each Object can only read and modify keys associated with that Object. Execution of a Durable Object is single-threaded, but multiple request events may still be processed out-of-order from how they arrived at the Object. It is ideal for projects that require: * Real-time collaboration (such as a chat application or a game server). * Consistent storage. * Data locality. To get started with Durable Objects: * Read the [introductory blog post](https://blog.cloudflare.com/introducing-workers-durable-objects/). * Review the [Durable Objects documentation](https://developers.cloudflare.com/durable-objects/). * Get started with [Durable Objects](https://developers.cloudflare.com/durable-objects/get-started/). * Learn about Durable Objects [Limits](https://developers.cloudflare.com/durable-objects/platform/limits/). ### D1 [D1](https://developers.cloudflare.com/d1/) is Cloudflare’s native serverless database. With D1, you can create a database by importing data or defining your tables and writing your queries within a Worker or through the API. D1 is ideal for: * Persistent, relational storage for user data, account data, and other structured datasets. * Use-cases that require querying across your data ad-hoc (using SQL). * Workloads with a high ratio of reads to writes (most web applications). To get started with D1: * Read [the documentation](https://developers.cloudflare.com/d1) * Follow the [Get started guide](https://developers.cloudflare.com/d1/get-started/) to provision your first D1 database. * Review the [D1 Workers Binding API](https://developers.cloudflare.com/d1/worker-api/). Note If your working data size exceeds 10 GB (the maximum size for a D1 database), consider splitting the database into multiple, smaller D1 databases. ### Queues Cloudflare Queues allows developers to send and receive messages with guaranteed delivery. It integrates with [Cloudflare Workers](https://developers.cloudflare.com/workers) and offers at-least once delivery, message batching, and does not charge for egress bandwidth. Queues is ideal for: * Offloading work from a request to schedule later. * Send data from Worker to Worker (inter-Service communication). * Buffering or batching data before writing to upstream systems, including third-party APIs or [Cloudflare R2](https://developers.cloudflare.com/queues/examples/send-errors-to-r2/). To get started with Queues: * [Set up your first queue](https://developers.cloudflare.com/queues/get-started/). * Learn more [about how Queues works](https://developers.cloudflare.com/queues/reference/how-queues-works/). ### Hyperdrive Hyperdrive is a service that accelerates queries you make to MySQL and Postgres databases, making it faster to access your data from across the globe, irrespective of your users’ location. Hyperdrive allows you to: * Connect to an existing database from Workers without connection overhead. * Cache frequent queries across Cloudflare's global network to reduce response times on highly trafficked content. * Reduce load on your origin database with connection pooling. 
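For a concrete sense of how this is used, the following sketch shows a Worker querying an existing Postgres database through a Hyperdrive binding with the postgres.js driver. The binding name `HYPERDRIVE`, the `products` table, and the choice of driver are assumptions made for illustration; refer to the get-started guide linked below for the exact setup steps.

```js
// Minimal sketch: querying an existing Postgres database from a Worker through
// a Hyperdrive binding. Assumes a binding named HYPERDRIVE in the Wrangler
// configuration and the postgres.js driver; the `products` table is a placeholder.
import postgres from "postgres";

export default {
  async fetch(request, env, ctx) {
    // Hyperdrive exposes a connection string that routes queries through its
    // connection pool and cache.
    const sql = postgres(env.HYPERDRIVE.connectionString);
    try {
      const rows = await sql`SELECT id, name FROM products LIMIT 10`;
      return Response.json(rows);
    } finally {
      // Close the connection without blocking the response.
      ctx.waitUntil(sql.end());
    }
  },
};
```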
To get started with Hyperdrive: * [Connect Hyperdrive](https://developers.cloudflare.com/hyperdrive/get-started/) to your existing database. * Learn more [about how Hyperdrive speeds up your database queries](https://developers.cloudflare.com/hyperdrive/configuration/how-hyperdrive-works/). ### Pipelines Pipelines is a streaming ingestion service that allows you to ingest high volumes of real-time data, without managing any infrastructure. Pipelines allows you to: * Ingest data at extremely high throughput (tens of thousands of records per second or more) * Batch and write data directly to object storage, ready for querying * (Future) Transform and aggregate data during ingestion To get started with Pipelines: * [Create a Pipeline](https://developers.cloudflare.com/pipelines/getting-started/) that can batch and write records to R2. * Learn more [about how Pipelines works](https://developers.cloudflare.com/pipelines/concepts/how-pipelines-work/). ### Analytics Engine Analytics Engine is Cloudflare's time-series and metrics database that allows you to write unlimited-cardinality analytics at scale, using a built-in API to write data points from Workers and query that data using SQL directly. Analytics Engine allows you to: * Expose custom analytics to your own customers * Build usage-based billing systems * Understand the health of your service on a per-customer or per-user basis * Add instrumentation to frequently called code paths, without impacting performance or overwhelming external analytics systems with events Cloudflare uses Analytics Engine internally to store and produce per-product metrics for products like D1 and R2 at scale. To get started with Analytics Engine: * Learn how to [get started with Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/get-started/) * See [an example of writing time-series data to Analytics Engine](https://developers.cloudflare.com/analytics/analytics-engine/recipes/usage-based-billing-for-your-saas-product/) * Understand the [SQL API](https://developers.cloudflare.com/analytics/analytics-engine/sql-api/) for reading data from your Analytics Engine datasets ### Vectorize Vectorize is a globally distributed vector database that enables you to build full-stack, AI-powered applications with Cloudflare Workers and [Workers AI](https://developers.cloudflare.com/workers-ai/). Vectorize allows you to: * Store embeddings from any vector embeddings model (Bring Your Own embeddings) for semantic search and classification tasks. * Add context to Large Language Model (LLM) queries by using vector search as part of a [Retrieval Augmented Generation](https://developers.cloudflare.com/workers-ai/guides/tutorials/build-a-retrieval-augmented-generation-ai/) (RAG) workflow. * [Filter on vector metadata](https://developers.cloudflare.com/vectorize/reference/metadata-filtering/) to reduce the search space and return more relevant results. To get started with Vectorize: * [Create your first vector database](https://developers.cloudflare.com/vectorize/get-started/intro/). * Combine [Workers AI and Vectorize](https://developers.cloudflare.com/vectorize/get-started/embeddings/) to generate, store and query text embeddings. * Learn more about [how vector databases work](https://developers.cloudflare.com/vectorize/reference/what-is-a-vector-database/). ## SQL in Durable Objects vs D1 Cloudflare Workers offers a SQLite-backed serverless database product - [D1](https://developers.cloudflare.com/d1/).
How should you compare [SQLite in Durable Objects](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/) and D1? **D1 is a managed database product.** D1 fits into a familiar architecture for developers, where application servers communicate with a database over the network. Application servers are typically Workers; however, D1 also supports external, non-Worker access via an [HTTP API](https://developers.cloudflare.com/api/resources/d1/subresources/database/methods/query/), which helps unlock [third-party tooling](https://developers.cloudflare.com/d1/reference/community-projects/#_top) support for D1. D1 aims for a "batteries included" feature set, including the above HTTP API, [database schema management](https://developers.cloudflare.com/d1/reference/migrations/#_top), [data import/export](https://developers.cloudflare.com/d1/best-practices/import-export-data/), and [database query insights](https://developers.cloudflare.com/d1/observability/metrics-analytics/#query-insights). With D1, your application code and SQL database queries are not colocated which can impact application performance. If performance is a concern with D1, Workers has [Smart Placement](https://developers.cloudflare.com/workers/configuration/smart-placement/#_top) to dynamically run your Worker in the best location to reduce total Worker request latency, considering everything your Worker talks to, including D1. **SQLite in Durable Objects is a lower-level compute with storage building block for distributed systems.** By design, Durable Objects are accessed with Workers-only. Durable Objects require a bit more effort, but in return, give you more flexibility and control. With Durable Objects, you must implement two pieces of code that run in different places: a front-end Worker which routes incoming requests from the Internet to a unique Durable Object, and the Durable Object itself, which runs on the same machine as the SQLite database. You get to choose what runs where, and it may be that your application benefits from running some application business logic right next to the database. With SQLite in Durable Objects, you may also need to build some of your own database tooling that comes out-of-the-box with D1. SQL query pricing and limits are intended to be identical between D1 ([pricing](https://developers.cloudflare.com/d1/platform/pricing/), [limits](https://developers.cloudflare.com/d1/platform/limits/)) and SQLite in Durable Objects ([pricing](https://developers.cloudflare.com/durable-objects/platform/pricing/#sql-storage-billing), [limits](https://developers.cloudflare.com/durable-objects/platform/limits/)). --- title: Workers for Platforms · Cloudflare Workers docs description: Deploy custom code on behalf of your users or let your users directly deploy their own code to your platform, managing infrastructure. lastUpdated: 2024-08-13T19:56:56.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/platform/workers-for-platforms/ md: https://developers.cloudflare.com/workers/platform/workers-for-platforms/index.md --- Deploy custom code on behalf of your users or let your users directly deploy their own code to your platform, managing infrastructure. --- title: How the Cache works · Cloudflare Workers docs description: How Workers interacts with the Cloudflare cache. 
lastUpdated: 2025-05-28T19:12:24.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/reference/how-the-cache-works/ md: https://developers.cloudflare.com/workers/reference/how-the-cache-works/index.md --- Workers was designed and built on top of Cloudflare's global network to allow developers to interact directly with the Cloudflare cache. The cache can provide ephemeral, data center-local storage, as a convenient way to frequently access static or dynamic content. By allowing developers to write to the cache, Workers provide a way to customize cache behavior on Cloudflare’s CDN. To learn about the benefits of caching, refer to the Learning Center’s article on [What is Caching?](https://www.cloudflare.com/learning/cdn/what-is-caching/). Cloudflare Workers run before the cache but can also be utilized to modify assets once they are returned from the cache. Modifying assets returned from cache allows for the ability to sign or personalize responses while also reducing load on an origin and reducing latency to the end user by serving assets from a nearby location. ## Interact with the Cloudflare Cache Conceptually, there are two ways to interact with Cloudflare’s Cache using a Worker: * Call to [`fetch()`](https://developers.cloudflare.com/workers/runtime-apis/fetch/) in a Workers script. Requests proxied through Cloudflare are cached even without Workers according to a zone’s default or configured behavior (for example, static assets like files ending in `.jpg` are cached by default). Workers can further customize this behavior by: * Setting Cloudflare cache rules (that is, operating on the `cf` object of a [request](https://developers.cloudflare.com/workers/runtime-apis/request/)). * Store responses using the [Cache API](https://developers.cloudflare.com/workers/runtime-apis/cache/) from a Workers script. This allows caching responses that did not come from an origin and also provides finer control by: * Customizing cache behavior of any asset by setting headers such as `Cache-Control` on the response passed to `cache.put()`. * Caching responses generated by the Worker itself through `cache.put()`. Tiered caching The Cache API is not compatible with tiered caching. To take advantage of tiered caching, use the [fetch API](https://developers.cloudflare.com/workers/runtime-apis/fetch/). ### Single file purge assets cached by a worker When using single-file purge to purge assets cached by a Worker, make sure not to purge the end user URL. Instead, purge the URL that is in the `fetch` request. For example, you have a Worker that runs on `https://example.com/hello` and this Worker makes a `fetch` request to `https://notexample.com/hello`. As far as cache is concerned, the asset in the `fetch` request (`https://notexample.com/hello`) is the asset that is cached. To purge it, you need to purge `https://notexample.com/hello`. Purging the end user URL, `https://example.com/hello`, will not work because that is not the URL that cache sees. You need to confirm in your Worker which URL you are actually fetching, so you can purge the correct asset. In the previous example, `https://notexample.com/hello` is not proxied through Cloudflare. If `https://notexample.com/hello` was proxied ([orange-clouded](https://developers.cloudflare.com/dns/proxy-status/)) through Cloudflare, then you must own `notexample.com` and purge `https://notexample.com/hello` from the `notexample.com` zone. 
To better understand the example, review the following diagram: ```mermaid flowchart TD accTitle: Single file purge assets cached by a worker accDescr: This diagram is meant to help choose how to purge a file. A("You have a Worker script that runs on https://example.com/hello
and this Worker makes a fetch request to https://notexample.com/hello.") --> B(Is notexample.com
an active zone on Cloudflare?) B -- Yes --> C(Is https://notexample.com/
proxied through Cloudflare?) B -- No --> D(Purge https://notexample.com/hello
from the original example.com zone.) C -- Yes --> E(Do you own
notexample.com?) C -- No --> F(Purge https://notexample.com/hello
from the original example.com zone.) E -- Yes --> G(Purge https://notexample.com/hello
from the notexample.com zone.) E -- No --> H(Sorry, you can not purge the asset.
Only the owner of notexample.com can purge it.) ``` ### Purge assets stored with the Cache API Assets stored in the cache through [Cache API](https://developers.cloudflare.com/workers/runtime-apis/cache/) operations can be purged in a couple of ways: * Call `cache.delete` within a Worker to invalidate the cache for the asset with a matching request variable. * Assets purged in this way are only purged locally to the data center the Worker runtime was executed. * To purge an asset globally, use the standard [cache purge options](https://developers.cloudflare.com/cache/how-to/purge-cache/). Based on cache API implementation, not all cache purge endpoints function for purging assets stored by the Cache API. * All assets on a zone can be purged by using the [Purge Everything](https://developers.cloudflare.com/cache/how-to/purge-cache/purge-everything/) cache operation. This purge will remove all assets associated with a Cloudflare zone from cache in all data centers regardless of the method set. * [Cache Tags](https://developers.cloudflare.com/cache/how-to/purge-cache/purge-by-tags/#add-cache-tag-http-response-headers) can be added to requests dynamically in a Worker by calling `response.headers.append()` and appending `Cache-Tag` values dynamically to that request. Once set, those tags can be used to selectively purge assets from cache without invalidating all cached assets on a zone. * Currently, it is not possible to purge a URL stored through Cache API that uses a custom cache key set by a Worker. Instead, use a [custom key created via Cache Rules](https://developers.cloudflare.com/cache/how-to/cache-rules/settings/#cache-key). Alternatively, purge your assets using purge everything, purge by tag, purge by host or purge by prefix. ## Edge versus browser caching The browser cache is controlled through the `Cache-Control` header sent in the response to the client (the `Response` instance return from the handler). Workers can customize browser cache behavior by setting this header on the response. Other means to control Cloudflare’s cache that are not mentioned in this documentation include: Page Rules and Cloudflare cache settings. Refer to the [How to customize Cloudflare’s cache](https://developers.cloudflare.com/cache/concepts/customize-cache/) if you wish to avoid writing JavaScript with still some granularity of control. What should I use: the Cache API or fetch for caching objects on Cloudflare? For requests where Workers are behaving as middleware (that is, Workers are sending a subrequest via `fetch`) it is recommended to use `fetch`. This is because preexisting settings are in place that optimize caching while preventing unintended dynamic caching. For projects where there is no backend (that is, the entire project is on Workers as in [Workers Sites](https://developers.cloudflare.com/workers/configuration/sites/start-from-scratch)) the Cache API is the only option to customize caching. The asset will be cached under the hostname specified within the Worker's subrequest — not the Worker's own hostname. Therefore, in order to purge the cached asset, the purge will have to be performed for the hostname included in the Worker subrequest. ### `fetch` In the context of Workers, a [`fetch`](https://developers.cloudflare.com/workers/runtime-apis/fetch/) provided by the runtime communicates with the Cloudflare cache. First, `fetch` checks to see if the URL matches a different zone. If it does, it reads through that zone’s cache (or Worker). 
Otherwise, it reads through its own zone’s cache, even if the URL is for a non-Cloudflare site. Cache settings on `fetch` automatically apply caching rules based on your Cloudflare settings. `fetch` does not allow you to modify or inspect objects before they reach the cache, but does allow you to modify how it will cache. When a response fills the cache, the response header contains `CF-Cache-Status: HIT`. You can tell an object is attempting to cache if one sees the `CF-Cache-Status` at all. This [template](https://developers.cloudflare.com/workers/examples/cache-using-fetch/) shows ways to customize Cloudflare cache behavior on a given request using fetch. ### Cache API The [Cache API](https://developers.cloudflare.com/workers/runtime-apis/cache/) can be thought of as an ephemeral key-value store, whereby the `Request` object (or more specifically, the request URL) is the key, and the `Response` is the value. There are two types of cache namespaces available to the Cloudflare Cache: * **`caches.default`** – You can access the default cache (the same cache shared with `fetch` requests) by accessing `caches.default`. This is useful when needing to override content that is already cached, after receiving the response. * **`caches.open()`** – You can access a namespaced cache (separate from the cache shared with `fetch` requests) using `let cache = await caches.open(CACHE_NAME)`. Note that [`caches.open`](https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage/open) is an async function, unlike `caches.default`. When to use the Cache API: * When you want to programmatically save and/or delete responses from a cache. For example, say an origin is responding with a `Cache-Control: max-age:0` header and cannot be changed. Instead, you can clone the `Response`, adjust the header to the `max-age=3600` value, and then use the Cache API to save the modified `Response` for an hour. * When you want to programmatically access a Response from a cache without relying on a `fetch` request. For example, you can check to see if you have already cached a `Response` for the `https://example.com/slow-response` endpoint. If so, you can avoid the slow request. This [template](https://developers.cloudflare.com/workers/examples/cache-api/) shows ways to use the cache API. For limits of the cache API, refer to [Limits](https://developers.cloudflare.com/workers/platform/limits/#cache-api-limits). Tiered caching and the Cache API Cache API within Workers does not support tiered caching. Tiered Cache concentrates connections to origin servers so they come from a small number of data centers rather than the full set of network locations. Cache API is local to a data center, this means that `cache.match` does a lookup, `cache.put` stores a response, and `cache.delete` removes a stored response only in the cache of the data center that the Worker handling the request is in. Because these methods apply only to local cache, they will not work with tiered cache. ## Related resources * [Cache API](https://developers.cloudflare.com/workers/runtime-apis/cache/) * [Customize cache behavior with Workers](https://developers.cloudflare.com/cache/interaction-cloudflare-products/workers/)
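As an illustrative sketch of the Cache API pattern described above (cloning a response, overriding `Cache-Control`, and calling `cache.put()` without blocking the response), consider the following Worker. The one-hour TTL and the use of the request URL as the cache key are assumptions made for the example, not requirements.

```js
// Minimal sketch of the Cache API pattern described above: fetch from the
// origin, override Cache-Control, and store the modified response for an hour.
export default {
  async fetch(request, env, ctx) {
    const cache = caches.default;
    const cacheKey = new Request(request.url, request);

    // Serve from the data-center-local cache when possible.
    let response = await cache.match(cacheKey);
    if (response) {
      return response;
    }

    // Otherwise go to the origin, then rewrite the caching headers.
    const originResponse = await fetch(request);
    response = new Response(originResponse.body, originResponse);
    response.headers.set("Cache-Control", "max-age=3600");

    // Store a copy without blocking the response to the client.
    ctx.waitUntil(cache.put(cacheKey, response.clone()));
    return response;
  },
};
```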
--- title: How Workers works · Cloudflare Workers docs description: The difference between the Workers runtime versus traditional browsers and Node.js. lastUpdated: 2024-10-10T02:36:06.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/reference/how-workers-works/ md: https://developers.cloudflare.com/workers/reference/how-workers-works/index.md --- Though Cloudflare Workers behave similarly to [JavaScript](https://www.cloudflare.com/learning/serverless/serverless-javascript/) in the browser or in Node.js, there are a few differences in how you have to think about your code. Under the hood, the Workers runtime uses the [V8 engine](https://www.cloudflare.com/learning/serverless/glossary/what-is-chrome-v8/) — the same engine used by Chromium and Node.js. The Workers runtime also implements many of the standard [APIs](https://developers.cloudflare.com/workers/runtime-apis/) available in most modern browsers. The differences between JavaScript written for the browser or Node.js happen at runtime. Rather than running on an individual's machine (for example, [a browser application or on a centralized server](https://www.cloudflare.com/learning/serverless/glossary/client-side-vs-server-side/)), Workers functions run on [Cloudflare's global network](https://www.cloudflare.com/network) - a growing global network of thousands of machines distributed across hundreds of locations. Each of these machines hosts an instance of the Workers runtime, and each of those runtimes is capable of running thousands of user-defined applications. This guide will review some of those differences. For more information, refer to the [Cloud Computing without Containers blog post](https://blog.cloudflare.com/cloud-computing-without-containers). The three largest differences are: Isolates, Compute per Request, and Distributed Execution. ## Isolates [V8](https://v8.dev) orchestrates isolates: lightweight contexts that provide your code with variables it can access and a safe environment to be executed within. You could even consider an isolate a sandbox for your function to run in. A single instance of the runtime can run hundreds or thousands of isolates, seamlessly switching between them. Each isolate's memory is completely isolated, so each piece of code is protected from other untrusted or user-written code on the runtime. Isolates are also designed to start very quickly. Instead of creating a virtual machine for each function, an isolate is created within an existing environment. This model eliminates the cold starts of the virtual machine model. Unlike other serverless providers which use [containerized processes](https://www.cloudflare.com/learning/serverless/serverless-vs-containers/) each running an instance of a language runtime, Workers pays the overhead of a JavaScript runtime once on the start of a container. Workers processes are able to run essentially limitless scripts with almost no individual overhead. Any given isolate can start around a hundred times faster than a Node process on a container or virtual machine. Notably, on startup isolates consume an order of magnitude less memory. (Diagram: in a traditional architecture, each instance of user code pays its own process overhead; in the Workers model, many V8 isolates running user code share a single process's overhead.) A given isolate has its own scope, but isolates are not necessarily long-lived. An isolate may be spun down and evicted for a number of reasons: * Resource limitations on the machine. * A suspicious script - anything seen as trying to break out of the isolate sandbox.
* Individual [resource limits](https://developers.cloudflare.com/workers/platform/limits/). Because of this, it is generally advised that you not store mutable state in your global scope unless you have accounted for this contingency. If you are interested in how Cloudflare handles security with the Workers runtime, you can [read more about how Isolates relate to Security and Spectre Threat Mitigation](https://developers.cloudflare.com/workers/reference/security-model/). ## Compute per request Most Workers are a variation on the default Workers flow: * JavaScript

```js
export default {
  async fetch(request, env, ctx) {
    return new Response('Hello World!');
  },
};
```

* TypeScript

```ts
export default {
  async fetch(request, env, ctx): Promise<Response> {
    return new Response('Hello World!');
  },
} satisfies ExportedHandler<Env>;
```

For Workers written in [ES modules syntax](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/), when a request to your `*.workers.dev` subdomain or to your Cloudflare-managed domain is received by any of Cloudflare's data centers, the request invokes the [`fetch()` handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) defined in your Worker code with the given request. You can respond to the request by returning a [`Response`](https://developers.cloudflare.com/workers/runtime-apis/response/) object. ## Distributed execution Isolates are resilient and continuously available for the duration of a request, but in rare instances isolates may be evicted. When a Worker hits official [limits](https://developers.cloudflare.com/workers/platform/limits/) or when resources are exceptionally tight on the machine the request is running on, the runtime will selectively evict isolates after their events are properly resolved. Like all other JavaScript platforms, a single Workers instance may handle multiple requests, including concurrent requests, in a single-threaded event loop. That means that while one request is awaiting an `async` task (such as `fetch`), other incoming requests may (or may not) be processed. Because there is no guarantee that any two user requests will be routed to the same or a different instance of your Worker, Cloudflare recommends you do not use or mutate global state. ## Related resources * [`fetch()` handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) - Review how incoming HTTP requests to a Worker are passed to the `fetch()` handler. * [Request](https://developers.cloudflare.com/workers/runtime-apis/request/) - Learn how incoming HTTP requests are passed to the `fetch()` handler. * [Workers limits](https://developers.cloudflare.com/workers/platform/limits/) - Learn about Workers limits including Worker size, startup time, and more. --- title: Migrate from Service Workers to ES Modules · Cloudflare Workers docs description: Write your Worker code in ES modules syntax for an optimized experience. lastUpdated: 2025-05-13T11:59:34.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/ md: https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/index.md --- This guide will show you how to migrate your Workers from the [Service Worker](https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API) format to the [ES modules](https://blog.cloudflare.com/workers-javascript-modules/) format.
## Advantages of migrating There are several reasons to migrate your Workers to the ES modules format: 1. Your Worker will run faster. With service workers, bindings are exposed as globals. This means that for every request, the Workers runtime must create a new JavaScript execution context, which adds overhead and time. Workers written using ES modules can reuse the same execution context across multiple requests. 2. Implementing [Durable Objects](https://developers.cloudflare.com/durable-objects/) requires Workers that use ES modules. 3. Bindings for [D1](https://developers.cloudflare.com/d1/), [Workers AI](https://developers.cloudflare.com/workers-ai/), [Vectorize](https://developers.cloudflare.com/vectorize/), [Workflows](https://developers.cloudflare.com/workflows/), and [Images](https://developers.cloudflare.com/images/transform-images/bindings/) can only be used from Workers that use ES modules. 4. You can [gradually deploy changes to your Worker](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/) when you use the ES modules format. 5. You can easily publish Workers using ES modules to `npm`, allowing you to import and reuse Workers within your codebase. ## Migrate a Worker The following example demonstrates a Worker that redirects all incoming requests to a URL with a `301` status code. Service Workers are deprecated Service Workers are deprecated, but still supported. We recommend using [Module Workers](https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/) instead. New features may not be supported for Service Workers. With the Service Worker syntax, the example Worker looks like:

```js
async function handler(request) {
  const base = 'https://example.com';
  const statusCode = 301;
  const source = new URL(request.url);
  const destination = new URL(source.pathname, base);
  return Response.redirect(destination.toString(), statusCode);
}

// Initialize Worker
addEventListener('fetch', event => {
  event.respondWith(handler(event.request));
});
```

Workers using ES modules format replace the `addEventListener` syntax with an object definition, which must be the file's default export (via `export default`). The previous example code becomes:

```js
export default {
  fetch(request) {
    const base = "https://example.com";
    const statusCode = 301;
    const source = new URL(request.url);
    const destination = new URL(source.pathname, base);
    return Response.redirect(destination.toString(), statusCode);
  },
};
```

## Bindings [Bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/) allow your Workers to interact with resources on the Cloudflare developer platform. Workers using ES modules format do not rely on any global bindings. However, Service Worker syntax accesses bindings on the global scope. To understand bindings, refer to the following `TODO` KV namespace binding example. To create a `TODO` KV namespace binding, you will: 1. Create a KV namespace named `My Tasks` and receive an ID that you will use in your binding. 2. Create a Worker. 3. Find your Worker's [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/) and add a KV namespace binding: * wrangler.jsonc

```jsonc
{
  "kv_namespaces": [
    {
      "binding": "TODO",
      "id": ""
    }
  ]
}
```

* wrangler.toml

```toml
kv_namespaces = [
  { binding = "TODO", id = "" }
]
```

In the following sections, you will use your binding in Service Worker and ES modules format.
Reference KV from Durable Objects and Workers To learn more about how to reference KV from Workers, refer to the [KV bindings documentation](https://developers.cloudflare.com/kv/concepts/kv-bindings/). ### Bindings in Service Worker format In Service Worker syntax, your `TODO` KV namespace binding is defined in the global scope of your Worker. Your `TODO` KV namespace binding is available to use anywhere in your Worker application's code.

```js
addEventListener("fetch", (event) => {
  event.respondWith(getTodos());
});

async function getTodos() {
  // Get the value for the "to-do:123" key
  // NOTE: Relies on the TODO KV binding that maps to the "My Tasks" namespace.
  let value = await TODO.get("to-do:123");

  // Return the value, as is, for the Response
  return new Response(value);
}
```

### Bindings in ES modules format In ES modules format, bindings are only available inside the `env` parameter that is provided at the entry point to your Worker. To access the `TODO` KV namespace binding in your Worker code, the `env` parameter must be passed from the `fetch` handler in your Worker to the `getTodos` function.

```js
import { getTodos } from './todos'

export default {
  async fetch(request, env, ctx) {
    // Passing the env parameter so other functions
    // can reference the bindings available in the Workers application
    return await getTodos(env)
  },
};
```

The following code represents a `getTodos` function that calls the `get` function on the `TODO` KV binding.

```js
async function getTodos(env) {
  // NOTE: Relies on the TODO KV binding which has been provided inside of
  // the env parameter of the `getTodos` function
  let value = await env.TODO.get("to-do:123");

  return new Response(value);
}

export { getTodos }
```

## Environment variables [Environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/) are accessed differently in code written in ES modules format versus Service Worker format. Review the following example environment variable configuration in the [Wrangler configuration file](https://developers.cloudflare.com/workers/wrangler/configuration/): * wrangler.jsonc

```jsonc
{
  "name": "my-worker-dev",
  "vars": {
    "API_ACCOUNT_ID": ""
  }
}
```

* wrangler.toml

```toml
name = "my-worker-dev"

# Define top-level environment variables
# under the `[vars]` block using
# the `key = "value"` format
[vars]
API_ACCOUNT_ID = ""
```

### Environment variables in Service Worker format In Service Worker format, the `API_ACCOUNT_ID` environment variable is defined in the global scope of your Worker application. Your `API_ACCOUNT_ID` environment variable is available to use anywhere in your Worker application's code.

```js
addEventListener("fetch", (event) => {
  console.log(API_ACCOUNT_ID); // Logs ""
  event.respondWith(new Response("Hello, world!"));
});
```

### Environment variables in ES modules format In ES modules format, environment variables are only available inside the `env` parameter that is provided at the entrypoint to your Worker application.

```js
export default {
  async fetch(request, env, ctx) {
    console.log(env.API_ACCOUNT_ID); // Logs ""
    return new Response("Hello, world!");
  },
};
```

## Cron Triggers To handle a [Cron Trigger](https://developers.cloudflare.com/workers/configuration/cron-triggers/) event in a Worker written with ES modules syntax, implement a [`scheduled()` event handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/scheduled/#syntax), which is the equivalent of listening for a `scheduled` event in Service Worker syntax.
This example code: ```js addEventListener("scheduled", (event) => { // ... }); ``` Then becomes: ```js export default { async scheduled(event, env, ctx) { // ... }, }; ``` ## Access `event` or `context` data Workers often need access to data not in the `request` object. For example, sometimes Workers use [`waitUntil`](https://developers.cloudflare.com/workers/runtime-apis/context/#waituntil) to extend the lifetime of an event while background work finishes. Workers using ES modules format can access `waitUntil` via the `context` parameter. Refer to [ES modules parameters](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/#parameters) for more information. This example code: ```js async function triggerEvent(event) { // Fetch some data console.log('cron processed', event.scheduledTime); } // Initialize Worker addEventListener('scheduled', event => { event.waitUntil(triggerEvent(event)); }); ``` Then becomes: ```js async function triggerEvent(event) { // Fetch some data console.log('cron processed', event.scheduledTime); } export default { async scheduled(event, env, ctx) { ctx.waitUntil(triggerEvent(event)); }, }; ``` ## Service Worker syntax A Worker written in Service Worker syntax consists of two parts: 1. An event listener that listens for `FetchEvents`. 2. An event handler that returns a [Response](https://developers.cloudflare.com/workers/runtime-apis/response/) object which is passed to the event’s `.respondWith()` method. When a request is received on one of Cloudflare’s global network servers for a URL matching a Worker, Cloudflare's server passes the request to the Workers runtime. This dispatches a `FetchEvent` in the [isolate](https://developers.cloudflare.com/workers/reference/how-workers-works/#isolates) where the Worker is running. ```js addEventListener('fetch', event => { event.respondWith(handleRequest(event.request)); }); async function handleRequest(request) { return new Response('Hello worker!', { headers: { 'content-type': 'text/plain' }, }); } ``` Below is an example of the request response workflow: 1. An event listener for the `FetchEvent` tells the script to listen for any request coming to your Worker. The event handler is passed the `event` object, which includes `event.request`, a [`Request`](https://developers.cloudflare.com/workers/runtime-apis/request/) object which is a representation of the HTTP request that triggered the `FetchEvent`. 2. The call to `.respondWith()` lets the Workers runtime intercept the request in order to send back a custom response (in this example, the plain text `'Hello worker!'`). * The `FetchEvent` handler typically culminates in a call to the method `.respondWith()` with either a [`Response`](https://developers.cloudflare.com/workers/runtime-apis/response/) or `Promise<Response>` that determines the response. * The `FetchEvent` object also provides [two other methods](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) to handle unexpected exceptions and operations that may complete after a response is returned. Learn more about [the lifecycle methods of the `fetch()` handler](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle/). ### Supported `FetchEvent` properties * `event.type` string * The type of event. This will always return `"fetch"`. * `event.request` Request * The incoming HTTP request. * `event.respondWith(response Response | Promise<Response>)` : void * Refer to [`respondWith`](#respondwith). * `event.waitUntil(promise Promise)` : void * Refer to [`waitUntil`](#waituntil).
* `event.passThroughOnException()` : void * Refer to [`passThroughOnException`](#passthroughonexception). ### `respondWith` Intercepts the request and allows the Worker to send a custom response. If a `fetch` event handler does not call `respondWith`, the runtime delivers the event to the next registered `fetch` event handler. In other words, while not recommended, this means it is possible to add multiple `fetch` event handlers within a Worker. If no `fetch` event handler calls `respondWith`, then the runtime forwards the request to the origin as if the Worker did not exist. However, if there is no origin – or the Worker itself is your origin server, which is always true for `*.workers.dev` domains – then you must call `respondWith` for a valid response. ```js // Format: Service Worker addEventListener('fetch', event => { let { pathname } = new URL(event.request.url); // Allow "/ignore/*" URLs to hit origin if (pathname.startsWith('/ignore/')) return; // Otherwise, respond with something event.respondWith(handler(event)); }); ``` ### `waitUntil` The `waitUntil` command extends the lifetime of the `"fetch"` event. It accepts a `Promise`-based task which the Workers runtime will execute before the handler terminates but without blocking the response. For example, this is ideal for [caching responses](https://developers.cloudflare.com/workers/runtime-apis/cache/#put) or handling logging. With the Service Worker format, `waitUntil` is available within the `event` because it is a native `FetchEvent` property. With the ES modules format, `waitUntil` is moved and available on the `context` parameter object. ```js // Format: Service Worker addEventListener('fetch', event => { event.respondWith(handler(event)); }); async function handler(event) { // Forward / Proxy original request let res = await fetch(event.request); // Add custom header(s) res = new Response(res.body, res); res.headers.set('x-foo', 'bar'); // Cache the response // NOTE: Does NOT block / wait event.waitUntil(caches.default.put(event.request, res.clone())); // Done return res; } ``` ### `passThroughOnException` The `passThroughOnException` method prevents a runtime error response when the Worker throws an unhandled exception. Instead, the script will [fail open](https://community.microfocus.com/cyberres/b/sws-22/posts/security-fundamentals-part-1-fail-open-vs-fail-closed), which will proxy the request to the origin server as though the Worker was never invoked. To prevent JavaScript errors from causing entire requests to fail on uncaught exceptions, `passThroughOnException()` causes the Workers runtime to yield control to the origin server. With the Service Worker format, `passThroughOnException` is added to the `FetchEvent` interface, making it available within the `event`. With the ES modules format, `passThroughOnException` is available on the `context` parameter object. ```js // Format: Service Worker addEventListener('fetch', event => { // Proxy to origin on unhandled/uncaught exceptions event.passThroughOnException(); throw new Error('Oops'); }); ``` --- title: Protocols · Cloudflare Workers docs description: Supported protocols on the Workers platform.
lastUpdated: 2025-05-29T18:16:56.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/reference/protocols/ md: https://developers.cloudflare.com/workers/reference/protocols/index.md --- Cloudflare Workers support the following protocols and interfaces: | Protocol | Inbound | Outbound | | - | - | - | | **HTTP / HTTPS** | Handle incoming HTTP requests using the [`fetch()` handler](https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/) | Make HTTP subrequests using the [`fetch()` API](https://developers.cloudflare.com/workers/runtime-apis/fetch/) | | **Direct TCP sockets** | Support for handling inbound TCP connections is [coming soon](https://blog.cloudflare.com/workers-tcp-socket-api-connect-databases/) | Create outbound TCP connections using the [`connect()` API](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/) | | **WebSockets** | Accept incoming WebSocket connections using the [`WebSocket` API](https://developers.cloudflare.com/workers/runtime-apis/websockets/), or with [MQTT over WebSockets (Pub/Sub)](https://developers.cloudflare.com/pub-sub/learning/websockets-browsers/) | [MQTT over WebSockets (Pub/Sub)](https://developers.cloudflare.com/pub-sub/learning/websockets-browsers/) | | **MQTT** | Handle incoming messages to an MQTT broker with [Pub Sub](https://developers.cloudflare.com/pub-sub/learning/integrate-workers/) | Support for publishing MQTT messages to an MQTT topic is [coming soon](https://developers.cloudflare.com/pub-sub/learning/integrate-workers/) | | **HTTP/3 (QUIC)** | Accept inbound requests over [HTTP/3](https://www.cloudflare.com/learning/performance/what-is-http3/) by enabling it on your [zone](https://developers.cloudflare.com/fundamentals/concepts/accounts-and-zones/#zones) in **Speed** > **Optimization** > **Protocol Optimization** area of the [Cloudflare dashboard](https://dash.cloudflare.com/). | | | **SMTP** | Use [Email Workers](https://developers.cloudflare.com/email-routing/email-workers/) to process and forward email, without having to manage TCP connections to SMTP email servers | [Email Workers](https://developers.cloudflare.com/email-routing/email-workers/) | --- title: Security model · Cloudflare Workers docs description: "This article includes an overview of Cloudflare security architecture, and then addresses two frequently asked about issues: V8 bugs and Spectre." lastUpdated: 2025-02-19T14:52:46.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/reference/security-model/ md: https://developers.cloudflare.com/workers/reference/security-model/index.md --- This article includes an overview of Cloudflare security architecture, and then addresses two frequently asked about issues: V8 bugs and Spectre. Since the very start of the Workers project, security has been a high priority — there was a concern early on that when hosting a large number of tenants on shared infrastructure, side channels of various kinds would pose a threat. The Cloudflare Workers runtime is carefully designed to defend against side channel attacks. To this end, Workers is designed to make it impossible for code to measure its own execution time locally. For example, the value returned by `Date.now()` is locked in place while code is executing. No other timers are provided. Moreover, Cloudflare provides no access to concurrency (for example, multi-threading), as it could allow attackers to construct ad hoc timers. 
These design choices cannot be introduced retroactively into other platforms — such as web browsers — because they remove APIs that existing applications depend on. They were possible in Workers only because of runtime design choices from the start. While these early design decisions have proven effective, Cloudflare is continuing to add defense-in-depth, including techniques to disrupt attacks by rescheduling Workers to create additional layers of isolation between suspicious Workers and high-value Workers. The Workers approach is very different from the approach taken by most of the industry. It is resistant to the entire range of [Spectre-style attacks](https://www.cloudflare.com/learning/security/threats/meltdown-spectre/), without requiring special attention paid to each one and without needing to block speculation in general. However, because the Workers approach is different, it requires careful study. Cloudflare is currently working with researchers at Graz University of Technology (TU Graz) to study what has been done. These researchers include some of the people who originally discovered Spectre. Cloudflare will publish the results of this research as they become available. For more details, refer to [this talk](https://www.infoq.com/presentations/cloudflare-v8/) by Kenton Varda, architect of Cloudflare Workers. Spectre is covered near the end. ## Architectural overview Beginning with a quick overview of the Workers runtime architecture: There are two fundamental parts of designing a code sandbox: secure isolation and API design. ### Isolation First, a secure execution environment needed to be created wherein code cannot access anything it is not supposed to. For this, the primary tool is V8, the JavaScript engine developed by Google for use in Chrome. V8 executes code inside isolates, which prevent that code from accessing memory outside the isolate — even within the same process. Importantly, this means Cloudflare can run many isolates within a single process. This is essential for an edge compute platform like Workers where Cloudflare must host many thousands of guest applications on every machine and rapidly switch between these guests thousands of times per second with minimal overhead. If Cloudflare had to run a separate process for every guest, the number of tenants Cloudflare could support would be drastically reduced, and Cloudflare would have to limit edge compute to a small number of big Enterprise customers. With isolate technology, Cloudflare can make edge compute available to everyone. Sometimes, though, Cloudflare does decide to schedule a Worker in its own private process. Cloudflare does this if the Worker uses certain features that need an extra layer of isolation. For example, when a developer uses the devtools debugger to inspect their Worker, Cloudflare runs that Worker in a separate process. This is because historically, in the browser, the inspector protocol has only been usable by the browser’s trusted operator, and therefore has not received as much security scrutiny as the rest of V8. In order to hedge against the increased risk of bugs in the inspector protocol, Cloudflare moves inspected Workers into a separate process with a process-level sandbox. Cloudflare also uses process isolation as an extra defense against Spectre. Additionally, even for isolates that run in a shared process with other isolates, Cloudflare runs multiple instances of the whole runtime on each machine; these instances are called cordons.
Workers are distributed among cordons by assigning each Worker a level of trust and separating low-trusted Workers from those trusted more highly. As one example of this in operation: a customer who signs up for the Free plan will not be scheduled in the same process as an Enterprise customer. This provides some defense-in-depth in the case a zero-day security vulnerability is found in V8. At the whole-process level, Cloudflare applies another layer of sandboxing for defense in depth. The layer 2 sandbox uses Linux namespaces and `seccomp` to prohibit all access to the filesystem and network. Namespaces and `seccomp` are commonly used to implement containers. However, Cloudflare's use of these technologies is much stricter than what is usually possible in container engines, because Cloudflare configures namespaces and `seccomp` after the process has started but before any isolates have been loaded. This means, for example, Cloudflare can (and does) use a totally empty filesystem (mount namespace) and uses `seccomp` to block absolutely all filesystem-related system calls. Container engines cannot normally prohibit all filesystem access because doing so would make it impossible to use `exec()` to start the guest program from disk. In the Workers case, Cloudflare's guest programs are not native binaries and the Workers runtime itself has already finished loading before Cloudflare blocks filesystem access. The layer 2 sandbox also totally prohibits network access. Instead, the process is limited to communicating only over local UNIX domain sockets to talk to other processes on the same system. Any communication to the outside world must be mediated by some other local process outside the sandbox. One such process in particular, which is called the supervisor, is responsible for fetching Worker code and configuration from disk or from other internal services. The supervisor ensures that the sandbox process cannot read any configuration except that which is relevant to the Workers that it should be running. For example, when the sandbox process receives a request for a Worker it has not seen before, that request includes the encryption key for that Worker’s code, including attached secrets. The sandbox can then pass that key to the supervisor in order to request the code. The sandbox cannot request any Worker for which it has not received the appropriate key. It cannot enumerate known Workers. It also cannot request configuration it does not need; for example, it cannot request the TLS key used for HTTPS traffic to the Worker. Aside from reading configuration, the other reason for the sandbox to talk to other processes on the system is to implement APIs exposed to Workers. ### API design There is a saying: If a tree falls in the forest, but no one is there to hear it, does it make a sound? A Cloudflare saying: If a Worker executes in a fully-isolated environment in which it is totally prevented from communicating with the outside world, does it actually run? Complete code isolation is, in fact, useless. In order for Workers to do anything useful, they have to be allowed to communicate with users. At the very least, a Worker needs to be able to receive requests and respond to them. For Workers to send requests to the world safely, APIs are needed. In the context of sandboxing, API design takes on a new level of responsibility. Cloudflare APIs define exactly what a Worker can and cannot do. 
Cloudflare must be very careful to design each API so that it can only express allowed operations and no more. For example, Cloudflare wants to allow Workers to make and receive HTTP requests, while not allowing them to be able to access the local filesystem or internal network services. Currently, Workers does not allow any access to the local filesystem. Therefore, Cloudflare does not expose a filesystem API at all. No API means no access. But, imagine if Workers did want to support local filesystem access in the future. How can that be done? Workers should not see the whole filesystem. Imagine, though, if each Worker had its own private directory on the filesystem where it can store whatever it wants. To do this, Workers would use a design based on [capability-based security](https://en.wikipedia.org/wiki/Capability-based_security). Capabilities are a big topic, but in this case, what it would mean is that Cloudflare would give the Worker an object of type `Directory`, representing a directory on the filesystem. This object would have an API that allows creating and opening files and subdirectories, but does not permit traversing up the parent directory. Effectively, each Worker would see its private `Directory` as if it were the root of their own filesystem. How would such an API be implemented? As described above, the sandbox process cannot access the real filesystem. Instead, file access would be mediated by the supervisor process. The sandbox talks to the supervisor using [Cap’n Proto RPC](https://capnproto.org/rpc.html), a capability-based RPC protocol. (Cap’n Proto is an open source project currently maintained by the Cloudflare Workers team.) This protocol makes it very easy to implement capability-based APIs, so that Cloudflare can strictly limit the sandbox to accessing only the files that belong to the Workers it is running. Now what about network access? Today, Workers are allowed to talk to the rest of the world only via HTTP — both incoming and outgoing. There is no API for other forms of network access, therefore it is prohibited; although, Cloudflare plans to support other protocols in the future. As mentioned before, the sandbox process cannot connect directly to the network. Instead, all outbound HTTP requests are sent over a UNIX domain socket to a local proxy service. That service implements restrictions on the request. For example, it verifies that the request is either addressed to a public Internet service or to the Worker’s zone’s own origin server, not to internal services that might be visible on the local machine or network. It also adds a header to every request identifying the Worker from which it originates, so that abusive requests can be traced and blocked. Once everything is in order, the request is sent on to the Cloudflare network's HTTP caching layer and then out to the Internet. Similarly, inbound HTTP requests do not go directly to the Workers runtime. They are first received by an inbound proxy service. That service is responsible for TLS termination (the Workers runtime never sees TLS keys), as well as identifying the correct Worker script to run for a particular request URL. Once everything is in order, the request is passed over a UNIX domain socket to the sandbox process. ## V8 bugs and the patch gap Every non-trivial piece of software has bugs and sandboxing technologies are no exception. Virtual machines, containers, and isolates — which Workers use — also have bugs. 
Workers rely heavily on isolation provided by V8, the JavaScript engine built by Google for use in Chrome. This has pros and cons. On one hand, V8 is an extraordinarily complicated piece of technology, creating a wider attack surface than virtual machines. More complexity means more opportunities for something to go wrong. However, an extraordinary amount of effort goes into finding and fixing V8 bugs, owing to its position as arguably the most popular sandboxing technology in the world. Google regularly pays out 5-figure bounties to anyone finding a V8 sandbox escape. Google also operates fuzzing infrastructure that automatically finds bugs faster than most humans can. Google’s investment does a lot to minimize the danger of V8 zero-days — bugs that are found by malicious actors and not known to Google. But, what happens after a bug is found and reported? V8 is open source, so fixes for security bugs are developed in the open and released to everyone at the same time. It is important that any patch be rolled out to production as fast as possible, before malicious actors can develop an exploit. The time between publishing the fix and deploying it is known as the patch gap. Google previously [announced that Chrome’s patch gap had been reduced from 33 days to 15 days](https://www.zdnet.com/article/google-cuts-chrome-patch-gap-in-half-from-33-to-15-days/). Fortunately, Cloudflare directly controls the machines on which the Workers runtime operates. Nearly the entire build and release process has been automated, so the moment a V8 patch is published, Cloudflare systems automatically build a new release of the Workers runtime and, after one-click sign-off from the necessary (human) reviewers, automatically push that release out to production. As a result, the Workers patch gap is now under 24 hours. A patch published by V8’s team in Munich during their work day will usually be in production before the end of the US work day. ## Spectre: Introduction The V8 team at Google has stated that [V8 itself cannot defend against Spectre](https://arxiv.org/abs/1902.05178). Workers does not need to depend on V8 for this. The Workers environment presents many alternative approaches to mitigating Spectre. ### What is it? Spectre is a class of attacks in which a malicious program can trick the CPU into speculatively performing computation using data that the program is not supposed to have access to. The CPU eventually realizes the problem and does not allow the program to see the results of the speculative computation. However, the program may be able to derive bits of the secret data by looking at subtle side effects of the computation, such as the effects on the cache. For more information about Spectre, refer to the [Learning Center page on the topic](https://www.cloudflare.com/learning/security/threats/meltdown-spectre/). ### Why does it matter for Workers? Spectre encompasses a wide variety of vulnerabilities present in modern CPUs. The specific vulnerabilities vary by architecture and model and it is likely that many vulnerabilities exist which have not yet been discovered. These vulnerabilities are a problem for every cloud compute platform. Any time you have more than one tenant running code on the same machine, Spectre attacks are possible. However, the closer together the tenants are, the more difficult it can be to mitigate specific vulnerabilities. 
Many of the known issues can be mitigated at the kernel level (protecting processes from each other) or at the hypervisor level (protecting VMs), often with the help of CPU microcode updates and various defenses (many of which can come with serious performance impact). In Cloudflare Workers, tenants are isolated from each other using V8 isolates — not processes nor VMs. This means that Workers cannot necessarily rely on OS or hypervisor patches to prevent Spectre. Workers needs its own strategy. ### Why not use process isolation? Cloudflare Workers is designed to run your code in every single Cloudflare location. Workers is designed to be a platform accessible to everyone. It needs to handle a huge number of tenants, where many tenants get very little traffic. Combine these two points and planning becomes difficult. A typical, non-edge serverless provider could handle a low-traffic tenant by sending all of that tenant’s traffic to a single machine, so that only one copy of the application needs to be loaded. If the machine can handle, say, a dozen tenants, that is plenty. That machine can be hosted in a massive data center with millions of machines, achieving economies of scale. However, this centralization incurs latency and worldwide bandwidth costs when the users are not nearby. With Workers, on the other hand, every tenant, regardless of traffic level, currently runs in every Cloudflare location. And in the quest to get as close to the end user as possible, Cloudflare sometimes chooses locations that only have space for a limited number of machines. The net result is that Cloudflare needs to be able to host thousands of active tenants per machine, with the ability to rapidly spin up inactive ones on-demand. That means that each guest cannot take more than a couple megabytes of memory — hardly enough space for a call stack, much less everything else that a process needs. Moreover, Cloudflare needs context switching to be computationally efficient. Many Workers resident in memory will only handle an event every now and then, and many Workers spend only a fraction of a millisecond on any particular event. In this environment, a single core can easily find itself switching between thousands of different tenants every second. To handle one event, a significant amount of communication needs to happen between the guest application and its host, meaning still more switching and communications overhead. If each tenant lives in its own process, all this overhead is orders of magnitude larger than if many tenants live in a single process. When using strict process isolation in Workers, the CPU cost can easily be 10x what it is with a shared process. In order to keep Workers inexpensive, fast, and accessible to everyone, Cloudflare needed to find a way to host multiple tenants in a single process. ### There is no fix for Spectre Spectre does not have an official solution, not even when using heavyweight virtual machines; everyone is still vulnerable. The industry continues to encounter new Spectre attacks: every couple of months, researchers uncover a new Spectre vulnerability, CPU vendors release new microcode, and OS vendors release kernel patches. Everyone must continue updating. But is it enough to merely deploy the latest patches? More vulnerabilities exist but have not yet been publicized. To defend against Spectre, Cloudflare needed to take a different approach. It is not enough to block individual known vulnerabilities. Instead, entire classes of vulnerabilities must be addressed at once.
### Building a defense It is unlikely that any all-encompassing fix for Spectre will be found. However, the following thought experiment raises points to consider: Fundamentally, all Spectre vulnerabilities use side channels to detect hidden processor state. Side channels, by definition, involve observing some non-deterministic behavior of a system. Conveniently, most software execution environments try hard to eliminate non-determinism, because non-deterministic execution makes applications unreliable. However, there are a few sorts of non-determinism that are still common. The most obvious among these is timing. The industry long ago gave up on the idea that a program should take the same amount of time every time it runs, because deterministic timing is fundamentally at odds with heuristic performance optimization. Most Spectre attacks focus on timing as a way to detect the hidden microarchitectural state of the CPU. Some have proposed that this can be solved by making timers inaccurate or adding random noise. However, it turns out that this does not stop attacks; it only makes them slower. If the timer tracks real time at all, then anything you can do to make it inaccurate can be overcome by running an attack multiple times and using statistics to filter out inconsistencies. Many security researchers see this as the end of the story. What good is slowing down an attack if the attack is still possible? ### Cascading slow-downs However, measures that slow down an attack can be powerful. The key insight is this: as an attack becomes slower, new techniques become practical to make it even slower still. The goal, then, is to chain together enough techniques that an attack becomes so slow as to be uninteresting. Much of cryptography, after all, is technically vulnerable to brute force attacks — technically, with enough time, you can break it. But when the time required is thousands (or even billions) of years, this is a sufficient defense. What can be done to slow down Spectre attacks to the point of meaninglessness? ## Freezing a Spectre attack ### Step 0: Do not allow native code Workers does not allow our customers to upload native-code binaries to run on the Cloudflare network — only JavaScript and WebAssembly. Many other languages, like Python, Rust, or even Cobol, can be compiled or transpiled to one of these two formats. Both are passed through V8 to convert these formats into true native code. This, in itself, does not necessarily make Spectre attacks harder. However, this is presented as step 0 because it is fundamental to enabling the following steps. Accepting native code programs implies being beholden to an existing CPU architecture (typically, x86). In order to execute code with reasonable performance, it is usually necessary to run the code directly on real hardware, severely limiting the host’s control over how that execution plays out. For example, a kernel or hypervisor has no ability to prohibit applications from invoking the `CLFLUSH` instruction, an instruction [which is useful in side channel attacks](https://gruss.cc/files/flushflush.pdf) and almost nothing else. Moreover, supporting native code typically implies supporting whole existing operating systems and software stacks, which bring with them decades of expectations about how the architecture works under them. For example, x86 CPUs allow a kernel or hypervisor to disable the RDTSC instruction, which reads a high-precision timer. 
Realistically, though, disabling it will break many programs because they are implemented to use RDTSC any time they want to know the current time. Supporting native code would limit choice in future mitigation techniques. There is greater freedom in using an abstract intermediate format. ### Step 1: Disallow timers and multi-threading In Workers, you can get the current time using the JavaScript Date API by calling `Date.now()`. However, the time value returned is not the current time. `Date.now()` returns the time of the last I/O. It does not advance during code execution. For example, if an attacker writes: ```js let start = Date.now(); for (let i = 0; i < 1e6; i++) { doSpectreAttack(); } let end = Date.now(); ``` The values of `start` and `end` will always be exactly the same. The attacker cannot use `Date` to measure the execution time of their code, which they would need to do to carry out an attack. Note This measure was implemented in mid-2017, before Spectre was announced. This measure was implemented because Cloudflare was already concerned about side channel timing attacks. The Workers team has designed the system with side channels in mind. Similarly, multi-threading and shared memory are not permitted in Workers. Everything related to the processing of one event happens on the same thread. Otherwise, one would be able to race threads in order to guess and check the underlying timer. Multiple Workers are not allowed to operate on the same request concurrently. For example, if you have installed a Cloudflare App on your zone which is implemented using Workers, and your zone itself also uses Workers, then a request to your zone may actually be processed by two Workers in sequence. These run in the same thread. At this point, measuring code execution time locally is prevented. However, it can still be measured remotely. For example, the HTTP client that is sending a request to trigger the execution of the Worker can measure how long it takes for the Worker to respond. Such a measurement is likely to be very noisy, as it would have to traverse the Internet and incur general networking costs. Such noise can be overcome, in theory, by executing the attack many times and taking an average. Note It has been suggested that if Workers reset its execution environment on every request, that Workers would be in a much safer position against timing attacks. Unfortunately, it is not so simple. The execution state could be stored in a client — not the Worker itself — allowing a Worker to resume its previous state on every new request. In adversarial testing and with help from leading Spectre experts, Cloudflare has not been able to develop a remote timing attack that works in production. However, the lack of a working attack does not mean that Workers should stop building defenses. Instead, the Workers team is currently testing some more advanced measures. ### Step 2: Dynamic process isolation If an attack is possible at all, it would take a long time to run — hours at the very least, maybe as long as weeks. But once an attack has been running even for a second, there is a large amount of new data that can be used to trigger further measures. Spectre attacks exhibit abnormal behavior that would not usually be seen in a normal program. These attacks intentionally try to create pathological performance scenarios in order to amplify microarchitectural effects. 
This is especially true when the attack has already been forced to run billions of times in a loop in order to overcome other mitigations, like those discussed above. This tends to show up in metrics like CPU performance counters. Now, the usual problem with using performance metrics to detect Spectre attacks is that there are sometimes false positives. Sometimes, a legitimate program behaves poorly. The runtime cannot shut down every application that has poor performance. Instead, the runtime chooses to reschedule any Worker with suspicious performance metrics into its own process. As described above, the runtime cannot do this with every Worker because the overhead would be too high. However, it is acceptable to isolate a few Worker processes as a defense mechanism. If the Worker is legitimate, it will keep operating, with a little more overhead. Fortunately, Cloudflare can relocate a Worker into its own process at basically any time. In fact, elaborate performance-counter based triggering may not even be necessary here. If a Worker uses a large amount of CPU time per event, then the overhead of isolating it in its own process is relatively less because it switches context less often. So, the runtime might as well use process isolation for any Worker that is CPU-hungry. Once a Worker is isolated, Cloudflare can rely on the operating system’s Spectre defenses, as most desktop web browsers do. Cloudflare has been working with the experts at Graz Technical University to develop this approach. TU Graz’s team co-discovered Spectre itself and has been responsible for a huge number of the follow-on discoveries since then. Cloudflare has developed the ability to dynamically isolate Workers and has identified metrics which reliably detect attacks. As mentioned previously, process isolation is not a complete defense. Over time, Spectre attacks tend to be slower to carry out which means Cloudflare has the ability to reasonably guess and identify malicious actors. Isolating the process further slows down the potential attack. ### Step 3: Periodic whole-memory shuffling At this point, all known attacks have been prevented. This leaves Workers susceptible to unknown attacks in the future, as with all other CPU-based systems. However, all new attacks will generally be very slow, taking days or longer, leaving Cloudflare with time to prepare a defense. For example, it is within reason to restart the entire Workers runtime on a daily basis. This will reset the locations of everything in memory, forcing attacks to restart the process of discovering the locations of secrets. Cloudflare can also reschedule Workers across physical machines or cordons, so that the window to attack any particular neighbor is limited. In general, because Workers are fundamentally preemptible (unlike containers or VMs), Cloudflare has a lot of freedom to frustrate attacks. Cloudflare sees this as an ongoing investment — not something that will ever be done. 
--- title: Billing and Limitations · Cloudflare Workers docs description: Billing, troubleshooting, and limitations for Static assets on Workers lastUpdated: 2025-06-20T19:49:19.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/static-assets/billing-and-limitations/ md: https://developers.cloudflare.com/workers/static-assets/billing-and-limitations/index.md --- ## Billing Requests to a project with static assets can either return static assets or invoke the Worker script, depending on whether the request [matches a static asset or not](https://developers.cloudflare.com/workers/static-assets/routing/). * Requests to static assets are free and unlimited. Requests to the Worker script (for example, in the case of SSR content) are billed according to Workers pricing. Refer to [pricing](https://developers.cloudflare.com/workers/platform/pricing/#example-2) for an example. * There is no additional cost for storing Assets. * **Important note for free tier users**: When using [`run_worker_first`](https://developers.cloudflare.com/workers/static-assets/binding/#run_worker_first), requests matching the specified patterns will always invoke your Worker script. If you exceed your free tier request limits, these requests will receive a 429 (Too Many Requests) response instead of falling back to static asset serving. Negative patterns (patterns beginning with `!/`) will continue to serve assets correctly, as requests are directed to assets, without invoking your Worker script. ## Limitations See the [Platform Limits](https://developers.cloudflare.com/workers/platform/limits/#static-assets) for details. ## Troubleshooting * `assets.bucket is a required field` — if you see this error, you need to update Wrangler to version `3.78.10` or later. `bucket` is not a required field. --- title: Configuration and Bindings · Cloudflare Workers docs description: Details on how to configure Workers static assets and its binding. lastUpdated: 2025-07-08T14:55:14.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/static-assets/binding/ md: https://developers.cloudflare.com/workers/static-assets/binding/index.md --- Configuring a Worker with assets requires specifying a [directory](https://developers.cloudflare.com/workers/static-assets/binding/#directory) and, optionally, an [assets binding](https://developers.cloudflare.com/workers/static-assets/binding/), in your Worker's Wrangler file. The [assets binding](https://developers.cloudflare.com/workers/static-assets/binding/) allows you to dynamically fetch assets from within your Worker script (e.g. `env.ASSETS.fetch()`), similarly to how you might make a `fetch()` call with a [Service binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/http/). Only one collection of static assets can be configured in each Worker. ## `directory` The folder of static assets to be served. For many frameworks, this is the `./public/`, `./dist/`, or `./build/` folder. * wrangler.jsonc ```jsonc { "name": "my-worker", "compatibility_date": "2024-09-19", "assets": { "directory": "./public/" } } ``` * wrangler.toml ```toml name = "my-worker" compatibility_date = "2024-09-19" assets = { directory = "./public/" } ``` ### Ignoring assets Sometimes there are files in the asset directory that should not be uploaded. In this case, create a `.assetsignore` file in the root of the assets directory. This file takes the same format as `.gitignore`.
Wrangler will not upload asset files that match lines in this file. **Example** You are migrating from a Pages project where the assets directory is `dist`. You do not want to upload the server-side Worker code or Pages configuration files as public client-side assets. Add the following `.assetsignore` file: ```txt _worker.js _redirects _headers ``` Now Wrangler will not upload these files as client-side assets when deploying the Worker. ## `run_worker_first` Controls whether to invoke the Worker script even when a request would otherwise have matched an asset. `run_worker_first = false` (default) will serve any static asset matching a request, while `run_worker_first = true` will unconditionally [invoke your Worker script](https://developers.cloudflare.com/workers/static-assets/routing/worker-script/#run-your-worker-script-first). * wrangler.jsonc ```jsonc { "name": "my-worker", "compatibility_date": "2024-09-19", "main": "src/index.ts", "assets": { "directory": "./public/", "binding": "ASSETS", "run_worker_first": true } } ``` * wrangler.toml ```toml name = "my-worker" compatibility_date = "2024-09-19" main = "src/index.ts" # The following configuration unconditionally invokes the Worker script at # `src/index.ts`, which can programmatically fetch assets via the ASSETS binding [assets] directory = "./public/" binding = "ASSETS" run_worker_first = true ``` You can also specify `run_worker_first` as an array of route patterns to selectively run the Worker script first only for specific routes. The array supports glob patterns with `*` for deep matching and negative patterns with `!` prefix. Negative patterns have precedence over non-negative patterns. The Worker will run first when a non-negative pattern matches and none of the negative patterns match. The order in which the patterns are listed is not significant. `run_worker_first` is often paired with the [`not_found_handling = "single-page-application"` setting](https://developers.cloudflare.com/workers/static-assets/routing/single-page-application/#advanced-routing-control): * wrangler.jsonc ```jsonc { "name": "my-spa-worker", "compatibility_date": "2025-07-16", "main": "./src/index.ts", "assets": { "directory": "./dist/", "not_found_handling": "single-page-application", "binding": "ASSETS", "run_worker_first": ["/api/*", "!/api/docs/*"] } } ``` * wrangler.toml ```toml name = "my-spa-worker" compatibility_date = "2025-07-16" main = "./src/index.ts" [assets] directory = "./dist/" not_found_handling = "single-page-application" binding = "ASSETS" run_worker_first = [ "/api/*", "!/api/docs/*" ] ``` In this configuration, requests to `/api/*` routes will invoke the Worker script first, except for `/api/docs/*` which will follow the default asset-first routing behavior. ## `binding` Configuring the optional [binding](https://developers.cloudflare.com/workers/runtime-apis/bindings) gives you access to the collection of assets from within your Worker script. * wrangler.jsonc ```jsonc { "name": "my-worker", "main": "./src/index.js", "compatibility_date": "2024-09-19", "assets": { "directory": "./public/", "binding": "ASSETS" } } ``` * wrangler.toml ```toml name = "my-worker" main = "./src/index.js" compatibility_date = "2024-09-19" [assets] directory = "./public/" binding = "ASSETS" ``` In the example above, assets would be available through `env.ASSETS`.
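For example, a Worker with the binding above could rewrite certain paths to a different asset, or simply forward the request unchanged. The following is a minimal sketch; the `/legacy/*` to `/archive/*` rewrite is illustrative, not part of the configuration shown:

```js
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname.startsWith("/legacy/")) {
      // Rewrite old /legacy/* URLs to assets that now live under /archive/.
      const rewritten = new URL(url.pathname.replace("/legacy/", "/archive/"), url.origin);
      return env.ASSETS.fetch(rewritten);
    }
    // Otherwise, forward the request to the assets binding unchanged.
    return env.ASSETS.fetch(request);
  },
};
```

The parameters accepted by `env.ASSETS.fetch()` are described in the Runtime API Reference below.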
### Runtime API Reference #### `fetch()` **Parameters** * `request: Request | URL | string` Pass a [Request object](https://developers.cloudflare.com/workers/runtime-apis/request/), URL object, or URL string. Requests made through this method have `html_handling` and `not_found_handling` configuration applied to them. **Response** * `Promise<Response>` Returns a static asset response for the given request. **Example** Your dynamic code can make new requests to, or forward incoming requests to, your project's static assets using the assets binding. For example, `env.ASSETS.fetch(request)`, `env.ASSETS.fetch(new URL('https://assets.local/my-file'))` or `env.ASSETS.fetch('https://assets.local/my-file')`. Take the following example that configures a Worker script to return a response for all requests headed for `/api/`. Otherwise, the Worker script will pass the incoming request through to the asset binding. In this case, because a Worker script is only invoked when the requested route has not matched any static assets, this will always evaluate [`not_found_handling`](https://developers.cloudflare.com/workers/static-assets/#routing-behavior) behavior. * JavaScript ```js export default { async fetch(request, env) { const url = new URL(request.url); if (url.pathname.startsWith("/api/")) { // TODO: Add your custom /api/* logic here. return new Response("Ok"); } // Passes the incoming request through to the assets binding. // No asset matched this request, so this will evaluate `not_found_handling` behavior. return env.ASSETS.fetch(request); }, }; ``` * TypeScript ```ts interface Env { ASSETS: Fetcher; } export default { async fetch(request, env): Promise<Response> { const url = new URL(request.url); if (url.pathname.startsWith("/api/")) { // TODO: Add your custom /api/* logic here. return new Response("Ok"); } // Passes the incoming request through to the assets binding. // No asset matched this request, so this will evaluate `not_found_handling` behavior. return env.ASSETS.fetch(request); }, } satisfies ExportedHandler<Env>; ``` ## Routing configuration For the various static asset routing configuration options, refer to [Routing](https://developers.cloudflare.com/workers/static-assets/routing/). ## Smart Placement [Smart Placement](https://developers.cloudflare.com/workers/configuration/smart-placement/) can be used to place a Worker's code close to your back-end infrastructure. Smart Placement will only have an effect if you specified a `main`, pointing to your Worker code. ### Smart Placement with Worker Code First If you desire to run your [Worker code ahead of assets](https://developers.cloudflare.com/workers/static-assets/routing/worker-script/#run-your-worker-script-first) by setting `run_worker_first=true`, all requests must first travel to your Smart-Placed Worker. As a result, you may experience increased latency for asset requests. Use Smart Placement with `run_worker_first=true` when you need to integrate with other backend services, authenticate requests before serving any assets, or if you want to make modifications to your assets before serving them. If you want some assets served as quickly as possible to the user, but others to be served behind a smart-placed Worker, consider splitting your app into multiple Workers and [using service bindings to connect them](https://developers.cloudflare.com/workers/configuration/smart-placement/#best-practices).
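As a rough sketch of that split, assuming a second backend Worker is attached through a service binding named `API` (the binding name and the `/api/*` route are illustrative): the asset-serving Worker stays close to the user, while the backend Worker can opt into Smart Placement independently.

```js
// Front Worker: serves static assets close to the user and forwards /api/*
// traffic to a second, smart-placed Worker attached as a service binding named API.
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname.startsWith("/api/")) {
      // The service binding sends the request directly to the backend Worker.
      return env.API.fetch(request);
    }
    // Everything else is served from this Worker's static assets.
    return env.ASSETS.fetch(request);
  },
};
```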
### Smart Placement with Assets First Enabling Smart Placement with `run_worker_first=false` (or not specifying it) lets you serve assets from as close as possible to your users, while moving your Worker logic to run where it is most efficient (such as near a database). Use Smart Placement with `run_worker_first=false` (or not specifying it) when prioritizing fast asset delivery. This will not impact the [default routing behavior](https://developers.cloudflare.com/workers/static-assets/#routing-behavior). --- title: Direct Uploads · Cloudflare Workers docs description: Upload assets through the Workers API. lastUpdated: 2025-05-22T12:56:11.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/static-assets/direct-upload/ md: https://developers.cloudflare.com/workers/static-assets/direct-upload/index.md --- Note Directly uploading assets via APIs is an advanced approach which, unless you are building a programmatic integration, most users will not need. Instead, we encourage you to deploy your Worker with [Wrangler](https://developers.cloudflare.com/workers/static-assets/get-started/#1-create-a-new-worker-project-using-the-cli). Our API empowers users to upload and include static assets as part of a Worker. These static assets can be served for free, and additionally, users can also fetch assets through an optional [assets binding](https://developers.cloudflare.com/workers/static-assets/binding/) to power more advanced applications. This guide will describe the process for attaching assets to your Worker directly with the API. * Workers ```mermaid sequenceDiagram participant User participant Workers API User<<->>Workers API: Submit manifest
POST /client/v4/accounts/:accountId/workers/scripts/:scriptName/assets-upload-session User<<->>Workers API: Upload files
POST /client/v4/accounts/:accountId/workers/assets/upload?base64=true User<<->>Workers API: Upload script version
PUT /client/v4/accounts/:accountId/workers/scripts/:scriptName ``` * Workers for Platforms ```mermaid sequenceDiagram participant User participant Workers API User<<->>Workers API: Submit manifest
POST /client/v4/accounts/:accountId/workers/dispatch/namespaces/:dispatchNamespace/scripts/:scriptName/assets-upload-session User<<->>Workers API: Upload files
POST /client/v4/accounts/:accountId/workers/assets/upload?base64=true User<<->>Workers API: Upload script version
PUT /client/v4/accounts/:accountId/workers/dispatch/namespaces/:dispatchNamespace/scripts/:scriptName ``` The asset upload flow can be distilled into three distinct phases: 1. Registration of a manifest 2. Upload of the assets 3. Deployment of the Worker ## Upload manifest The asset manifest is a ledger which keeps track of files we want to use in our Worker. This manifest is used to track assets associated with each Worker version, and eliminate the need to upload unchanged files prior to a new upload. The [manifest upload request](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/assets/subresources/upload/methods/create/) describes each file which we intend to upload. Each file is its own key representing the file path and name, and is an object which contains metadata about the file. `hash` represents a 32 hexadecimal character hash of the file, while `size` is the size (in bytes) of the file. * Workers ```bash curl -X POST https://api.cloudflare.com/client/v4/accounts/{account_id}/workers/scripts/{script_name}/assets-upload-session \ --header 'content-type: application/json' \ --header 'Authorization: Bearer ' \ --data '{ "manifest": { "/filea.html": { "hash": "08f1dfda4574284ab3c21666d1", "size": 12 }, "/fileb.html": { "hash": "4f1c1af44620d531446ceef93f", "size": 23 }, "/filec.html": { "hash": "54995e302614e0523757a04ec1", "size": 23 } } }' ``` * Workers for Platforms ```bash curl -X POST https://api.cloudflare.com/client/v4/accounts/{account_id}/workers/dispatch/namespaces/{dispatch_namespace}/scripts/{script_name}/assets-upload-session \ --header 'content-type: application/json' \ --header 'Authorization: Bearer ' \ --data '{ "manifest": { "/filea.html": { "hash": "08f1dfda4574284ab3c21666d1", "size": 12 }, "/fileb.html": { "hash": "4f1c1af44620d531446ceef93f", "size": 23 }, "/filec.html": { "hash": "54995e302614e0523757a04ec1", "size": 23 } } }' ``` The resulting response will contain a JWT, which provides authentication during file upload. The JWT is valid for one hour. In addition to the JWT, the response instructs users how to optimally batch upload their files. These instructions are encoded in the `buckets` field. Each array in `buckets` contains a list of file hashes which should be uploaded together. Unmodified files will not be returned in the `buckets` field (as they do not need to be re-uploaded) if they have recently been uploaded in previous versions of your Worker. ```json { "result": { "jwt": "", "buckets": [ ["08f1dfda4574284ab3c21666d1", "4f1c1af44620d531446ceef93f"], ["54995e302614e0523757a04ec1"] ] }, "success": true, "errors": null, "messages": null } ``` Note If all assets have been previously uploaded, `buckets` will be empty, and `jwt` will contain a completion token. Uploading files is not necessary, and you can skip directly to [uploading a new script or version](https://developers.cloudflare.com/workers/static-assets/direct-upload/#createdeploy-new-version). ### Limitations * Each file must be under 25 MiB * The overall manifest must not contain more than 20,000 file entries ## Upload Static Assets The [file upload API](https://developers.cloudflare.com/api/resources/workers/subresources/assets/subresources/upload/methods/create/) requires files be uploaded using `multipart/form-data`. The contents of each file must be base64 encoded, and the `base64` query parameter in the URL must be set to `true`. The provided `Content-Type` header of each file part will be attached when eventually serving the file. 
If you wish to avoid sending a `Content-Type` header in your deployment, `application/null` may be sent at upload time. The `Authorization` header must be provided as a bearer token, using the JWT (upload token) from the aforementioned manifest upload call. Once every file in the manifest has been uploaded, a status code of 201 will be returned, with the `jwt` field present. This JWT is a final "completion" token which can be used to create a deployment of a Worker with this set of assets. This completion token is valid for 1 hour. ## Create/Deploy New Version [Script](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/methods/update/), [Version](https://developers.cloudflare.com/api/resources/workers/subresources/scripts/subresources/versions/methods/create/), and [Workers for Platforms script](https://developers.cloudflare.com/api/resources/workers_for_platforms/subresources/dispatch/subresources/namespaces/subresources/scripts/methods/update/) upload endpoints require specifying a metadata part in the form data. Here, we can provide the completion token from the previous (upload assets) step. ```bash { "main_module": "main.js", "assets": { "jwt": "" }, "compatibility_date": "2021-09-14" } ``` If this is a Worker which already has assets, and you wish to just re-use the existing set of assets, you do not have to specify the completion token again. Instead, you can pass the boolean `keep_assets` option. ```bash { "main_module": "main.js", "keep_assets": true, "compatibility_date": "2021-09-14" } ``` Asset [routing configuration](https://developers.cloudflare.com/workers/wrangler/configuration/#assets) can be provided in the `assets` object, such as `html_handling` and `not_found_handling`. ```bash { "main_module": "main.js", "assets": { "jwt": "", "config": { "html_handling": "auto-trailing-slash" } }, "compatibility_date": "2021-09-14" } ``` Optionally, an assets binding can be provided if you wish to fetch and serve assets from within your Worker code. ```bash { "main_module": "main.js", "assets": { ... }, "bindings": [ ... { "name": "ASSETS", "type": "assets" } ...
] "compatibility_date": "2021-09-14" } ``` ## Programmatic Example * JavaScript ```js import * as fs from "fs"; import * as path from "path"; import * as crypto from "crypto"; import { FormData, fetch } from "undici"; import "node:process"; const accountId = ""; // Replace with your actual account ID const filesDirectory = "assets"; // Adjust to your assets directory const scriptName = "my-new-script"; // Replace with desired script name const dispatchNamespace = ""; // Replace with a dispatch namespace if using Workers for Platforms // Function to calculate the SHA-256 hash of a file and truncate to 32 characters function calculateFileHash(filePath) { const hash = crypto.createHash("sha256"); const fileBuffer = fs.readFileSync(filePath); hash.update(fileBuffer); const fileHash = hash.digest("hex").slice(0, 32); // Grab the first 32 characters const fileSize = fileBuffer.length; return { fileHash, fileSize }; } // Function to gather file metadata for all files in the directory function gatherFileMetadata(directory) { const files = fs.readdirSync(directory); const fileMetadata = {}; files.forEach((file) => { const filePath = path.join(directory, file); const { fileHash, fileSize } = calculateFileHash(filePath); fileMetadata["/" + file] = { hash: fileHash, size: fileSize, }; }); return fileMetadata; } function findMatch(fileHash, fileMetadata) { for (let prop in fileMetadata) { const file = fileMetadata[prop]; if (file.hash === fileHash) { return prop; } } throw new Error("unknown fileHash"); } // Function to upload a batch of files using the JWT from the first response async function uploadFilesBatch(jwt, fileHashes, fileMetadata) { const form = new FormData(); for (const bucket of fileHashes) { bucket.forEach((fileHash) => { const fullPath = findMatch(fileHash, fileMetadata); const relPath = filesDirectory + "/" + path.basename(fullPath); const fileBuffer = fs.readFileSync(relPath); const base64Data = fileBuffer.toString("base64"); // Convert file to Base64 form.append( fileHash, new File([base64Data], fileHash, { type: "text/html", // Modify Content-Type header based on type of file }), fileHash, ); }); const response = await fetch( `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/assets/upload?base64=true`, { method: "POST", headers: { Authorization: `Bearer ${jwt}`, }, body: form, }, ); const data = await response.json(); if (data && data.result.jwt) { return data.result.jwt; } } throw new Error("Should have received completion token"); } async function scriptUpload(completionToken) { const form = new FormData(); // Configure metadata form.append( "metadata", JSON.stringify({ main_module: "index.mjs", compatibility_date: "2022-03-11", assets: { jwt: completionToken, // Provide the completion token from file uploads }, bindings: [{ name: "ASSETS", type: "assets" }], // Optional assets binding to fetch from user worker }), ); // Configure (optional) user worker form.append( "index.js", new File( [ "export default {async fetch(request, env) { return new Response('Hello world from user worker!'); }}", ], "index.mjs", { type: "application/javascript+module", }, ), ); const url = dispatchNamespace ? 
`https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/dispatch/namespaces/${dispatchNamespace}/scripts/${scriptName}` : `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/scripts/${scriptName}`; const response = await fetch(url, { method: "PUT", headers: { Authorization: `Bearer ${process.env.CLOUDFLARE_API_TOKEN}`, }, body: form, }); if (response.status != 200) { throw new Error("unexpected status code"); } } // Function to make the POST request to start the assets upload session async function startUploadSession() { const fileMetadata = gatherFileMetadata(filesDirectory); const requestBody = JSON.stringify({ manifest: fileMetadata, }); const url = dispatchNamespace ? `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/dispatch/namespaces/${dispatchNamespace}/scripts/${scriptName}/assets-upload-session` : `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/scripts/${scriptName}/assets-upload-session`; const response = await fetch(url, { method: "POST", headers: { Authorization: `Bearer ${process.env.CLOUDFLARE_API_TOKEN}`, "Content-Type": "application/json", }, body: requestBody, }); const data = await response.json(); const jwt = data.result.jwt; return { uploadToken: jwt, buckets: data.result.buckets, fileMetadata, }; } // Begin the upload session by uploading a new manifest const { uploadToken, buckets, fileMetadata } = await startUploadSession(); // If all files are already uploaded, a completion token will be immediately returned. Otherwise, // we should upload the missing files let completionToken = uploadToken; if (buckets.length > 0) { completionToken = await uploadFilesBatch(uploadToken, buckets, fileMetadata); } // Once we have uploaded all of our files, we can upload a new script, and assets, with completion token await scriptUpload(completionToken); ``` * TypeScript ```ts import * as fs from "fs"; import * as path from "path"; import * as crypto from "crypto"; import { FormData, fetch } from "undici"; import "node:process"; const accountId: string = ""; // Replace with your actual account ID const filesDirectory: string = "assets"; // Adjust to your assets directory const scriptName: string = "my-new-script"; // Replace with desired script name const dispatchNamespace: string = ""; // Replace with a dispatch namespace if using Workers for Platforms interface FileMetadata { hash: string; size: number; } interface UploadSessionData { uploadToken: string; buckets: string[][]; fileMetadata: Record<string, FileMetadata>; } interface UploadResponse { result: { jwt: string; buckets: string[][]; }; success: boolean; errors: any; messages: any; } // Function to calculate the SHA-256 hash of a file and truncate to 32 characters function calculateFileHash(filePath: string): { fileHash: string; fileSize: number; } { const hash = crypto.createHash("sha256"); const fileBuffer = fs.readFileSync(filePath); hash.update(fileBuffer); const fileHash = hash.digest("hex").slice(0, 32); // Grab the first 32 characters const fileSize = fileBuffer.length; return { fileHash, fileSize }; } // Function to gather file metadata for all files in the directory function gatherFileMetadata(directory: string): Record<string, FileMetadata> { const files = fs.readdirSync(directory); const fileMetadata: Record<string, FileMetadata> = {}; files.forEach((file) => { const filePath = path.join(directory, file); const { fileHash, fileSize } = calculateFileHash(filePath); fileMetadata["/" + file] = { hash: fileHash, size: fileSize, }; }); return fileMetadata; } function findMatch( fileHash: string, fileMetadata: Record<string, FileMetadata>, ): 
string { for (let prop in fileMetadata) { const file = fileMetadata[prop] as FileMetadata; if (file.hash === fileHash) { return prop; } } throw new Error("unknown fileHash"); } // Function to upload a batch of files using the JWT from the first response async function uploadFilesBatch( jwt: string, fileHashes: string[][], fileMetadata: Record<string, FileMetadata>, ): Promise<string> { const form = new FormData(); for (const bucket of fileHashes) { bucket.forEach((fileHash) => { const fullPath = findMatch(fileHash, fileMetadata); const relPath = filesDirectory + "/" + path.basename(fullPath); const fileBuffer = fs.readFileSync(relPath); const base64Data = fileBuffer.toString("base64"); // Convert file to Base64 form.append( fileHash, new File([base64Data], fileHash, { type: "text/html", // Modify Content-Type header based on type of file }), fileHash, ); }); const response = await fetch( `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/assets/upload?base64=true`, { method: "POST", headers: { Authorization: `Bearer ${jwt}`, }, body: form, }, ); const data = (await response.json()) as UploadResponse; if (data && data.result.jwt) { return data.result.jwt; } } throw new Error("Should have received completion token"); } async function scriptUpload(completionToken: string): Promise<void> { const form = new FormData(); // Configure metadata form.append( "metadata", JSON.stringify({ main_module: "index.mjs", compatibility_date: "2022-03-11", assets: { jwt: completionToken, // Provide the completion token from file uploads }, bindings: [{ name: "ASSETS", type: "assets" }], // Optional assets binding to fetch from user worker }), ); // Configure (optional) user worker form.append( "index.js", new File( [ "export default {async fetch(request, env) { return new Response('Hello world from user worker!'); }}", ], "index.mjs", { type: "application/javascript+module", }, ), ); const url = dispatchNamespace ? `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/dispatch/namespaces/${dispatchNamespace}/scripts/${scriptName}` : `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/scripts/${scriptName}`; const response = await fetch(url, { method: "PUT", headers: { Authorization: `Bearer ${process.env.CLOUDFLARE_API_TOKEN}`, }, body: form, }); if (response.status != 200) { throw new Error("unexpected status code"); } } // Function to make the POST request to start the assets upload session async function startUploadSession(): Promise<UploadSessionData> { const fileMetadata = gatherFileMetadata(filesDirectory); const requestBody = JSON.stringify({ manifest: fileMetadata, }); const url = dispatchNamespace ? `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/dispatch/namespaces/${dispatchNamespace}/scripts/${scriptName}/assets-upload-session` : `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/scripts/${scriptName}/assets-upload-session`; const response = await fetch(url, { method: "POST", headers: { Authorization: `Bearer ${process.env.CLOUDFLARE_API_TOKEN}`, "Content-Type": "application/json", }, body: requestBody, }); const data = (await response.json()) as UploadResponse; const jwt = data.result.jwt; return { uploadToken: jwt, buckets: data.result.buckets, fileMetadata, }; } // Begin the upload session by uploading a new manifest const { uploadToken, buckets, fileMetadata } = await startUploadSession(); // If all files are already uploaded, a completion token will be immediately returned. 
Otherwise, // we should upload the missing files let completionToken = uploadToken; if (buckets.length > 0) { completionToken = await uploadFilesBatch(uploadToken, buckets, fileMetadata); } // Once we have uploaded all of our files, we can upload a new script, and assets, with completion token await scriptUpload(completionToken); ```
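The example above can be run as a standalone Node.js script. A minimal usage sketch, assuming the code is saved as `upload-assets.mjs` (a hypothetical filename), that the `undici` package is installed, and that you are running a Node.js version that provides the global `File` class (Node.js 20 or later):

```sh
# Install the only external dependency used by the example
npm install undici

# The script reads its API token from the CLOUDFLARE_API_TOKEN environment variable
export CLOUDFLARE_API_TOKEN="<your-api-token>"

# Run as an ES module so that the import statements and top-level await work
node upload-assets.mjs
```

Before running it, set `accountId`, `scriptName`, and (if you are using Workers for Platforms) `dispatchNamespace` at the top of the script.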
--- title: Get Started · Cloudflare Workers docs description: Run front-end websites — static or dynamic — directly on Cloudflare's global network. lastUpdated: 2025-06-05T13:25:05.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/static-assets/get-started/ md: https://developers.cloudflare.com/workers/static-assets/get-started/index.md --- For most front-end applications, you'll want to use a framework. Workers supports a number of popular [frameworks](https://developers.cloudflare.com/workers/framework-guides/) that come with ready-to-use components, a pre-defined and structured architecture, and community support. View the [framework-specific guides](https://developers.cloudflare.com/workers/framework-guides/) to get started using a framework. Alternatively, you may prefer to build your website from scratch if: * You're interested in learning by implementing core functionalities on your own. * You're working on a simple project where you might not need a framework. * You want to optimize for performance by minimizing external dependencies. * You require complete control over every aspect of the application. * You want to build your own framework. This guide will walk you through setting up and deploying a static site or a full-stack application without a framework on Workers. ## Deploy a static site This section walks you through setting up and deploying a static site on Workers. ### 1. Create a new Worker project using the CLI [C3 (`create-cloudflare-cli`)](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare. Open a terminal window and run C3 to create your Worker project: * npm ```sh npm create cloudflare@latest -- my-static-site ``` * yarn ```sh yarn create cloudflare my-static-site ``` * pnpm ```sh pnpm create cloudflare@latest my-static-site ``` For setup, select the following options: * For *What would you like to start with?*, choose `Hello World example`. * For *Which template would you like to use?*, choose `Static site`. * For *Which language do you want to use?*, choose `TypeScript`. * For *Do you want to use git for version control?*, choose `Yes`. * For *Do you want to deploy your application?*, choose `No` (we will be making some changes before deploying). After setting up your project, change your directory by running the following command: ```sh cd my-static-site ``` ### 2. Develop locally After you have created your Worker, run the [`wrangler dev`](https://developers.cloudflare.com/workers/wrangler/commands/#dev) command in the project directory to start a local server. This will allow you to preview your project locally during development. ```sh npx wrangler dev ``` ### 3. Deploy your project Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](https://developers.cloudflare.com/workers/ci-cd/builds/). The [`wrangler deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#deploy) command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. ```sh npx wrangler deploy ```
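Your generated project also includes a Wrangler configuration file that tells Workers where your static assets live. The following is an illustrative sketch only, not the exact file C3 generates; the asset directory and the `not_found_handling` value shown here are assumptions you should adapt to your own project:

```jsonc
// wrangler.jsonc (illustrative sketch; the generated file may differ)
{
  "name": "my-static-site",
  "compatibility_date": "2025-06-05",
  "assets": {
    // Directory whose contents are uploaded and served as static assets
    "directory": "./public",
    // Optional routing behavior, covered in the Routing configuration docs linked below
    "html_handling": "auto-trailing-slash",
    "not_found_handling": "404-page"
  }
}
```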
Note Learn about how assets are configured and how routing works from [Routing configuration](https://developers.cloudflare.com/workers/static-assets/routing/). ## Deploy a full-stack application This guide will walk you through setting up and deploying dynamic and interactive server-side rendered (SSR) applications on Cloudflare Workers. When building a full-stack application, you can use any [Workers bindings](https://developers.cloudflare.com/workers/runtime-apis/bindings/), [including assets' own](https://developers.cloudflare.com/workers/static-assets/binding/), to interact with resources on the Cloudflare Developer Platform. ### 1. Create a new Worker project [C3 (`create-cloudflare-cli`)](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare. Open a terminal window and run C3 to create your Worker project: * npm ```sh npm create cloudflare@latest -- my-dynamic-site ``` * yarn ```sh yarn create cloudflare my-dynamic-site ``` * pnpm ```sh pnpm create cloudflare@latest my-dynamic-site ``` For setup, select the following options: * For *What would you like to start with?*, choose `Hello World example`. * For *Which template would you like to use?*, choose `SSR / full-stack app`. * For *Which language do you want to use?*, choose `TypeScript`. * For *Do you want to use git for version control?*, choose `Yes`. * For *Do you want to deploy your application?*, choose `No` (we will be making some changes before deploying). After setting up your project, change your directory by running the following command: ```sh cd my-dynamic-site ``` ### 2. Develop locally After you have created your Worker, run the [`wrangler dev`](https://developers.cloudflare.com/workers/wrangler/commands/#dev) command in the project directory to start a local server. This will allow you to preview your project locally during development. ```sh npx wrangler dev ``` ### 3. Modify your project With your new project generated and running, you can begin to write and edit your project: * The `src/index.ts` file is populated with sample code. Modify its content to change the server-side behavior of your Worker. * The `public/index.html` file is populated with sample code. Modify its content, or anything else in `public/`, to change the static assets of your Worker. Then, save the files and reload the page. Your project's output will have changed based on your modifications. ### 4. Deploy your project Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](https://developers.cloudflare.com/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](https://developers.cloudflare.com/workers/ci-cd/builds/). The [`wrangler deploy`](https://developers.cloudflare.com/workers/wrangler/commands/#deploy) command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](https://developers.cloudflare.com/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. ```sh npx wrangler deploy ``` Note Learn about how assets are configured and how routing works from [Routing configuration](https://developers.cloudflare.com/workers/static-assets/routing/). --- title: Headers · Cloudflare Workers docs description: "When serving static assets, Workers will attach some headers to the response by default. 
These are:" lastUpdated: 2025-05-01T19:25:08.000Z chatbotDeprioritize: false source_url: html: https://developers.cloudflare.com/workers/static-assets/headers/ md: https://developers.cloudflare.com/workers/static-assets/headers/index.md --- ## Default headers When serving static assets, Workers will attach some headers to the response by default. These are: * **`Content-Type`** A `Content-Type` header is attached to the response if one is provided during [the asset upload process](https://developers.cloudflare.com/workers/static-assets/direct-upload/). [Wrangler](https://developers.cloudflare.com/workers/wrangler/commands/#deploy) automatically determines the MIME type of the file, based on its extension. * **`Cache-Control: public, max-age=0, must-revalidate`** Sent when the request does not have an `Authorization` or `Range` header, this response header tells the browser that the asset can be cached, but that the browser should revalidate the freshness of the content every time before using it. This default behavior ensures good website performance for static pages, while still guaranteeing that stale content will never be served. * **`ETag`** This header complements the default `Cache-Control` header. Its value is a hash of the static asset file, and browsers can use this in subsequent requests with an `If-None-Match` header to check for freshness, without needing to re-download the entire file in the case of a match. * **`CF-Cache-Status`** This header indicates whether the asset was served from the cache (`HIT`) or not (`MISS`).[1](#user-content-fn-1) Cloudflare reserves the right to attach new headers to static asset responses at any time in order to improve performance or harden the security of your Worker application. ## Custom headers The default response headers served on static asset responses can be overridden, removed, or added to, by creating a plain text file called `_headers` without a file extension, in the static asset directory of your project. This file will not itself be served as a static asset, but will instead be parsed by Workers and its rules will be applied to static asset responses. If you are using a framework, you will often have a directory named `public/` or `static/`, and this usually contains deploy-ready assets, such as favicons, `robots.txt` files, and site manifests. These files get copied over to a final output directory during the build, so this is the perfect place to author your `_headers` file. If you are not using a framework, the `_headers` file can go directly into your [static assets directory](https://developers.cloudflare.com/workers/static-assets/binding/#directory). Headers defined in the `_headers` file override what Cloudflare ordinarily sends. Warning Custom headers defined in the `_headers` file are not applied to responses generated by your Worker code, even if the request URL matches a rule defined in `_headers`. If you use a server-side rendered (SSR) framework, have configured `assets.run_worker_first`, or otherwise use a Worker script, you will likely need to attach any custom headers you wish to apply directly within that Worker script. ### Attach a header Header rules are defined in multi-line blocks. The first line of a block is the URL or URL pattern where the rule's headers should be applied. 
On the next line, an indented list of header names and header values must be written: ```txt [url] [name]: [value] ``` Using absolute URLs is supported, though be aware that absolute URLs must begin with `https` and specifying a port is not supported. `_headers` rules ignore the incoming request's port and protocol when matching against an incoming request. For example, a rule like `https://example.com/path` would match against requests to `other://example.com:1234/path`. You can define as many `[name]: [value]` pairs as you require on subsequent lines. For example: ```txt # This is a comment /secure/page X-Frame-Options: DENY X-Content-Type-Options: nosniff Referrer-Policy: no-referrer /static/* Access-Control-Allow-Origin: * X-Robots-Tag: nosnippet https://myworker.mysubdomain.workers.dev/* X-Robots-Tag: noindex ``` An incoming request which matches multiple rules' URL patterns will inherit all rules' headers. Using the previous `_headers` file, the following requests will have the following headers applied: | Request URL | Headers | | - | - | | `https://custom.domain/secure/page` | `X-Frame-Options: DENY` `X-Content-Type-Options: nosniff` `Referrer-Policy: no-referrer` | | `https://custom.domain/static/image.jpg` | `Access-Control-Allow-Origin: *` `X-Robots-Tag: nosnippet` | | `https://myworker.mysubdomain.workers.dev/home` | `X-Robots-Tag: noindex` | | `https://myworker.mysubdomain.workers.dev/secure/page` | `X-Frame-Options: DENY` `X-Content-Type-Options: nosniff` `Referrer-Policy: no-referrer` `X-Robots-Tag: noindex` | | `https://myworker.mysubdomain.workers.dev/static/styles.css` | `Access-Control-Allow-Origin: *` `X-Robots-Tag: nosnippet, noindex` | You may define up to 100 header rules. Each line in the `_headers` file has a 2,000 character limit. The entire line, including spacing, header name, and value, counts towards this limit. If a header is applied twice in the `_headers` file, the values are joined with a comma separator. ### Detach a header You may wish to remove a default header or a header which has been added by a more pervasive rule. This can be done by prepending the header name with an exclamation mark and space (`! `). ```txt /* Content-Security-Policy: default-src 'self'; /*.jpg ! Content-Security-Policy ``` ### Match a path The same URL matching features that [`_redirects`](https://developers.cloudflare.com/workers/static-assets/redirects/) offers is also available to the `_headers` file. Note, however, that redirects are applied before headers, so when a request matches both a redirect and a header, the redirect takes priority. #### Splats When matching, a splat pattern — signified by an asterisk (`*`) — will greedily match all characters. You may only include a single splat in the URL. The matched value can be referenced within the header value as the `:splat` placeholder. #### Placeholders A placeholder can be defined with `:placeholder_name`. A colon (`:`) followed by a letter indicates the start of a placeholder and the placeholder name that follows must be composed of alphanumeric characters and underscores (`:[A-Za-z]\w*`). Every named placeholder can only be referenced once. Placeholders match all characters apart from the delimiter, which when part of the host, is a period (`.`) or a forward-slash (`/`) and may only be a forward-slash (`/`) when part of the path. Similarly, the matched value can be used in the header values with `:placeholder_name`. 
```txt /movies/:title x-movie-name: You are watching ":title" ``` #### Examples ##### Cross-Origin Resource Sharing (CORS) To enable other domains to fetch every static asset from your Worker, the following can be added to the `_headers` file: ```txt /* Access-Control-Allow-Origin: * ``` This applies the `Access-Control-Allow-Origin` header to any incoming URL. To be more restrictive, you can define a URL pattern that applies to a `*.*.workers.dev` subdomain, which then only allows access from its [preview URLs](https://developers.cloudflare.com/workers/configuration/previews/): ```txt https://:worker.:subdomain.workers.dev/* Access-Control-Allow-Origin: https://*-:worker.:subdomain.workers.dev/ ``` ##### Prevent your workers.dev URLs showing in search results [Google](https://developers.google.com/search/docs/advanced/robots/robots_meta_tag#directives) and other search engines often support the `X-Robots-Tag` header to instruct its crawlers how your website should be indexed. For example, to prevent your `*.workers.dev` URLs from being indexed, add the following to your `_headers` file: ```txt https://*.workers.dev/* X-Robots-Tag: noindex ``` ##### Configure custom browser cache behavior If you have a folder of fingerprinted assets (assets which have a hash in their filename), you can configure more aggressive caching behavior in the browser to improve performance for repeat visitors: ```txt /static/* Cache-Control: public, max-age=31556952, immutable ``` ##### Harden security for an application Warning If you are server-side rendering (SSR) or using a Worker to generate responses in any other way and wish to attach security headers, the headers should be sent from the Worker's `Response` instead of using a `_headers` file. For example, if you have an API endpoint and want to allow cross-origin requests, you should ensure that your Worker code attaches CORS headers to its responses, including to `OPTIONS` requests. You can prevent click-jacking by informing browsers not to embed your application inside another (for example, with an `