Arcjet rate limiting allows you to define rules which limit the number of
requests a client can make over a period of time.
Configuration options
Each rate limit is configured on an exact path with a set of client
characteristics and algorithm specific options.
Fixed window rate limit options
Tracks the number of requests made by a client over a fixed time window. Options
are explained in the Configuration
documentation. See the fixed window algorithm
description for more details about how
the algorithm works.
// Options for fixed window rate limit
// See https://docs.arcjet.com/rate-limiting/configuration
type FixedWindowRateLimitOptions = {
  mode?: "LIVE" | "DRY_RUN"; // "LIVE" will block requests. "DRY_RUN" will log only
  characteristics?: string[]; // how the client is identified. Defaults to the global characteristics if unset
  window: string; // time window the rate limit applies to
  max: number; // maximum number of requests allowed in the time window
};
Fixed window example
import arcjet, { fixedWindow } from "@arcjet/next";
const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  characteristics: ["ip.src"], // track requests by IP address
  rules: [
    fixedWindow({
      mode: "LIVE", // will block requests. Use "DRY_RUN" to log only
      window: "60s", // 60 second fixed window
      max: 100, // allow a maximum of 100 requests
    }),
  ],
});
Sliding window rate limit options
Tracks the number of requests made by a client over a sliding window so that the
window moves with time. Options are explained in the
Configuration documentation. See the sliding
window algorithm description for more
details about how the algorithm works.
// Options for sliding window rate limit
// See https://docs.arcjet.com/rate-limiting/configuration
type SlidingWindowRateLimitOptions = {
  mode?: "LIVE" | "DRY_RUN"; // "LIVE" will block requests. "DRY_RUN" will log only
  characteristics?: string[]; // how the client is identified. Defaults to the global characteristics if unset
  interval: number; // the time interval in seconds for the rate limit
  max: number; // maximum number of requests allowed over the time interval
};
Sliding window example
import arcjet, { slidingWindow } from "@arcjet/next";
const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  characteristics: ["ip.src"], // track requests by IP address
  rules: [
    slidingWindow({
      mode: "LIVE", // will block requests. Use "DRY_RUN" to log only
      interval: 60, // 60 second sliding window
      max: 100, // allow a maximum of 100 requests
    }),
  ],
});
Token bucket rate limit options
Based on a bucket filled with a specific number of tokens. Each request
withdraws a token from the bucket and the bucket is refilled at a fixed rate.
Once the bucket is empty, the client is blocked until the bucket refills.
Options are explained in the Configuration
documentation. See the token bucket algorithm
description for more details about how
the algorithm works.
// Options for token bucket rate limit
// See https://docs.arcjet.com/rate-limiting/configuration
type TokenBucketRateLimitOptions = {
  mode?: "LIVE" | "DRY_RUN"; // "LIVE" will block requests. "DRY_RUN" will log only
  characteristics?: string[]; // how the client is identified. Defaults to the global characteristics if unset
  refillRate: number; // number of tokens to add to the bucket at each interval
  interval: number; // the interval in seconds to add tokens to the bucket
  capacity: number; // the maximum number of tokens the bucket can hold
};
Token bucket example
See the token bucket example for how to specify the
number of tokens to request.
import arcjet, { tokenBucket } from "@arcjet/next";
const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  characteristics: ["ip.src"], // track requests by IP address
  rules: [
    tokenBucket({
      mode: "LIVE", // will block requests. Use "DRY_RUN" to log only
      refillRate: 10, // refill 10 tokens per interval
      interval: 60, // 60 second interval
      capacity: 100, // bucket maximum capacity of 100 tokens
    }),
  ],
});
Identifying users
Rate limit rules use characteristics
to identify the client and apply the
limit across requests. The default is to use the client’s IP address. However,
you can specify other
characteristics such as a user
ID or other metadata from your application.
In this example we define a rate limit rule that applies to a specific user ID.
The custom characteristic is userId with the value passed as a prop on the
protect function. You can use any string for the characteristic name and any
string, number, or boolean for the value.
Create a new API route at /app/api/arcjet/route.ts:
import arcjet, { fixedWindow } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Define a custom userId characteristic.
  // See https://docs.arcjet.com/architecture#custom-characteristics
  characteristics: ["userId"],
  rules: [fixedWindow({ mode: "LIVE", window: "60s", max: 100 })],
});

export async function GET(req: Request) {
  // Pass userId as a string to identify the user. This could also be a number
  const decision = await aj.protect(req, { userId: "user123" });

  if (decision.isDenied()) {
    return NextResponse.json(
      { error: "Too Many Requests", reason: decision.reason },
      { status: 429 },
    );
  }

  return NextResponse.json({ message: "Hello world" });
}
Create a new API route at /pages/api/arcjet.ts:
import arcjet, { fixedWindow } from "@arcjet/next";
import type { NextApiRequest, NextApiResponse } from "next";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Define a custom userId characteristic.
  // See https://docs.arcjet.com/architecture#custom-characteristics
  characteristics: ["userId"],
  rules: [fixedWindow({ mode: "LIVE", window: "60s", max: 100 })],
});

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse,
) {
  // Pass userId as a string to identify the user. This could also be a number
  const decision = await aj.protect(req, { userId: "user123" });

  if (decision.isDenied()) {
    return res.status(429).json({ error: "Too Many Requests" });
  }

  res.status(200).json({ name: "Hello world" });
}
Create a new API route at /app/api/arcjet/route.js:
import arcjet, { fixedWindow } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Define a custom userId characteristic.
  // See https://docs.arcjet.com/architecture#custom-characteristics
  characteristics: ["userId"],
  rules: [fixedWindow({ mode: "LIVE", window: "60s", max: 100 })],
});

export async function GET(req) {
  // Pass userId as a string to identify the user. This could also be a number
  const decision = await aj.protect(req, { userId: "user123" });

  if (decision.isDenied()) {
    return NextResponse.json(
      { error: "Too Many Requests", reason: decision.reason },
      { status: 429 },
    );
  }

  return NextResponse.json({ message: "Hello world" });
}
Create a new API route at /pages/api/arcjet.js:
import arcjet, { fixedWindow } from "@arcjet/next";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Define a custom userId characteristic.
  // See https://docs.arcjet.com/architecture#custom-characteristics
  characteristics: ["userId"],
  rules: [fixedWindow({ mode: "LIVE", window: "60s", max: 100 })],
});

export default async function handler(req, res) {
  // Pass userId as a string to identify the user. This could also be a number
  const decision = await aj.protect(req, { userId: "user123" });

  if (decision.isDenied()) {
    return res.status(429).json({ error: "Too Many Requests" });
  }

  res.status(200).json({ name: "Hello world" });
}
To identify users with different characteristics, e.g. an IP address for
anonymous users and a user ID for logged-in users, you can create a custom
fingerprint. See the example in the custom characteristics section; a minimal
sketch of the pattern is shown below.
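The sketch assumes a hypothetical x-user-id header stands in for your real session lookup and uses the x-forwarded-for header as a stand-in for the client IP; the characteristic name fingerprint is arbitrary. It is not the exact code from the custom characteristics docs.
import arcjet, { fixedWindow } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // A single custom characteristic; the name is arbitrary
  characteristics: ["fingerprint"],
  rules: [fixedWindow({ mode: "LIVE", window: "60s", max: 100 })],
});

export async function GET(req: Request) {
  // Hypothetical session lookup: replace with however your app identifies users
  const userId = req.headers.get("x-user-id");
  // Logged-in users are tracked by ID; anonymous users fall back to their IP
  const fingerprint = userId ?? req.headers.get("x-forwarded-for") ?? "127.0.0.1";

  const decision = await aj.protect(req, { fingerprint });
  if (decision.isDenied()) {
    return NextResponse.json({ error: "Too Many Requests" }, { status: 429 });
  }
  return NextResponse.json({ message: "Hello world" });
}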
Rules
The arcjet
client is configured with one or more rules which take one or many
of the above options.
Example - single rate limit
Set a single rate limit rule on the /api/hello
API route that applies a 60
request limit per hour per IP address (the default if no characteristics
are
specified).
import arcjet, { fixedWindow } from "@arcjet/next";
const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // 60 requests per hour, tracked by IP address (the default characteristic)
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});
Example - dry run mode for new rules
Rate limits can be combined in the arcjet
client which allows you to test new
configurations in dry run mode first before enabling them in live mode. You can
inspect the results of each rule by logging them or using the Arcjet
Dashboard .
import arcjet, { fixedWindow } from "@arcjet/next";
const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  characteristics: ["ip.src"],
  rules: [
    fixedWindow({ mode: "LIVE", window: "1h", max: 60 }),
    // This rule is in dry run mode, so will log but not block.
    // max could also be a dynamic value applied after looking up a limit
    // elsewhere e.g. in a database for the authenticated user
    fixedWindow({
      mode: "DRY_RUN",
      characteristics: ['http.request.headers["x-api-key"]'],
      window: "1h",
      max: 600,
    }),
  ],
});
Per route vs middleware
Rate limit rules can be configured in two ways:
Per API route: The rule is defined in the API route itself. This allows you to
configure the rule alongside the code it is protecting, which is useful if you
want to use the decision to add context to your own code. However, it means
rules are not located in a single place.
Middleware: The rule is defined in the middleware. This allows you to
configure rules in a single place or apply them globally to all routes, but
it means the rules are not located alongside the code they are protecting.
Per route
If you define your rate limit within an API route, Arcjet assumes that the
limit applies only to that route. If you define your rate limit in middleware,
you should either use the Next.js matcher config to choose which paths to
execute the middleware for, or use request.nextUrl.pathname.startsWith.
Rate limit only on /api/*
You can use conditionals in your Next.js middleware to match multiple paths.
import type { NextFetchEvent, NextRequest } from "next/server";
import arcjet, { createMiddleware, fixedWindow } from "@arcjet/next";

export const config = {
  // matcher tells Next.js which routes to run the middleware on.
  // This runs the middleware on all routes except for static assets.
  matcher: ["/((?!_next/static|_next/image|favicon.ico).*)"],
};

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

// Pass any existing middleware with the optional existingMiddleware prop
const ajMiddleware = createMiddleware(aj);

export default function middleware(
  request: NextRequest,
  event: NextFetchEvent,
) {
  // Only run the Arcjet middleware on API routes
  if (request.nextUrl.pathname.startsWith("/api")) {
    return ajMiddleware(request, event);
  }
}
Rate limit on all routes
import arcjet, { createMiddleware, fixedWindow } from "@arcjet/next";

export const config = {
  // matcher tells Next.js which routes to run the middleware on.
  // This runs the middleware on all routes except for static assets.
  matcher: ["/((?!_next/static|_next/image|favicon.ico).*)"],
};

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

// Pass any existing middleware with the optional existingMiddleware prop
export default createMiddleware(aj);
Avoiding double protection with middleware
If you use Arcjet in middleware and individual routes, you need to be careful
that Arcjet is not running multiple times per request. This can be avoided by
excluding the API route from the middleware matcher.
For example, if you already have a rate limit defined in the API route at
/api/hello, you can exclude it from the middleware by specifying a matcher in
/middleware.ts:
import arcjet, { createMiddleware, fixedWindow } from "@arcjet/next";

export const config = {
  // The matcher prevents the middleware executing on the /api/hello API route
  // because you already installed Arcjet directly in the route
  matcher: ["/((?!_next/static|_next/image|favicon.ico|api/hello).*)"],
};

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export default createMiddleware(aj);
Decision
Arcjet provides a single protect function that is used to execute your
protection rules. This requires a request argument which is the request
context as passed to the request handler. This function returns a Promise
that resolves to an ArcjetDecision object. This contains the following
properties:
id (string): The unique ID for the request. This can be used to look up the request in the Arcjet dashboard. It is prefixed with req_ for decisions involving the Arcjet cloud API. For decisions taken locally, the prefix is lreq_.
conclusion (ArcjetConclusion): The final conclusion based on evaluating each of the configured rules. If you wish to accept Arcjet’s recommended action based on the configured rules then you can use this property.
reason (ArcjetReason): An object containing more detailed information about the conclusion.
results (ArcjetRuleResult[]): An array of ArcjetRuleResult objects containing the results of each rule that was executed.
ip (ArcjetIpDetails): An object containing Arcjet’s analysis of the client IP address. See IP analysis in the SDK reference for more information.
See the SDK reference for more details about the
rule results.
You can check whether a deny conclusion was returned by a rate limit rule by
using decision.isDenied() and decision.reason.isRateLimit().
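For example, inside a route handler you might respond with a 429 only when the deny came from a rate limit rule (a minimal sketch assuming the App Router handler shape used elsewhere on this page):
// Respond with a 429 only when the deny came from a rate limit rule
if (decision.isDenied() && decision.reason.isRateLimit()) {
  return NextResponse.json({ error: "Too Many Requests" }, { status: 429 });
}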
You can iterate through the results and check whether a rate limit was applied:
for (const result of decision.results) {
  console.log("Rule Result", result);
}
This example will log the full result as well as each rate limit rule:
Create a new API route at /app/api/hello/route.ts:
import arcjet, { fixedWindow, detectBot } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [
    fixedWindow({ mode: "LIVE", window: "1h", max: 60 }),
    detectBot({ mode: "LIVE", allow: [] }), // "allow none" will block all detected bots
  ],
});

export async function POST(req: Request) {
  const decision = await aj.protect(req);

  for (const result of decision.results) {
    console.log("Rule Result", result);
    if (result.reason.isRateLimit()) {
      console.log("Rate limit rule", result);
    }
    if (result.reason.isBot()) {
      console.log("Bot protection rule", result);
    }
  }

  if (decision.isDenied()) {
    return NextResponse.json({ error: "Forbidden" }, { status: 403 });
  }

  return NextResponse.json({ message: "Hello world" });
}
Create a new API route at /pages/api/hello.ts:
import arcjet, { fixedWindow, detectBot } from "@arcjet/next";
import type { NextApiRequest, NextApiResponse } from "next";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [
    fixedWindow({ mode: "LIVE", window: "1h", max: 60 }),
    detectBot({ mode: "LIVE", allow: [] }), // "allow none" will block all detected bots
  ],
});

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse,
) {
  const decision = await aj.protect(req);
  console.log("Decision", decision);

  for (const result of decision.results) {
    console.log("Rule Result", result);
    if (result.reason.isRateLimit()) {
      console.log("Rate limit rule", result);
    }
    if (result.reason.isBot()) {
      console.log("Bot protection rule", result);
    }
  }

  if (decision.isDenied()) {
    return res
      .status(403)
      .json({ error: "Forbidden", reason: decision.reason });
  }

  res.status(200).json({ name: "Hello world" });
}
Create a new API route at /app/api/arcjet/route.js:
import arcjet, { fixedWindow, detectBot } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [
    fixedWindow({ mode: "LIVE", window: "1h", max: 60 }),
    detectBot({ mode: "LIVE", allow: [] }), // "allow none" will block all detected bots
  ],
});

export async function POST(req) {
  const decision = await aj.protect(req);

  for (const result of decision.results) {
    console.log("Rule Result", result);
    if (result.reason.isRateLimit()) {
      console.log("Rate limit rule", result);
    }
    if (result.reason.isBot()) {
      console.log("Bot protection rule", result);
    }
  }

  if (decision.isDenied()) {
    return NextResponse.json({ error: "Forbidden" }, { status: 403 });
  }

  return NextResponse.json({ message: "Hello world" });
}
Create a new API route at /pages/api/arcjet.js:
import arcjet, { fixedWindow, detectBot } from "@arcjet/next";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [
    fixedWindow({ mode: "LIVE", window: "1h", max: 60 }),
    detectBot({ mode: "LIVE", allow: [] }), // "allow none" will block all detected bots
  ],
});

export default async function handler(req, res) {
  const decision = await aj.protect(req);
  console.log("Decision", decision);

  for (const result of decision.results) {
    console.log("Rule Result", result);
    if (result.reason.isRateLimit()) {
      console.log("Rate limit rule", result);
    }
    if (result.reason.isBot()) {
      console.log("Bot protection rule", result);
    }
  }

  if (decision.isDenied()) {
    return res
      .status(403)
      .json({ error: "Forbidden", reason: decision.reason });
  }

  res.status(200).json({ name: "Hello world" });
}
Token bucket request
When using a token bucket rule, an additional requested
prop should be passed
to the protect
function. This is the number of tokens the client is requesting
to withdraw from the bucket.
Create a new API route at /app/api/hello/route.ts:
import arcjet, { tokenBucket } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  characteristics: ["ip.src"],
  rules: [
    tokenBucket({ mode: "LIVE", refillRate: 10, interval: 60, capacity: 100 }),
  ],
});

export async function GET(req: Request) {
  // Each request will consume 50 tokens
  const decision = await aj.protect(req, { requested: 50 });

  if (decision.isDenied()) {
    return NextResponse.json(
      { error: "Too Many Requests", reason: decision.reason },
      { status: 429 },
    );
  }

  return NextResponse.json({ message: "Hello world" });
}
Create a new API route at /pages/api/hello.ts:
import arcjet, { tokenBucket } from "@arcjet/next";
import type { NextApiRequest, NextApiResponse } from "next";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  characteristics: ["ip.src"],
  rules: [
    tokenBucket({ mode: "LIVE", refillRate: 10, interval: 60, capacity: 100 }),
  ],
});

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse,
) {
  // Each request will consume 50 tokens
  const decision = await aj.protect(req, { requested: 50 });
  if (decision.isDenied()) {
    return res.status(429).json({ error: "Too Many Requests" });
  }
  res.status(200).json({ name: "Hello world" });
}
Create a new API route at /app/api/arcjet/route.js:
import arcjet, { tokenBucket } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  characteristics: ["ip.src"],
  rules: [
    tokenBucket({ mode: "LIVE", refillRate: 10, interval: 60, capacity: 100 }),
  ],
});

export async function GET(req) {
  // Each request will consume 50 tokens
  const decision = await aj.protect(req, { requested: 50 });

  if (decision.isDenied()) {
    return NextResponse.json(
      { error: "Too Many Requests", reason: decision.reason },
      { status: 429 },
    );
  }

  return NextResponse.json({ message: "Hello world" });
}
Create a new API route at /pages/api/arcjet.js:
import arcjet, { tokenBucket } from "@arcjet/next";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  characteristics: ["ip.src"],
  rules: [
    tokenBucket({ mode: "LIVE", refillRate: 10, interval: 60, capacity: 100 }),
  ],
});

export default async function handler(req, res) {
  // Each request will consume 50 tokens
  const decision = await aj.protect(req, { requested: 50 });
  if (decision.isDenied()) {
    return res.status(429).json({ error: "Too Many Requests" });
  }
  res.status(200).json({ name: "Hello world" });
}
With a rate limit rule enabled, you can access additional metadata in every
Arcjet decision result:
max (number): The configured maximum number of requests applied to this request.
remaining (number): The number of requests remaining before max is reached within the window.
window (number): The total amount of seconds in which requests are counted.
reset (number): The remaining amount of seconds in the window.
These can be used to return RateLimit HTTP headers (draft RFC) to offer the
client more detail.
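For example, you could read these fields from a rate limit rule result and set the headers yourself (a minimal sketch; the header names follow one draft of the RFC and the surrounding handler is assumed):
const headers = new Headers();
for (const result of decision.results) {
  if (result.reason.isRateLimit()) {
    // Populate draft RateLimit headers from the rule result metadata
    headers.set("RateLimit-Limit", result.reason.max.toString());
    headers.set("RateLimit-Remaining", result.reason.remaining.toString());
    headers.set("RateLimit-Reset", result.reason.reset.toString());
  }
}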
We provide the @arcjet/decorate package for decorating your responses with
appropriate RateLimit headers based on a decision.
Create a new API route at /app/api/hello/route.ts:
import arcjet, { fixedWindow } from "@arcjet/next";
import { setRateLimitHeaders } from "@arcjet/decorate";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export async function GET(req: Request) {
  const decision = await aj.protect(req);

  const headers = new Headers();
  setRateLimitHeaders(headers, decision);

  if (decision.isDenied()) {
    return NextResponse.json(
      { error: "Too Many Requests", reason: decision.reason },
      { status: 429, headers },
    );
  }

  return NextResponse.json(
    { message: "Hello world" },
    { status: 200, headers },
  );
}
Create a new API route at /pages/api/hello.ts:
import arcjet, { fixedWindow } from "@arcjet/next";
import { setRateLimitHeaders } from "@arcjet/decorate";
import type { NextApiRequest, NextApiResponse } from "next";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse,
) {
  const decision = await aj.protect(req);
  setRateLimitHeaders(res, decision);

  if (decision.isDenied()) {
    return res.status(429).json({ error: "Too Many Requests" });
  }
  res.status(200).json({ name: "Hello world" });
}
Create a new API route at /app/api/arcjet/route.js:
import arcjet, { fixedWindow } from "@arcjet/next";
import { setRateLimitHeaders } from "@arcjet/decorate";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export async function GET(req) {
  const decision = await aj.protect(req);

  const headers = new Headers();
  setRateLimitHeaders(headers, decision);

  if (decision.isDenied()) {
    return NextResponse.json(
      { error: "Too Many Requests", reason: decision.reason },
      { status: 429, headers },
    );
  }

  return NextResponse.json(
    { message: "Hello world" },
    { status: 200, headers },
  );
}
Create a new API route at /pages/api/arcjet.js:
import arcjet, { fixedWindow } from "@arcjet/next";
import { setRateLimitHeaders } from "@arcjet/decorate";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export default async function handler(req, res) {
  const decision = await aj.protect(req);
  setRateLimitHeaders(res, decision);

  if (decision.isDenied()) {
    return res.status(429).json({ error: "Too Many Requests" });
  }
  res.status(200).json({ name: "Hello world" });
}
Error handling
Arcjet is designed to fail open so that a service issue or misconfiguration does
not block all requests. The SDK will also time out and fail open after 1000ms
when NODE_ENV
or ARCJET_ENV
is development
and 500ms otherwise. However,
in most cases, the response time will be less than 20-30ms.
If there is an error condition, Arcjet will return an ERROR conclusion.
import arcjet, { fixedWindow } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export async function GET(req: Request) {
  const decision = await aj.protect(req);

  if (decision.isErrored()) {
    // Fail open by logging the error and continuing
    console.warn("Arcjet error", decision.reason.message);
    // You could also fail closed here for very sensitive routes
    //return NextResponse.json({ error: "Service unavailable" }, { status: 503 });
  }

  if (decision.isDenied()) {
    return NextResponse.json(
      { error: "Too Many Requests", reason: decision.reason },
      { status: 429 },
    );
  }

  return NextResponse.json({ message: "Hello world" });
}
import arcjet, { fixedWindow } from "@arcjet/next";
import type { NextApiRequest, NextApiResponse } from "next";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse,
) {
  const decision = await aj.protect(req);

  if (decision.isErrored()) {
    // Fail open by logging the error and continuing
    console.warn("Arcjet error", decision.reason.message);
    // You could also fail closed here for very sensitive routes
    //return res.status(503).json({ error: "Service unavailable" });
  }

  if (decision.isDenied()) {
    return res.status(429).json({ error: "Too Many Requests" });
  }

  res.status(200).json({ name: "Hello world" });
}
import arcjet, { fixedWindow } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export async function GET(req) {
  const decision = await aj.protect(req);

  if (decision.isErrored()) {
    // Fail open by logging the error and continuing
    console.warn("Arcjet error", decision.reason.message);
    // You could also fail closed here for very sensitive routes
    //return NextResponse.json({ error: "Service unavailable" }, { status: 503 });
  }

  if (decision.isDenied()) {
    return NextResponse.json(
      { error: "Too Many Requests", reason: decision.reason },
      { status: 429 },
    );
  }

  return NextResponse.json({ message: "Hello world" });
}
import arcjet, { fixedWindow } from "@arcjet/next";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export default async function handler(req, res) {
  const decision = await aj.protect(req);

  if (decision.isErrored()) {
    // Fail open by logging the error and continuing
    console.warn("Arcjet error", decision.reason.message);
    // You could also fail closed here for very sensitive routes
    //return res.status(503).json({ error: "Service unavailable" });
  }

  if (decision.isDenied()) {
    return res.status(429).json({ error: "Too Many Requests" });
  }

  res.status(200).json({ name: "Hello world" });
}
Testing
Arcjet runs the same in any environment, including locally and in CI. You can
set a rule's mode to DRY_RUN to log the results of rule execution without
blocking any requests, as in the sketch below.
We have an example test framework you can use to automatically test your rules.
Arcjet can also be tested using a sample of your traffic.
See the Testing section of the docs for details.
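For example, switching a rule to dry run mode only requires changing its mode option (a sketch based on the fixed window rule from earlier on this page):
import arcjet, { fixedWindow } from "@arcjet/next";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  rules: [
    // DRY_RUN evaluates and reports the rule result, but never blocks the request
    fixedWindow({ mode: "DRY_RUN", window: "60s", max: 100 }),
  ],
});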
Examples
Rate limit by IP address
The example below shows how to configure a rate limit on a single API route. It
applies a limit of 60 requests per hour per IP address. If the limit is
exceeded, the client is blocked for 10 minutes before being able to make any
further requests.
Applying a rate limit by IP address is the default if no
characteristics
are specified.
import arcjet, { fixedWindow } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  // characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export async function GET(req: Request) {
  const decision = await aj.protect(req);

  if (decision.isDenied()) {
    return NextResponse.json(
      { error: "Too Many Requests", reason: decision.reason },
      { status: 429 },
    );
  }

  return NextResponse.json({ message: "Hello world" });
}
import arcjet, { fixedWindow } from "@arcjet/next";
import type { NextApiRequest, NextApiResponse } from "next";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  // characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse,
) {
  const decision = await aj.protect(req);
  if (decision.isDenied()) {
    return res.status(429).json({ error: "Too Many Requests" });
  }
  res.status(200).json({ name: "Hello world" });
}
import arcjet, { fixedWindow } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  // characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export async function GET(req) {
  const decision = await aj.protect(req);
  if (decision.isDenied()) {
    return NextResponse.json(
      { error: "Too Many Requests", reason: decision.reason },
      { status: 429 },
    );
  }
  return NextResponse.json({ message: "Hello world" });
}
import arcjet, { fixedWindow } from "@arcjet/next";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  // characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export default async function handler(req, res) {
  const decision = await aj.protect(req);
  if (decision.isDenied()) {
    return res.status(429).json({ error: "Too Many Requests" });
  }
  res.status(200).json({ name: "Hello world" });
}
Rate limit by IP address with custom response
The example below is the same as the one above. However, this example also
shows a customized response rather than the default response.
import arcjet, { fixedWindow } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  // characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export async function GET(req: Request) {
  const decision = await aj.protect(req);

  if (decision.isDenied()) {
    if (decision.reason.isRateLimit()) {
      return NextResponse.json({ error: "Too Many Requests" }, { status: 429 });
    }
    return NextResponse.json(
      { error: "Forbidden", reason: decision.reason },
      { status: 403 },
    );
  }

  return NextResponse.json({ message: "Hello world" });
}
import arcjet, { fixedWindow } from "@arcjet/next";
import type { NextApiRequest, NextApiResponse } from "next";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  // characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse,
) {
  const decision = await aj.protect(req);
  if (decision.isDenied()) {
    if (decision.reason.isRateLimit()) {
      return res.status(429).json({ error: "Too Many Requests" });
    }
    return res
      .status(403)
      .json({ error: "Forbidden", reason: decision.reason });
  }
  res.status(200).json({ name: "Hello world" });
}
import arcjet, { fixedWindow } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  // characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export async function GET(req) {
  const decision = await aj.protect(req);
  if (decision.isDenied()) {
    if (decision.reason.isRateLimit()) {
      return NextResponse.json({ error: "Too Many Requests" }, { status: 429 });
    }
    return NextResponse.json(
      { error: "Forbidden", reason: decision.reason },
      { status: 403 },
    );
  }
  return NextResponse.json({ message: "Hello world" });
}
import arcjet, { fixedWindow } from "@arcjet/next";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  // characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export default async function handler(req, res) {
  const decision = await aj.protect(req);
  if (decision.isDenied()) {
    if (decision.reason.isRateLimit()) {
      return res.status(429).json({ error: "Too Many Requests" });
    }
    return res
      .status(403)
      .json({ error: "Forbidden", reason: decision.reason });
  }
  res.status(200).json({ name: "Hello world" });
}
Rate limit by AI tokens
If you are building an AI application you may be more interested in the number
of AI tokens rather than the number of HTTP requests. Popular AI APIs such as
OpenAI are billed based on the number of tokens consumed and the number of
tokens is variable depending on the request e.g. conversation length or image
size.
The token bucket algorithm is a good fit for this use case because you can vary
the number of tokens withdrawn from the bucket with every request.
The example below configures a token bucket rate limit using the
openai-chat-tokens library to
track the number of tokens used by a gpt-3.5-turbo
AI chatbot. It sets a limit
of 2,000
tokens per hour with a maximum of 5,000
tokens in the bucket. This
allows for a reasonable conversation length without consuming too many tokens.
See the arcjet-js GitHub repo for a
full example using
Next.js .
// This example is adapted from https://sdk.vercel.ai/docs/guides/frameworks/nextjs-app
import arcjet, { tokenBucket } from "@arcjet/next";
import { OpenAIStream, StreamingTextResponse } from "ai";
import OpenAI from "openai";
import { promptTokensEstimate } from "openai-chat-tokens";

// Get your site key from https://app.arcjet.com
// and set it as an environment variable rather than hard coding.
// See: https://nextjs.org/docs/app/building-your-application/configuring/environment-variables
const aj = arcjet({
  key: process.env.AJ_KEY!,
  characteristics: ["ip.src"], // track requests by IP address
  rules: [
    tokenBucket({
      mode: "LIVE", // will block requests. Use "DRY_RUN" to log only
      refillRate: 2_000, // add 2,000 tokens per interval
      interval: 3600, // 1 hour interval
      capacity: 5_000, // maximum of 5,000 tokens in the bucket
    }),
  ],
});

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY ?? "OPENAI_KEY_MISSING",
});

// Edge runtime allows for streaming responses
export const runtime = "edge";

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Estimate the number of tokens required to process the request
  const estimate = promptTokensEstimate({ messages });
  console.log("Token estimate", estimate);

  // Withdraw tokens from the token bucket
  const decision = await aj.protect(req, { requested: estimate });
  console.log("Arcjet decision", decision.conclusion);

  if (decision.reason.isRateLimit()) {
    console.log("Requests remaining", decision.reason.remaining);
  }

  // If the request is denied, return a 429
  if (decision.isDenied()) {
    if (decision.reason.isRateLimit()) {
      return new Response("Too Many Requests", { status: 429 });
    }
    return new Response("Forbidden", { status: 403 });
  }

  // If the request is allowed, continue to use OpenAI
  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    stream: true,
    messages,
  });

  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response);

  // Respond with the stream
  return new StreamingTextResponse(stream);
}
The Next.js pages router does not support streaming responses so you should use
the app router for this example. You can still use the pages/ directory for the
rest of your application. See the Next.js AI docs for details.
// This example is adapted from https://sdk.vercel.ai/docs/guides/frameworks/nextjs-app
import arcjet, { tokenBucket } from "@arcjet/next";
import { OpenAIStream, StreamingTextResponse } from "ai";
import OpenAI from "openai";
import { promptTokensEstimate } from "openai-chat-tokens";

// Get your site key from https://app.arcjet.com
// and set it as an environment variable rather than hard coding.
// See: https://nextjs.org/docs/app/building-your-application/configuring/environment-variables
const aj = arcjet({
  key: process.env.AJ_KEY,
  characteristics: ["ip.src"], // track requests by IP address
  rules: [
    tokenBucket({
      mode: "LIVE", // will block requests. Use "DRY_RUN" to log only
      refillRate: 2_000, // add 2,000 tokens per interval
      interval: 3600, // 1 hour interval
      capacity: 5_000, // maximum of 5,000 tokens in the bucket
    }),
  ],
});

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY ?? "OPENAI_KEY_MISSING",
});

// Edge runtime allows for streaming responses
export const runtime = "edge";

export async function POST(req) {
  const { messages } = await req.json();

  // Estimate the number of tokens required to process the request
  const estimate = promptTokensEstimate({ messages });
  console.log("Token estimate", estimate);

  // Withdraw tokens from the token bucket
  const decision = await aj.protect(req, { requested: estimate });
  console.log("Arcjet decision", decision.conclusion);

  if (decision.reason.isRateLimit()) {
    console.log("Requests remaining", decision.reason.remaining);
  }

  // If the request is denied, return a 429
  if (decision.isDenied()) {
    if (decision.reason.isRateLimit()) {
      return new Response("Too Many Requests", { status: 429 });
    }
    return new Response("Forbidden", { status: 403 });
  }

  // If the request is allowed, continue to use OpenAI
  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    stream: true,
    messages,
  });

  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response);

  // Respond with the stream
  return new StreamingTextResponse(stream);
}
Rate limit by API key
APIs are commonly protected by keys. You may wish to apply a rate limit based on
the key, regardless of which IPs the requests come from. To achieve this, you
can specify the characteristics Arcjet will use to track the limit.
The example below shows how to configure a rate limit on a single API route. It
applies a limit of 60 requests per hour per API key, where the key is provided
in a custom header called x-api-key. If the limit is exceeded, the client is
blocked for 10 minutes before being able to make any further requests.
Caution
If you specify different characteristics and do not include ip.src
, you may
inadvertently rate limit everyone. Be sure to include a characteristic which can
narrowly identify each client, such as an API key as shown here.
import arcjet, { fixedWindow } from "@arcjet/next";
const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  characteristics: ['http.request.headers["x-api-key"]'],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});
Global rate limit
Using Next.js middleware allows you to set a rate limit that applies to every
route:
import arcjet, { createMiddleware, fixedWindow } from "@arcjet/next";

export const config = {
  // matcher tells Next.js which routes to run the middleware on.
  // This runs the middleware on all routes except for static assets.
  matcher: ["/((?!_next/static|_next/image|favicon.ico).*)"],
};

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

// Pass any existing middleware with the optional existingMiddleware prop
export default createMiddleware(aj);
Response based on the path
You can also use the req NextRequest object to customize the response based on
the path. In this example, we’ll return a JSON response for API requests, and
an HTML response for other requests.
import arcjet, { fixedWindow } from "@arcjet/next";
import { NextRequest, NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export async function middleware(req: NextRequest) {
  const decision = await aj.protect(req);

  if (decision.isDenied()) {
    // If this is an API request, return a JSON response
    if (req.nextUrl.pathname.startsWith("/api")) {
      return new NextResponse(JSON.stringify({ error: "Too many requests" }), {
        status: 429,
        headers: { "content-type": "application/json" },
      });
    }
    // Otherwise return an HTML response
    return new NextResponse("Too many requests", {
      status: 429,
      headers: { "content-type": "text/html" },
    });
  }

  return NextResponse.next();
}
Rewrite or redirect
The NextResponse object returned to the client can also be used to rewrite or
redirect the request. For example, you might want to return a JSON response for
API route requests, but redirect all page route requests to an error page.
import arcjet, { fixedWindow } from "@arcjet/next";
import { NextRequest, NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export async function middleware(req: NextRequest) {
  const decision = await aj.protect(req);

  if (decision.isDenied()) {
    // If this is an API request, return a JSON response
    if (req.nextUrl.pathname.startsWith("/api")) {
      return new NextResponse(JSON.stringify({ error: "Too many requests" }), {
        status: 429,
        headers: { "content-type": "application/json" },
      });
    }
    // Redirect all other requests to an error page
    return NextResponse.redirect(new URL("/rate-limited", req.url));
  }

  return NextResponse.next();
}
Wrap existing handler
All the examples on this page show how you can inspect the decision to control
what to do next. However, if you just wish to send a generic 429 Too Many
Requests response you can delegate this to Arcjet by wrapping your handler
with withArcjet.
For both the Node or Edge runtime:
import arcjet, { fixedWindow, withArcjet } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export const GET = withArcjet(aj, async (req: Request) => {
  return NextResponse.json({ message: "Hello world" });
});
For the Node (default) runtime:
import arcjet, { fixedWindow, withArcjet } from "@arcjet/next";
import type { NextApiRequest, NextApiResponse } from "next";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export default withArcjet(
  aj,
  async (req: NextApiRequest, res: NextApiResponse) => {
    res.status(200).json({ name: "Hello world" });
  },
);
For the Edge runtime:
import arcjet, { fixedWindow, withArcjet } from "@arcjet/next";
import { NextRequest, NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

// Run this API route on the Edge runtime
export const config = { runtime: "edge" };

export default withArcjet(aj, async (req: NextRequest) => {
  return NextResponse.json({ message: "Hello world" });
});
For both the Node or Edge runtime:
import arcjet, { fixedWindow, withArcjet } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export const GET = withArcjet(aj, async (req) => {
  return NextResponse.json({ message: "Hello world" });
});
For the Node (default) runtime:
import arcjet, { fixedWindow, withArcjet } from "@arcjet/next";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export default withArcjet(aj, async (req, res) => {
  res.status(200).json({ name: "Hello world" });
});
For the Edge runtime:
import arcjet, { fixedWindow, withArcjet } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  //characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

// Run this API route on the Edge runtime
export const config = { runtime: "edge" };

export default withArcjet(aj, async (req) => {
  return NextResponse.json({ message: "Hello world" });
});
Edge Functions
Arcjet works in Edge Functions and with the Edge Runtime.
import arcjet, { fixedWindow } from "@arcjet/next";
import { NextRequest, NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  // characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

// Run this API route on the Edge runtime
export const config = { runtime: "edge" };

export default async function handler(req: NextRequest) {
  const decision = await aj.protect(req);
  if (decision.isDenied()) {
    return NextResponse.json(
      { error: "Too Many Requests", reason: decision.reason },
      { status: 429 },
    );
  }
  return NextResponse.json({ message: "Hello world" });
}
import arcjet, { fixedWindow } from "@arcjet/next";
import type { NextApiRequest, NextApiResponse } from "next";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  // Tracking by ip.src is the default if not specified
  // characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse,
) {
  const decision = await aj.protect(req);
  if (decision.isDenied()) {
    return res.status(429).json({ error: "Too Many Requests" });
  }
  res.status(200).json({ name: "Hello world" });
}
import arcjet, { fixedWindow } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  // characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

// Run this route on the Edge runtime
export const runtime = "edge";

export async function GET(req) {
  const decision = await aj.protect(req);
  if (decision.isDenied()) {
    return NextResponse.json(
      { error: "Too Many Requests", reason: decision.reason },
      { status: 429 },
    );
  }
  return NextResponse.json({ message: "Hello world" });
}
import arcjet, { fixedWindow } from "@arcjet/next";

const aj = arcjet({
  key: process.env.ARCJET_KEY,
  // Tracking by ip.src is the default if not specified
  // characteristics: ["ip.src"],
  rules: [fixedWindow({ mode: "LIVE", window: "1h", max: 60 })],
});

export default async function handler(req, res) {
  const decision = await aj.protect(req);
  if (decision.isDenied()) {
    return res.status(429).json({ error: "Too Many Requests" });
  }
  res.status(200).json({ name: "Hello world" });
}