Automated clients - scrapers, data harvesters, and script-based attackers - treat AI features as free compute. Without bot protection, every request from a bot reaches your AI provider and inflates your costs.
Arcjet bot detection runs inside your application, before the AI call, so denied requests never reach your provider. It classifies known bots, verifies good bots, and detects emerging threats in real time so you can control access per route with full application context (identity, subscription level, session state).
Get started
In this example we use the Vercel AI SDK to create a simple AI chat endpoint with Next.js, and Arcjet to protect it from abuse. The same principles can be applied to any AI application, including those built with other frameworks.
We assume you already have a Next.js app set up.
Install the dependencies:
```sh
# Export your Arcjet API key from https://app.arcjet.com
export ARCJET_KEY="ajkey_..."
```

```sh
npm install @arcjet/next ai @ai-sdk/openai
```

Create an AI chat endpoint:
```ts
import { openai } from "@ai-sdk/openai";
import arcjet, { detectBot, shield } from "@arcjet/next";
import type { UIMessage } from "ai";
import { convertToModelMessages, streamText } from "ai";

const aj = arcjet({
  key: process.env.ARCJET_KEY!, // Get your site key from https://app.arcjet.com
  rules: [
    // Shield protects against common web attacks e.g. SQL injection
    shield({ mode: "LIVE" }),
    // Block all automated clients - bots inflate AI costs
    detectBot({
      mode: "LIVE", // Blocks requests. Use "DRY_RUN" to log only
      allow: [], // Block all bots. See https://arcjet.com/bot-list
    }),
  ],
});

export async function POST(req: Request) {
  const decision = await aj.protect(req);

  if (decision.isDenied()) {
    if (decision.reason.isBot()) {
      return new Response("Automated clients are not permitted", {
        status: 403,
      });
    }
    return new Response("Forbidden", { status: 403 });
  }

  // Arcjet approved - now read the body and call your AI provider
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```

And hook it up to a chat UI:
```tsx
"use client";

import { useChat } from "@ai-sdk/react";
import { useState } from "react";

export default function Chat() {
  const [input, setInput] = useState("");
  const [errorMessage, setErrorMessage] = useState<string | null>(null);
  const { messages, sendMessage } = useChat({
    onError: async (e) => setErrorMessage(e.message),
  });
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map((message) => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === "user" ? "User: " : "AI: "}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case "text":
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
            }
          })}
        </div>
      ))}

      {errorMessage && (
        <div className="text-red-500 text-sm mb-4">{errorMessage}</div>
      )}

      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput("");
          setErrorMessage(null);
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={(e) => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```

Then run the server:

```sh
npm run dev
```

You will see requests being processed in your Arcjet dashboard in real time.
In this example we use LangChain to create a simple AI chat server with FastAPI, and Arcjet to protect it from abuse. The same principles can be applied to any AI application, including those built with other frameworks.
Set up the environment and install dependencies (uses uv, but you can also use pip to install the Arcjet Python SDK):
```sh
# Export your Arcjet API key from https://app.arcjet.com
export ARCJET_KEY="ajkey_..."
export ARCJET_ENV=development

# Export your OpenAI API key (used by LangChain)
export OPENAI_API_KEY="sk-..."

# Install dependencies
uv add arcjet fastapi uvicorn langchain langchain-openai
```

Create the chat server:
```python
import logging
import os

from arcjet import (
    Mode,
    arcjet,
    detect_bot,
    shield,
)
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel

app = FastAPI()

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

arcjet_key = os.getenv("ARCJET_KEY")
if not arcjet_key:
    raise RuntimeError("ARCJET_KEY is required. Get one at https://app.arcjet.com")

openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise RuntimeError(
        "OPENAI_API_KEY is required. Get one at https://platform.openai.com"
    )

llm = ChatOpenAI(model="gpt-4o-mini", api_key=openai_api_key)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("human", "{message}"),
    ]
)

chain = prompt | llm | StrOutputParser()


class ChatRequest(BaseModel):
    message: str


aj = arcjet(
    key=arcjet_key,  # Get your key from https://app.arcjet.com
    rules=[
        # Shield protects your app from common attacks e.g. SQL injection
        shield(mode=Mode.LIVE),
        # Create a bot detection rule
        detect_bot(
            mode=Mode.LIVE,
            # An empty allow list blocks all bots, which is a good default for
            # an AI chat app
            allow=[
                "CURL",  # Allow curl so we can test it (see README)
                # Uncomment to allow these other common bot categories
                # See the full list at https://arcjet.com/bot-list
                # BotCategory.MONITOR,  # Uptime monitoring services
                # BotCategory.PREVIEW,  # Link previews e.g. Slack, Discord
            ],
        ),
    ],
)


@app.post("/chat")
async def chat(request: Request, body: ChatRequest):
    # Call protect() to evaluate the request against the rules
    decision = await aj.protect(request)

    # Handle denied requests
    if decision.is_denied():
        status = 429 if decision.reason.is_rate_limit() else 403
        return JSONResponse({"error": "Denied"}, status_code=status)

    # All rules passed, proceed with handling the request
    reply = await chain.ainvoke({"message": body.message})

    return {"reply": reply}
```

Then run the server:
```sh
uv run uvicorn main:app --reload
```

And send a message to the API endpoint:

```sh
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the capital of France?"}'
```

You will see requests being processed in your Arcjet dashboard in real time.
In this example we use LangChain to create a simple AI chat server with Flask, and Arcjet to protect it from abuse. The same principles can be applied to any AI application, including those built with other frameworks.
Set up the environment and install dependencies (uses uv, but you can also use pip to install the Arcjet Python SDK):
```sh
# Export your Arcjet API key from https://app.arcjet.com
export ARCJET_KEY="ajkey_..."
export ARCJET_ENV=development

# Export your OpenAI API key (used by LangChain)
export OPENAI_API_KEY="sk-..."

# Install dependencies
uv add arcjet flask langchain langchain-openai
```

Create the chat server:
```python
import logging
import os

from arcjet import (
    Mode,
    arcjet_sync,
    detect_bot,
    shield,
)
from flask import Flask, jsonify, request
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

app = Flask(__name__)

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

arcjet_key = os.getenv("ARCJET_KEY")
if not arcjet_key:
    raise RuntimeError("ARCJET_KEY is required. Get one at https://app.arcjet.com")

openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise RuntimeError(
        "OPENAI_API_KEY is required. Get one at https://platform.openai.com"
    )

llm = ChatOpenAI(model="gpt-4o-mini", api_key=openai_api_key)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("human", "{message}"),
    ]
)

chain = prompt | llm | StrOutputParser()

aj = arcjet_sync(
    key=arcjet_key,  # Get your key from https://app.arcjet.com
    rules=[
        # Shield protects your app from common attacks e.g. SQL injection
        shield(mode=Mode.LIVE),
        # Create a bot detection rule
        detect_bot(
            mode=Mode.LIVE,
            # An empty allow list blocks all bots, which is a good default for
            # an AI chat app
            allow=[
                "CURL",  # Allow curl so we can test it (see README)
                # Uncomment to allow these other common bot categories
                # See the full list at https://arcjet.com/bot-list
                # BotCategory.MONITOR,  # Uptime monitoring services
                # BotCategory.PREVIEW,  # Link previews e.g. Slack, Discord
            ],
        ),
    ],
)


@app.post("/chat")
def chat():
    # Call protect() to evaluate the request against the rules
    decision = aj.protect(request)

    # Handle denied requests
    if decision.is_denied():
        status = 429 if decision.reason.is_rate_limit() else 403
        return jsonify(error="Denied"), status

    # All rules passed, proceed with handling the request
    body = request.get_json()
    message = body.get("message", "") if body else ""
    reply = chain.invoke({"message": message})

    return jsonify(reply=reply)


if __name__ == "__main__":
    app.run(debug=True)
```

Then run the server:
```sh
uv run python app.py
```

And send a message to the API endpoint:

```sh
curl -X POST http://localhost:5000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the capital of France?"}'
```

You will see requests being processed in your Arcjet dashboard in real time.
Configuring bot detection
`allow: []` blocks all automated clients. This is the recommended default for AI routes where no bot traffic is legitimate.
To allow specific categories or named bots from our list of known bots, add them to the allow list:
```ts
detectBot({
  mode: "LIVE",
  allow: [
    "CURL", // Allow curl-based scripts
    "CATEGORY:MONITOR", // Uptime monitoring services
    "CATEGORY:PREVIEW", // Link previewers (Slack, Discord, etc.)
  ],
})
```

Budget control
Bot protection controls who can call your AI features. To also control how much each user can consume, combine it with AI budget control:
```ts
rules: [
  detectBot({ mode: "LIVE", allow: [] }),
  tokenBucket({
    // Token bucket rate limiting is best for AI budget control
    mode: "LIVE",
    characteristics: ["userId"], // Link limits to users
    refillRate: 2_000, // Refill 2000 tokens per interval
    interval: "1h", // Refill interval
    capacity: 5_000, // Max tokens
  }),
]
```

The get started guide shows the combined pattern.