Blog | AI Engineering | February 26, 2026

The Best AI Tools to Use with Lava Spend Keys (February 2026)

AI tools are getting better at letting you bring your own API key. That is a big deal, because it means you control which models you use, how much you spend, and where your requests go. The problem is managing all of it. You end up juggling keys across a dozen tools with no shared budget, no usage visibility, and no way to shut things down if costs spike.

Lava spend keys solve this. One key works across 600+ models from OpenAI, Anthropic, Google, Mistral, xAI, DeepSeek, and more. You set spending limits, restrict which models are allowed, and track usage in real time from a single dashboard. Paste the key into any tool that accepts an OpenAI or Anthropic API key, point it at https://api.lava.so, and you are done.

Here are the best AI tools you can use with Lava spend keys today, organized by what you are trying to do.

- 600+ models accessible through a single spend key
- 24+ compatible tools: coding, chat, agents, automation
- 2-minute setup: paste key, set base URL, go

How Lava Spend Keys Work

Before diving into tools, here is the 30-second version. A spend key is an API key that routes through Lava's gateway. You create one in the Lava dashboard, choose OpenAI or Anthropic format depending on the tool, set optional spending limits and model restrictions, and paste it in.

OpenAI format (/v1/chat/completions): works with most tools. Set the API key to your lava_sk_* key and the base URL to https://api.lava.so/v1.

Anthropic format (/v1/messages): works with Claude Code, the Anthropic SDK, and tools that use the Anthropic API directly. Set the API key to your lava_sk_* key and the base URL to https://api.lava.so (without /v1, because the Anthropic SDK appends it).

Every request is tracked with cost, model, and token breakdowns. If you hit a spend limit, the key stops working until the next cycle or until you raise the limit. No surprise bills.
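As a concrete illustration of the OpenAI format, here is a sketch of the raw request a tool sends through the gateway. The buildLavaRequest helper is hypothetical (not part of any Lava SDK) and assumes the standard OpenAI chat-completions request shape:

```typescript
// Hypothetical helper showing the endpoint and headers an OpenAI-format
// spend key uses. Not a Lava SDK; the body is the standard OpenAI
// chat-completions shape.
function buildLavaRequest(apiKey: string, model: string, prompt: string) {
  return {
    // OpenAI-format keys use the /v1 base URL; the client appends
    // /chat/completions.
    url: "https://api.lava.so/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// In Node 18+ you would send it with fetch(req.url, req.init).
const req = buildLavaRequest("lava_sk_your_key_here", "claude-opus-4-6", "Hello");
```

Swapping models is just a change to the `model` string; the key and base URL stay the same.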

AI Coding Assistants

Coding assistants are the most popular use case for spend keys. These tools burn through tokens fast, especially during agentic sessions where the AI is reading, editing, and running code autonomously. A single afternoon with an AI coding agent can easily cost $20-50 in API calls. Spend keys let you see exactly where that money is going.
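To make that arithmetic concrete, here is a small sketch of how a session's cost accumulates. The token counts and per-million-token prices are hypothetical placeholders, not actual provider rates:

```typescript
// Estimate the cost of an agentic session from per-call token counts.
// Prices are hypothetical examples (USD per million tokens), not real rates.
function sessionCostUsd(
  calls: { inputTokens: number; outputTokens: number }[],
  inputPricePerM: number,
  outputPricePerM: number,
): number {
  return calls.reduce(
    (total, c) =>
      total +
      (c.inputTokens / 1_000_000) * inputPricePerM +
      (c.outputTokens / 1_000_000) * outputPricePerM,
    0,
  );
}

// An agent re-reads a large context on every call, so 50 calls at
// ~40k input tokens each adds up fast even at modest per-token prices.
const calls = Array.from({ length: 50 }, () => ({
  inputTokens: 40_000,
  outputTokens: 1_000,
}));
const cost = sessionCostUsd(calls, 10, 30); // hypothetical $10/$30 per M tokens
```

Under these assumed prices the session lands around $21, which is why input-token volume, not output length, usually dominates agentic costs.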

| Tool | API Format | Best For |
| --- | --- | --- |
| Cursor | OpenAI | Full IDE experience |
| Claude Code | Anthropic | Terminal-first coding |
| Cline | OpenAI | VS Code agentic coding |
| Roo Code | OpenAI | Multi-agent VS Code workflows |
| Continue.dev | OpenAI | Open-source Copilot alternative |
| Codex CLI | OpenAI | OpenAI's terminal agent |
| Aider | OpenAI | Git-aware terminal pair programming |

Cursor

Cursor is the most popular AI code editor right now. It is a VS Code fork with built-in AI completions, multi-file editing, and an agentic mode that can plan and execute multi-step code changes.

To use a Lava spend key: go to Settings > Models > OpenAI API Key, paste your lava_sk_* key, and set the base URL to https://api.lava.so/v1. Select any model Lava supports and start coding. Every completion and chat message is tracked in your AI Spend dashboard.

This is especially useful if you are on Cursor's free tier or want to avoid their Pro subscription. With a spend key, you pay only for the tokens you actually use, and you can switch models on the fly.

Claude Code

Claude Code is Anthropic's terminal-based agentic coding tool. It reads your codebase, writes code, runs commands, and iterates on errors autonomously. It is one of the most token-hungry tools in this list because it maintains a large context window and makes multiple model calls per task.

To use a Lava spend key, set two environment variables before launching:

export ANTHROPIC_AUTH_TOKEN="lava_sk_your_key_here"
export ANTHROPIC_BASE_URL="https://api.lava.so"

Claude Code uses the Anthropic API format, so your spend key needs to be created with the Anthropic (/v1/messages) format in the Lava dashboard.

Watch your Claude Code spend

Claude Code's agentic mode can burn through tokens quickly. Setting a daily or weekly spend limit on your key is a good idea, especially when you are experimenting with larger tasks.

Cline

Cline is an autonomous coding agent that runs inside VS Code. It can create and edit files, run terminal commands, and use the browser. It is fully model-agnostic and designed from the ground up for BYOK.

To point Cline at Lava from the command line:

cline auth -p openai -k lava_sk_your_key_here -b https://api.lava.so/v1 -m claude-opus-4-6

Cline is particularly good for developers who want agentic coding capabilities without leaving VS Code. With a spend key, you get full usage visibility into how much each agentic session costs.

Roo Code

Roo Code is a fork of Cline that adds multi-agent "modes" (Code, Architect, Debug, Ask). Each mode has a different system prompt and behavior optimized for its task. The setup is the same as Cline since it uses the same configuration format.

Continue.dev

Continue is the most popular open-source AI code assistant. It works in VS Code and JetBrains, supports tab completion, chat, and custom code actions, and lets you point at any OpenAI-compatible endpoint via its config file. Set your lava_sk_* key and https://api.lava.so/v1 as the base URL and you are set.
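The exact config schema varies by Continue version, so treat this as a sketch rather than a definitive reference; in config.json-based versions, an OpenAI-compatible model entry looks roughly like the following (the field names are an assumption, so check Continue's docs for your version):

```json
{
  "models": [
    {
      "title": "Lava",
      "provider": "openai",
      "model": "claude-opus-4-6",
      "apiKey": "lava_sk_your_key_here",
      "apiBase": "https://api.lava.so/v1"
    }
  ]
}
```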

Codex CLI

OpenAI's Codex CLI is a terminal agent that reads your repo and executes multi-step coding tasks. Add Lava as a custom provider in ~/.codex/config.toml:

model = "claude-opus-4-6"
model_provider = "lava"

[model_providers.lava]
name = "Lava"
base_url = "https://api.lava.so/v1"
env_key = "LAVA_SPEND_KEY"
wire_api = "chat"

Aider

Aider is a terminal-based pair programmer that understands your git history and makes clean, well-scoped commits. It supports any OpenAI-compatible endpoint via the --openai-api-base flag:

aider --openai-api-key lava_sk_your_key_here --openai-api-base https://api.lava.so/v1

AI Chat and Interface Tools

Not everything is about writing code. Sometimes you just need a good chat interface for brainstorming, writing, research, or quick questions. These tools let you use frontier models without a monthly subscription, paying only for what you use.

| Tool | Platform | API Format | Best For |
| --- | --- | --- | --- |
| Raycast AI | macOS | OpenAI | Quick AI from anywhere on Mac |
| TypingMind | Web | OpenAI | Power-user chat interface |
| LibreChat | Self-hosted | OpenAI | Self-hosted ChatGPT alternative |
| LobeChat | Self-hosted | OpenAI | Plugin-rich self-hosted chat |
| Open WebUI | Self-hosted | OpenAI | Ollama frontend with remote API support |
| Cherry Studio | Desktop | OpenAI | Desktop AI with offline + remote |

Raycast AI

If you are on a Mac, Raycast is one of the best ways to use AI throughout your day. Summon it with a keyboard shortcut, ask a question, translate text, rewrite an email, or generate code, all without leaving what you are doing.

To connect Lava, go to Raycast > Settings > Extensions > AI and add a custom OpenAI-compatible provider with your lava_sk_* key and https://api.lava.so/v1 as the base URL. You can use any model Lava supports, including Claude, GPT, and Gemini.

This replaces Raycast's built-in AI subscription ($8/month for Pro, $16/month for Advanced) with pay-per-token pricing through your spend key.

TypingMind

TypingMind is a one-time-purchase chat interface built entirely around BYOK: you bring your own API keys, and it gives you a polished ChatGPT-like experience with team workspaces, plugins, prompt libraries, and custom model support. It accepts any OpenAI-compatible endpoint and base URL, which makes it a perfect match for Lava spend keys.

LibreChat and LobeChat

Both are open-source, self-hosted ChatGPT alternatives. They support custom OpenAI-compatible endpoints out of the box. If you are running either of these for your team, you can point them at Lava and get centralized cost tracking across all users.

Open WebUI

Open WebUI started as a frontend for Ollama (local models) but now supports any OpenAI-compatible backend. If you already use Open WebUI, adding Lava as a remote endpoint gives you access to frontier models alongside your local ones.

AI Agent Frameworks

Agent frameworks are where spend keys become essential. Agents make multiple LLM calls per task, often in loops, and costs can escalate unpredictably. A spend key with a daily limit is the simplest way to prevent a runaway agent from draining your budget.
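To see why a hard limit matters, here is an illustrative sketch of an agent loop running against a capped budget. This is not Lava's actual gateway logic; the server-side check is simulated by a simple counter:

```typescript
// Simulated gateway-side spend cap: charges are refused once the cap
// would be exceeded. Illustrative only; Lava enforces this server-side.
class CappedBudget {
  private spentUsd = 0;
  constructor(private capUsd: number) {}
  tryCharge(costUsd: number): boolean {
    if (this.spentUsd + costUsd > this.capUsd) return false;
    this.spentUsd += costUsd;
    return true;
  }
}

// An agent loop that could otherwise run indefinitely stops as soon as
// the budget refuses a charge, instead of racking up an open-ended bill.
function runAgent(budget: CappedBudget, costPerCallUsd: number): number {
  let completedCalls = 0;
  while (budget.tryCharge(costPerCallUsd)) {
    completedCalls++; // each iteration stands in for one LLM call
    if (completedCalls > 10_000) break; // safety bound for the sketch
  }
  return completedCalls;
}

const callsMade = runAgent(new CappedBudget(5.0), 0.25); // $5 cap, $0.25/call
```

With a $5 cap and $0.25 per call, the loop halts after 20 calls regardless of how the agent's control flow behaves, which is exactly the failure mode a daily limit protects against.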

| Framework | Language | API Format | Best For |
| --- | --- | --- | --- |
| Vercel AI SDK | TypeScript | OpenAI | Next.js and React apps |
| LangChain | Python / TS | OpenAI | Chains, RAG, agents |
| CrewAI | Python | OpenAI | Multi-agent collaboration |
| OpenAI Agents SDK | Python | OpenAI | Lightweight multi-agent workflows |
| Anthropic SDK | Python / TS | Anthropic | Direct Claude API access |
| Dify | Visual | OpenAI | Visual agent workflow builder |

Vercel AI SDK

If you are building AI features in a Next.js or React app, the Vercel AI SDK is the standard. Connecting it to Lava is three lines of code:

import { createOpenAI } from '@ai-sdk/openai';

const lava = createOpenAI({
  apiKey: process.env.LAVA_SPEND_KEY,
  baseURL: 'https://api.lava.so/v1',
});

From there, use lava('claude-opus-4-6') or any other model as your model parameter. Streaming, tool calling, and structured output all work exactly as they do with the native OpenAI provider.

LangChain

LangChain's ChatOpenAI class accepts a custom base URL, which means you can swap in Lava without changing your chains, agents, or RAG pipelines:

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "claude-opus-4-6",
  apiKey: process.env.LAVA_SPEND_KEY,
  configuration: {
    baseURL: "https://api.lava.so/v1",
  },
});

CrewAI

CrewAI is one of the most popular Python frameworks for multi-agent workflows. Each agent in a crew can be configured with a different model and base URL. Point your agents at Lava and you get per-agent cost tracking through your spend key:

from crewai import LLM

llm = LLM(
    model="openai/claude-opus-4-6",
    api_key="lava_sk_your_key_here",
    base_url="https://api.lava.so/v1",
)

OpenAI Agents SDK

OpenAI's lightweight agent framework is designed to be provider-agnostic. Any OpenAI-compatible endpoint works, which makes Lava a natural fit. You can run multi-agent workflows with handoffs, tools, and guardrails while tracking all spend through a single key.

Anthropic SDK

For tools and frameworks that use the Anthropic API directly, create a spend key with the Anthropic format and configure the SDK:

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: process.env.LAVA_SPEND_KEY,
  baseURL: 'https://api.lava.so',
});

Dify

Dify is a visual workflow builder for AI agents and RAG pipelines. In the model provider settings, add a custom OpenAI-compatible provider with your spend key and Lava's base URL. Every agent run flows through Lava with full cost tracking.

Workflow Automation

AI is increasingly embedded in automation platforms. If you are building AI-powered workflows, a spend key gives you budget guardrails on what would otherwise be open-ended API consumption.

n8n

n8n is the leading open-source workflow automation platform. Its AI Agent node accepts any OpenAI-compatible LLM endpoint. Add your Lava spend key as a credential, set the base URL to https://api.lava.so/v1, and every AI node in your workflows will route through Lava. You can build multi-step AI automations (summarize emails, classify support tickets, generate reports) with centralized cost visibility.

Flowise and Langflow

Both are visual, drag-and-drop builders for LangChain-based AI workflows. They expose custom ChatOpenAI nodes where you enter an API key and base URL. Same pattern: paste your spend key, point at Lava, and you get cost tracking on every workflow execution.

One key across all your tools

Spend keys are flexible: use a single key across every tool, or create one key for your coding work, another for your automation workflows, and a third for your team's chat interface. Each key has its own spend limit, model restrictions, and usage tracking, and the AI Spend dashboard shows everything in one place.

Setting Up Your First Spend Key

Getting started takes about two minutes:

  1. Sign up at lava.so and add funds to your wallet.
  2. Go to AI Spend in the dashboard and click Create Spend Key.
  3. Choose your API format: OpenAI for most tools, Anthropic for Claude Code and the Anthropic SDK.
  4. Set your limits: pick which models are allowed and set an optional daily, weekly, or monthly spend cap.
  5. Copy the key and paste it into your tool of choice.

That is it. Every request through that key shows up in your dashboard with the model used, tokens consumed, and cost.
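The dashboard view can be thought of as an aggregation over per-request records. The UsageRecord shape below is hypothetical, purely to illustrate what "model, tokens, cost" tracking means; it is not Lava's actual schema:

```typescript
// Hypothetical per-request record, for illustration; not Lava's schema.
interface UsageRecord {
  model: string;
  inputTokens: number;
  outputTokens: number;
  costUsd: number;
}

// Roll requests up into per-model totals, the way a spend dashboard would.
function costByModel(records: UsageRecord[]): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const r of records) {
    totals[r.model] = (totals[r.model] ?? 0) + r.costUsd;
  }
  return totals;
}

const totals = costByModel([
  { model: "claude-opus-4-6", inputTokens: 40_000, outputTokens: 900, costUsd: 0.62 },
  { model: "gpt-4o-mini", inputTokens: 2_000, outputTokens: 300, costUsd: 0.01 },
  { model: "claude-opus-4-6", inputTokens: 35_000, outputTokens: 700, costUsd: 0.55 },
]);
```

The same roll-up by tool or by key is what lets you see, for example, that your coding agent accounts for most of the month's spend.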

For step-by-step setup instructions for specific tools, see our Tool Setup Guides.

Why Use a Spend Key Instead of Direct API Keys?

You might wonder why you would route through Lava instead of using provider keys directly. A few reasons:

One key, all models. Instead of signing up for OpenAI, Anthropic, Google, Mistral, and xAI separately, you get access to 600+ models through one key. Switch models by changing a string. No new accounts, no new billing relationships.

Budget controls that actually work. Direct API keys have no spending limits. You set a billing alert and hope for the best. Lava spend keys enforce limits at the gateway level. When your daily cap is hit, requests stop. No surprise bills. For more on why this matters, see our guide to AI spend management.

Visibility across tools. When you use the same spend key across Cursor, Claude Code, and your n8n workflows, all the usage shows up on one dashboard. You can see exactly which tool is costing you what.

Model restrictions. You can create a spend key that only allows specific models. Give your intern a key that only works with GPT-4o Mini. Give your production agent a key locked to Claude Opus. This is not possible with direct provider keys.

Instant revocation. If a key leaks, revoke it from the dashboard and it stops working immediately. No waiting for provider support. No worrying about what else that key has access to.

How Lava Helps

Lava gives you a single gateway to 600+ AI models with built-in spend controls, usage tracking, and budget enforcement. Spend keys are the simplest way to use AI tools without managing multiple provider accounts or worrying about runaway costs.

The gateway itself adds no per-request fees and no markup on provider costs: you pay the provider's list price for the tokens you use, plus a small service fee.

Whether you are a solo developer using Claude Code and Cursor, a team running AI workflows in n8n, or a company building with the Vercel AI SDK, spend keys give you one place to manage all of it.

Create your first spend key and start using it with any of the tools in this guide.
