LLM Token Counter
Estimate token counts and costs for GPT-4, Claude, Llama, and other LLMs in real time
Input Text
The input panel reports live counters as you type: characters, words, lines, and an estimated GPT-4 token count at ~4 chars/token.
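A minimal sketch of how these counters can be computed; the function name `text_stats` and the line-counting convention are assumptions, and the token estimate applies the ~4 chars/token heuristic quoted above.

```python
def text_stats(text: str) -> dict:
    """Live counters: characters, words, lines, and a rough GPT-4 token estimate."""
    return {
        "characters": len(text),
        "words": len(text.split()),                # whitespace-delimited words
        "lines": text.count("\n") + 1 if text else 0,
        "gpt4_est_tokens": round(len(text) / 4),   # ~4 chars/token heuristic
    }

print(text_stats("Hello, world!\nSecond line."))
# {'characters': 26, 'words': 4, 'lines': 2, 'gpt4_est_tokens': 6}
```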
Model Comparison
Click a price to edit it. Prices are per 1M tokens in USD.
| Model | Provider | Tokens | Context | Input $/1M | Output $/1M | Input cost | Output cost |
|---|---|---|---|---|---|---|---|
| GPT-4o | OpenAI | — | — | — | — | — | — |
| GPT-4 Turbo | OpenAI | — | — | — | — | — | — |
| GPT-3.5 Turbo | OpenAI | — | — | — | — | — | — |
| Claude 3.5 Sonnet | Anthropic | — | — | — | — | — | — |
| Claude 3 Opus | Anthropic | — | — | — | — | — | — |
| Claude 3 Haiku | Anthropic | — | — | — | — | — | — |
| Llama 3 70B | Meta | — | — | — | — | — | — |
| Llama 3 8B | Meta | — | — | — | — | — | — |
| Gemini 1.5 Pro | Google | — | — | — | — | — | — |
| Gemini 1.5 Flash | Google | — | — | — | — | — | — |
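The cost columns follow directly from the per-1M-token prices: cost = tokens × price ÷ 1,000,000. A minimal sketch; the `token_cost` helper and the example price are illustrative, not the tool's defaults.

```python
def token_cost(tokens: int, price_per_million: float) -> float:
    """Cost in USD for `tokens` at a given per-1M-token price."""
    return tokens * price_per_million / 1_000_000

# Hypothetical example: 1,200 input tokens at an illustrative $5.00/1M rate.
print(f"${token_cost(1_200, 5.00):.6f}")  # $0.006000
```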
Tokenization Strategies
- GPT-style (BPE): byte-pair encoding approximation, ~4 chars/token. Used for OpenAI and Google models.
- Claude-style: Anthropic tokenizer approximation, ~3.5 chars/token; slightly more efficient vocabulary.
- Llama (SentencePiece): SentencePiece BPE approximation, ~3.8 chars/token. Used for Meta Llama models.
- Whitespace: simple split on whitespace, counting words only. Useful as a lower-bound baseline.
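Each strategy above reduces to dividing the character count by a fixed chars/token ratio, except Whitespace, which counts words. A minimal sketch under those stated ratios; the dictionary and function names are illustrative.

```python
CHARS_PER_TOKEN = {
    "gpt":    4.0,   # GPT-style BPE approximation
    "claude": 3.5,   # Claude-style approximation
    "llama":  3.8,   # Llama SentencePiece approximation
}

def estimate_tokens(text: str, style: str = "gpt") -> int:
    """Approximate token count using the chars/token ratios above.
    'whitespace' falls back to a word count (lower-bound baseline)."""
    if style == "whitespace":
        return len(text.split())
    return round(len(text) / CHARS_PER_TOKEN[style])

print(estimate_tokens("Hello, world!"))                # 3  (13 chars / 4.0)
print(estimate_tokens("Hello, world!", "whitespace"))  # 2  (word count)
```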
Note: All counts are estimates. Exact token counts require the model's official tokenizer library (e.g., tiktoken for OpenAI).
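For OpenAI models, an exact count with tiktoken is standard library usage along these lines (install with `pip install tiktoken`; the sample string is arbitrary):

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")  # resolve the model's encoding
tokens = enc.encode("Estimate token counts and costs in real time")
print(len(tokens))  # exact GPT-4 token count, unlike the chars/token estimates
```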