Laravel TOON: Cut AI Token Costs by 40–60%

Use Laravel TOON to compress JSON data and reduce AI token usage by up to 60%, enabling larger prompts with lower API costs.

In the era of AI-integrated web development, interacting with Large Language Models (LLMs) like OpenAI’s GPT-4, Claude, or Mistral is a daily task for many Laravel developers. However, we all face the same bottleneck: token limits and API costs.

Every character you send to an LLM counts. When you are sending large datasets—like user logs, booking histories, or product catalogs—standard JSON payloads become incredibly expensive. The repeated keys, braces, and quotes of JSON eat up your context window before the model even starts processing the actual data.

What is Laravel TOON?

TOON stands for Token-Oriented Object Notation. Laravel TOON is a package designed specifically to solve the verbosity problem of JSON when communicating with AI models.

Based on the TOON format specification, this Laravel adapter transforms your PHP arrays or collections into a flattened, column-oriented format. It looks somewhat like a mix between YAML and a CSV, designed specifically to be readable by LLMs while using the minimum amount of tokens possible.

The "JSON vs. TOON" Comparison

To understand the power of TOON, you have to look at the structure.

The Problem: Standard JSON

JSON is great for APIs, but terrible for token economy. Notice how many times keys are repeated and how much syntax ({, }, ") is required:

{
    "artist": {
        "name": "Amelie Lens",
        "id": "art1"
    },
    "event": {
        "venue": {
            "city": "Amsterdam",
            "country": "NL"
        }
    }
}

The Solution: TOON Output

TOON flattens this structure using dot notation in headers and a tabular layout for values:

items[1]{artist.name,artist.id,event.venue.city,event.venue.country}:
  Amelie Lens,art1,Amsterdam,NL

The result? The structure remains understandable to the AI, but the token count drops drastically because redundant keys and syntax are stripped away.
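The savings grow with the number of records: the field names appear only once in the header, and every additional record adds just one compact data row. Here is a minimal sketch of encoding a small collection (the field names and the exact output layout are illustrative):

use Toon;

$users = [
    ['id' => 1, 'name' => 'Amelie Lens', 'country' => 'BE'],
    ['id' => 2, 'name' => 'Charlotte de Witte', 'country' => 'BE'],
    ['id' => 3, 'name' => 'Nina Kraviz', 'country' => 'RU'],
];

// Three records share a single header; only the value rows repeat
$encoded = Toon::encode($users);

// Expected shape (illustrative):
// items[3]{id,name,country}:
//   1,Amelie Lens,BE
//   2,Charlotte de Witte,BE
//   3,Nina Kraviz,RU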

Real-World Savings: The Benchmarks

Does this actually save money? According to production benchmarks and analysis of tabular data efficiency, the answer is a resounding yes.

The package consistently delivers savings in the 40–60% range across various dataset sizes. Here is the breakdown:

Records   JSON (tokens)   TOON (tokens)   Tokens Saved   Savings
10        2,868           1,223           1,645          57.4%
25        7,355           3,002           4,353          59.2%
50        14,720          5,924           8,796          59.8%
100       30,316          12,723          17,593         58.0%

If you are paying for API usage per million tokens, cutting your input size by nearly 60% effectively doubles your budget or halves your bill.
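To put rough numbers on that claim, here is a back-of-the-envelope estimate. The price per million input tokens and the monthly volume are placeholders, not real quotes; substitute your provider's actual rates:

// Hypothetical figures for illustration only
$pricePerMillionTokens = 5.00;        // your provider's input rate
$monthlyInputTokens    = 50_000_000;  // raw JSON volume per month

$jsonCost = ($monthlyInputTokens / 1_000_000) * $pricePerMillionTokens;

// Assuming ~58% savings, only ~42% of those tokens are actually sent
$toonCost = ($monthlyInputTokens * 0.42 / 1_000_000) * $pricePerMillionTokens;

echo sprintf('JSON: $%.2f vs TOON: $%.2f per month', $jsonCost, $toonCost);
// JSON: $250.00 vs TOON: $105.00 per month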

Key Use Cases

1. AI-Driven Chatbots & Analysis

When your application needs to "feed" database records to an AI for summarization or RAG (Retrieval-Augmented Generation), TOON ensures you can fit more data into the context window.
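A common pattern is to select only the columns the model actually needs before encoding, so both the TOON header and every row stay short. A rough sketch, assuming a hypothetical Booking model:

use Toon;

// Narrow the columns first: every field you drop disappears from the
// header and from every row of the TOON output
$bookings = Booking::query()
    ->select(['id', 'venue_city', 'starts_at', 'status'])
    ->latest('starts_at')
    ->limit(200)
    ->get()
    ->toArray();

// $context can now be appended to your summarization or RAG prompt
$context = Toon::encode($bookings);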

2. Laravel MCP Integration

If you are using the Model Context Protocol (MCP) to expose your app's data to AI assistants, TOON acts as the perfect encoding layer to ensure those responses are lightweight.

3. Cost-Sensitive High-Volume Apps

For backend services that make thousands of AI calls daily, the efficiency gains from TOON move straight to your bottom line.

Installation and Usage

Getting started is simple. You can view the source code on the GitHub Repository or install the package immediately via Composer:

composer require mischasigtermans/laravel-toon

Encoding Data for AI

When you prepare your prompt, simply encode your data:

use Toon;

$data = User::all()->toArray();

// Compress the data structure
$encodedData = Toon::encode($data);

// Send to your LLM
$response = OpenAI::chat()->create([
    'model' => 'gpt-4',
    'messages' => [
        ['role' => 'user', 'content' => "Analyze these users: \n" . $encodedData],
    ],
]);

Decoding AI Responses

If the AI returns data in TOON format (or if you need to reverse the process), you can easily decode it:

$originalArray = Toon::decode($encodedData);
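For example, if you instruct the model to answer in TOON, you can decode its reply straight back into a PHP array. The prompt wording and response handling below are a sketch, assuming the same openai-php client used earlier:

$response = OpenAI::chat()->create([
    'model' => 'gpt-4',
    'messages' => [
        ['role' => 'user', 'content' => "Return the three most active users in TOON format:\n" . $encodedData],
    ],
]);

// Decode the model's TOON reply back into a plain PHP array
$rows = Toon::decode($response->choices[0]->message->content);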

Configuration & Fine-Tuning

Laravel TOON isn't a "one size fits all" black box. You can configure it to optimize further by:

  • Aliasing Keys: Shorten long database column names to single letters for headers.
  • Omitting Values: Automatically strip null values or timestamps that the AI doesn't need.
  • Truncation: Set limits on string lengths to prevent outliers from blowing up your token count.
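As a rough sketch, such options could live in a published config file. The option names below are hypothetical and only illustrate the idea; check the package's actual configuration for the real keys:

// Hypothetical config/toon.php: option names are illustrative, not the
// package's real keys
return [
    // Alias verbose column names down to short header labels
    'aliases' => [
        'created_at'  => 'c',
        'description' => 'd',
    ],

    // Strip values the AI doesn't need
    'omit' => ['password', 'remember_token', 'updated_at'],

    // Cap string lengths so outliers don't blow up the token count
    'truncate' => 200,
];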

Conclusion

Laravel TOON allows developers to stop worrying about the verbosity of their data and focus on the quality of their AI interactions. By reducing token usage, you not only save money but also unlock the ability to send more complex, richer contexts to your models.

If you are building AI features in Laravel, this package is a must-have utility in your toolbox.

