Laravel 13: Built for the age of AI
From a first-party AI SDK to typed PHP attributes and smarter caching — Laravel 13 is the most significant release in years. Here's everything you need to know, with real implementation steps.
First-party AI SDK
Laravel 13 ships with an official, first-party AI SDK — laravel/ai.
Instead of managing separate packages for each provider, you get one
unified interface for text generation, tool-calling agents, embeddings,
audio, image generation, and vector databases. Switch from Claude to
GPT-4o by changing a single environment variable.
1. Install the AI SDK via Composer

composer require laravel/ai
2. Publish the config and database migrations

This creates config/ai.php and migration tables for agent conversation history.

php artisan vendor:publish --provider="Laravel\Ai\AiServiceProvider"
php artisan migrate
3. Add your API keys to .env

# .env
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxx
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxx
4. Configure providers in config/ai.php

Set your default provider and enable automatic failover. If Anthropic is the default and OpenAI is the fallback, the SDK handles the switch transparently.

// config/ai.php
return [
    'default' => env('AI_PROVIDER', 'anthropic'),

    'providers' => [
        'anthropic' => [
            'driver' => 'anthropic',
            'key' => env('ANTHROPIC_API_KEY'),
            'fallback' => 'openai', // auto failover
        ],
        'openai' => [
            'driver' => 'openai',
            'key' => env('OPENAI_API_KEY'),
        ],
    ],
];
5. Generate an Agent class

Agents are the core building block — each agent is a dedicated PHP class with instructions, tools, and output schema.

php artisan make:agent SupportAssistant
6. Define your agent — with per-provider options

The providerOptions() method lets you pass provider-specific settings. For Anthropic, you can enable extended thinking; for OpenAI, you can set reasoning effort and frequency penalties.

// app/Ai/Agents/SupportAssistant.php
namespace App\Ai\Agents;

use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Contracts\HasProviderOptions;
use Laravel\Ai\Enums\Lab;
use Laravel\Ai\Promptable;

class SupportAssistant implements Agent, HasProviderOptions
{
    use Promptable;

    public function instructions(): string
    {
        return 'You are a helpful customer support agent. Answer clearly and escalate when unsure.';
    }

    public function providerOptions(Lab|string $provider): array
    {
        return match ($provider) {
            Lab::Anthropic => [
                'thinking' => ['budget_tokens' => 1024],
            ],
            Lab::OpenAI => [
                'reasoning' => ['effort' => 'low'],
                'frequency_penalty' => 0.5,
            ],
            default => [],
        };
    }
}
7. Prompt your agent from a controller or service

use App\Ai\Agents\SupportAssistant;
use Laravel\Ai\Enums\Lab;

// Use the default provider (Anthropic)
$response = (new SupportAssistant)->prompt(
    'Customer says: my order has not arrived after 2 weeks.'
);

// Explicitly target OpenAI for this request
$response = (new SupportAssistant)->prompt(
    'Summarise this legal disclaimer for plain-English display.',
    provider: Lab::OpenAI
);

// Use failover: try Anthropic first, fall back to OpenAI
$response = (new SupportAssistant)->prompt(
    'Classify this ticket as urgent or routine.',
    provider: [Lab::Anthropic, Lab::OpenAI]
);
Use the Lab::Anthropic and Lab::OpenAI enum values instead of plain strings — they're type-safe and auto-complete in your IDE.
The SDK also includes built-in SimilaritySearch tools for RAG (retrieval-augmented generation), provider tools like WebSearch and WebFetch, and first-class testing support via SupportAssistant::fake().
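As a rough sketch of what that testing support could look like in a feature test: the fake() call comes from the SDK as described above, but the assertion method name (assertPrompted) and its callback signature are assumptions for illustration, not confirmed API.

```php
use App\Ai\Agents\SupportAssistant;

public function test_ticket_submission_prompts_the_agent(): void
{
    // Swap the real agent for a fake — no provider calls are made.
    SupportAssistant::fake();

    $this->post('/tickets', ['body' => 'Where is my order?']);

    // Hypothetical assertion: verify the agent received the ticket text.
    SupportAssistant::assertPrompted(
        fn (string $prompt) => str_contains($prompt, 'Where is my order?')
    );
}
```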
PHP 8.3 is now mandatory
Laravel 13 drops support for PHP 8.1 and 8.2. PHP 8.3 is the minimum, and PHP 8.5 (released November 2025) is also supported — bringing further JIT improvements and native URI handling.
Key benefits unlocked by requiring 8.3:
- JIT improvements — better throughput for CPU-bound workloads, including AI inference pipelines
- Typed class constants — constants now enforce type safety at the PHP level
- json_validate() — validate JSON without decoding it, useful when handling large AI responses
// PHP 8.3: typed class constants
class ApiConfig {
const string DEFAULT_MODEL = 'claude-sonnet-4-6';
const int MAX_TOKENS = 4096;
}
// json_validate() — no need to decode first
if (json_validate($aiResponse)) {
$data = json_decode($aiResponse, true);
}
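To see the benefit in practice, here is a minimal, framework-free sketch of the guard-then-decode pattern; decodeAiJson() is a hypothetical helper name, not part of Laravel.

```php
<?php

// Sketch: guard AI responses with json_validate() before decoding.
// decodeAiJson() is a hypothetical helper, not a Laravel API.
function decodeAiJson(string $raw): ?array
{
    // json_validate() scans the string without building the value tree,
    // so malformed payloads are rejected cheaply.
    if (!json_validate($raw)) {
        return null;
    }

    return json_decode($raw, true);
}
```

Returning null instead of throwing keeps the caller's happy path simple: one null check instead of a try/catch around every decode.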
Check your current version with php -v and update your hosting environment or Docker image to PHP 8.3+ before deploying a Laravel 13 app.
Native PHP Attributes for cleaner code
Laravel 13 leans heavily into PHP Attributes — the #[...]
syntax — across models, jobs, commands, and routing. This reduces the
need for config-heavy patterns and makes code more declarative and
self-documenting.
// Before: config-based job setup
class ProcessAiReport implements ShouldQueue {
public string $queue = 'ai';
public int $tries = 3;
public int $timeout = 120;
}
// After: attribute-driven (Laravel 13)
#[Queue('ai'), Tries(3), Timeout(120)]
class ProcessAiReport implements ShouldQueue {
// clean and concise
}
Attributes can also be used on Eloquent models for casting, relations, and validation, reducing the need for long $casts arrays and boot methods. The code becomes easier to scan at a glance.
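For a concrete taste of that model-level style: observers and global scopes can already be registered declaratively with the ObservedBy and ScopedBy attributes (present in recent Laravel versions); TicketObserver and ActiveScope below are placeholder names for illustration.

```php
use App\Models\Scopes\ActiveScope;
use App\Observers\TicketObserver;
use Illuminate\Database\Eloquent\Attributes\ObservedBy;
use Illuminate\Database\Eloquent\Attributes\ScopedBy;
use Illuminate\Database\Eloquent\Model;

// Register an observer and a global scope declaratively,
// instead of wiring them up in a boot() method.
#[ObservedBy(TicketObserver::class)]
#[ScopedBy(ActiveScope::class)]
class Ticket extends Model
{
    //
}
```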
Cache::touch() — smart cache refresh
A small but practical addition: Cache::touch()
extends a cache entry's TTL without re-fetching or rewriting the value.
Ideal for keeping active sessions alive, or preventing expensive AI
embeddings from expiring mid-session.
// Extend TTL without touching the value
Cache::touch('user:embeddings:42');
// Extend with a specific duration
Cache::touch('conversation:context', now()->addHours(2));
// Practical use: refresh AI session context on every request
if (Cache::has('ai:session:'.$userId)) {
Cache::touch('ai:session:'.$userId);
}
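One way to wire the session-refresh idea into an app, as a sketch: a middleware that extends the context TTL on every authenticated request. The middleware name and cache-key format are ours; Cache::touch() behaves as described above.

```php
use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Cache;

class RefreshAiSession
{
    public function handle(Request $request, Closure $next)
    {
        $key = 'ai:session:'.$request->user()?->id;

        // Keep the cached AI context alive for another 30 minutes
        // without re-reading or rewriting the (potentially large) value.
        if ($request->user() && Cache::has($key)) {
            Cache::touch($key, now()->addMinutes(30));
        }

        return $next($request);
    }
}
```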
Performance improvements
Laravel 13's performance gains come from two directions: PHP 8.3's improved JIT compiler and internal framework optimisations to request handling and service resolution. For most apps, this is a free throughput boost with no code changes required.
For AI-heavy workloads — like running multiple concurrent agent prompts, processing embeddings in queued jobs, or streaming large language model responses — the combination of JIT improvements and smarter service container resolution means more requests handled per second at the same infrastructure cost.