PHP isn't the first language that comes to mind for building AI systems (Python, R, and Julia dominate the model layer), but I've found it can be powerful as the orchestration and delivery layer when paired with a framework like Laravel. Laravel has a lot to offer for building production AI apps: authentication, request validation, queues, rate limiting, and more.
I've built numerous AI agents into production Laravel projects, and I've found that the package ecosystem really complements the core framework features, letting you move even faster. In this article, I'll give a quick overview of the packages I've found most useful so far.
OpenAI PHP
GitHub link (Laravel version)
This is a robust client for OpenAI's API, maintained by Nuno Maduro and other open-source contributors. There's also a framework-agnostic version of the client if you prefer that.
I've leaned heavily on this package for working with OpenAI's chat and embeddings models in my apps. The package supports many OpenAI endpoints, so check out the docs!
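To give a feel for the package, here's a minimal sketch of a chat completion call using its Laravel facade. It assumes you've installed the package, published its config, and set your OPENAI_API_KEY; the model choice and prompts are placeholders.

```php
<?php

use OpenAI\Laravel\Facades\OpenAI;

// Send a chat request; model and messages are illustrative.
$response = OpenAI::chat()->create([
    'model' => 'gpt-4o',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a helpful assistant for our map search site.'],
        ['role' => 'user', 'content' => 'What areas does this site cover?'],
    ],
]);

// The assistant's reply text.
$answer = $response->choices[0]->message->content;
```

The embeddings endpoint follows the same pattern via OpenAI::embeddings()->create(), which pairs nicely with the vector packages below.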
One project where I've used this is an AI search agent for an industry-specific map/search website I built for my company. The model (currently gpt-4o gives the best results) can answer questions about the website or the industry, or control the map to show records based on a user request (e.g., "show me all locations in Texas").
pgvector-php
GitHub link
For RAG and countless other AI/ML applications, a vector database is essential, and this package brings support for the pgvector PostgreSQL extension to Laravel.
For Laravel devs, this package gives you a vector column type to use in database migrations and a HasNeighbors trait for your models. Setting these up allows you to store vectors and query for the nearest records with Eloquent.
If you are using openai-php/laravel to get text embeddings, you might do something like this in a migration to prepare your database to store the vector data:
Schema::create('items', function (Blueprint $table) {
$table->vector('embedding', 1536); // for text-embedding-3-small
});
Pretty neat!
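Once the column exists, querying for similar records is just as tidy. A sketch under assumed names (an Item model with the HasNeighbors trait and a vector cast on the embedding column; $queryEmbedding would come from your embeddings API call):

```php
<?php

use Pgvector\Laravel\Distance;

// Find the 5 items whose stored embeddings are nearest to the
// query vector, using cosine distance.
$neighbors = Item::query()
    ->nearestNeighbors('embedding', $queryEmbedding, Distance::Cosine)
    ->take(5)
    ->get();
```

The package also supports other distance metrics (such as L2) via the same Distance enum.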
People think of vectors as being useful only (or at least primarily) for RAG, but I believe they are more versatile than many people realize. As a fun personal project, I built an app where I can generate "music" from text prompts and store the created samples. I stored the embeddings of the prompts in a vector column in Postgres and added a feature to search my "AI music" clip collection via vector nearest neighbors. So if I typed "bluesy, improv guitar solo" into the search, it would find the clips whose prompts were semantically closest to my query in this higher-dimensional space.
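Under the hood, that kind of search boils down to comparing vectors, most commonly with cosine similarity. Here's a plain-PHP sketch with made-up 3-dimensional "embeddings" (real ones have hundreds or thousands of dimensions, and in production the database does this math for you):

```php
<?php

// Cosine similarity between two equal-length vectors: the dot product
// divided by the product of the magnitudes. 1.0 means same direction
// (most similar); 0.0 means orthogonal (unrelated).
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;
    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value * $value;
        $normB += $b[$i] * $b[$i];
    }
    return $dot / (sqrt($normA) * sqrt($normB));
}

// Toy clip "embeddings" keyed by their original prompts.
$query = [0.9, 0.1, 0.3];
$clips = [
    'bluesy guitar solo' => [0.8, 0.2, 0.4],
    'ambient synth pad'  => [0.1, 0.9, 0.2],
];

// Rank clips by similarity to the query, highest first.
uasort($clips, fn ($a, $b) => cosineSimilarity($query, $b) <=> cosineSimilarity($query, $a));
```

With these toy numbers, the bluesy clip ranks first because its vector points in nearly the same direction as the query.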
probots-io/pinecone-php
GitHub link
If you aren't running PostgreSQL or otherwise need an external, dedicated vector database, Pinecone can be a solid hosted option. This package gives you a PHP client for it, which I've used in projects including an in-progress Statamic addon.
ChatGPT Agent (Filament Plugin)
GitHub link
This is a newer one, but it makes it super easy to add an OpenAI-powered chatbot to your Filament panels. Powerful features include using the current page content as context and defining functions in your Laravel app that the AI is allowed to call.
In a multi-tenant app, you can do something like this when adding the plugin to a panel provider (along with an appropriate migration on the tenant table) so that tenants can choose whether they want their data exposed to the AI service:
->enabled(fn () => Filament::getTenant()->ai_assistant_enabled ?? false)
Honorable Mention: webR — ML in the Frontend?
A surprising project I’ve been keeping an eye on is webR, which compiles the R programming language (focused on stats and often used in data science) to WebAssembly so it can run entirely in the browser. I even built a quick demo that fits a linear model (glm) and plots it using R’s ggplot2 — all without any server calls to R! (note: not mobile optimized)
While my example is simple and not particularly practical, it highlights two nontrivial points:
- You can run code for fitting models in the browser
- Many packages from R's ecosystem can be used in this environment (matrix math, ML visualization, etc.)
I was surprised to find that even the ranger package for fast random forests works inside webR. That means you can run real ensemble-based ML models in the browser.
It’s obviously not built for production deep learning, but it hints at a broader trend: lightweight machine learning and statistical modeling happening right in the frontend. WebAssembly will continue to drive a proliferation of options for devs looking to deploy models to the frontend. That could mean better privacy (no user data sent to a server), server cost savings, and some creative UX possibilities.
What could your frontend do with this kind of power?
Conclusion
AI is transforming what users and stakeholders expect out of our web apps, and Laravel is more than capable of rising to the challenge. Whether you’re integrating powerful APIs like OpenAI, adding semantic search with vectors, building admin tools with conversational agents, or even experimenting with client-side modeling, the Laravel ecosystem gives you the structure and flexibility to ship fast. PHP might not be the star of the AI model layer, but it’s absolutely a reliable driver of production-ready AI apps.
I hope the packages and tips I've mentioned here help you ship AI features quickly for your Laravel apps! Let me know in the comments if I've missed one of your favorites.