In its May 11, 2026 weekly update, Vercel highlighted that it had open-sourced deepsec, a security harness powered by coding agents. The main announcement, "Introducing deepsec", explains that the tool is designed to surface hard-to-find vulnerabilities in large codebases and can run on your own infrastructure.
The interesting part is not just "AI for security"; it is the workflow: use agents to investigate code, revalidate findings and turn the results into actionable work.
Why deepsec fits the current web stack
Modern web applications expose more attack surface than ever:
- public endpoints;
- webhooks;
- AI integrations;
- agent actions;
- internal dashboards;
- automations touching sensitive data.
A traditional scanner catches known patterns, but many real vulnerabilities depend on data flow, context and decisions spread across multiple files. That is where a coding agent can add value, provided the workflow is controlled.
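To make that concrete, here is a contrived TypeScript sketch, my own example rather than a deepsec finding: neither function looks wrong in isolation, and the path traversal only appears when you follow the value across the module boundary.

```ts
// Contrived sketch of a cross-file data-flow bug. Neither function is
// suspicious on its own; the vulnerability only shows up when you trace
// the value across modules, which is what pattern scanners miss.
import { readFileSync } from "node:fs";
import { join } from "node:path";

// reports/storage.ts — looks like an innocent helper in isolation.
export function loadReport(name: string): string {
  // No normalization: "name" is trusted to be a plain filename here.
  return readFileSync(join("/var/reports", name), "utf8");
}

// routes/reports.ts — also looks fine: it "just" forwards a parameter.
export function handleReportRequest(query: { file: string }): string {
  // "../../etc/passwd" flows straight into loadReport: path traversal.
  return loadReport(query.file);
}
```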
The workflow worth watching
Vercel describes a process with several phases: initial scan, agent investigation, revalidation, enrichment and export. That is more rigorous than simply asking a model to "find bugs".
Revalidation matters because security already has enough noise. If an agent produces 80 false positives, the team will stop trusting the tool. If a second pass checks severity and evidence, the output gets much closer to a useful work queue.
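Here is a minimal sketch of what that phased loop could look like; every name and shape is my own assumption for illustration, not deepsec's actual API.

```ts
// Minimal sketch of the phased workflow: scan/investigate, revalidate,
// export. Types and function names are assumptions, not deepsec's API.
type Finding = {
  area: string;
  severity: "low" | "medium" | "high";
  evidence?: string;          // concrete proof collected by the agent
  recommendedOwner?: string;  // team that should pick the finding up
};

// Phases 1-2: the initial scan plus agent investigation produce raw
// candidates (stubbed out here).
async function investigate(repoPath: string): Promise<Finding[]> {
  return [];
}

// Phase 3: revalidation drops anything without evidence, which is what
// keeps the queue from filling up with 80 false positives.
function revalidate(candidates: Finding[]): Finding[] {
  return candidates.filter((f) => f.evidence !== undefined);
}

// Phases 4-5: enrichment and export turn survivors into a work queue.
function exportQueue(findings: Finding[]): string {
  return JSON.stringify(findings, null, 2);
}

async function run(repoPath: string): Promise<void> {
  console.log(exportQueue(revalidate(await investigate(repoPath))));
}
```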
How I would use it in Laravel or Astro
In a Laravel/PHP product, I would start with sensitive areas (scoped roughly as in the sketch after this list):
- authentication controllers;
- policies and gates;
- Stripe, GitHub or external provider webhooks;
- endpoints that call AI models;
- file uploads;
- administrative panels.
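A hypothetical scope definition for that list might look like the following. The shape and paths are my invention, not deepsec's config format; the point is that the scope is written down and reviewable instead of "scan everything".

```ts
// Hypothetical scan scope for a Laravel app (illustrative shape only).
const laravelScope = {
  include: [
    "app/Http/Controllers/Auth/**",     // authentication controllers
    "app/Policies/**",                  // policies and gates
    "app/Http/Controllers/Webhooks/**", // Stripe, GitHub, other providers
    "app/Services/Ai/**",               // endpoints that call AI models
    "app/Http/Controllers/Uploads/**",  // file uploads
    "app/Http/Controllers/Admin/**",    // administrative panels
  ],
  focus: ["authz", "webhook-signatures", "file-uploads", "prompt-injection"],
};
```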
In Astro, I would review forms, API routes, Open Graph generation, server-side integrations and any endpoint that processes external input.
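As a concrete target, here is the kind of check I would want an agent to verify in an Astro API route. The header names and secret are assumptions, and the timestamp tolerance is exactly the detail flagged in the sample finding below.

```ts
// Webhook handler sketch for an Astro API route. Header names and the
// shared secret are assumptions for illustration.
import type { APIRoute } from "astro";
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = process.env.WEBHOOK_SECRET ?? "";
const TOLERANCE_SECONDS = 300; // reject events older than 5 minutes

export const POST: APIRoute = async ({ request }) => {
  const body = await request.text();
  const timestamp = Number(request.headers.get("x-webhook-timestamp"));
  const signature = request.headers.get("x-webhook-signature") ?? "";

  // The timestamp tolerance check that is often missing: without it, a
  // captured request with a valid signature can be replayed forever.
  if (!Number.isFinite(timestamp) ||
      Math.abs(Date.now() / 1000 - timestamp) > TOLERANCE_SECONDS) {
    return new Response("stale or missing timestamp", { status: 400 });
  }

  // Constant-time comparison of the HMAC over timestamp + body.
  const expected = createHmac("sha256", SECRET)
    .update(`${timestamp}.${body}`)
    .digest();
  const given = Buffer.from(signature, "hex");
  if (given.length !== expected.length || !timingSafeEqual(given, expected)) {
    return new Response("bad signature", { status: 401 });
  }

  return new Response("ok", { status: 200 });
};
```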
A useful finding should look like this:
```json
{
  "area": "webhook_signature_validation",
  "severity": "high",
  "evidence": "missing timestamp tolerance check",
  "recommended_owner": "backend",
  "requires_manual_review": true
}
```
Without evidence and ownership, a finding is not work. It is noise.
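That rule is easy to encode. Here is a sketch of the triage step, with field names taken from the sample finding above and a WorkItem shape of my own:

```ts
// Triage rule: a finding becomes a work item only when it carries both
// evidence and an owner. RawFinding mirrors the sample JSON; WorkItem is
// an assumed shape for whatever tracker the team uses.
type RawFinding = {
  area: string;
  severity: string;
  evidence?: string;
  recommended_owner?: string;
  requires_manual_review?: boolean;
};

type WorkItem = { title: string; assignee: string; needsReview: boolean };

function toWorkItem(f: RawFinding): WorkItem | null {
  if (!f.evidence || !f.recommended_owner) return null; // noise, not work
  return {
    title: `[${f.severity}] ${f.area}: ${f.evidence}`,
    assignee: f.recommended_owner,
    needsReview: f.requires_manual_review ?? true,
  };
}
```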
AI does not replace security judgment
The risk with tools like deepsec is over-delegation. An agent can trace unexpected paths through a codebase, but it does not know every business constraint, and it should not change sensitive code without human review.
My rule would be: agents for exploration and preparation; humans for prioritization, decisions and merges.
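As a toy encoding of that rule (the PR shape is hypothetical; in practice this lives in branch protection rules, not application code):

```ts
// Toy merge gate: agents may open the change, humans decide the merge.
type PullRequest = {
  authorIsAgent: boolean;
  touchesSensitivePaths: boolean;
  humanApprovals: number;
};

function canMerge(pr: PullRequest): boolean {
  // Agent-authored changes to sensitive code need an extra human approval.
  const required = pr.authorIsAgent && pr.touchesSensitivePaths ? 2 : 1;
  return pr.humanApprovals >= required;
}
```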
Takeaway for teams that deploy quickly
deepsec points to a real need: codebases are growing faster than humans can review them manually. If agents can investigate sensitive areas and deliver better-structured findings, they help.
But the value only appears when the team maintains discipline: clear scopes, reviewable results, CI, tests and human review. Agentic security is not magic; it is a new way to run technical investigation with broader coverage.