On May 12, 2026, Google published a practical guide to building long-running agents with ADK. The example uses employee onboarding, but the point is broader: a production agent cannot depend on an infinite chat history.
That distinction between a chat and a workflow matters for web teams. A chatbot answers in minutes. A real workflow can last days: waiting for a signature, confirming a payment, receiving support logs, reviewing a document, or approving an action.
## Why stateless agents break
Many agent examples work by saving every message and replaying the whole conversation to the model. That can be acceptable for a short session. It breaks down when the process spans days or weeks.
The failure modes are predictable:
- old context pollutes the current decision;
- token cost grows without adding value;
- the model may assume steps that never happened;
- a container restart can leave the process ambiguous.
Google’s guide treats these workflows as durable processes with explicit state and resume events. That looks much more like serious backend engineering than a chat demo.
## Durable state beats infinite history
The idea I care about most is the state machine. Instead of asking the model to remember where the workflow is, the system tells it exactly which step is current.
```typescript
type OnboardingStep =
  | "START"
  | "WELCOME_SENT"
  | "DOCUMENTS_SIGNED"
  | "IT_PROVISIONED"
  | "HARDWARE_DELIVERED"
  | "COMPLETED";
```
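One way to make "the system tells the model" concrete is an explicit transition map owned entirely by the backend. This is a minimal sketch with illustrative names, not the guide's actual implementation; the step type is repeated so the snippet is self-contained:

```typescript
// Step type from above, repeated so this sketch stands alone.
type OnboardingStep =
  | "START"
  | "WELCOME_SENT"
  | "DOCUMENTS_SIGNED"
  | "IT_PROVISIONED"
  | "HARDWARE_DELIVERED"
  | "COMPLETED";

// Which transitions are legal from each step. The model never infers this;
// the backend enforces it.
const transitions: Record<OnboardingStep, OnboardingStep[]> = {
  START: ["WELCOME_SENT"],
  WELCOME_SENT: ["DOCUMENTS_SIGNED"],
  DOCUMENTS_SIGNED: ["IT_PROVISIONED"],
  IT_PROVISIONED: ["HARDWARE_DELIVERED"],
  HARDWARE_DELIVERED: ["COMPLETED"],
  COMPLETED: [],
};

// Reject anything the state machine does not allow, then persist the new step.
function advance(current: OnboardingStep, next: OnboardingStep): OnboardingStep {
  if (!transitions[current].includes(next)) {
    throw new Error(`Illegal transition ${current} -> ${next}`);
  }
  return next;
}
```

Because the map is exhaustive over the step type, adding a new step without wiring its transitions becomes a compile-time error rather than a runtime surprise.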
In Laravel, this maps naturally to a workflow table, queued jobs and external events. In Astro or content-heavy products, it could apply to editorial reviews, draft generation or approvals before publication.
## Events, webhooks and real pauses
The practical insight is that an agent should not block while waiting. If the process needs a signature or confirmation, it should sleep and resume through a webhook, event or scheduled job.
That changes the design:
- the model decides fewer things;
- the system stores more explicit state;
- external integrations trigger clear transitions;
- each resume receives only relevant context.
For product teams, this reduces ambiguity. The agent does not “think” something happened. It knows because a durable state change or verified event says so.
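The resume path can be sketched without any framework: an external event arrives (via webhook or queue), the system loads the stored step, records the verified transition durably, and only then invokes the agent with that state as context. All names here are hypothetical; a real version would back the store with a database:

```typescript
// Hypothetical durable record; in production this row lives in a database.
interface WorkflowState {
  caseId: string;
  currentStep: string;
  lastVerifiedEvent: string | null;
}

// In-memory stand-in for the workflow table.
const store = new Map<string, WorkflowState>();

// Map external events to the step they unlock (illustrative).
const eventToStep: Record<string, string> = {
  "signature.completed": "DOCUMENTS_SIGNED",
  "payment.confirmed": "IT_PROVISIONED",
};

// Called by a webhook route or queue consumer; nothing blocks in between.
function resumeWorkflow(caseId: string, eventName: string): WorkflowState {
  const state = store.get(caseId);
  if (!state) throw new Error(`Unknown case ${caseId}`);
  const nextStep = eventToStep[eventName];
  if (!nextStep) throw new Error(`Unrecognized event ${eventName}`);
  // Record the transition durably before any model call.
  state.currentStep = nextStep;
  state.lastVerifiedEvent = eventName;
  // Only now would the agent be invoked, with just this state as context.
  return state;
}
```

In a Laravel app the same shape appears as a webhook controller that updates the workflow row and dispatches a queued job carrying only the case ID.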
## How I would apply it in a web app
A realistic example is a B2B support workflow:
```json
{
  "case_id": "support_1842",
  "current_step": "WAITING_FOR_CUSTOMER_LOGS",
  "last_verified_event": "diagnostic_request_sent",
  "next_allowed_actions": ["summarize_logs", "ask_followup_question"]
}
```
The agent can draft, summarize or propose. The important transitions still live in backend code, with logs and rules.
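One concrete way to keep transitions in backend code is to validate every model-proposed action against the stored `next_allowed_actions` list. A sketch, with field names following the JSON record above and a hypothetical dispatch function:

```typescript
// Shape matching the stored case record above (hypothetical persistence layer).
interface SupportCase {
  case_id: string;
  current_step: string;
  last_verified_event: string;
  next_allowed_actions: string[];
}

// The model may propose anything; the backend executes only allow-listed actions.
function executeAction(supportCase: SupportCase, proposed: string): string {
  if (!supportCase.next_allowed_actions.includes(proposed)) {
    throw new Error(
      `Action "${proposed}" not allowed in step ${supportCase.current_step}`
    );
  }
  // A real system would dispatch to an action handler here, with logging.
  return `executing ${proposed} for ${supportCase.case_id}`;
}
```

The model's output is treated as a proposal, never as a command: anything outside the allow-list fails loudly and leaves an audit trail.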
## Takeaway for full-stack teams
Google’s ADK guide confirms a clear trend: useful agents look less like chats and more like small distributed systems. They need state, events, limits and observability.
For teams using Laravel, PHP, Astro or Vue, the lesson is straightforward: if an AI workflow lasts longer than a conversation, design it as backend product infrastructure from the start. AI can reason, but the system must remember.