












The Buzzword Problem
Agentic AI has a credibility problem. Every vendor deck in 2026 uses the term. Most of them mean a chatbot with a better prompt. A few mean something closer to autonomous software that reasons, remembers, and acts — but the slide decks blur the line, and buyers are left sorting hype from architecture on their own.
That gap between language and reality was the starting point for a room full of CX leaders and enterprise decision-makers gathered at Dillingers 1903 in Greenbelt, Makati on a Tuesday evening. The occasion: an intimate event co-hosted by Aether Global Technology and LivePerson, built around one question — what does agentic AI actually look like in production, and why are so few organizations getting there?
The Number That Sets the Room Straight
Tom Tokita, President of Aether Global Technology, opened his keynote with a stat that reframed the entire evening. According to the Philippine AI Report 2025, 92% of Philippine organizations have experimented with AI. Only 3% have operationalized it. BusinessWorld covered the same finding — deployment across the country remains shallow, stuck at the proof-of-concept stage while leadership teams debate frameworks they haven’t tested.
Ninety-two percent experimented. Three percent shipped. That is not an adoption curve. That is a graveyard of pilot programs.
Tom Tokita on the Architecture Gap
Tokita’s keynote did not pitch a product. It diagnosed a pattern. The enterprises stuck in POC purgatory share a common failure mode: they chose tools before designing systems. They bought licenses, ran demos, impressed a boardroom — and then discovered that a standalone AI capability with no memory, no guardrails, and no connection to their operational data is just an expensive parlor trick.
Tokita laid out a plain-language framework for what production-grade agentic AI actually requires: specialized agents scoped to specific jobs, persistent memory across interactions, automated guardrails that enforce business rules without manual babysitting, and a human-in-the-loop layer for decisions that carry real consequence. Four pillars. No acronyms.
The details of each pillar stayed in the room — Tokita made it clear the framework is something he has built and tested in production, not something that fits on a single slide. But the throughline was unmistakable: architecture matters more than any individual tool. The model is maybe 30% of the problem. The other 70% is how you connect it to your people, your data, and your processes.
LivePerson: Conversational AI in the Wild
If Tokita set the strategic frame, LivePerson CEO John Sabino brought the live proof. He flew in from the US with his APAC team to demonstrate what conversational AI looks like when it is engineered for real customer experience — not a sandbox, not a mockup.
The demo scenario: a telco customer’s internet goes down. They call the support line. Instead of a hold queue, the voice system deflects to a messaging channel — seamless, no friction. An AI agent picks up the conversation, identifies the issue, runs triage, checks the account, and determines the outage requires a technician dispatch. At the exact moment the conversation exceeds what the AI can resolve, it hands off to a human agent. Not a cold transfer. The human gets the full context — account details, troubleshooting steps already completed, customer sentiment. The customer never repeats themselves. The customer never sees the seams.
It was a sharp demonstration of what conversational AI should feel like from the customer’s side: fast, context-aware, and smart enough to know when to step back. LivePerson, as a global leader in conversational AI, has been building this orchestration layer for years. Sabino’s message was direct — AI agents do not replace human agents. They make human agents dramatically more effective by handling volume, preserving context, and routing intelligently.
The room responded. Several attendees noted afterward that the handoff moment — the AI recognizing its own limits and escalating with full context — was the most compelling part of the evening. That is the part most agentic AI demos skip.
The Coexistence Model
One of the sharper points Tokita made during the evening was about ecosystem design. Enterprises do not need one platform to rule them all. They need platforms that know their role.
Salesforce serves as the system of record — pipeline, customer data, operational workflows. LivePerson handles the customer-facing conversational AI layer — messaging, voice, digital engagement. Aether, as a Salesforce consulting partner, sits in the integration and delivery seat — connecting platforms so that internal agents and external-facing agents operate from the same data without stepping on each other.
Internal automation and customer experience are different problems with different requirements. Trying to solve both with a single tool creates compromises in both directions. The coexistence model — Salesforce for CRM and operations, LivePerson for conversational AI, Aether for the connective tissue — means each platform does what it does best. The customer experience stays coherent because the architecture is coherent.
Why the Gap Compounds
Tokita closed with an observation that landed hard: the 92/3 gap is not static. It is compounding.
Organizations that operationalize agentic AI — even one process, even one department — generate operational data that makes the next deployment faster, cheaper, and more accurate. Their agents get smarter. Their teams learn how to work alongside AI instead of around it. Each quarter, the gap between them and the organizations still circling POC purgatory widens.
This is not a technology gap. It is a momentum gap. And momentum, once lost, is expensive to recover.
Three Steps. No Magic.
For the leaders in the room weighing their next move, Tokita offered a framework that fits on a napkin:
- Start with one process. Not a moonshot. One painful, repetitive, data-rich workflow.
- Prove it in 60 days. Measurable outcome. Not a demo — a deployment with real users, real data, real feedback.
- Expand from evidence. Let the results fund the next deployment. Board approval follows production metrics, not slide decks.
The tool is 30% of the cost. Operations — integration, change management, guardrails, measurement — is the other 70%. Organizations that budget only for the tool end up back in the POC graveyard.
What Comes Next
The evening at Dillingers 1903 was not a product launch. It was a reality check — delivered by an AI-powered Salesforce consulting partner and a conversational AI leader whose platform handles millions of customer interactions globally.
The 3% who have operationalized AI are not smarter. They just started. The other 89% — the organizations that experimented but never shipped — are not behind because the technology is hard. They are behind because they are still planning instead of deploying.
For enterprise leaders ready to close that gap — or at least ready for an honest conversation about what it takes — Aether’s door is open. No pitch deck required.
—
Tom Tokita is President of Aether Global Technology Inc., an AI-powered Salesforce consulting partner in the Philippines specializing in CRM strategy, integration, and intelligent automation. Aether helps enterprises bridge the gap between AI experimentation and operational deployment.









