The Q2C Problem No One Is Solving with AI

After 15 years inside enterprise Quote-to-Cash systems — CPQ, CLM, billing, revenue recognition — I can tell you exactly why every AI POC in this space fails before it reaches production. And what the architecture looks like when it doesn't.

Tags: Q2C · Agentic AI · CPQ · CLM · GCP · Vertex AI

I spent 15 years designing and delivering Quote-to-Cash systems. CPQ on Salesforce. Contract lifecycle management on Conga. Billing, revenue recognition, asset management. Across manufacturing, medtech, and software businesses managing hundreds of millions of euros in annual recurring revenue.

In that time, I watched the same failure mode repeat across almost every AI initiative that touched Q2C: a promising POC, impressive demo, enthusiastic stakeholders — and then nothing in production twelve months later. The AI project is quietly shelved. The spreadsheets return.

I've thought hard about why this keeps happening. It's not the technology. It's a structural misunderstanding of what the Q2C problem actually is.

What Q2C actually is — and why it breaks AI

Quote-to-Cash is not a workflow. Most people who haven't lived inside it think of it as a sequence: quote, approve, contract, order, fulfil, invoice, collect. A pipeline with stages. Something that looks good on a slide.

The reality is nine silos, each with its own system of record, its own data model, its own approval hierarchy, and its own definition of what a "deal" is. Sales owns the opportunity. Legal owns the contract. Finance owns the revenue schedule. Service owns the warranty terms. Revenue accounting owns the ASC 606 allocation. None of these teams use the same vocabulary. None of their systems share a common schema.

This is why the AI POC fails. The team demonstrates a model that predicts deal close probability from CRM data. Impressive. But the CFO's actual problem is that the 47-day average from signed contract to first invoice isn't caused by the sales team being slow — it's caused by the contract terms not matching the product configuration, which means a manual correction cycle between Legal and Operations that nobody has instrumented or measured. The AI is answering the wrong question because nobody mapped the actual bottleneck.

The root cause

Q2C AI fails because the problem is treated as a prediction problem when it's actually an orchestration problem. The bottleneck isn't information — it's handoffs. And AI doesn't fix handoffs unless the architecture replaces the handoff with an agent that owns the transition.

The real bottlenecks — and which ones AI can actually solve

Let me be specific about where value actually lives in the Q2C process, because "AI for Q2C" is too broad to be useful.

1. Configuration accuracy at quote time

A salesperson builds a complex product configuration. In complex B2B environments, roughly 30% of those configurations contain errors that engineering or legal catch only after the quote has been sent. An AI agent trained on historical configurations and product rules can validate in real time — before the quote leaves the building. This is solvable, and the ROI is immediate: fewer revision cycles, faster close.
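As a sketch of what that real-time validation looks like, here is a minimal rule-based checker. The SKUs, the RULES table, and the requires/excludes structure are all hypothetical; in practice the rules would be mined from historical configurations and the product catalogue.

```python
from dataclasses import dataclass

@dataclass
class ConfigLine:
    sku: str
    qty: int

# Hypothetical compatibility rules mined from historical configurations:
# each entry says "if this SKU is present, these SKUs are required / forbidden".
RULES = {
    "PUMP-X200": {"requires": {"CTRL-UNIT-A"}, "excludes": {"CTRL-UNIT-B"}},
    "CTRL-UNIT-B": {"requires": set(), "excludes": {"PUMP-X200"}},
}

def validate_config(lines: list[ConfigLine]) -> list[str]:
    """Return human-readable rule violations; an empty list means the config passes."""
    skus = {line.sku for line in lines}
    errors = []
    for sku in skus:
        rule = RULES.get(sku)
        if rule is None:
            continue
        for missing in rule["requires"] - skus:
            errors.append(f"{sku} requires {missing}, which is not on the quote")
        for clash in rule["excludes"] & skus:
            errors.append(f"{sku} cannot be combined with {clash}")
    return errors
```

Run against a quote before it leaves CPQ, this returns violations the salesperson can fix immediately instead of discovering them in a post-send review cycle.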

2. Contract clause extraction and risk scoring

Legal spends 40–60% of contract review time on boilerplate clauses that haven't changed in years. A RAG-based contract intelligence agent can extract, classify, and risk-score 200+ clause types in minutes, surfacing only the genuinely non-standard items for human review. This is where Gemini's 1M token context window changes the calculus — the entire contract history can be held in context while the agent reasons across it.
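A minimal sketch of the triage logic, assuming a hypothetical library of pre-approved boilerplate: clauses close to the approved text are auto-approved, everything else goes to a human. A production system would use embedding similarity or an LLM classifier rather than string matching, but the routing shape is the same.

```python
from difflib import SequenceMatcher

# Hypothetical library of pre-approved boilerplate, keyed by clause type.
STANDARD_CLAUSES = {
    "governing_law": "This agreement is governed by the laws of Germany.",
    "limitation_of_liability": (
        "Neither party's aggregate liability shall exceed the fees paid "
        "in the twelve months preceding the claim."
    ),
}

def risk_score(clause_type: str, text: str) -> float:
    """0.0 = identical to approved boilerplate, 1.0 = entirely non-standard."""
    standard = STANDARD_CLAUSES.get(clause_type)
    if standard is None:
        return 1.0  # unknown clause type: always a human decision
    return 1.0 - SequenceMatcher(None, standard.lower(), text.lower()).ratio()

def triage(clause_type: str, text: str, threshold: float = 0.15) -> str:
    """Surface only genuinely non-standard clauses for legal review."""
    return "auto_approve" if risk_score(clause_type, text) <= threshold else "human_review"
```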

3. Revenue recognition classification

ASC 606 performance obligation identification is rule-based — but the rules are complex and the contract language is ambiguous. A hybrid classification model (structured contract data + LLM clause interpretation) can automate 70–80% of standard recognition decisions and route the rest to a Finance Controller with a pre-populated explanation. This is not replacing the accountant. It's eliminating the data entry that prevents them from doing actual accounting work.
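The routing half of that hybrid design can be sketched in a few lines. The RecognitionDecision type, the 0.9 threshold, and the field names are illustrative assumptions, not a real ASC 606 engine:

```python
from dataclasses import dataclass

@dataclass
class RecognitionDecision:
    obligation: str     # e.g. "point_in_time" or "over_time"
    confidence: float   # combined model confidence in [0, 1]
    explanation: str    # pre-populated rationale for the reviewer

def route(decision: RecognitionDecision, auto_threshold: float = 0.9) -> dict:
    """Auto-post high-confidence classifications; queue the rest for a controller."""
    if decision.confidence >= auto_threshold:
        return {"action": "auto_post", "obligation": decision.obligation}
    return {
        "action": "controller_review",
        "obligation": decision.obligation,
        # The controller starts from a drafted explanation, not a blank form:
        "explanation": decision.explanation,
    }
```

The design choice that matters is the second branch: low-confidence cases arrive with a pre-populated explanation, which is what turns the controller's work from data entry into actual accounting judgment.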

4. Predictive maintenance and warranty reserve

For equipment businesses, the warranty reserve is a consequence of not knowing when things will fail. A two-tier ML system — fleet-level remaining-useful-life (RUL) prediction plus unit-level anomaly detection on field telemetry — can reduce the warranty over-reserve by 30–40%. The architecture here is well-understood: Pub/Sub for telemetry ingestion, Vertex AI for the prediction models, BigQuery for the fleet-wide feature store.
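The unit-level tier can be illustrated with a deliberately simple z-score check against each unit's own history. The telemetry values, thresholds, and the statistics are placeholders for what would be a trained Vertex AI model in production:

```python
from statistics import mean, stdev

# Hypothetical vibration telemetry (mm/s); the last value is the newest reading.
fleet = {
    "unit-001": [2.1, 2.0, 2.2, 2.1],
    "unit-002": [2.0, 2.1, 1.9, 2.0],
    "unit-003": [2.1, 2.2, 2.0, 6.1],  # sudden departure from its own baseline
}

def unit_anomaly(readings: list[float], z_threshold: float = 3.0) -> bool:
    """Does the newest reading deviate sharply from this unit's own history?"""
    history, latest = readings[:-1], readings[-1]
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) / sigma > z_threshold

def units_to_inspect(fleet: dict[str, list[float]]) -> list[str]:
    """Units the warranty/asset process should surface for inspection."""
    return [unit for unit, readings in fleet.items() if unit_anomaly(readings)]

print(units_to_inspect(fleet))  # ['unit-003']
```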

Why the POC works and production doesn't

The POC uses clean, curated data from one system. Production requires integrating with the CRM, the CLM, the ERP, the billing system, and the asset register — each with its own API, its own authentication model, its own rate limits, and its own concept of what a customer ID is.

This is the integration problem that kills Q2C AI at scale. And the architecture that solves it is not a series of point integrations. It's a common event fabric — Pub/Sub topics that carry canonicalised business events regardless of which system of record produced them — and a Feature Store that assembles the cross-domain feature set the ML models need.

On Google Cloud, this means: Pub/Sub for the event fabric, Dataflow for transformation, BigQuery as the analytical substrate, Vertex AI Feature Store for the ML layer, and Firestore for the agent state that bridges operational and analytical systems. When you build it this way, adding a new data source means adding a new publisher to Pub/Sub — not rebuilding the integration layer.
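A sketch of what a canonicalised business event might look like on that fabric, with a hypothetical envelope schema and field names; the actual Pub/Sub publish call is omitted:

```python
import json
import uuid
from datetime import datetime, timezone

def canonical_event(event_type: str, source_system: str, payload: dict) -> str:
    """Wrap a system-specific record in a canonical envelope before publishing.
    Subscribers key off event_type and never see the source system's schema."""
    return json.dumps({
        "event_id": str(uuid.uuid4()),
        "event_type": event_type,        # e.g. "contract.signed"
        "source_system": source_system,  # provenance, not a coupling point
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,              # already mapped to the canonical model
    })

# A CLM record, mapped once to the canonical contract schema:
msg = canonical_event(
    "contract.signed",
    "clm",
    {"customer_id": "C-00042", "contract_id": "K-2024-117", "acv_eur": 250_000},
)
# publisher.publish(topic, msg.encode())  # the real Pub/Sub call, omitted here
```

The mapping from source schema to canonical payload happens once, at the publisher. That is what makes "add a new data source" mean "add a new publisher" rather than "rebuild the integration layer".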

The architectural principle

Q2C AI at production scale requires a common data fabric before it requires ML models. The models are straightforward once the data is canonicalised. The data canonicalisation is where 80% of the delivery effort goes — and where 80% of teams underestimate the work.

The agentic architecture that actually works

The architecture that I've found most durable for Q2C automation is a specialist agent swarm, where each agent owns a specific domain and communicates via a common event fabric rather than point-to-point calls.

A CPQ Agent owns configuration validation and pricing. A Contract Guard Agent owns clause extraction, risk scoring, and the negotiation workflow. A Revenue Agent owns the ASC 606 classification and the journal entry preparation. An Asset Agent owns the installed base, service history, and warranty reserve. Each agent has a defined autonomy boundary — a set of actions it can take without human approval — and a set above that threshold where it prepares a decision brief and routes to the appropriate human.
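The autonomy boundary pattern can be sketched as follows, with a hypothetical CPQ Agent discount rule; the 10% threshold, the DecisionBrief fields, and the routing target are illustrative:

```python
from dataclasses import dataclass

# Hypothetical autonomy boundary for a CPQ Agent: the discount it may apply alone.
AUTO_APPROVE_DISCOUNT = 0.10  # up to 10% without human sign-off

@dataclass
class DecisionBrief:
    action: str
    rationale: str
    route_to: str

def apply_discount(requested: float):
    """Inside the boundary: act. Above it: prepare a brief and route to a human."""
    if requested <= AUTO_APPROVE_DISCOUNT:
        return {"status": "applied", "discount": requested}
    return DecisionBrief(
        action=f"approve discount of {requested:.0%}",
        rationale="exceeds the agent's 10% autonomy boundary",
        route_to="sales_director",
    )
```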

The agents don't call each other directly. They publish events to Pub/Sub and subscribe to the events they need. This means any agent can be replaced, upgraded, or taken offline without affecting the others. It means the audit trail for a complete deal lifecycle is a stream of events in Pub/Sub — immutable, ordered, and query-able. It means compliance is a structural property of the system, not a report you run at month end.

What makes this different from the POC that gets shelved is not the models. It's the event fabric, the agent contracts, and the HITL checkpoints. Those three things are what allow the system to go to production in a regulated environment — and to stay there.

Where the 90-day entry point is

If you're a CTO or VP of Engineering at an enterprise with a complex Q2C process and a failed or stalled AI initiative, here's where I'd suggest starting:

Map the actual handoffs — not the process diagram, the real handoffs, the ones that produce email threads and Slack escalations and missed SLAs. Quantify the time each one takes and the error rate. That analysis will tell you which of the four bottlenecks above is worth addressing first, and what ROI looks like if you automate it.

Then design the data architecture before the ML architecture. The Feature Store design, the event schema, the canonical data model — this work is unglamorous and often skipped. It's also the difference between a POC and a production system.
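To make the handoff analysis in the first step concrete, a rough prioritisation might multiply duration, error rate, and volume per handoff. The figures below are invented for illustration:

```python
# Invented handoff log: (handoff, avg days to clear, error rate, monthly volume)
handoffs = [
    ("quote -> legal review",       4.0, 0.30, 120),
    ("contract -> order entry",     9.0, 0.22,  95),
    ("order -> first invoice",     12.0, 0.15,  95),
    ("invoice -> revenue schedule", 2.0, 0.05,  95),
]

def rank_handoffs(handoffs):
    """Rough prioritisation: expected friction-days per month, largest first."""
    scored = [(name, days * err * vol) for name, days, err, vol in handoffs]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

The scoring function is deliberately crude; the point is that ranking handoffs by measured friction rather than gut feel is what tells you which of the four bottlenecks to attack first.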

The Q2C AI problem is solvable. It's just harder than the demo makes it look — and the teams that succeed are the ones who understand it's an orchestration and data problem first, and an ML problem second.
