Admissions by OpenAI's COO
Generic AI is a product. Enterprise AI is an integration problem. And integration is, inherently, custom.
In February 2026, OpenAI's Chief Operating Officer sat down with TechCrunch and said the quiet part out loud: "We have not yet really seen AI penetrate enterprise business processes." The COO of the most-funded AI company on earth. Over 20 billion dollars in annualised revenue. And the admission is that the product has not yet made it into how real businesses actually run.
The same interview reported something else. To close that gap, OpenAI has signed on McKinsey, BCG, Accenture, and Capgemini. The world's most-funded AI company now needs the world's largest consultancies to make its product useful inside a company. If you are an operations lead or a founder reading this on month three of a barely-used enterprise AI licence, that sentence is permission to trust your own instinct. You did not buy the wrong tool. You bought a tool that was never designed to understand your operations.
The easy read of the COO's comment is that OpenAI sold too much licence and not enough implementation. The more accurate read is harder, and more useful.
Horizontal AI tools are built against the general case. ChatGPT is extraordinary at the general case. Drafting an email. Summarising a document. Transcribing a meeting. Explaining a concept. These are tasks where the context fits inside the prompt, the output is a text artefact, and success can be judged in the moment. Generic AI is very good at generic work.
Enterprise processes are the opposite shape. They live across systems (CRM, ERP, ticketing, billing, data warehouse, file storage, legacy databases). They involve approvals, audit trails, and rules nobody wrote down. They depend on data the model has never seen. The horizontal model does not know who your customers are, what your SKUs mean, where a specific contract sits in your Drive, or which of your two accounts-receivable workflows you actually use.
That is not a model problem. It is a connection problem.
This is the admission underneath the admission. OpenAI is not saying the model is not smart enough. It is saying the model has not reached the systems where work happens.
If the model was the hard part, OpenAI would not need a partner list.
The reason the TechCrunch piece names McKinsey, BCG, Accenture, and Capgemini is not brand marketing. It is supply-side reality. Those four are where enterprise integration capacity lives. Process mapping. Workflow redesign. Change management. Legacy system archaeology. Data governance. The non-glamorous engineering work that has to happen before a chat interface can do anything that compounds.
This pattern is not new. Salesforce did it. SAP did it. Oracle did it. Every category-defining enterprise software company in the last thirty years ended up selling through, or alongside, the large consultancies. The part of the job the vendor cannot do is the part where the software has to meet the organisation's reality. The vendor ships a tool. The consultancy shapes the tool to the company.
With AI, the gap is wider because the tool is more general. A CRM ships with defaults for customer records. A horizontal AI model ships with defaults for language itself. The distance between "language" and "how your business runs" is larger than the distance between "customer record" and "how your business runs". More of the custom work is required.
McKinsey's November 2025 State of AI survey put a number on it. Only 1% of enterprises describe their generative AI strategies as mature. Ninety-nine out of a hundred do not.
That statistic is usually read as a capability problem: ninety-nine out of a hundred companies "struggle with AI". The cleaner reading is the one OpenAI's COO gave us: most AI has not reached the places where value is created. The model is capable. The integration is not.
A 2025 MIT study reached a related finding: 95% of enterprise generative AI pilots fail to deliver measurable business impact. Again, the popular interpretation is that AI is overhyped. The more useful interpretation is that 95% of pilots were scoped as tool rollouts when they needed to be scoped as operational redesigns.
McKinsey's 2025 State of AI report offered the clearest framing we have seen. AI high performers are 2.8x more likely to fundamentally redesign workflows before deploying AI. 55% of high performers rework their processes end-to-end. Only 20% of the rest do.
That is the difference between the 5% and the 95%. It is not a model licence. It is whether the organisation treated the work as integration or as procurement.
It helps to separate the two categories cleanly.
A tool is something you open, use, and close. The value it creates is bounded by the session. ChatGPT drafting a sales email is a tool. Copilot writing a formula is a tool. Nothing compounds outside the session because nothing is connected.
An integration is something that runs without you opening it. It reads from the systems your business already uses. It writes back to them. It follows rules. It leaves a trail. It gets better because your data gets larger. A customer intelligence layer that reads your CRM, your support tickets, your billing events, your product telemetry, and your sales calls, and returns an actionable signal to the right person at the right time: that is an integration.
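The distinction can be sketched in code. The following is a minimal illustration, not a real product API; every name in it (`Event`, `Signal`, `IntegrationLayer`, `overdue_rule`) is a hypothetical stand-in for whatever your systems actually expose. The point it demonstrates is the shape of an integration: it consumes events from connected systems, applies rules, leaves an audit trail, and routes a signal to an owner, with no user session involved.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical event from a connected system (CRM, billing, support desk).
@dataclass
class Event:
    source: str        # which system produced it, e.g. "billing"
    account: str
    payload: dict

# Hypothetical signal the layer writes back to the business.
@dataclass
class Signal:
    account: str
    message: str
    route_to: str      # the person or queue that should act on it

class IntegrationLayer:
    """Runs without a user session: reads events, applies rules,
    emits signals, and leaves an audit trail."""

    def __init__(self, rules: list[Callable[[Event], Optional[Signal]]]):
        self.rules = rules
        self.audit_log: list[str] = []   # the trail an auditor can follow

    def process(self, events: list[Event]) -> list[Signal]:
        signals = []
        for event in events:
            for rule in self.rules:
                signal = rule(event)
                if signal:
                    self.audit_log.append(
                        f"{event.source}:{event.account} -> {signal.route_to}"
                    )
                    signals.append(signal)
        return signals

# One illustrative rule: flag accounts whose invoice is badly overdue.
def overdue_rule(event: Event) -> Optional[Signal]:
    if event.source == "billing" and event.payload.get("days_overdue", 0) > 30:
        return Signal(event.account,
                      "Invoice overdue, check in before renewal",
                      "account_manager")
    return None

layer = IntegrationLayer([overdue_rule])
signals = layer.process([
    Event("billing", "acme", {"days_overdue": 45}),
    Event("crm", "acme", {"stage": "renewal"}),
])
print(len(signals), layer.audit_log)
# → 1 ['billing:acme -> account_manager']
```

A tool, by contrast, would end at the chat window: the same overdue invoice would only surface if someone thought to paste the billing export into a prompt.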
Most companies bought the tool and expected the integration.
Deloitte's 2026 State of Generative AI survey found that 60% of enterprise AI leaders named integration with legacy systems as their primary agentic AI challenge. Not model quality. Not prompt engineering. Not GPU capacity. The boring part that has always been the hard part.
Most mid-to-large companies we talk to have one of two things. A Microsoft 365 Copilot rollout that nobody uses beyond Outlook drafting. Or an OpenAI Enterprise licence sitting on the IT invoice with adoption numbers the CFO does not want to look at.
Neither is a sunk cost. Neither is a failed procurement. They are the wrong level of ambition for the problem.
The Copilot licence is fine. Keep it. It will continue to help your people draft, summarise, and translate. That is real value and it compounds a little at the person level.
The work that compounds at the company level is different. It is the AI layer that sits between your data and your decisions. It is customised because no two operations run identically, and the parts that differ are where your margin lives. It is the thing your horizontal AI licence was never designed to be.
The question we would ask every operations lead in your position: when you imagine AI actually working inside your company, is it describing work you already do, or doing work you are not already doing? If it is the latter, the licence will not take you there on its own. Something has to be built.
The shape is consistent across the organisations that have made it work.
A data and process audit before anything is deployed. This is the stage McKinsey's high performers invest in and almost everyone else skips. If your data is not unified and your process is not mapped, no model will compensate for either.
A narrow first use case with a measurable business outcome. Not "roll out AI to the sales team". Rather, "reduce average quote response time from 48 hours to under 4 hours for our top 20% of accounts". Scoped to a number a CFO can recognise.
An integration layer that reads and writes to the systems where work actually happens. The Model Context Protocol (MCP) is becoming the default pattern here. It is the standardised way an AI system connects to your CRM, your ERP, your internal tools, and your documentation. One protocol, many systems. The plumbing that turns "intelligent" into "useful".
A human in the loop for the 10 to 20% of decisions the model should not make alone. This is not a concession. It is the design. AI amplifies human expertise; it does not replace it. The organisations that skip this step build confident systems that are wrong at scale.
An operations owner, not an innovation lab. AI that lives with the ops team, the finance team, the customer team, and is measured against their KPIs. Not AI that lives inside a proof-of-concept folder.
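The human-in-the-loop component above is the easiest to sketch concretely. This is a hypothetical gate, not a prescribed implementation: the `Decision` type, the 0.85 threshold, and the queue names are all illustrative assumptions, and in practice the threshold is tuned so that roughly the hardest 10 to 20% of decisions reach a person.

```python
from dataclasses import dataclass

# Hypothetical decision emitted by a model-backed workflow step.
@dataclass
class Decision:
    action: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def route(decision: Decision, threshold: float = 0.85):
    """Human-in-the-loop gate: auto-execute high-confidence decisions,
    queue everything else for a person to review."""
    if decision.confidence >= threshold:
        return ("auto", decision.action)
    return ("human_review", decision.action)

print(route(Decision("approve_quote", 0.96)))
# → ('auto', 'approve_quote')
print(route(Decision("waive_fee", 0.62)))
# → ('human_review', 'waive_fee')
```

The design choice worth noting is that the gate sits between the model and the systems it writes to, whatever connector (MCP or otherwise) carries the traffic; the model never holds write access the gate has not granted.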
None of that is what you get from a generic licence. It is what the licence was never supposed to be.
When OpenAI's COO says AI has not penetrated enterprise processes, he is describing the gap his own product sits on one side of.
OpenAI is on the model side of the gap. Its partner list (McKinsey, BCG, Accenture, Capgemini) sits on the integration side. The consultancies are there because that is where the work lives. The moment a model meets a company, the work stops being about the model.
For the mid-market and upper-mid-market buyer, the large-consultancy price point is rarely the right fit. But the integration is still the work. The vendor is still different from the integrator. The answer is to separate the two: keep your horizontal AI licence for horizontal tasks, and build the layer that connects AI to your operations where the compounding value is.
We have built that layer for clients in hospitality, logistics, e-commerce, and professional services. It is never the same twice. The part that matters is always unique. That is why the integration has to be custom.
Generic AI is a product. Enterprise AI is an integration problem. Your OpenAI licence is fine. The gap between it and your operations is the thing that has to be built.