Encoding organisational decisions for the agentic age - part two
Your organisation's content was written for people. Agents need something different. This second post looks at what changes when content becomes operational input.
The context layer – making organisations machine-usable
Every organisation runs on content: strategy decks, product specs, pricing rules, brand guidelines, compliance policies, approval authorities. Most of this is created for people: to persuade, to inform, to cover a base. It lives in slide decks, PDFs and hidden SharePoint folders and, most importantly, in the heads of experienced staff who know which bits matter and what can be quietly ignored.
This has always been inefficient, but it’s sort of worked – because we humans are great at navigating ambiguity. We bridge the gap between documented and enacted reality every day, informally, expensively, but well enough.
Agentic AI changes the economics of this gap. When software can act – recommend a product, adjust a price, escalate a complaint – the content it operates on stops being background reference and becomes operational input. And operational input that is ambiguous, scattered, stale or contradictory produces operational outcomes with the same qualities, only faster and at scale.
This difficulty runs deeper than messy documentation. As I argued in the first post in this series, many organisations do not have clearly articulated, actionable strategies. If the strategic layer is vague, the policies and procedures beneath it cannot ladder up coherently – so the content that agents act on is unanchored.
Levels of content
An agent making a decision at the point of action needs content at several levels, each serving a different purpose:
Strategic intent: The choices and trade-offs that define how an organisation competes. Are we optimising for growth or Customer Lifetime Value (CLV)? Where do we accept higher costs (and where don’t we)? This is the ‘why’ and the ‘what we have decided.’
Policy and rules: The codified constraints that govern how intent translates into action. Legal and compliance requirements, discounting authorities, escalation thresholds. This is the ‘what is permitted?’
Domain knowledge: The reference material an agent needs to reason well. Product catalogues, customer segmentation models, competitor positioning, regulatory context. This is the ‘what do we know?’
Operational parameters: The specific values, thresholds and configurations that tune behaviour. A maximum discount percentage, a risk score threshold for escalation, a response time target. This is the ‘how tightly?’
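To make the four levels tangible, here is a minimal sketch of how they might be represented as structured context an agent could load. Every name and value below is illustrative, not a prescription:

```python
from dataclasses import dataclass

# Illustrative only: one way to represent the four content levels as
# structured context. All field names and values are invented examples.

@dataclass
class DecisionContext:
    strategic_intent: dict        # the 'why' and 'what we have decided'
    policy_rules: dict            # the 'what is permitted'
    domain_knowledge: dict        # the 'what do we know'
    operational_parameters: dict  # the 'how tightly'

context = DecisionContext(
    strategic_intent={"optimise_for": "customer_lifetime_value"},
    policy_rules={"discount_requires_approval_above": 0.15},
    domain_knowledge={"core_range_subscription_uplift": 3.0},
    operational_parameters={"max_first_purchase_discount": 0.15},
)

print(context.strategic_intent["optimise_for"])  # customer_lifetime_value
```

The point is not the representation itself but the separation: each level has a distinct purpose, and an agent (or a reviewer) can see which layer a given constraint belongs to.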
The challenge is not that this content doesn’t exist: it almost always does, somewhere. But it was never designed to be retrieved, applied and maintained as a system. The layers of content – and thinking – within an organisation are rarely connected in ways that agents can navigate.
Most organisations are, in practice, inverted: thin and vague at the top, thick and specific at the bottom. Lower-level content, like system configurations, compliance checklists and process documentation, often exists in reasonable detail because it has to. Someone needs to pass an audit, configure a platform, or onboard new staff. These decisions are small, low ambiguity and often easy to codify. The gaps tend to widen as you move up the stack. Strategic intent and the policies that should ladder from it are where ambiguity has longest been tolerated – but where agents need clarity most.
Why does this matter? Because agents are sensitive to misalignment between layers in ways humans are not. A human salesperson can hold ‘our strategy is CLV’ and ‘my bonus is on conversion’ in their head simultaneously and navigate the contradiction through judgement (though I’ve yet to meet one who won’t lean towards the bonus). An agent given contradictory context from different layers will either pick one arbitrarily or behave inconsistently. Conflicts between layers need to be resolved before the agent encounters them, not after.
Making this concrete: a D2C sales agent
Meet Tarra, a direct-to-consumer skincare brand. The brand has premium positioning, a subscription-led model, and roughly fifty SKUs across three tiers: an entry range (£15–25), a core range (£30–50) and a clinical range (£60–90). The team at Tarra are building an AI sales agent: a guided shopping tool that recommends products based on skin type and concern, handles objections about price and ingredients, and can apply promotional discounts autonomously within defined bounds.

The strategic tension across horizons
This quarter, Tarra’s head of growth (Zara) wants to hit a revenue target. Zara is pushing the agent to lead with the clinical range, where average order value is highest, and to offer 15% first-purchase discounts to reduce basket abandonment. A sales agent optimised for Zara’s context will push premium, discount early, and close fast. It will look productive.
Over the year, Tarra’s retention lead (Clara) cares about CLV. Her data shows that customers who start with the core range and build a routine are three times more likely to subscribe than those who enter through a discounted clinical purchase. The same agent, given Clara’s context, would recommend the core range first and hold back on discounting – a smaller basket today, but a customer who returns.
Over the longer term, the brand matters. Tarra’s positioning depends on trust: dermatologist-backed formulations, honest ingredient communication, no hard sell. An agent that aggressively upsells the clinical range to someone with a simple hydration concern, or makes efficacy claims the formulations cannot support, erodes that trust in ways that do not show up in this quarter’s dashboard.
These are the same tensions that human sales teams navigate daily, mediated by management, culture and experience. But the agent is not in the room when Clara, Zara and the rest of the leadership subtly shift towards a retention focus on their weekly call. Nor is it standing by the watercooler. It will do what its digital context tells it to do.
Content as a managed operational asset
This is the shift that matters, and it is easy to understate. It is not a question of documentation, but a change in how organisations manage, maintain and value their content – because in an agentic context, that content is not reference material for people. It is the context that fuels the business at scale (1).
A useful analogy is data management. Two decades ago, most organisations had data scattered across operational systems – CRMs, ERPs, spreadsheets, email. The data existed, but it was not usable for analytics or decision-making at scale. The response was ‘ETL’: Extract data from source systems, Transform it into a consistent and queryable structure, and Load it into a warehouse where it could be used.
Organisational content now faces a similar transition. The raw material – strategy documents, policies, product data, operational rules – needs to be extracted from its native formats, transformed into representations that agents can retrieve and apply, and maintained in systems where it can be versioned, governed and kept current. We need to create machine-usable content warehouses.
Source content vs. derivative content
The ETL analogy helps surface what may be the most underestimated risk in building a content layer: organisations are full of derivative content, and much of it looks authoritative.
Let’s return to Tarra. The board deck contains the strategic positioning. But the sales enablement team has produced a pitch deck emphasising the clinical range, because that is what retail buyers want to hear. The customer service team has a one-pager that summarises the returns policy in friendly language, omitting several conditions. The marketing team has rewritten the ingredient claims for social media, softening the regulatory caveats. Each is a retelling – filtered, simplified, sometimes unconsciously editorialised.
They have created derivative content. If the agent’s context layer is built from these retellings rather than the source material, it inherits multiple, partial interpretations of reality.
Nobody at Tarra deliberately created contradictory information. Each retelling shifts the emphasis, not through malice or neglect, but through the entirely normal process of human communication. This gradual drift between what was decided and what gets retrieved is one way content layer misalignment enters the system. Derivative content is necessary for human audiences: it is not a safe foundation for agents that need to act. The content layer must be built from the ground truth: actual strategy choices, actual policies and brand guidelines, actual product data.
Library, instruction manual, charter
Not all content functions the same way – and treating it uniformly is a mistake. Some content functions like a library: reference material the agent looks up as needed. Product specifications, regulatory requirements, competitor positioning. The agent queries it, uses what is relevant, and moves on. In technical terms, this is where retrieval-augmented generation (RAG) – systems that search a knowledge base and feed relevant content to the agent at the point of decision – does its work.
Other content functions like an instruction manual: procedural knowledge that tells the agent how to act in specific situations. Tarra’s discounting rules, escalation logic, the decision of when to recommend the clinical range versus the core range. This content does not just inform – it directs (in my next post, I will cover how this type of content gets encoded as callable Skills).
Lastly, some content functions like a charter: the strategic choices and trade-offs that express what the organisation has decided and why. This is the content that resolves conflicts between competing objectives – the reason Tarra’s agent should recommend the right product for the customer’s skin concern rather than the highest-margin SKU, even though this quarter’s revenue target says otherwise.
Each content type has different characteristics. Libraries need breadth, freshness and good retrieval. Instruction manuals need precision and testability. Charters need authority and clarity. Treating them all the same – and dumping everything into a knowledge base and hoping retrieval sorts it out – is the content equivalent of loading raw operational data into a reporting tool and wondering why the dashboards are wrong.
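As an illustration of the library pattern, here is a toy retrieval sketch. Real RAG systems use embeddings and vector search; this example uses naive keyword overlap purely to show the access pattern, and the library contents are invented:

```python
# A toy illustration of library-style retrieval: score documents by keyword
# overlap with the query and return the best match. Production RAG uses
# embeddings and vector search; only the access pattern is shown here.

library = {
    "ingredients": "Hyaluronic acid formulation details and regulatory caveats.",
    "returns": "Returns accepted within 30 days unopened with proof of purchase.",
    "positioning": "Dermatologist-backed honest communication and no hard sell.",
}

def retrieve(query: str) -> str:
    """Return the library entry sharing the most words with the query."""
    query_words = set(query.lower().split())

    def overlap(text: str) -> int:
        return len(query_words & set(text.lower().split()))

    best_key = max(library, key=lambda k: overlap(library[k]))
    return library[best_key]

print(retrieve("what is the returns policy within 30 days"))
```

Even in this toy form, the distinction holds: the agent queries the library, uses what comes back, and moves on. Instruction manuals and charters need different machinery.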
Format and structure: practical considerations
If content becomes operational input, the way it is structured matters more than it ever has. A few principles are emerging, though this is an area where practice is still ahead of consensus.
Density and fidelity: There is a tension between completeness and usability. A 47-page compliance policy might contain everything an agent needs, but retrieving from it will return noise alongside the signal. Yet a one-paragraph summary will be clean but miss the edge case that matters when a customer invokes a specific regulatory right.
The practical answer is layering. Strategic choices and trade-offs should be concise and opinionated – a page, not a deck. Policies need enough detail to be applied in specific situations but should be chunked by decision domain, not structured as monolithic documents. Product data should be structured with attributes that support the decisions agents will make, not written as marketing prose.
Structure over prose where the agent must act: For content that directs behaviour – decision rules, escalation logic, pricing constraints – structured formats (decision tables, rule sets, parameterised thresholds) produce more consistent results than natural language. An agent interpreting ‘use your judgement on discounting for high-value customers’ will behave variably. An agent applying ‘maximum 15% discount for tier-1 customers on the core range, 10% on clinical, approval required above these thresholds’ will not. These patterns translate well into machine-usable Python.
Prose is still appropriate for content the agent needs to reason about rather than execute: strategic rationale, brand positioning, customer context. But even here, concise and well-structured prose retrieves better and produces more consistent reasoning than long, discursive documents.
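The discounting rule above can be encoded directly as a decision table. The tiers, ranges and percentages mirror the example in the text; the function and variable names are illustrative:

```python
# The discounting rule from the text as a decision table: max 15% for tier-1
# customers on the core range, 10% on clinical, approval required above those
# thresholds. Names are illustrative.

DISCOUNT_LIMITS = {
    ("tier-1", "core"): 0.15,
    ("tier-1", "clinical"): 0.10,
}

def check_discount(tier: str, product_range: str, proposed: float) -> str:
    """Return 'approved' if within the table's limit, else 'needs_approval'."""
    limit = DISCOUNT_LIMITS.get((tier, product_range), 0.0)
    return "approved" if proposed <= limit else "needs_approval"

print(check_discount("tier-1", "core", 0.10))      # approved
print(check_discount("tier-1", "clinical", 0.12))  # needs_approval
```

The table version is testable: you can enumerate every tier-and-range combination and assert the outcome, which you cannot do with ‘use your judgement’.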
Chunking by decision, not by document: Most organisational content is structured by authorship. The compliance team wrote a compliance policy, the brand team wrote brand guidelines, the commercial team wrote a pricing framework. Tarra’s agent making a discounting decision needs fragments from all three. Content that is chunked by decision domain – ‘everything relevant to the discount decision’ – is more retrievable than content chunked by organisational origin. Interestingly, this suggests Conway’s Law applies to agentic behaviour too: content mirrors the structure of the teams that produced it, while agents need it organised around the decisions it supports.
Carry metadata for governance: Every piece of content an agent uses should carry enough metadata to support auditability and make clear which version is current: who owns it, when it was last reviewed, and its authority (regulatory, commercial, advisory). This should not be considered an overhead: it is the mechanism that allows you to trace a bad decision back to the context that produced it, to distinguish a regulatory constraint from a preference someone introduced three years ago, and to support retrieval filtering and weighting – ensuring the agent searches the right subset of content and prioritises appropriately authoritative sources.
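These two principles – chunk by decision domain, carry governance metadata – can be combined in one structure. Everything below (owners, dates, authority levels, the ranking scheme) is an invented sketch, not a schema recommendation:

```python
from datetime import date

# Illustrative: content chunks keyed by decision domain, each carrying
# governance metadata. Owners, dates and authority levels are invented.

chunks = [
    {"domain": "discounting", "text": "Max 15% first-purchase discount.",
     "owner": "commercial", "last_reviewed": date(2025, 1, 10),
     "authority": "commercial"},
    {"domain": "discounting", "text": "Promotions must respect pricing law.",
     "owner": "compliance", "last_reviewed": date(2024, 11, 2),
     "authority": "regulatory"},
    {"domain": "recommendation", "text": "Lead with core range for new customers.",
     "owner": "retention", "last_reviewed": date(2025, 2, 1),
     "authority": "advisory"},
]

AUTHORITY_RANK = {"advisory": 0, "commercial": 1, "regulatory": 2}

def for_decision(domain: str) -> list[dict]:
    """Return the chunks relevant to one decision, highest authority first,
    so the agent weights regulatory constraints above preferences."""
    selected = [c for c in chunks if c["domain"] == domain]
    return sorted(selected, key=lambda c: -AUTHORITY_RANK[c["authority"]])

print([c["authority"] for c in for_decision("discounting")])
# ['regulatory', 'commercial']
```

The metadata is what makes the filtering and weighting possible at all: without an authority field, a three-year-old preference and a regulatory constraint retrieve identically.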
Keeping the content layer alive
Artefacts decay. For many organisations, building an optimised content layer will be hard, but maintaining it may be even harder.
Many of the rules used for knowledge base and information system management are still relevant, but with greater rigour around provenance, versioning, and evaluation:
Ownership must sit with subject-matter experts: Not in the AI team, not in a documentation function, but in the part of the organisation that makes the decisions the content encodes. If information ownership is delegated to people who do not live inside the decisions, the content layer becomes fiction.
Version management needs rigour: When an agent makes a bad decision, you need to know which version of which content informed it. This is not perfectionism, but the minimum requirement for meaningful oversight. For the same reasons, provenance matters: without it, everything may look equally binding to an agent.
Evaluation: The moment the content layer drifts from enacted reality, agent behaviour drifts with it – and unlike a human team, agents will not flag the contradiction. Detecting that drift requires systematic evaluation of whether the content agents are retrieving is still producing the outcomes you expect – a discipline that connects directly to the evaluation frameworks I have written about previously.
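One minimal mechanism that supports both the versioning and the evaluation discipline: have every agent decision record the content versions it retrieved, then periodically flag decisions made against content that has since changed. All identifiers below are illustrative:

```python
# Minimal drift check: each decision logs the content versions it used;
# a periodic sweep flags decisions that relied on now-superseded versions.
# All identifiers are invented examples.

current_versions = {"discount-policy": 4, "returns-policy": 7}

decision_log = [
    {"decision_id": "d-101", "used": {"discount-policy": 4}},
    {"decision_id": "d-102", "used": {"discount-policy": 3, "returns-policy": 7}},
]

def stale_decisions(log: list[dict], versions: dict) -> list[str]:
    """Return ids of decisions that relied on a content version no longer current."""
    return [
        entry["decision_id"]
        for entry in log
        if any(versions.get(doc) != v for doc, v in entry["used"].items())
    ]

print(stale_decisions(decision_log, current_versions))  # ['d-102']
```

This does not tell you whether the old decision was wrong, only that it was made on superseded context – which is exactly the trail you need when tracing a bad outcome back to its inputs.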
A practical starting point
Pick one bounded decision domain – something repeatable where the stakes are real. For Tarra, that might be the product recommendation decision: given a customer’s skin type and concern, which range does the agent lead with, what claims can it make, and what discounting is it permitted to offer?
Map what actually happens today (as opposed to what the policy says should happen). Write down the strategic choices that should govern the decision, the rules that constrain it, and the parameters that calibrate it. Identify the source content: not the sales deck or the internal summary, but the actual policy, the actual product data, the actual regulatory constraints. Structure that material so it can be retrieved and applied. Give it an owner who lives inside the decision.
You will learn more from making one decision domain machine-usable than from simply cataloguing twenty. And if you have ever built a data pipeline – extracted messy source data, transformed it into something queryable, and fought to keep it governed and current – the discipline will feel strangely, sadly familiar.
In my next post, I’ll look at Skills: how context becomes callable, testable procedures that agents can execute – and how teams need to organise to build and maintain them.
Sources
1. Anthropic – Effective context engineering for AI agents https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents
Finding Product Breaks valuable? Please consider sharing it with your friends - or subscribe if you haven’t already.