Systems Thinking for Product Managers: Viable System Model (VSM)
How applying the Viable System Model (VSM) to Product Management can help define system boundaries and identify structural weaknesses.
In previous articles, I explored how systems thinking techniques such as Soft Systems Methodology (SSM), System Dynamics (SD), and Critical Systems Heuristics (CSH) can help product managers understand complexity.
Related articles:
The basic principles of Systems Thinking and how it can be used in Product Management.
How applying Soft Systems Methodology (SSM) can help you understand complex, socially rooted problems (Peter Checkland).
How applying System Dynamics (SD) can help you understand complex situations and causality (Jay W. Forrester).
How applying Critical Systems Heuristics (CSH) to Product Management can help identify assumptions, boundaries and power dynamics (Werner Ulrich).
These approaches are powerful, but they focus primarily on understanding systems; they do not tell us whether the system itself is capable of functioning.
Viable System Model
The Viable System Model (VSM), developed by Stafford Beer in the 1970s, addresses that gap. Rather than asking how to improve processes, VSM asks a more fundamental question:
What must exist for a system to remain viable?
Beer argued that any viable system, whether an organisation, service or product ecosystem, must contain five interacting functions:
System 1: Operations - The parts that deliver value (teams, services, products)
System 2: Coordination - Mechanisms that prevent instability between operational units
System 3: Control - Internal governance, optimisation and resource allocation
System 4: Intelligence - The ability to sense and respond to the external environment
System 5: Policy - Purpose, identity and ultimate decision-making authority
These are not organisational layers. They are functions that must exist somewhere within the system. When they are missing, weak, or disconnected, the system may continue to operate but it will not produce coherent outcomes.
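These five functions can be treated as a simple diagnostic checklist when mapping a system. As a minimal sketch (the Python structure, field names and 0-3 rating scale are my own illustration, not part of Beer's model):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VSMFunction:
    """One of the five functions every viable system must contain."""
    number: int
    name: str
    role: str
    # Illustrative rating: 0 = missing, 1 = weak, 2 = adequate, 3 = strong
    strength: Optional[int] = None

VSM_FUNCTIONS = [
    VSMFunction(1, "Operations", "delivers value (teams, services, products)"),
    VSMFunction(2, "Coordination", "prevents instability between operational units"),
    VSMFunction(3, "Control", "internal governance, optimisation, resource allocation"),
    VSMFunction(4, "Intelligence", "senses and responds to the external environment"),
    VSMFunction(5, "Policy", "purpose, identity, ultimate decision-making authority"),
]
```

Rating each function during a mapping exercise makes gaps explicit: a function that no one can point to, or that rates as missing, is a structural finding in itself.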
Applying VSM
VSM is most useful as a diagnostic and design lens, rather than a rigid method. Its value in Product Management comes from helping us ask whether the system around a product, service or programme is actually capable of functioning coherently over time. A practical way to apply it is to work through the following steps.
Step 1: Define the system of interest
Start by deciding what system you are actually analysing. This matters more than it first appears. If the boundary is too narrow, important causes of failure will sit outside it. If it is too broad, the exercise becomes too vague to act on.
The system of interest might be:
a single product area
a service end to end
a programme spanning multiple teams
a wider operating model around a product
The key question is:
What is the smallest meaningful system that must function well in order for the intended outcome to be achieved?
For example, if a team owns only the frontend journey but the real problem depends on policy, operations or supplier constraints, then the actual system is larger than the product team itself.
Step 2: Clarify the system purpose
Before mapping structure, define what the system is there to do. This sounds obvious, but many product environments contain multiple competing versions of success:
delivery to plan
user value
policy compliance
operational efficiency
commercial return
VSM works best when you ask:
What is the core purpose of this system?
What outcome is it supposed to produce?
For whom?
Over what time horizon?
If there is no shared answer, that is already a structural signal. It often points to weakness in System 5: Policy, where purpose and identity should sit.
Step 3: Identify the operational units
Next, identify the parts of the system that actually do the work.
These are System 1: Operations. In product contexts, these might include:
delivery teams
service teams
operational teams
customer support functions
supplier teams
platform teams
regional or business units
The important thing is not your formal org chart, but the units that genuinely carry out value-producing activity.
Ask:
Where is value created or delivered?
Which teams or units are operationally distinct?
Which parts of the system need to function well day to day?
At this stage, many people discover that the system is more distributed than they assumed.
Step 4: Map coordination mechanisms
Once you know the operational units, look at how they are coordinated.
This is System 2: Coordination. Its role is to prevent instability, duplication, conflict and fragmentation between operational units.
In product environments, coordination mechanisms might include:
ceremonies across teams
shared standards
common definitions of done
design systems
architecture principles
roadmap alignment
dependency management
operating rhythms
Ask:
How do these units avoid working against one another?
Where are dependencies managed?
What keeps the overall system stable?
If the answer is mostly informal heroics, escalation, or “people just talk”, coordination is probably weak.
Step 5: Identify control and governance
Now look at how the system allocates resources, monitors performance and makes internal decisions.
This is System 3: Control. It includes the mechanisms that optimise the system internally and ensure operational units are functioning as expected.
In product settings, this may include:
portfolio governance
leadership forums
budgeting
performance review processes
prioritisation decisions
risk and assurance
delivery oversight
Ask:
Who decides what gets funded, prioritised or stopped?
How is performance monitored?
Where does operational accountability sit?
What happens when teams drift off course?
This step often reveals imbalance. Many organisations have very strong System 3 functions and relatively weak coordination or intelligence.
Step 6: Identify intelligence and adaptation
Then examine how the system senses external change and adapts.
This is System 4: Intelligence. It is responsible for looking outward and forward.
In product terms, this could include:
user research
market analysis
policy scanning
technology strategy
service performance trends
horizon scanning
experimentation and learning
Ask:
Who is looking beyond current delivery?
How does the system understand user, market, policy or environmental change?
How are future risks and opportunities brought into decisions?
A common pattern is that organisations gather research or insight, but it is disconnected from actual decision-making. That means System 4 exists in fragments, but is not functioning effectively.
Step 7: Identify policy, purpose and authority
Next, identify where the system’s overall identity and direction are defined.
This is System 5: Policy. It is concerned with purpose, values, strategic coherence and ultimate authority.
In practice, ask:
Who decides what this system is ultimately for?
Who resolves tensions between short-term delivery and long-term direction?
Where is the final authority on trade-offs?
Is there a coherent identity or just competing demands?
If no one can clearly answer these questions, the system may be operationally active but strategically incoherent.
Step 8: Look for gaps, overloads and distortions
Once the five functions have been mapped, look at where the system is structurally weak. Common patterns include:
strong operations but weak coordination
heavy governance but poor intelligence
unclear purpose with competing authorities
insight gathered but not acted on
responsibilities split across boundaries with no one holding the whole
This is the point where VSM becomes especially useful. Instead of saying “the team is struggling”, you can say something more precise, such as:
coordination between operational units is weak
policy is unclear
control dominates intelligence
the system boundary excludes a critical dependency
That shifts the conversation from blame to structure.
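If each function has been rated during the mapping (say on an illustrative 0-3 scale, where 0 is missing and 3 is strong), these patterns can be expressed as explicit checks. A hedged sketch; the thresholds and wording are mine, not a formal part of VSM:

```python
def diagnose(strengths):
    """Flag common structural weaknesses from per-function ratings.

    `strengths` maps the five function names ("operations", "coordination",
    "control", "intelligence", "policy") to a 0-3 rating.
    """
    findings = []
    if strengths["operations"] >= 2 and strengths["coordination"] <= 1:
        findings.append("strong operations but weak coordination")
    if strengths["control"] >= 2 and strengths["intelligence"] <= 1:
        findings.append("heavy governance but poor intelligence")
    if strengths["policy"] <= 1:
        findings.append("unclear purpose: weak or contested policy function")
    return findings

# Ratings roughly matching the public sector example described below
print(diagnose({"operations": 3, "coordination": 1, "control": 3,
                "intelligence": 1, "policy": 1}))
```

The point is not the code but the shift it encodes: each finding names a structural relationship between functions, rather than a judgement about any one team.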
Step 9: Test the system boundary
This is one of the most important steps. Once you have mapped the system, ask whether the boundary you drew is actually valid.
For example:
Are key decisions sitting outside the system?
Are important operational dependencies excluded?
Is the team being held accountable for outcomes it cannot influence?
Is policy or governance outside the analysed boundary even though it shapes performance?
If so, redraw the boundary and test again. This is where VSM becomes particularly useful for Product Managers in complex environments. It helps reveal when the “product system” is actually inseparable from policy, operations, supplier relationships or service design.
Step 10: Use the findings to redesign, not just diagnose
The final step is to decide what needs to change. This is where the exercise becomes practical. The goal is not just to produce a map, but to identify structural interventions such as:
introducing clearer cross-team coordination
strengthening user or market intelligence
clarifying decision rights
redefining the scope of the product system
reducing overbearing governance
aligning policy and delivery around a shared outcome
The question at the end is:
What changes would make this system more viable, not just more efficient?
That distinction matters, as a system can be highly efficient at producing the wrong outcomes.
A simple workshop approach
If applying this with a team, a straightforward structure is:
Define the system of interest.
Agree the system purpose.
List the operational units.
Map current coordination, control, intelligence and policy functions.
Identify missing or weak functions.
Test whether the system boundary is correct.
Prioritise structural improvements.
This can be done on a whiteboard or in Miro without needing to use all of the formal VSM language. In many cases, the concepts are more useful than the terminology.
Example: When delivery works but the system doesn’t
In a large public sector programme, multiple teams were delivering against a shared objective. Each team:
maintained a backlog
delivered regularly
reported progress through structured governance
From a delivery perspective, the programme appeared healthy: work was moving, milestones were being met and activity was visible. But when a simple question was asked:
“What has improved as a result of this work?”
The answer was unclear. Outcomes were inconsistent and progress towards the overarching goal was difficult to demonstrate.
Initial diagnosis: a delivery problem
The initial assumption was that delivery needed improvement. The focus turned to:
refining backlog structure
improving prioritisation
increasing reporting clarity
strengthening delivery discipline
These interventions improved visibility and consistency, but they did not change the underlying outcome.
Applying a VSM lens
Instead of continuing to optimise delivery, the programme was examined using the Viable System Model. The system of interest was defined as:
the set of teams, governance and supporting functions responsible for delivering the programme outcome
Each of the five VSM functions was then assessed.
System 1: Operations (present and functioning)
Delivery teams were active and capable.
work was being delivered
teams were structured
outputs were produced consistently
There was no fundamental issue with execution at a team level.
System 2: Coordination (weak)
Coordination between teams was limited.
dependencies were managed reactively
teams worked to local priorities
duplication and gaps appeared across workstreams
There was no strong mechanism ensuring the system behaved as a coherent whole.
System 3: Control (strong)
Governance and oversight were well established.
reporting structures were clear
progress was monitored closely
escalation routes existed
However, this control function focused on:
activity
delivery status
adherence to plan
rather than:
system effectiveness
outcome alignment
System 4: Intelligence (fragmented)
Insight into the external environment existed, but was not integrated.
user research was conducted inconsistently
policy and regulatory context evolved independently of delivery
learning was not systematically fed back into prioritisation
As a result, the system struggled to adapt.
System 5: Policy (unclear)
There was no single, shared definition of success. Different parts of the programme optimised for different outcomes:
delivery to plan
compliance
stakeholder expectations
local team objectives
This created a lack of coherence across the system.
The key realisation
At this point, the issue became clear. The problem was not that teams were failing to deliver. It was that:
the system they were operating within was not structurally capable of producing coherent outcomes
What changed as a result
The response shifted away from delivery optimisation towards structural change. Three key interventions followed.
1. Strengthening coordination (System 2)
Introduced cross-team planning and dependency mapping
Established shared artefacts and alignment points
Reduced duplication and fragmentation
2. Reconnecting intelligence to delivery (System 4 → System 3)
Regularly integrated user, policy and performance insight into prioritisation
Ensured external context influenced internal decisions
Shifted focus from activity to relevance
3. Clarifying purpose and decision authority (System 5)
Defined a shared outcome for the programme
Established clearer ownership of trade-offs
Aligned teams around a common direction
The result
The volume of delivery did not increase significantly. But:
work became more coherent
decisions became more consistent
outcomes became clearer and easier to demonstrate
The system had not been made more efficient; it had been made more viable.
Why this matters
Without applying a structural lens, the programme would likely have continued:
refining processes
improving reporting
increasing delivery discipline
All of which would have made the system appear healthier, while leaving the underlying issue unresolved.
What this means for product management
The example above is not unusual.
Across product organisations, a similar pattern emerges. Delivery is active, governance is visible, and yet outcomes remain inconsistent. The instinctive response is to improve execution: refine prioritisation, restructure backlogs, introduce clearer processes. But this assumes the system itself is capable of producing the intended outcome.
In many cases, it is not.
What appears as a delivery problem is often structural. Teams can be effective, disciplined and productive, while operating within a system that cannot produce coherent results. Improving execution in this context increases activity, not impact.
A key part of this is coordination. In many organisations, coordination is not designed; it is assumed. Teams are expected to align, but dependencies are managed reactively, duplication and gaps emerge, and coherence depends on individual effort rather than system structure. Without deliberate coordination, the system behaves as a collection of parts rather than a whole.
At the same time, control is often overrepresented. Governance, reporting and oversight are well established because they are easy to formalise. They provide visibility and reassurance. But they tend to focus on internal activity rather than external relevance. Meanwhile, intelligence (understanding users, policy context and environmental change) is fragmented or disconnected from decision-making. The result is a system that is well controlled, but poorly adapted.
The boundary of the system plays a central role in this. It defines what can be influenced, what can be improved and what remains out of reach. In many product environments, teams are held accountable for outcomes that depend on elements outside their defined system. Policy, operations, suppliers or governance sit beyond the boundary, yet shape the result. This is not a failure of delivery. It is a failure of system definition.
This is why improving efficiency alone is insufficient. Techniques such as prioritisation, roadmap refinement or value stream mapping can improve flow within a system, but they do not make an unviable system viable. If key functions are missing or misaligned, increasing efficiency will simply produce more output, more effort and the same outcomes.
This points to a different understanding of Product Management in complex environments. The role is not only to optimise delivery, but to define and shape the system itself. That means asking what the system includes, how it functions, where its boundaries sit, and whether it is capable of achieving its purpose.
Without that, product management risks becoming well-executed delivery within an unworkable system. And if the system is not viable, improving delivery will only produce more of the same.
Systems Thinking for Product Managers (to be continued...)
This is the fourth article in a series I'll release within Product Breaks, focusing on Systems Thinking methodologies and Product Management. Further articles will aim to cover the following topics:
Strategic Options Development and Analysis (SODA)
Hard Systems Thinking (HST)
Systems Failures Method (SFM)