Systems Thinking for Product Managers: Critical Systems Heuristics (CSH)
How applying Critical Systems Heuristics (CSH) to Product Management can help identify assumptions, boundaries and power dynamics.
In previous articles, I outlined:
The basic principles of Systems Thinking and how it can be used in Product Management.
How applying Soft Systems Methodology can help you understand complex, socially rooted problems.
How applying System Dynamics can help you understand complex situations and causality.
Critical Systems Heuristics (CSH)
Critical Systems Heuristics (CSH) was developed by Werner Ulrich in 1983. Ulrich observed that traditional systems design often assumed objectivity, overlooking the reality that every decision is shaped by boundaries, values, and power dynamics.
CSH was created to expose these hidden assumptions by asking structured “boundary questions” that highlight who benefits, who decides, whose knowledge counts and who is excluded. Rooted in critical social theory, its purpose is to make decision-making more transparent, inclusive, and accountable, particularly in complex environments where multiple stakeholders are affected.
In Product Management, CSH can be applied by regularly using its boundary questions to check whether the product’s vision, backlog, and metrics truly reflect the needs of all affected users, not just the most powerful stakeholders.
For example, Product Managers can use it within:
Product discovery – to surface whose needs are prioritised vs overlooked.
Roadmap shaping – to test if long-term goals reflect only dominant stakeholders or also marginalised users.
Backlog refinement – to validate whether stories serve broader citizen/user groups, not just policy or commercial pressure.
Retrospectives – to periodically revisit boundaries (“who’s missing from this conversation?”).
Applying CSH
There are 12 boundary questions, grouped into four categories:
Motivation (why the system/product exists)
Control (who has power and resources)
Knowledge (what counts as valid knowledge)
Legitimacy (who is affected and represented)
The questions within each category are designed to:
Expose hidden assumptions about beneficiaries, decision-makers, and legitimacy.
Challenge boundary choices (who/what is in or out of scope).
Support reflection on ethical, social, and systemic impacts of product decisions.
Encourage reframing so that design and delivery include those often overlooked.
Questions
The 12 boundary questions (a small code sketch for capturing them as a checklist follows the list).
Motivation
Who is (ought to be) the intended beneficiary of the system? Clarifies whose needs and interests are prioritised.
What is (ought to be) the purpose of the system? Makes explicit the claimed purpose versus hidden agendas.
What is (ought to be) the measure of improvement or success? Surfaces which metrics matter, and whether they serve all stakeholders.
Control
Who is (ought to be) the decision-maker? Identifies who holds authority and whether this is legitimate.
What resources are (ought to be) controlled by the decision-maker? Shows which resources or levers are available and who commands them.
What conditions are (ought to be) outside the decision-maker’s control? Acknowledges external constraints and limits of influence.
Knowledge
Who is (ought to be) considered an expert? Highlights which voices are valued for expertise.
What expertise is (ought to be) consulted, and why? Surfaces bias in which knowledge is prioritised.
What or who is (ought to be) assumed as the source of knowledge? Exposes reliance on particular data, models, or narratives.
Legitimacy
Who is (ought to be) affected but not involved? Brings forward marginalised or excluded stakeholders.
Who is (ought to be) the guarantor of those affected? Identifies who speaks or advocates for the excluded.
What worldview is (ought to be) assumed and legitimised? Surfaces underlying cultural, political, or ethical assumptions shaping the system.
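To make the checklist easy to reuse in discovery sessions or retrospectives, here is a minimal Python sketch of the 12 questions as a data structure, with room to record each question in both its 'is' and 'ought to be' modes. The class and field names (`BoundaryQuestion`, `is_answer`, `ought_answer`) are illustrative assumptions, not part of CSH itself.

```python
from dataclasses import dataclass

@dataclass
class BoundaryQuestion:
    category: str           # Motivation, Control, Knowledge or Legitimacy
    question: str           # asked in both 'is' and 'ought to be' modes
    is_answer: str = ""     # the situation as it currently is
    ought_answer: str = ""  # the situation as stakeholders feel it ought to be

# The 12 questions, grouped by the four categories above.
CSH_QUESTIONS = [
    BoundaryQuestion("Motivation", "Who is (ought to be) the intended beneficiary?"),
    BoundaryQuestion("Motivation", "What is (ought to be) the purpose?"),
    BoundaryQuestion("Motivation", "What is (ought to be) the measure of improvement or success?"),
    BoundaryQuestion("Control", "Who is (ought to be) the decision-maker?"),
    BoundaryQuestion("Control", "What resources are (ought to be) controlled by the decision-maker?"),
    BoundaryQuestion("Control", "What conditions are (ought to be) outside the decision-maker's control?"),
    BoundaryQuestion("Knowledge", "Who is (ought to be) considered an expert?"),
    BoundaryQuestion("Knowledge", "What expertise is (ought to be) consulted, and why?"),
    BoundaryQuestion("Knowledge", "What or who is (ought to be) assumed as the source of knowledge?"),
    BoundaryQuestion("Legitimacy", "Who is (ought to be) affected but not involved?"),
    BoundaryQuestion("Legitimacy", "Who is (ought to be) the guarantor of those affected?"),
    BoundaryQuestion("Legitimacy", "What worldview is (ought to be) assumed and legitimised?"),
]
```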
Process
To apply CSH within Product Management, you can work through the following steps (a short code sketch of steps 3 and 4 follows the list):
Frame the product context: Define the service or feature under consideration and clarify the decision or problem you want to explore.
Engage stakeholders: Involve a diverse mix of voices from delivery teams, leadership, users, and affected groups so that multiple perspectives are represented.
Work through the 12 boundary questions: Organise a workshop or structured exercise to capture both explicit and implicit assumptions.
Surface assumptions and tensions: Compare different stakeholder responses and identify contradictions or gaps.
Analyse implications: Reflect on how current boundary choices shape fairness, inclusion, and outcomes, paying attention to who is excluded or privileged.
Reframe product decisions: Adjust the vision, roadmap, backlog, or success metrics to address gaps and improve inclusivity.
Document and communicate findings: Use a concise method to share the findings with stakeholders for transparency.
Revisit: Review at key milestones so that boundary critique becomes a continuous improvement practice rather than a one-off exercise.
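As a rough illustration of steps 3 and 4, here is a minimal Python sketch that groups stakeholder answers by boundary question and flags the questions where answers diverge. The `surface_tensions` helper and the (stakeholder, question, answer) format are assumptions for illustration, not part of CSH itself.

```python
from collections import defaultdict

def surface_tensions(responses):
    """Group workshop answers by boundary question and flag divergence.

    `responses` is a list of (stakeholder, question, answer) tuples
    captured in step 3; any question answered differently by different
    stakeholders becomes a candidate tension for step 4.
    """
    by_question = defaultdict(dict)
    for stakeholder, question, answer in responses:
        by_question[question][stakeholder] = answer

    # Keep only the questions where stakeholders disagree.
    return {
        question: answers
        for question, answers in by_question.items()
        if len(set(answers.values())) > 1
    }

# Usage: the beneficiary question from the example scenario below.
responses = [
    ("Policy lead", "Who is the intended beneficiary?", "All claimants"),
    ("Delivery team", "Who is the intended beneficiary?", "Digitally literate claimants"),
    ("Housing charity", "Who is the intended beneficiary?",
     "Vulnerable groups least able to access digital services"),
]
for question, answers in surface_tensions(responses).items():
    print(question)
    for stakeholder, answer in answers.items():
        print(f"  {stakeholder}: {answer}")
```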
Example
To illustrate, we’ll use a fictional scenario and apply the first five steps, assuming an engaged and diverse set of stakeholders from step 2.
Product context
A government department is developing a digital portal for managing housing benefit applications. The aim is to reduce processing times and improve access for citizens, but there are competing pressures around efficiency, cost, and inclusivity.
12 boundary questions
Motivation
Beneficiaries: Policy leads say ‘all claimants’; the delivery team narrows it to ‘digitally literate claimants’; housing charities argue the main beneficiaries must be ‘vulnerable groups least able to access digital services’.
Purpose: Senior leaders stress ‘cost savings and efficiency’; the product team emphasises ‘improved user experience’; charities prioritise ‘ensuring no one is excluded from accessing entitlements’.
Success criteria: Finance teams cite ‘reduced processing cost per application’; user researchers propose ‘reduced failure demand and higher satisfaction scores’; caseworkers value ‘fewer manual interventions’.
Control
Decision-maker: The department’s digital director holds authority, but delivery teams highlight strong influence from Treasury funding conditions.
Resources controlled: IT budget and developers are department-controlled; housing charities highlight that ‘frontline support funding’ is not.
Constraints: Legislative rules around benefits, cybersecurity standards, and procurement frameworks are acknowledged as outside local control.
Knowledge
Expertise recognised: Policy and technical SMEs are seen as experts; user researchers argue ‘lived experience of claimants’ should also count.
Knowledge sources: Quantitative transaction data is prioritised; researchers and charities push for ‘ethnographic insights and community input’.
Assumptions about knowledge: Leaders assume ‘data dashboards give a complete picture’; advocates argue this misses hidden barriers such as low digital literacy.
Legitimacy
Affected but not involved: Vulnerable claimants with no internet access, people with language barriers, and those with disabilities are identified.
Guarantors of the affected: Housing charities and local councils see themselves as advocates but note they’re rarely included in decision-making.
Underlying worldview: The department frames benefits as a transactional service to be streamlined; advocates argue it should be seen as ‘a right and a lifeline for vulnerable citizens’.
Assumptions and tensions
Efficiency vs. equity: The department assumes digital automation automatically equals fairness, while advocates argue it risks excluding those most in need.
Knowledge hierarchy: Quantitative data is treated as “hard evidence,” while qualitative lived experience is sidelined, creating tension about what counts as valid input.
Representation gap: Vulnerable groups are acknowledged as affected but structurally excluded from design conversations.
Purpose clash: Cost savings are the official purpose, but teams working directly with claimants see citizen wellbeing as equally important.
Implications
If boundaries remain as currently assumed, the product risks becoming a ‘digital barrier’ rather than an enabler, cutting processing costs but worsening access for digitally excluded claimants. Failure demand may rise (more calls, complaints, appeals), undermining efficiency claims. Ignoring lived experience also creates reputational and ethical risks: the service could be criticised as discriminatory or non-compliant with accessibility standards.
Reframing assumptions by broadening “beneficiaries” to explicitly include vulnerable groups, recognising lived experience as valid expertise, and adjusting success metrics to include fairness would align the product with both efficiency and inclusivity, strengthening legitimacy and reducing long-term risks.
Insights
This exercise teaches us that assumptions about efficiency, data, and control can unintentionally create exclusion if they aren’t challenged. By surfacing hidden boundaries (who counts as a beneficiary, what evidence is considered valid, and who is absent from decision-making), we see that a product designed to “streamline” may in fact widen inequality.
It also shows that different stakeholders hold fundamentally different views of success: senior leaders may prioritise cost savings, while frontline staff and advocates prioritise citizen wellbeing. Unless these tensions are made explicit, the product risks drifting towards narrow objectives that undermine legitimacy.
Most importantly, the exercise highlights that inclusion is not automatic in digital transformation: it must be deliberately designed into goals, metrics, and governance. Applying CSH early reveals these gaps, giving Product Managers the chance to reframe decisions and balance efficiency with fairness, leading to services that are both effective and equitable.
Conclusion
Critical Systems Heuristics provides product managers with a structured way to make hidden boundaries, assumptions, and power dynamics visible in complex delivery contexts. Its process (framing the product, engaging diverse stakeholders, working through the boundary questions, surfacing tensions, analysing implications, and reframing decisions) ensures that decision-making is not dominated by narrow metrics or single perspectives.
The housing-benefit portal example illustrates how CSH exposes a clash between efficiency and equity: leaders valued cost savings, while advocates stressed inclusivity for vulnerable citizens. By surfacing assumptions (that digital equals fair access, or that dashboards tell the whole story), the process revealed risks of exclusion, reputational harm, and failure demand. Analysing these implications showed that reframing beneficiaries, success measures, and sources of expertise could align efficiency with fairness.
In short, CSH equips product teams to build not just effective services, but also legitimate and equitable ones. It reminds us that technology is never neutral: the boundaries we set determine who benefits, who is excluded, and how public value is defined.
Systems Thinking for Product Managers (to be continued...)
This is the fourth article in a series I’ll release within Product Breaks, focusing on Systems Thinking methodologies and Product Management. Further articles will aim to cover the following topics:
Viable System Model (VSM)
Strategic Options Development and Analysis (SODA)