Why We Built a Scoring Framework Instead of a Checklist

The simplest version of what we do would be a checklist. Does the brand use organic materials? Check. Does it pay fair wages? Check. Is it certified? Check. Three green ticks, approved, move on.

We did not build a checklist. We built a framework. The difference matters.

A checklist is binary. A brand either passes or it does not. This creates a problem immediately: what counts as "fair wages"? Fair by whose standard? The local minimum wage, a living wage benchmark, or the wage the artisan themselves considers fair for their time and skill? A checklist forces you to pick one definition and apply it universally. A framework lets you document what the brand claims, what evidence supports it, and how strong that evidence is.
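The difference can be made concrete. The sketch below contrasts a binary checklist with an evidence record; the field names and strength labels are illustrative assumptions, not the Clarity Index's actual schema.

```python
from dataclasses import dataclass

# A checklist collapses each criterion to a single boolean.
checklist = {"organic_materials": True, "fair_wages": True, "certified": True}
approved = all(checklist.values())  # pass/fail, no nuance

# A framework records the claim, the evidence behind it, and how strong
# that evidence is. Field names here are hypothetical illustrations.
@dataclass
class Claim:
    statement: str   # what the brand claims
    evidence: str    # what supports the claim
    strength: str    # e.g. "verified", "self-reported", "unsupported"

wages = Claim(
    statement="pays a living wage",
    evidence="third-party wage audit, 2024",
    strength="verified",
)
```

The checklist answers one question; the record preserves three, so a reader can weigh the claim against the evidence rather than accept a verdict.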

The Clarity Index operates across five dimensions, but the critical insight is not the dimensions themselves. It is how they interact.

A brand might score exceptionally well on environmental claims. Organic materials, verified certifications, published data on water usage and waste reduction. But that same brand might score poorly on governance: no published supplier list, no named leadership, no public reporting. The environmental score does not erase the governance gap. Both exist simultaneously, and both matter.

A checklist would give that brand a passing grade because it checked enough boxes. The Clarity Index shows the full picture, strengths and gaps together, so that you can decide what matters most to you.
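A minimal sketch of that contrast, with invented dimension names and thresholds (the actual dimensions and scoring rules are not reproduced here):

```python
# Hypothetical score profile: dimension names and values are assumptions
# for illustration only. The point is that strengths must not average
# away gaps.
profile = {
    "environment": 92,  # strong, verified environmental evidence
    "governance": 31,   # no supplier list, no public reporting
}

# A checklist-style aggregate hides the gap behind a single verdict:
passing = sum(profile.values()) / len(profile) >= 60  # 61.5 passes

# Reporting each dimension separately keeps the gap visible:
gaps = {dim: score for dim, score in profile.items() if score < 60}
```

Here `passing` comes out `True` even though governance scored 31; only the per-dimension view surfaces the gap.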

The other problem with checklists is that they are static. A brand earns its checkmarks once and carries them forward indefinitely. But practices change. Ownership changes. Certifications expire. Supply chains shift. What was true two years ago may not be true today.

The Clarity Index is designed to be re-evaluated. When the evidence changes (a certification lapses, ownership transfers, new information surfaces), the assessment updates. This is not a one-time stamp of approval. It is an ongoing evaluation that reflects the current state of verifiable evidence.
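In practice, re-evaluation means evidence carries a shelf life. A minimal sketch, assuming each piece of evidence records an expiry or review date (the source names and dates below are hypothetical):

```python
from datetime import date

def still_valid(evidence: dict, today: date) -> bool:
    """Evidence counts only while it is current."""
    return evidence["expires"] >= today

evidence = [
    {"source": "organic certification", "expires": date(2023, 6, 1)},
    {"source": "published supplier list", "expires": date(2026, 1, 1)},
]

today = date(2025, 3, 15)
current = [e for e in evidence if still_valid(e, today)]
# The lapsed certification drops out of the evidence base, and the
# assessment is recomputed from what remains.
```

The design choice is that validity is checked at read time, not stamped at write time: nothing earned two years ago survives on its own.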

This design choice was intentional, and it reflects a broader principle: measurement systems should describe reality, not simplify it into a verdict.

We borrowed this thinking from data infrastructure, where the challenge is often the same. Administrative data, operational records, heterogeneous sources with undocumented gaps. You have to establish what the data can and cannot support before you build anything on top of it. You define the grain. You identify the entity. You document the constraints. Then you build outward.

The Clarity Index follows the same logic. Before scoring a brand, we define what evidence is admissible and what is not. We catalog every source type, assign reliability tiers, and establish rules for recency, verifiability, and exclusion. The evidence taxonomy is published. The rules are explicit. Nothing is scored on intuition.
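An admissibility rule of this kind can be sketched as a simple filter. The tier names, the three-year recency window, and the source types below are assumptions for illustration, not the published taxonomy:

```python
# Hypothetical reliability tiers: lower number = more reliable,
# None = excluded outright.
RELIABILITY_TIERS = {
    "third_party_audit": 1,
    "certification_body": 1,
    "brand_disclosure": 2,
    "marketing_copy": None,
}
MAX_AGE_YEARS = 3  # assumed recency window

def admissible(source_type: str, age_years: float, verifiable: bool) -> bool:
    """Apply exclusion, recency, and verifiability rules in order."""
    tier = RELIABILITY_TIERS.get(source_type)
    if tier is None:
        return False          # excluded source types never count
    if age_years > MAX_AGE_YEARS:
        return False          # stale evidence fails the recency rule
    return verifiable         # must be independently checkable
```

Because the rules are explicit data and code rather than judgment calls, two people applying them to the same source reach the same answer.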

This makes the framework harder to build and slower to apply than a checklist. It also makes it defensible. When we say a brand is a Verified Leader, that designation traces back through specific evidence, from specific sources, evaluated against specific criteria. You can follow the trail.

A checklist tells you what someone decided. A framework shows you how they decided it, and lets you decide whether you agree.