
Kanban Metrics: Lead Time and Throughput

Full Content

Quick Answer (Featured Snippet):
Kanban metrics are quantifications of workflow (how much, how long, variability). Key metrics: lead time (idea → ready), throughput (items/period), WIP (work in progress), CFD (flow visualization), predictability (lead time variation), SLA (items on time). Interpreted together, they reveal bottlenecks, efficiency, and predictability.

TL;DR (5 bullets):
Lead time: average time from idea to completion (days). Target: reduce by 30-50%.
Throughput: items completed per period (per week). Target: increase or stabilize.
CFD: visualization of flow over time. Bottlenecks are where the band does not progress.
Variability: some days 2 items, others 8? Target: stabilize.
Predictability: can I say “next release in 30 days”? Target: yes, ±15%.


Full Article

Why Metrics Matter

Scenario: Team A says, "We're fast, we can do the feature in a week." Team B says, "So can we, a week." But:

  • Team A: some features in 3 days, others in 3 weeks (variable)
  • Team B: every feature in 8-9 days (consistent)

Without metrics, you don't see the difference. With metrics:

  • Team A: average 8 days, max 21 days (variable, unpredictable)
  • Team B: average 8 days, max 10 days (stable, predictable)

Team B is more reliable (same average time, fewer surprises).


The 6 Essential Metrics

1. Lead Time (Total Time)

Definition: time from the start of the order to delivery to the customer.

Timeline of Feature X:

Jan 1 (idea)
  ↓
Jan 5 (enters the backlog)
  ↓
Jan 12 (approved, ready for dev)
  ↓
Jan 19 (dev finishes)
  ↓
Jan 26 (tests pass)
  ↓
Feb 2 (in production)
  ↓
Feb 3 (customer sees it)

LEAD TIME: Jan 1 → Feb 3 = 33 days

Interpretation:
– Average lead time: 20 days
– Min: 5 days
– Max: 45 days
– 95th percentile: 35 days

Action: “95% of features are released within 35 days. We can confidently promise 35 days.”
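These statistics fall out of a simple list of recorded lead times. A minimal sketch in Python, using illustrative (hypothetical) lead times and a nearest-rank percentile:

```python
from statistics import mean

# Hypothetical lead times in days for completed features (illustrative data,
# not the article's exact numbers)
lead_times = [5, 8, 12, 15, 18, 20, 21, 24, 28, 45]

def percentile(values, pct):
    """Nearest-rank percentile: value at position ceil(n * pct / 100)."""
    ordered = sorted(values)
    rank = -(-len(ordered) * pct // 100)  # ceiling division
    return ordered[max(rank, 1) - 1]

print(f"average: {mean(lead_times):.1f} days")                 # 19.6
print(f"min: {min(lead_times)}, max: {max(lead_times)} days")  # 5, 45
print(f"95th percentile: {percentile(lead_times, 95)} days")   # 45
```

The 95th percentile, not the average, is the number you can safely promise to stakeholders.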


2. Cycle Time (Work Time)

Definition: time from the start of work to delivery.

Same Feature X:

Jan 12 (moves to "In Progress")
  ↓
Jan 26 (tests pass, ready)

CYCLE TIME: Jan 12 → Jan 26 = 14 days
WAITING TIME: Lead Time − Cycle Time = 33 − 14 = 19 days

Insight: 19 days waiting (in backlog, after approval, in test queue). Opportunity: eliminate waiting.

Action: "If you reduce the wait time from 19 to 5 days, the lead time drops to 19 days (vs. 33)."
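The lead/cycle/waiting split is plain date arithmetic. A sketch using Python's `datetime`, with the dates from the Feature X example (the year is an assumption for illustration):

```python
from datetime import date

# Dates from the Feature X example (year 2025 assumed for illustration)
idea       = date(2025, 1, 1)   # idea logged
work_start = date(2025, 1, 12)  # card moved to "In Progress"
work_done  = date(2025, 1, 26)  # tests pass, ready
delivered  = date(2025, 2, 3)   # customer sees it

lead_time = (delivered - idea).days         # 33 days
cycle_time = (work_done - work_start).days  # 14 days
waiting_time = lead_time - cycle_time       # 19 days
print(lead_time, cycle_time, waiting_time)
```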


3. Throughput

Definition: number of items completed per period.

Week 1: 5 features completed
Week 2: 6 features
Week 3: 4 features (QA on vacation)
Week 4: 7 features

Average Throughput: 5.5 features/week
Variation: 4-7 (good stability)

Interpretation:
– If throughput is stable (5-7), I can promise "5-7 features per week."
– If it varies greatly (2-10), it is difficult to make promises.

Action: “Measure throughput for 4-8 weeks, find variation, eliminate causes.”
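Average and range are all you need to start. A sketch using the weekly numbers from the example above:

```python
from statistics import mean

# Weekly throughput from the example above
weekly = [5, 6, 4, 7]  # features completed per week

avg = mean(weekly)                 # 5.5 features/week
lo, hi = min(weekly), max(weekly)  # range 4-7
print(f"average: {avg} features/week, range: {lo}-{hi}")
```

Promise the range (4-7), not the average, until the variation shrinks.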


4. Work in Progress (WIP)

Definition: number of simultaneous items in progress.

Monday: 8 items in Dev, 4 in Testing, 2 in Deploy = 14 WIP
Tuesday: 9 items in Dev (someone started one), 3 in Testing = 12 WIP
Wednesday: 7 items in Dev (2 completed), 5 in Testing = 12 WIP

Ideal WIP for Dev: 8 (based on team size)
Current WIP: 7-9 (great, on target)

Interpretation:
– Aligned WIP = predictable lead time
– High WIP = long lead time, low quality

Action: WIP limits, automatic enforcement.


5. Cumulative Flow Diagram (CFD)

Chart showing the evolution of items by column:

Y-axis: # items
X-axis: time (weeks)

Backlog: growing (there is always more demand)
Ready: stable (WIP limit)
In Progress: stable (team capacity)
Testing: GROWING (band getting wider = bottleneck!)
Done: growing slowly

Insight: Testing is the bottleneck. The band is widening; items are getting stuck.

Action: “Increase testing capacity (hire QA, automation) or reduce Dev WIP.”
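A CFD is built from cumulative counts: at each snapshot, how many items have reached at least each stage. The width of a band is the difference between adjacent cumulative lines, and a widening band marks the bottleneck. A sketch with hypothetical weekly snapshots:

```python
# Cumulative counts per stage at each weekly snapshot (hypothetical data):
# an item that has reached Done is also counted in Testing, In Progress, etc.
done        = [2, 4, 6, 7]
testing     = [4, 7, 11, 15]
in_progress = [8, 11, 14, 17]

# Band width = items currently sitting in that stage
testing_band = [t - d for t, d in zip(testing, done)]
dev_band = [p - t for p, t in zip(in_progress, testing)]

print(testing_band)  # [2, 3, 5, 8] -> widening: Testing is the bottleneck
print(dev_band)      # [4, 4, 3, 2] -> stable/shrinking: Dev is keeping up
```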


6. Predictability (Lead Time Variance)

Definition: How much does the lead time vary? Target: Little variation = predictable.

Feature lead times across 20 completed items:

Distribution:
10 days: 1 item (5%)
15 days: 8 items (40%)
20 days: 8 items (40%)
25 days: 2 items (10%)
30 days: 1 item (5%)

Analysis:
Mean: 18.5 days
Median: 20 days
Standard deviation: ~4.5 days
Percentiles:
  50th (median): 20 days
  85th: 20 days
  95th: 25 days

Predictability: "95% of features ship within 25 days. We can promise that."

Auxiliary metric: Coefficient of Variation

CV = (Standard Deviation / Mean) × 100
   = (4.5 / 18.5) × 100 ≈ 24%

Interpretation:
< 20%: excellent predictability
20-40%: good
40-60%: fair
> 60%: low predictability
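These figures can be recomputed directly from the raw lead times (a sketch; `pstdev` is the population standard deviation, and exact figures depend on rounding):

```python
from statistics import mean, median, pstdev

# The 20-item distribution above, expanded into raw values
lead_times = [10]*1 + [15]*8 + [20]*8 + [25]*2 + [30]*1

mu = mean(lead_times)       # 18.5 days
sigma = pstdev(lead_times)  # 4.5 days
cv = sigma / mu * 100       # ~24% -> falls in the "good" band (20-40%)
print(f"mean={mu}, median={median(lead_times)}, std={sigma:.1f}, CV={cv:.0f}%")
```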

Secondary Metrics (Nice-to-Have)

Service Level (SLA Compliance)

Target: 95% of features ship in < 25 days
Current: 90% ship in < 25 days

Performance: 90/95 ≈ 95% SLA attainment (below target)
Action: increase capacity or reduce scope

Quality Rate

Defects after release: 3 bugs in 10 features
Quality rate: (10 − 3) / 10 = 70%
Target: > 95%

Action: increase testing, improve code review

Escaped Defects (bugs that made it into production)

Regression rate: 15% of features have bugs reported in prod
Trend: previously 20%, now 15% (improving)

How to Interpret & Act

Scenario 1: Increasing Lead Time

Weeks 1-4: average 15 days
Weeks 5-8: average 18 days
Weeks 9-12: average 22 days
Trend: increasing

Potential causes:
1. WIP increased (more items in parallel)
2. Complexity increased (bigger features)
3. Test queue grew (QA bottleneck)
4. Integration became more complex

Action: use the CFD to see where the delay is (which column is slow?)

Scenario 2: Unstable Throughput

Week 1: 8 features
Week 2: 3 features (bug, team redirected)
Week 3: 10 features (catching up)
Week 4: 5 features (shortage of ready stories)

Variation: 3-10 (over 3× spread, bad)

Action:
- Spike: investigate week 2 (what was the bug?)
- Planning: always have 2+ weeks of stories ready
- Forecasting: don't promise 8, promise 5-10 (a range)

Scenario 3: CFD with Bottleneck in Testing

The CFD shows:
- Backlog: growing (normal)
- Dev: flat (stable)
- Testing: GROWING (stuck!)
- Done: very slow

Insight: items finish Dev and sit in the Testing queue.

Causes:
1. QA overloaded (1 QA, 5 devs)
2. Slow manual tests (no automation)
3. Flaky tests (need re-runs)

Action:
A) Hire 1 QA (capacity)
B) Automation framework (speed)
C) Pair testing (dev + QA, simultaneously)

For Technicians:

Calculations and formulas:

Lead Time = exit date − entry date
Cycle Time = exit date − work start date
Waiting Time = Lead Time − Cycle Time

Throughput = # items completed / period
Average Throughput = sum(weekly throughput) / # weeks

WIP (Work in Progress) = count(items not in "Done")

CFD Area = integral of WIP over time

Standard Deviation = sqrt(sum((value − mean)^2) / count)
Coefficient of Variation = std_dev / mean

Percentile = value at position ceil(n × percentile / 100)
  e.g., 95th percentile of 20 items = item #19

Tracking data:

Per item:
- id, title, status, start_date, end_date
- type (feature, bug, techdebt)
- size_estimate
- actual_effort
- completed_date
- defects_found_post_release

Aggregations:
- Daily/weekly WIP snapshot (how many items in each column)
- Lead time distribution (histogram)
- CFD (cumulative)
- SLA compliance (% on time)

Checklist: Implementing Kanban Metrics

  • [ ] Set collection period: 4-8 weeks minimum (outliers stabilize)
  • [ ] Tracking: all items have start_date, end_date
  • [ ] Tools: AgilePlace, Jira, or spreadsheet with automation
  • [ ] Calculations: lead time, cycle time, throughput, variability
  • [ ] Visualizations: lead time histogram, CFD, throughput trend
  • [ ] Reviews: weekly (team), monthly (management), quarterly (exec)
  • [ ] Action: metrics reveal problem, team proposes improvement
  • [ ] Validation: measure the impact of the improvement in the next collection

If You Only Do 3 Things...

  1. Track lead time: average, min, max, percentiles. Focus on the 95th percentile (what you can promise).

  2. Plot CFD: clear visualization of where the bottleneck is (which column is widening?).

  3. Measure throughput + variation: how many items do we complete per week? Is it stable? If not, investigate causes.


Frequently Asked Questions

Q: Which metric is the most important?
A: Lead time. Everything revolves around it: shorter lead time = more deliveries, better forecasting, fewer bottlenecks.

Q: Should I focus on throughput or lead time?
A: Both. Throughput without short lead time is “doing a lot, slowly.” Lead time without stable throughput is “fast, but unpredictable.”

Q: How do I explain CFD to an executive?
A: “Widening band = bottleneck. Item leaving the band quickly = good flow. Band reaching ‘Done’ slowly = long lead time.”

Q: How long until I see improvement in metrics?
A: 2–4 weeks (quick changes). 8–12 weeks (deep optimizations). Trends appear in 4–8 weeks.


Reading & References

  • Little's Law in Queueing Theory
  • Kanban: Successful Change Management (David Anderson)
  • Planview AgilePlace Metrics Guide

Final CTA:

“Kanban metrics are blindingly obvious when someone shows them to you. We implement a metrics framework that reveals bottlenecks, justifies action, and proves value. 2-hour workshop: we map your team's metrics and start tracking. Schedule now.”


Related Reading

Eduardo Salerno
Eduardo Salerno is a specialist in IT portfolio and project management, with extensive experience in Planview implementations and digital transformation. At TWRT, he leads initiatives that connect business strategy with technological execution.