Purpose: Document the validation process before publishing the model, and record residual limitations so practitioners and managers understand the model's known boundaries.
Rule: A model that has been agreed upon has not been validated. Stakeholder approval is not a substitute for empirical evidence; complete all three tests below before publishing.
Test 1 — Exceptional Performer Test
Can the model describe your highest performers distinctively?
Take your 3 highest-performing practitioners — not average high performers, but outliers. Do their distinctive behavioral signatures appear in the Advanced column of the Behavioral Indicator Matrix?
| High-Performer Signature (from Phase 1) | Captured in Model? | Location (Competency + Level) | Gap Action |
|---|---|---|---|
Test 2 — Diverse High-Performer Test
Does the model accommodate all paths to high performance?
High performers in CS roles succeed through different configurations. A model that only works for the modal high performer systematically undervalues practitioners who succeed through alternate configurations.
| High-Performer Archetype | Model Accommodation — How Does the Model Describe This Path? | Status |
|---|---|---|
| Technical depth specialist | ||
| Relationship density specialist | ||
| Analytical rigor specialist | ||
Test 3 — Manager Calibration Test
Can the model be applied consistently by different managers?
Have 3 managers independently rate the same practitioner using the Behavioral Indicator Matrix, then calculate inter-rater agreement for each competency. Target: ≥ 70% agreement. Below 50% requires a Phase 3 rebuild for the affected domains.
| Competency | Manager A | Manager B | Manager C | Agreement % | Status |
|---|---|---|---|---|---|
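The agreement percentage can be computed in more than one way; a minimal sketch in Python, assuming categorical rating levels and using pairwise percent agreement among the three managers (the competency names and ratings below are hypothetical, not taken from any actual model):

```python
from itertools import combinations

def pairwise_agreement(ratings):
    """Percent of rater pairs that assigned the same level for one competency."""
    pairs = list(combinations(ratings, 2))
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# Hypothetical ratings: competency -> [Manager A, Manager B, Manager C]
matrix = {
    "Discovery": ["Advanced", "Advanced", "Advanced"],
    "Escalation handling": ["Advanced", "Advanced", "Proficient"],
    "Renewal strategy": ["Advanced", "Proficient", "Developing"],
}

for competency, levels in matrix.items():
    pct = pairwise_agreement(levels)
    status = "Pass" if pct >= 70 else ("Rebuild" if pct < 50 else "Review")
    print(f"{competency}: {pct:.0f}% -> {status}")
```

Note that with only three raters and categorical levels, pairwise agreement per competency can only be 100%, 33%, or 0%; averaging across all competencies (or using a chance-corrected statistic such as Fleiss' kappa) gives a more stable overall figure.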
Additional Checks
| Check | Status | Notes |
|---|---|---|
| All Phase 1 role responsibilities are mapped to at least one competency in the model | | |
| No significant behavioral indicator overlap across competencies | | |
| Stakeholder review completed | | |
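The first check above, that every Phase 1 responsibility maps to at least one competency, is easy to automate as a set difference. A minimal sketch in Python, with all responsibility and competency names hypothetical:

```python
# Hypothetical Phase 1 responsibility inventory (illustrative names only).
phase1_responsibilities = {
    "Run onboarding sessions",
    "Own renewal conversations",
    "Triage escalations",
}

# Hypothetical mapping: competency -> responsibilities it covers.
competency_map = {
    "Customer education": {"Run onboarding sessions"},
    "Commercial ownership": {"Own renewal conversations"},
}

# Any responsibility not covered by some competency is a model gap.
covered = set().union(*competency_map.values())
unmapped = phase1_responsibilities - covered
print("Unmapped responsibilities:", sorted(unmapped))
```

Running this against the real Phase 1 inventory turns the checklist row into a pass/fail result: an empty `unmapped` set means the check passes.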
Final Model Status