How Bad Is 0.22? Putting PE CEO Selection Accuracy in Context
The best ML model achieves d-prime of 0.22 and AUC of 0.562 -- ranking the successful CEO higher only 56.2% of the time vs 50% for a coin flip.
Verata Research
2025-04-20

The Finding
The best-performing machine learning model in our study achieves a d-prime of 0.22 and an AUC of 0.562. In practical terms, this means the model ranks the successful CEO higher than the unsuccessful one only 56.2% of the time. A coin flip gets you 50%.
To appreciate how small this number is, consider that d-prime is the standard signal detection metric used across psychology, radiology, and engineering. A d-prime of 0.00 means pure chance -- no ability to distinguish signal from noise. A d-prime of 1.0 is considered moderate discrimination. Our best model, trained on every observable career characteristic in 12,174 PE-backed CEO appointments, lands at 0.22. That is not moderate. That is not even weak in the conventional sense. It is barely distinguishable from random.
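Under the standard equal-variance Gaussian model of signal detection, d-prime maps directly onto AUC via AUC = Phi(d-prime / sqrt(2)), where Phi is the standard normal CDF. The sketch below (illustrative, not the study's code) uses only the Python standard library to show where 0.22 falls on that scale:

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def dprime_to_auc(d: float) -> float:
    """Equal-variance Gaussian model: AUC = Phi(d-prime / sqrt(2))."""
    return phi(d / math.sqrt(2.0))

for d in (0.0, 0.22, 1.0):
    print(f"d-prime {d:.2f}  ->  AUC {dprime_to_auc(d):.3f}")
# d-prime 0.00 -> AUC 0.500 (pure chance)
# d-prime 0.22 -> AUC 0.562 (the study's best model)
# d-prime 1.00 -> AUC 0.760 ("moderate" discrimination)
```

Note that 0.22 sits roughly a fifth of the way from chance to "moderate" on the d-prime scale, which is exactly why the AUC lands so close to 0.5.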
The AUC of 0.562 tells the same story from a different angle. AUC measures the probability that a classifier ranks a randomly chosen positive instance higher than a randomly chosen negative one. Perfect discrimination is 1.0; chance is 0.5. Six percentage points above chance is the total predictive power of every observable career characteristic combined -- education, prior titles, employer pedigree, industry experience, tenure patterns, all of it.
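That ranking interpretation is easy to check by simulation. The sketch below (synthetic scores, not the study's data) draws "successful" and "unsuccessful" model scores from Gaussians separated by 0.22 standard deviations and counts how often the successful one ranks higher:

```python
import random

random.seed(0)
D_PRIME = 0.22    # separation between the two score distributions
PAIRS = 200_000

# Count pairs in which the "successful" score outranks the "unsuccessful" one.
wins = sum(
    random.gauss(D_PRIME, 1.0) > random.gauss(0.0, 1.0)
    for _ in range(PAIRS)
)
print(f"empirical AUC: {wins / PAIRS:.3f}")  # lands close to 0.562
```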
Why This Matters
Private equity firms routinely pay $100,000 to $500,000 per executive search engagement. The implicit promise behind that fee is that a structured evaluation of career credentials will identify candidates with a meaningfully higher probability of success. Our data shows the ceiling on that promise is six percentage points above a coin flip -- an AUC of 0.562 against a chance baseline of 0.500.
Imagine a diagnostic test that ranks a sick patient above a healthy one only 56% of the time. No hospital would rely on it. No regulator would approve it. No patient would consent to treatment based on it. Yet this is the accuracy ceiling for the most sophisticated credential-based model we could build, using the same resume signals that drive virtually every CEO search in the industry.
The implications extend far beyond search firm economics. Every time a PE operating partner screens out a candidate because they lack a prior CEO title, or ranks one candidate above another because of a prestige employer on their resume, they are implicitly claiming that these signals carry meaningful predictive power. A d-prime of 0.22 says they do not -- or more precisely, that the signal is so weak it is practically indistinguishable from noise at the individual decision level.
What the Data Shows
We tested every reasonable model architecture against the full dataset of 12,174 PE-backed CEO appointments spanning 18 years. Logistic regression, random forests, gradient-boosted trees, and neural networks all converge on the same result: d-prime hovers near 0.22, and no model reliably exceeds an AUC of 0.57.
The feature importance analysis is equally revealing. The model assigns non-trivial weight to era/timing, prior role tenure, and industry match -- but even the most predictive individual feature contributes less than you would need for actionable discrimination. When you combine all features, you get a model that is technically better than chance in aggregate but functionally useless for any individual hiring decision.
- AUC 0.562: The model ranks the successful CEO higher 56.2% of the time
- d-prime 0.22: Barely above the 0.00 threshold for random classification
- No model architecture breaks through: Logistic regression, random forest, XGBoost, and neural nets all plateau at the same ceiling
- All observable career features included: Education, prior titles, employer brand, industry tenure, role progression -- none push the needle
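The plateau effect described above can be sketched with a toy experiment, assuming scikit-learn is available. The data below is synthetic (hypothetical features and a made-up 0.14 coefficient chosen to target an AUC near 0.56, not the study's dataset); the point is that when the underlying signal is this weak, different architectures converge on roughly the same ceiling:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000
X = rng.normal(size=(n, 10))                  # 10 synthetic "career" features
# Outcome is mostly noise: one feature carries a weak 0.14 loading.
y = (0.14 * X[:, 0] + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    score = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y_te, score):.3f}")
```

The exact numbers depend on the seed and sample size; what matters is that all three cluster near the same value, because no architecture can extract signal that is not there.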
The Counterargument
The most common pushback we hear is that 0.22 represents the floor, not the ceiling -- that experienced search consultants and operating partners bring judgment, pattern recognition, and reference-based insights that a model cannot capture. This is a reasonable hypothesis, and it may even be true. But it shifts the burden of proof in an important direction.
If the value of executive search lies in factors that are not captured by observable career characteristics, then the industry should stop selling those characteristics as the basis for candidate selection. The typical search spec -- prior CEO title required, top MBA preferred, 15+ years in vertical -- is a credential filter. If credentials do not predict outcomes, then the credential filter is not adding value. Whatever the search consultant brings beyond the filter may matter, but the filter itself is demonstrably noise.
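The last sentence can be made concrete with a toy simulation (hypothetical numbers, standard library only): when a binary credential is assigned independently of the outcome, filtering on it shrinks the candidate pool without moving the success rate.

```python
import random

random.seed(1)
N = 100_000

# Each candidate: (holds a prestige credential?, ultimately succeeds?)
# The two are drawn independently -- the null case the data points to.
pool = [(random.random() < 0.3, random.random() < 0.5) for _ in range(N)]

base_rate = sum(success for _, success in pool) / N
filtered = [success for has_cred, success in pool if has_cred]
filtered_rate = sum(filtered) / len(filtered)

print(f"success rate, full pool:     {base_rate:.3f}")
print(f"success rate, filtered pool: {filtered_rate:.3f}")
# Roughly 70% of candidates were screened out; the success rate barely moves.
```

The filter here has a real cost (a much smaller pool) and no benefit, which is the situation a non-predictive credential spec creates.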
More importantly, the 0.22 figure represents the best case for credential-based prediction. We gave the model every advantage: a large dataset, multiple architectures, extensive feature engineering. If there were a strong signal hiding in the resume, we would have found it. The signal exists somewhere -- some CEOs genuinely do outperform others -- but the resume does not contain it. This is actually good news. It means the search for better prediction should focus on what resumes cannot capture, not on more sophisticated ways to read them.
What This Means for Your Firm
A d-prime of 0.22 should change how you allocate resources and evaluate risk in CEO selection. If the observable career characteristics that dominate search specifications buy barely six percentage points of ranking accuracy above chance, then the time and money spent filtering on those characteristics is largely wasted.
This does not mean CEO selection is hopeless. It means the industry is looking in the wrong place. The predictive signal -- if it exists -- lives in factors like contextual fit, problem-solving approach, team dynamics, and strategic alignment with the specific value creation plan. These are harder to measure than a resume credential, but they are at least pointed in the right direction.
- Stop treating resume credentials as predictive: A d-prime of 0.22 means they are not, in any practically meaningful sense
- Reallocate diligence time: Shift from credential verification to structured assessment of how a candidate would approach the specific problems your portfolio company faces
- Widen your candidate pool: If credentials do not predict outcomes, you are artificially narrowing the field by filtering on them
- Demand evidence from your search partners: Ask what their placement-to-successful-exit rate is. If they cannot answer, the engagement is built on belief, not data
Get the Full Research Report
This insight is from "From Pedigree to Performance" -- the complete analysis of 12,174 CEO appointments. Download the full report with methodology, statistical tables, and recommendations.
Related Insights
Less Than 1% of PE Exit Outcomes Are Explained by CEO Background
The flagship finding: all observable CEO career traits combined explain less than 1% of PE exit variance across 12,174 appointments.
The Flatline: Every CEO Trait's Effect on Exit Outcomes
A forest plot of every testable CEO trait shows nearly every confidence interval crosses the 1.0 'no effect' line. The resume doesn't differentiate outcomes.
The Vanishing Findings: What Happens When You Apply Real Statistics
22 CEO traits narrowed to 9 with raw significance, 4 after FDR correction, and just 2 that survived era-robustness checks. Medical-grade rigor applied to CEO research.
Ready to Move Beyond Resume-Based Selection?
See how Verata helps PE firms make better executive hiring decisions with relationship intelligence.