Methodology: how we source and calculate platform engineering cost figures
Every number on this site traces back to a public source. This page lists the sources, the formulas they feed into, what we deliberately do not publish, and when the figures are reviewed.
Most platform engineering "cost benchmarks" on the open web come from vendor blogs whose authors have an interest in the number landing high. We try to do the opposite: pull from public datasets, show our working, and refresh when the underlying inputs change rather than on a marketing cadence. This page is the audit trail.
Sources
1. Wage data
Salary figures (the salary page tables and the cost-side inputs to the calculator) are reconciled across six US-market sources:
- BLS Occupational Employment and Wage Statistics (OEWS). US Bureau of Labor Statistics, annual release. Primary anchor for software-developer median wage. Public, free, methodologically conservative.
- Salary.com. Employer-side survey data, useful for loaded-cost context (benefits and payroll-tax assumptions exposed in their methodology pages).
- Glassdoor. Self-reported, large-N but selection-biased upward. Used as a higher-bound check.
- Indeed Hiring Lab. Job-posting data, useful for hiring-rate movement at the entry level.
- ZipRecruiter. Posting and applicant data, useful where regional cross-checks are needed.
- 6figr. Verified-compensation reports for senior and staff platform engineering roles where the other sources thin out.
Per-role bands shown on the salary page are the intersection of these six sources, with outliers above the 90th percentile and below the 10th trimmed. We do not publish individual-source figures because the methodologies differ enough that direct comparison misleads more than it informs.
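The trimming step can be sketched as follows. The function name and the sample data are illustrative only; we do not publish per-source figures, so nothing here reflects actual inputs:

```python
# Hypothetical sketch of the percentile-trimming step described above.
# Sample data is invented; real per-source figures are not published.
import statistics

def trimmed_band(samples, lo_pct=10, hi_pct=90):
    """Drop values below the lo_pct percentile and above the hi_pct
    percentile, then return the (min, max) of what survives."""
    qs = statistics.quantiles(sorted(samples), n=100)  # 99 cut points
    lo, hi = qs[lo_pct - 1], qs[hi_pct - 1]
    kept = [s for s in samples if lo <= s <= hi]
    return min(kept), max(kept)

# e.g. on the invented values 1..100, the band keeps 11 through 90:
print(trimmed_band(list(range(1, 101))))
```

In practice the band is taken per role across the six sources after each source's figures are normalised to the same base-salary definition.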
2. Team-structure and ratio data
The platform-to-product ratio guidance (1:8 to 1:12 for healthy mature teams) is the consensus from three sources:
- Gartner peer-community threads. The public-facing portions of Gartner's engineering-leader discussion content, which surface real headcount ratios from director-level practitioners.
- CNCF Platform Engineering survey. Annual community survey, n typically 200-400 organisations, captures team sizing and tooling adoption.
- platformengineering.org reference content. The community-maintained reference site that aggregates practitioner write-ups on team structure.
3. ROI and engineering-output benchmarks
The ROI page references three datasets, none of which we run ourselves:
- DORA "State of DevOps" report. Annual Google Cloud / DORA research, the canonical source for deployment frequency, lead time, change-failure rate, and recovery time benchmarks. We cite the 2024 report's elite-performer thresholds.
- DX Core 4 research. DX's study across roughly 360 organisations on the relationship between platform maturity and developer productivity. Source for the 3-12 percent efficiency-gain figure.
- GetDX research. Adjacent research and reports from the same team, useful for cross-checking the developer-experience side of platform value.
4. Tooling-cost bands
The 8-15 percent tooling-budget-of-total figure is derived from five input streams, none of which is a vendor pricing API:
- Vendor public pricing pages. Sticker rates only, not negotiated discounts. We do not publish individual figures because they change frequently.
- AWS Marketplace and Azure Marketplace listed price points. Public listed rates for category-typical tools.
- FinOps Foundation public research and benchmarks. The FinOps Foundation publishes annual benchmark and survey data on cloud-spend ratios that gives a useful anchor for the cloud-cost share within total platform spend.
- Vantage cloud-cost research. Vantage publishes anonymised cost-band analysis (their Cloud Cost Report and ongoing public benchmarks) that helps triangulate cloud-spend assumptions feeding the calculator.
- Public engineering-blog post-mortems and r/devops threads. Real numbers from real organisations, anonymised. Lower N, higher variance, but useful sanity checks.
Calculation framework
Loaded-cost formula
Loaded cost per engineer = base salary × 1.3. The 1.3 multiplier covers employer-side payroll tax (FICA, FUTA, SUTA), employer benefits contribution (typically 15-20 percent of base for full health + 401k match + standard PTO accrual), equipment and per-head SaaS, and amortised recruiting cost. This is conservative; some organisations run higher (1.35-1.45) when stock-based compensation is significant, but the multiplier we cite is the cash-cost frame, which is what most finance teams want for budget modelling.
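As a worked sketch, the formula is a single multiplier; the $150k base used in the example is illustrative, not a figure from our tables:

```python
def loaded_cost(base_salary: float, multiplier: float = 1.3) -> float:
    """Cash-cost frame: base salary plus employer-side payroll tax,
    benefits, equipment/per-head SaaS, and amortised recruiting."""
    return base_salary * multiplier

# Illustrative $150k base at the default 1.3 multiplier:
print(loaded_cost(150_000))        # 195000.0
# Same base with significant stock-based compensation (upper frame):
print(loaded_cost(150_000, 1.45))  # 217500.0
```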
Cost per developer served
Cost per developer served = total platform spend / product engineers. The denominator is product engineers (excluding the platform team itself), because the metric is supposed to measure leverage. Including platform engineers in the denominator artificially flatters teams that have hired aggressively, and is one of the easier ways to lie with this number. The healthy band of $11k-$18k applies to organisations between 80 and 200 engineers, where the platform team is large enough to be specialised but small enough that fixed costs still dominate.
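A minimal sketch of the metric, with the exclusion made explicit; the spend and headcount figures in the example are invented:

```python
def cost_per_developer_served(total_platform_spend: float,
                              total_engineers: int,
                              platform_engineers: int) -> float:
    """Denominator excludes the platform team itself: the metric
    measures leverage over product engineers, not total headcount."""
    product_engineers = total_engineers - platform_engineers
    return total_platform_spend / product_engineers

# Invented example: $1.4M spend, 100 engineers, 10 of them platform.
# 1,400,000 / 90 product engineers lands inside the $11k-$18k band:
print(cost_per_developer_served(1_400_000, 100, 10))
```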
Healthy-band derivation
The $11k-$18k range comes from crossing loaded team cost (typically 70-85 percent of platform spend), tooling band (8-15 percent of spend), and cloud share (3-8 percent for the platform's own usage, not the products it supports). For a 100-engineer organisation with 8-12 platform engineers, this works out to $1.1M-$1.8M annual all-in. Smaller organisations sit higher per-developer because the fixed costs of a minimum-viable platform team are non-zero. Above 250 engineers, the band tightens: specialised tooling spend gives leverage, and the platform team grows sub-linearly with headcount.
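The arithmetic above can be checked roughly as follows. For round numbers this sketch treats the full 100-engineer headcount as the developers served; the component-share split is the one quoted in this section:

```python
# Rough reconciliation of the healthy band for a 100-engineer
# organisation, using the per-developer band and component shares
# quoted above (full headcount used as denominator for round numbers).
per_dev_low, per_dev_high = 11_000, 18_000
developers_served = 100

all_in_low = per_dev_low * developers_served    # 1,100,000
all_in_high = per_dev_high * developers_served  # 1,800,000
print(f"${all_in_low / 1e6:.1f}M-${all_in_high / 1e6:.1f}M")  # $1.1M-$1.8M

# Split the all-in range by the component shares from this section:
shares = {
    "loaded team cost":     (0.70, 0.85),
    "tooling":              (0.08, 0.15),
    "platform's own cloud": (0.03, 0.08),
}
for name, (lo, hi) in shares.items():
    print(f"{name}: ${all_in_low * lo:,.0f}-${all_in_high * hi:,.0f}")
```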
What we deliberately do not publish
- Specific named-vendor contract values. We do not have visibility into negotiated rates, and publishing list prices would be misleading.
- Negotiated discount levels. Public estimates of "typical" enterprise discounts vary by vendor, deal size, and timing in ways that make general claims unreliable.
- Side-by-side vendor feature grids. The site is independent specifically because we do not run a buyer's-guide model. Adjacent sites (CNCF reference architecture, Gartner Peer Insights, G2) do that better.
Update cadence
We refresh substantively when one of four triggers fires, not on a calendar:
- Major pricing-model shift from a category-leading vendor (e.g. a move from seat-based to consumption-based pricing in a tooling category).
- A new entrant breaks the existing tooling band.
- Marketplace-listed price changes greater than 10 percent within a category.
- A major acquisition or retirement that changes the category structure.
We do not make cosmetic date-bumps. The "Updated" date in the footer reflects the most recent substantive review of the underlying inputs, not the last time a typo was fixed.
Editorial position
Independent, no vendor sponsorship in the platform-engineering tooling space, no buyer-guide model. Full editorial policy on the about page, including the affiliate-programme stance and the categories we do and do not accept commercial relationships in. If you find a number on this site that disagrees with a public source we cite, write in and we will reconcile or correct.