Why some questions are worth more than others
The CDP scoring methodology weights a small number of questions disproportionately. Identifying these and preparing rigorous answers is the highest-leverage move in any cycle. Drawing on the 2024 corporate questionnaire structure, here are the 10 questions that most often separate B from C and A from B.
The pattern is the same across sectors. The questionnaire is identical for every respondent, but the difficulty of each question varies by industry. Manufacturing teams struggle with the C7.30 energy sub-questions; financial institutions with C7.8 category 15 financed emissions; food and beverage with the C8 forest commodities module. The 10 questions below carry weight regardless of sector.
1. C2.2.2: the full DIRO process description
This is the foundation question. Scorers want a multi-paragraph description covering: who is involved, how often the process runs, which frameworks are used (TCFD, TNFD, ENCORE, WRI Aqueduct), how it integrates with enterprise risk management, and how outputs feed governance and strategy decisions. A generic description caps the entire C2 to C3 score.
2. C3.1.1: substantive environmental risks with quantification
Each risk needs: type (acute physical, chronic physical, policy, technology, market, reputation), description, time horizon, likelihood, magnitude, financial figure, and management response. Risks without financial figures (C3.1.2) score poorly even when the narrative is strong. A rough order-of-magnitude figure is better than “not yet quantified”.
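The required-fields checklist above lends itself to an automated completeness check before submission. A minimal sketch, assuming a simple dictionary per risk entry (the field names and sample risks are invented for illustration, not CDP's schema):

```python
# Flag substantive risk entries missing any field C3.1.1 asks for.
# Field names below are assumptions for illustration, not CDP's schema.
REQUIRED_FIELDS = {"type", "description", "time_horizon", "likelihood",
                   "magnitude", "financial_figure_eur", "management_response"}

def incomplete_risks(risks):
    """Return (risk_id, missing_fields) for entries that would score poorly."""
    problems = []
    for risk in risks:
        missing = {f for f in REQUIRED_FIELDS if risk.get(f) in (None, "")}
        if missing:
            problems.append((risk.get("id", "?"), sorted(missing)))
    return problems

risks = [
    {"id": "R1", "type": "acute physical", "description": "Flood risk at plant",
     "time_horizon": "short", "likelihood": "likely", "magnitude": "high",
     "financial_figure_eur": 2_500_000, "management_response": "Flood defences"},
    {"id": "R2", "type": "policy", "description": "Carbon price exposure",
     "time_horizon": "medium", "likelihood": "virtually certain",
     "magnitude": "high", "financial_figure_eur": None,
     "management_response": "Internal carbon price"},
]

print(incomplete_risks(risks))  # [('R2', ['financial_figure_eur'])]
```

Running a check like this over the risk register catches the "strong narrative, no figure" entries that lose points.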
3. C4.1.2: board oversight detail
Required: which committee, which named position has ultimate accountability (do not include individual names), frequency of agenda inclusion, what specific environmental issues are reviewed, integration into terms of reference. “The board reviews climate annually” is not an answer that scores.
4. C4.5.1: monetary incentives
Specific positions, the type of incentive (annual bonus, LTIP, equity), the environmental KPI tied to it, the percentage of compensation at risk, and the target-setting frequency. Statements without quantified linkage do not score.
5. C7.5: base year and base year emissions
The base year must be justified, recent enough to be operationally relevant, and reconcile with the targets in C7.53. The base year recalculation policy must be stated. A vague base year undermines every target downstream and breaks the consistency check scorers run between modules.
6. C7.8: gross global Scope 3 emissions
Each of the 15 categories needs: relevance, calculation methodology (GHG Protocol method), data quality, and the percentage of the category covered. Spend-based estimates are accepted in year one but penalised in material categories without a transition plan to activity-based methods.
For deeper guidance, see the Scope 3 data collection playbook.
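The spend-based penalty can be checked mechanically across the category inventory. A sketch, assuming one record per Scope 3 category (the record structure and materiality flags are invented for illustration):

```python
# Flag material Scope 3 categories still on spend-based methods:
# these are the ones penalised after year one without a transition plan.
# Record structure is an assumption for illustration.
def spend_based_material(categories):
    """Return category numbers that are relevant, material, and spend-based."""
    return [c["category"] for c in categories
            if c["relevant"] and c["material"] and c["method"] == "spend-based"]

categories = [
    {"category": 1,  "relevant": True, "material": True,
     "method": "spend-based",    "coverage_pct": 80},
    {"category": 6,  "relevant": True, "material": False,
     "method": "spend-based",    "coverage_pct": 100},
    {"category": 11, "relevant": True, "material": True,
     "method": "activity-based", "coverage_pct": 95},
]

print(spend_based_material(categories))  # [1]
```

Category 1 (purchased goods and services) is the typical output of a check like this, and the usual first target for activity-based data collection.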
7. C7.9.1, 7.9.2, 7.9.3: verification statements
Limited assurance for Scope 1 and 2 is the minimum to escape the Disclosure band. Reasonable assurance for Scope 1 and 2, and limited for material Scope 3 categories, is what Leadership requires. Attach the assurance statement signed by the accredited verifier; do not just describe it.
The full requirements live in the CDP verification and assurance guide.
8. C7.53.1: absolute emissions targets
Each target must include: scope coverage, base year and base year emissions, target year, percentage reduction, methodology, SBTi validation status. Anything missing flags incompleteness. The target must reconcile mathematically with the C7.5 base year and C7.6 to C7.8 inventory.
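The reconciliation scorers run between C7.5 and C7.53.1 is simple arithmetic: the implied target-year emissions must follow from the base-year emissions and the stated percentage reduction. A sketch with invented figures:

```python
# Check that a disclosed absolute target reconciles with its base year.
# All figures below are invented for illustration.
def implied_target_emissions(base_year_tco2e, reduction_pct):
    """Target-year emissions implied by base year and stated reduction."""
    return base_year_tco2e * (1 - reduction_pct / 100)

base_year_emissions = 120_000   # tCO2e, as reported under C7.5
stated_reduction = 42.0         # %, as reported under C7.53.1
disclosed_target = 69_600       # tCO2e, the target-year figure in the response

implied = implied_target_emissions(base_year_emissions, stated_reduction)
assert abs(implied - disclosed_target) < 1, "target does not reconcile with base year"
print(implied)  # 69600.0
```

If the base year is recalculated (acquisitions, divestments, methodology changes), this check is rerun against the restated figure, which is why the recalculation policy in C7.5 matters.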
9. C7.55.2: emissions reduction initiatives implemented
For each initiative: stage, estimated annual CO2e saved, payback period, lifetime, scope affected, methodology. C7.55.1 also asks for totals at each development stage. Listing initiatives without quantified savings is one of the most common scoring losses.
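C7.55.1's stage totals are an aggregation over the initiative register. A minimal sketch, assuming a simple record per initiative (field names and figures are invented for illustration):

```python
# Sum estimated annual tCO2e savings per development stage, as C7.55.1
# requires. The initiative records below are invented for illustration.
from collections import defaultdict

def totals_by_stage(initiatives):
    """Total estimated annual tCO2e saved at each development stage."""
    totals = defaultdict(float)
    for ini in initiatives:
        totals[ini["stage"]] += ini["annual_tco2e_saved"]
    return dict(totals)

initiatives = [
    {"name": "LED retrofit", "stage": "implemented",
     "annual_tco2e_saved": 310.0, "payback_years": 2.5, "scope": "Scope 2"},
    {"name": "Fleet electrification", "stage": "implementation commenced",
     "annual_tco2e_saved": 1_150.0, "payback_years": 6.0, "scope": "Scope 1"},
    {"name": "Heat recovery", "stage": "implemented",
     "annual_tco2e_saved": 540.0, "payback_years": 4.0, "scope": "Scope 1"},
]

print(totals_by_stage(initiatives))
# {'implemented': 850.0, 'implementation commenced': 1150.0}
```

Keeping the register in this shape means the per-initiative rows and the stage totals cannot drift apart, which is one of the inconsistencies scorers look for.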
10. C9.2.2 and C9.2.4: water volumes including stressed areas
Total withdrawals, discharges, and consumption with year-on-year change and a forecast. C9.2.4 specifically isolates withdrawals from water-stressed areas, mapped to WRI Aqueduct or an equivalent tool. Failing to identify stressed-area withdrawals caps the entire C9 score.
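The two figures this section hinges on are year-on-year change in total withdrawals and the share drawn from stressed areas. A sketch with invented volumes (megalitres):

```python
# Year-on-year change and stressed-area share of water withdrawals.
# Volumes below are invented for illustration.
def yoy_change_pct(current, previous):
    """Percentage change versus the prior reporting year."""
    return (current - previous) / previous * 100

def stressed_share_pct(stressed_withdrawals, total_withdrawals):
    """Share of total withdrawals drawn from water-stressed areas."""
    return stressed_withdrawals / total_withdrawals * 100

total_2024, total_2023 = 4_800.0, 5_000.0   # ML, total withdrawals
stressed_2024 = 1_200.0                      # ML from sites mapped as stressed

print(round(yoy_change_pct(total_2024, total_2023), 1))        # -4.0
print(round(stressed_share_pct(stressed_2024, total_2024), 1))  # 25.0
```

The stressed-area figure depends entirely on having mapped each site against WRI Aqueduct (or equivalent) first; the arithmetic is trivial once that mapping exists.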
How to prepare these answers
The 10 questions share a common evidence pattern: a documented process, primary data, third party verification, and a clear narrative linking governance to action to outcome. Companies that build this evidence trail once, structured around the canonical inventory, can answer all 10 consistently across modules and across years.
What to fix first
If you have to prioritise: C2.2.2, C4.1.2, and C7.55.2 are the three questions where a small amount of additional evidence yields the largest score uplift. They are also the cheapest to fix because the evidence often already exists in board papers, internal risk registers, and project tracking systems. The work is not generating new data; it is structuring what already exists into the format CDP scorers reward.
The other seven questions require more substantive investment: assurance contracts, Scope 3 data collection programmes, SBTi validation. Plan those over a two- to three-cycle horizon rather than the weeks before a deadline. The companies that move from C to B in one cycle and B to A in the next are those that sequence this work deliberately instead of trying to compress it.
To see how Dcycle structures the canonical data layer that feeds these answers, request a demo. For the broader context on how scoring works, see the scoring methodology guide.