Open Information, Reproducible Research, and Climategate

David Donoho, the creator of WaveLab, is featured in an article about [reproducible research in the journal CISE (Computing in Science and Engineering)]. I am struck by the resonance of a couple of quotes, as they apply to Climategate and climate modeling:

The scientific method's central motivation is the ubiquity of error--the awareness that mistakes and self-delusion can creep in absolutely anywhere and that the scientist's effort is primarily expended in recognizing and rooting out error.

[...]

In stark contrast to the sciences relying on deduction and empiricism, computational science is far less visibly concerned with the ubiquity of error. At conferences and in publications, it's now completely acceptable for a researcher to simply say "here is what I did, and here are my results." Presenters devote almost no time to explaining why the audience should believe that they found and corrected errors in their computations. The presentation's core isn't about the struggle to root out error--as it would be in mature fields--but is instead a sales pitch...

[...]

Many users of scientific computing aren't even trying to follow a systematic, rigorous discipline that would in principle allow others to verify the claims they make. How dare we imagine that computational science, as routinely practiced, is reliable!

On ClimateAudit, there is an older article (2005) about the Hockey Stick plot in which Ross McKitrick suggests an audit panel:

A group of experts fully independent of the IPCC should be assembled immediately after the release of any future IPCC Reports to prepare an audit report which will be released under the imprimatur of the IPCC itself. The audit will identify the key studies on which the Report’s conclusions have been based, and scrutinize those studies, with a view to verifying that, at a minimum:

  • data are publicly available,
  • the statistical methods were fully described, correctly implemented, and the computer code is published,
  • if the findings given maximum prominence are at odds with other published evidence, good reason is provided in the text as to why these findings have been given prominence.

Any competent scientist can assess these things. My strong recommendation is that such a panel be drawn from the ranks of competent mathematicians, statisticians, physicists and computer scientists outside the climatology profession, to prevent the conflict of interest that arises because climatologists face career repercussions from publicly criticizing the IPCC. Also, participation should exclude officials from environment ministries, because of the conflict of interest entailed in the fact that environment ministries are the main financial beneficiaries of the promotion of global warming fears.

The second recommendation is for a "counter-weight panel", whose job would be to actively try to find holes in the analysis, the assumptions, and so on.

I'm not sure how I feel about the second one (I'll have to think about it), but the audit panel makes total sense to me. Why don't scientific journals do this as a matter of policy?