Thursday, 23 August 2007

Cameron Neylon on Open Notebook Science

There has been a lot of discussion lately about the philosophy of Open Science in general terms.

This is certainly worthwhile but I think it is even more interesting to discuss the mechanics of its implementation. That is what I was trying to push a little more by setting up the "Tools of Open Science" session on SciFoo Lives On.

That's why I've been very impressed by Cameron Neylon's recent posts in his blog "Science in the Open".

He has been discussing details of the brand of Open Science that interests me most: Open Notebook Science, where a researcher's laboratory notebook is completely public.

Cameron has been looking at how our UsefulChem experiments could be mapped onto his system and this has sparked off some interesting discussion. I am becoming more convinced than ever that the differences between how scientific fields and individual researchers operate are much deeper than we usually assume.

By focusing almost entirely on the sausage (traditional articles), we tend to forget just how bloody it actually is to make, and we probably assume that everybody makes their sausage the same way.

The basic paradigm of generating a hypothesis and then attempting to prove it false is certainly a cornerstone of the scientific process, but it is not the whole story. However, after reading a lot of papers and proposals, one gets the impression that science is done as an orderly repetition of that process.

What I have observed in my own career, working and collaborating with several chemists, is that most of the experiments we do are done for the purpose of writing papers! The reasoning is that if it is not published in a journal, it never happened. This often leads to the sunk-cost syndrome: like a gambler throwing good money after bad, trying to win back an initial loss.

After a usually brief discovery phase, the logical scientist will try to conceive of the smallest number of experiments (preferably of lowest cost and difficulty) needed to obtain a paper. In this system, as in a courtroom, an unambiguous story and conclusion is the preferred outcome. Reality rarely cooperates that easily, and that is why the selection of experiments to perform is truly an art form.

We're currently going through that process. We have an interesting result observed for a few compounds and a working hypothesis. That's not enough for a paper in my field. We cannot prove the hypothesis without doing an infinite number of experiments, but we are expected to make a decent attempt at falsifying it. I know from experience roughly how many experiments with clear-cut outcomes we need to write a traditional paper.

So how much more value does that paper offer the scientific community relative to the single experiment in which this effect was first disclosed on our wiki and then summarized on our blog?

Is this really the most efficient system for doing science or is this the tail wagging the dog?

When the scientific process becomes more automated, I predict that individual experiments will be of more value than standard articles created for human consumption and career validation.

Sometimes the pieces just don't fit in the sausage maker. Does that mean we shouldn't eat them (making sure to cook them first, of course)?




One of the most useful outcomes of Open Notebook Science (and why I'm highlighting Cameron's work) might be the insight it will bring to the science of how science actually gets done. (Researchers like Heather Piwowar should appreciate that.)
