I just read a very well-written article on reproducible research, giving 10 simple but important rules for making your results (more) reproducible:
Happy reading!
It’s probably the way things go, but it still makes me sad. One of the reproducible research tools linked on our site does not seem to exist anymore: ResearchAssistant (by Daniel Ramage). It’s the typical story: a PhD student graduates and moves on to another position (I did find a “now at Google” when searching for what happened), and the web pages with useful links and tools disappear. RIP.
(If you read this blog in the next month, and you know that ResearchAssistant is still alive, let me know. I promise to keep the link alive for another month.)
Given that you are reading this blog, I assume you are into sharing your code and data. What is not so clear to me, however, is where we should be sharing these data (and “data” includes code, which people often seem to forget).
Maybe I am too critical about this, or too old-fashioned. Or just too commercially oriented, and not open enough to share with everyone potentially interested in my work. Who can tell?
The latest issue of IEEE Computing in Science and Engineering is a special issue on reproducible research. It features several articles on tools and approaches for reproducible research.
I also contributed a paper “Code Sharing Is Associated with Research Impact in Image Processing“, where I show that there is a relation between making code available online for your paper and the paper’s number of citations. For academics, I believe this is one of the most important motivations for making code available online.
Have fun reading the entire issue!
I recently learned about the RunMyCode portal, a website where you can easily create a companion website for a paper, allowing others to redo your experiments (thanks, Victoria). The website looks very nice, and it seems a very attractive proposition.
Yet another case of scientific fraud caught a lot of media attention in The Netherlands in the past months. In social psychology, Mr Stapel, former professor at Tilburg University, got caught after years of scientific misconduct.
Note that in this case we are not talking about removing an outlier or ‘enhancing’ some results. No, Mr Stapel actually made up the data for (some of) his publications entirely. Over the past years, his work received considerable media attention, with research findings like “meat eaters are more selfish than vegetarians”, or “disordered environments make people more prone to stereotyping and discrimination”. Both results have been withdrawn.
When preparing my post on ICIP 2011 and the reproducible research round table, I ran into a presentation which Steve Eddins (The Mathworks) gave at ICIP in 2006:
Maybe 5 years old now, but still very relevant. Recommended reading for all software writers among us (and these days, I guess most of us write code in some way or another)!
At this year’s ICIP conference (IEEE International Conference on Image Processing) in Brussels, a round table was organized on reproducible research. Martin Vetterli (EPFL) was one of the panel members, the others were Thrasos Pappas (Northwestern Univ.), Thomas Sikora (Technical University of Berlin), Edward Delp (Purdue University), and Khaled El-Maleh (Qualcomm). Unfortunately, I was not able to attend the panel discussion myself, but I’d be very happy to read your feedback and comments on the discussion in the comments below. And let the discussion continue here…!
The call for papers also specifically mentioned that the conference would give a “Reproducible code available” label. A best code prize would also be awarded; however, I never heard anything about it afterward. I am curious how many submissions were received. When scanning through the papers, I could find 9 papers mentioning that their code is available online:
I wrote two months ago about the mini-symposium “Store-Share-and-Cite” at TU Delft, where I gave a talk. The slides for all presentations are available online now. Enjoy!
Early this year, IEEE changed its policy on making your publications available online. Now you are only allowed to put a (final) preprint on your personal web page (or your institution’s), mentioning the copyright and the full reference to the final publication. This holds for all papers published after January 1st, 2011. Previously, you were also allowed to make the published paper itself available online.
While I do understand that this protects (some of) the additional work done by IEEE to make the final publication look nice, and thus should encourage people to subscribe, I am not happy with this measure. Maybe this just aligns the IEEE policy with what most publishers already do, but still.
Why do I prefer the published one? First of all, it ensures that only a single version of a paper circulates on the web. I personally find it very annoying to start reading a paper because it looks different from what I’ve seen before, only to notice that it is actually the same paper in different typesetting. Even more so if the two versions differ. The final published one would be the most correct, I assume. Secondly, it also increases the chances that a paper is cited correctly. Because, let’s face it, not everyone will nicely add the “full citation to the original IEEE publication and a link to the article in the IEEE Xplore digital library“.
Correctly citing a paper may become even more difficult…