I just read the article about Netflix’s Million Dollar Programming Prize on IEEE Spectrum.
Robert M. Bell, Jim Bennett, Yehuda Koren, and Chris Volinsky, “The Million Dollar Programming Prize,” IEEE Spectrum Online, http://www.spectrum.ieee.org/may09/8788.
Interesting article, showing again how contests that pose a challenge can inspire a lot of great work and allow an ‘objective’ comparison between algorithms. I think they provide a great way to motivate researchers to work on real problems, with testing on standardized datasets.
I am glad to let you know that our paper has been published in the latest issue of IEEE Signal Processing Magazine:
P. Vandewalle, J. Kovacevic and M. Vetterli, “Reproducible Research in Signal Processing – What, Why, and How,” IEEE Signal Processing Magazine, Vol. 26, No. 3, pp. 37-47, 2009, DOI: 10.1109/MSP.2009.932122.
Have you ever tried to reproduce the results presented in a research paper? For many of our current publications, this would unfortunately be a challenging task. For a computational algorithm, details such as the exact data set, the initialization or termination procedures, and the precise parameter values are often omitted from the publication, for reasons such as a lack of space, a lack of self-discipline, or a perceived lack of interest to readers, to name a few. This makes it difficult, if not impossible, for someone else to obtain the same results. In our experience it is often even worse: we are not always able to reproduce our own experiments, which makes it difficult to answer colleagues’ questions about details. Here are some examples of e-mails we have received:

“I just read your paper X. It is very completely described, however I am confused by Y. Could you provide the implementation code to me for reference if possible?”

“Hi! I am also working on a project related to X. I have implemented your algorithm but cannot get the same results as described in your paper. Which values should I use for parameters Y and Z?”
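One simple habit goes a long way against exactly this problem: record the data set, seed, and every parameter value alongside the results, in a machine-readable file. The sketch below shows the idea in Python; all parameter names and values are invented for illustration and do not come from any particular paper.

```python
import json
import platform
import random
import sys

# Hypothetical parameters for an illustrative experiment; the names
# and values here are made up, not taken from any specific paper.
params = {
    "dataset": "images_v2.tar.gz",   # the exact data set used
    "seed": 42,                      # controls random initialization
    "max_iterations": 500,           # termination criterion
    "tolerance": 1e-6,               # termination criterion
    "regularization": 0.01,          # algorithm parameter
}

# Fix the seed so repeated runs give identical random draws.
random.seed(params["seed"])

# Store the parameters together with some environment information,
# so the experiment can be rerun (and questioned) years later.
record = {
    "params": params,
    "python_version": sys.version.split()[0],
    "platform": platform.platform(),
}

with open("experiment_record.json", "w") as f:
    json.dump(record, f, indent=2)
```

A colleague who later asks “which values did you use for Y and Z?” can then simply be pointed at `experiment_record.json` next to the published figures.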
Enjoy reading! And feel free to post your comments!
Last month, a few former colleagues at LCAV did some cross-testing of the reproducible research compendia available at rr.epfl.ch. And I must say, judging from the results I have seen so far, it was quite a sobering experience. Many of the compendia I considered definitely reproducible did not (entirely) pass the test. I guess that shows again how difficult it is to make work truly reproducible, even when you fully intend to do so. It also strengthens my conviction that for papers without code and data online, reproducing the exact results is almost impossible. There is work to be done on the road to reproducible research!
I’ll need to look further into the reasons why even some of my own work did not pass the test.
I am glad to announce our new website on reproducible research: www.reproducibleresearch.net. Yes, as I discussed before, various sites on this topic have recently (or less recently) popped up. However, I still think this site adds something to the existing ones. First of all, it mainly addresses the signal/image processing community, a research domain the other sites do not yet specifically cover.
It contains information on reproducible research and on how to make signal processing research reproducible. It also offers references to articles about reproducible research, a discussion forum, and various other related links.
And then there is what I consider an important extra for people interested in signal processing: we added a list of papers for which code and/or data are available, with, of course, links to them. I really believe this can be extremely useful when doing research. For copyright reasons we cannot (in most cases) host the PDFs on our own site, and I am also not sure we should want to. But if developed and maintained well, this can become a one-stop site when looking for the code or data related to a paper. So please feel free to send me your additions; I will be happy to add all signal/image processing related work!
I’m really excited about this site, so let me know what you think!
The current issue of Computing in Science and Engineering (CiSE) is a special issue on reproducible research, edited by two pioneers in the field: Jon Claerbout and Sergey Fomel. They have assembled a great set of articles from experts with a lot of first-hand, personal reproducible research experience, so I would highly recommend it to my fellow researchers!
I got a pointer earlier this week to a New York Times article about R: a very interesting piece about the use of R in scientific communities and industrial research, mainly for statistical analysis. R is open source software, so it is free and has already benefited from contributions by various authors. And (although I have not used it myself yet) it is a great tool for reproducible research: using the Sweave package, authors can write a single document containing both their article and the R code that reproduces the results and puts them in place. This ensures that all the material is in a single place.
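For those who have never seen Sweave, a minimal document might look something like the sketch below (the numbers and the chunk contents are invented purely for illustration; a real paper would read in the actual data set):

```
\documentclass{article}
\begin{document}

The mean of our measurements is computed directly from the data
each time the document is compiled:

<<echo=TRUE>>=
x <- c(1.2, 3.4, 2.2)   % in a real paper: load the actual data set
mean(x)
@

\end{document}
```

Running `R CMD Sweave` on such a file produces a LaTeX file in which the code chunk is replaced by the code and its output, so the numbers in the article can never drift away from the code that generated them.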
It also shows something about the amazing power of open source software developed by a community of authors (and typically users at the same time).
I seem to have been dwelling on the web quite a bit lately… After my post about the lifetime of URLs, here is one about domain names and reproducibility. Looking around recently, I noticed that there are quite a few websites and domain names related to reproducible research.
reproducibleresearch.org is an overview website by John D. Cook containing links to reproducible research projects, articles about the topic, and relevant tools. It also hosts a blog about reproducible ideas.
reproducibleresearch.com is owned by the people at Blue Reference, who created Inference for Office, a commercial tool to perform reproducible research from within Microsoft Office.
reproducibility.org is used by Sergey Fomel and his colleagues as home for their Madagascar open source package for reproducible research experiments.
reproducible.org is a reproducible research archive maintained by R. Peng at Johns Hopkins, with the goal of hosting reproducible research packages.
Quite a range of domain names containing the word “reproducible” (or a derivative), if you ask me! And I have not even started on the Open Research or Research 2.0 sites. Let’s hope this also means that research itself will soon see a big boost in reproducibility!
I am getting worried these days about the volatility of URLs and web pages. I guess you all know the problem: it is very easy to create a web page, and hence many people do so. Great! However, after some years, only a few of those web pages are still available. Common reasons include people retiring or moving elsewhere, so that their pages at their employer’s site disappear. Similarly, registering a domain name at some point does not mean you will keep paying the yearly fees forever. And complete website redesigns often leave broken URLs behind.
Why does this worry me so much?
Last week, I attended the Berlin 6 Open Access Conference in Düsseldorf (Germany). It was an interesting conference on different aspects of Open Access: making publications freely available online. There was a wide variety of talks, ranging from publishers’ perspectives, through financial models for Open Access and open standards, to the benefits of Open Access for developed and developing countries.
One of the sessions was organized by Mark Liberman around the topic of reproducible research. I gave a talk there about my experiences with reproducible research, but that’s not what I want to talk about here. I found it very interesting to see the wide range of subjects and perspectives that Mark gathered in that session. Slides of the entire session are available here for those who are interested.
Reproducible research, literate programming, open science, and science 2.0: all different names, and (in my opinion) all covering largely the same topic, namely sharing the code and/or data that complement a publication presenting your research work. While literate programming focuses more on adding documentation to code, and science 2.0 seems to include the assumption that you put work in progress online, there really is a very large intersection between these topics.
This clearly shows that the same ideas pop up on various sides of the scientific community, in very different fields of science. That is a really exciting thing! And at the same time it shows that there is a clear need for such open publication of a piece of research. I think everyone will agree that nothing would be nicer than being able to really start from the current state of the art when beginning research in a certain field.
Should all these efforts be merged under a single “label”? It would definitely be exciting. And it would create a huge impact: a joint effort for “open science”, “reproducible research”, or whatever the name may be, would receive a lot of attention and could no longer be overlooked by anyone. At the same time, each research domain has its own specifics and needs its own fine-tuning, and it is not clear to me yet what the “best” setup would be for the type of work I am doing now. So maybe we should let these variations co-exist for a while longer, and see later which ones survive, which are the simplest to use, and which tools can be combined into an optimal method for research.
But of course (if anyone is reading these posts), I would be very happy to hear your opinion on this!