I recently learned about the RunMyCode portal, a website where you can easily create a companion website for a paper, allowing others to redo your experiments (thanks, Victoria). The website looks very nice, and it seems like a very attractive proposition.
On two recent occasions, I heard about Elsevier’s Executable Paper contest. The intention was to show concepts for the next generation of publications. Or as Elsevier put it:
Executable Paper Grand Challenge is a contest created to improve the way scientific information is communicated and used.
- How can we develop a model for executable files that is compatible with the user’s operating system and architecture and adaptable to future systems?
- How do we manage very large file sizes?
- How do we validate data and code, and decrease the reviewer’s workload?
- How do we support the registering and tracking of actions taken on the ‘executable paper’?
By now, the contest is over, and the winners have been announced:
First Prize: The Collage Authoring Environment by Nowakowski et al.
Second Prize: SHARE: a web portal for creating and sharing executable research papers by Van Gorp and Mazanek.
Third Prize: A Universal Identifier for Computational Results by Gavish and Donoho.
Congratulations to all! At the AMP Workshop where I am now, we were lucky to have a presentation about the work by Gavish and Donoho, which sounds very cool! I also know the work by Van Gorp and Mazanek, who use virtual machines to allow others to reproduce results. I still need to look into the winning entry…
If any of this sounds interesting to you, and I believe it should, please take a look at the Grand Challenge website, and also check out some of the other participants’ contributions!
Here at the workshop, we also had an interesting related presentation yesterday by James Quirk about all that can be done with a PDF. Quite impressive! For examples, see his Amrita work and webpage.
I just got a pointer to ResearchAssistant, a Java tool to keep better track of research experiments. It stores the exact circumstances under which experiments are performed, with parameter values etc. Looks like a very nice tool for reproducible research. At least, it should make it very easy when writing a paper to trace back how certain results were obtained. If you’re doing research in Java, I’d certainly recommend taking a look at RA!
The tool is also described in the following paper:
D. Ramage and A. J. Oliner, RA: ResearchAssistant for the Computational Sciences, Workshop on Experimental Computer Science (ExpCS), June 2007.
Most people would agree that reproducible results are important in all areas of science. I think reproducibility is particularly important in areas of science where replication of an experiment or study—where a similar question is addressed using independent investigators, data, and methodology—is highly unlikely. Such studies are typically difficult to replicate because of time, money, ethics, or perhaps all three. In these cases, all we are left with are the data at hand, and being able to reproduce the published results from these data is critical.
Much heat has been generated over the question of whether scientists should be forced to make their data and methodology public. Journals such as Science and Nature have adopted data dissemination policies; the National Institutes of Health requires data sharing plans for some of its grants; and the Office of Management and Budget Circular A-110 requires that data generated under federally sponsored research be made available upon request if those data were used in developing a government agency action. While the debate over such dissemination policies is highly relevant, I think it can obscure and cause people to overlook an important question related to reproducible research.
One way I sometimes think of this question is as follows: Suppose a collaborator comes to you and says “I desperately want to make my research reproducible. What should I do?” I don’t mean to frame this as purely a hypothetical question—I have actually had people ask me this before.
The problem right now is that I don’t think proponents of reproducible research (myself included) have a good answer to this question. A typical response might be “make the code and data available”. Yes, but how? If we cannot come up with a concrete and coherent answer to this question for people who are willing to make their work reproducible, we cannot realistically expect to change the minds of people who are currently unwilling to make their research reproducible.
I think there are two important roadblocks that make it difficult to publish reproducible research. The first is the lack of a broad toolset that a wide range of researchers can use to assist them in publishing their data and methodology. There are a number of efforts out there to develop tools, but many of these tools either have important limitations or are only accessible to more sophisticated users (the Sweave/LaTeX combination comes to mind, although it is a great contribution). A related problem involves getting people to use tools that are already out there. For example, I believe the use of version control software is a critical aspect of reproducible research and there are many high-quality software packages available for all operating systems. I personally use git but many others would also fit the bill. I must say I’ve had limited success convincing people they need to use version control systems. I think the basic problem is that it involves learning Yet Another Software Package.
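For readers put off by the prospect of Yet Another Software Package, the version-control step is smaller than it sounds. Here is a minimal sketch of putting an analysis under git; the project name, file name, and script contents are invented for illustration:

```shell
# Start tracking an analysis with git (hypothetical project and file names).
mkdir -p myproject
cd myproject
git init -q

# A toy analysis script standing in for your real code.
echo 'x <- read.csv("data.csv")' > analysis.R

# Record this version; the identity settings are inlined here so the
# example is self-contained.
git add analysis.R
git -c user.name="Jane Doe" -c user.email="jane@example.com" \
    commit -q -m "Initial version of analysis script"

# One line per recorded version of the analysis.
git log --oneline
```

That is essentially the whole workflow for a single researcher: edit, `git add`, `git commit`. Every published result can then be traced back to the exact version of the code that produced it.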
The second roadblock for reproducible research is distribution. Suppose I carefully keep track of all the code I use to analyze my data and am happy to give the code and data to others. How do I do that? Many knowledgeable people will set up a web site for themselves and post code and data on their own web pages. But demanding that everyone create a web site for distributing reproducible research is, in my opinion, asking a lot. Many researchers do not have this capability, and even if they did, it is not clear to me that web pages are the ideal medium for disseminating reproducible research. How much data analysis is done in your web browser?
The distribution problem can be addressed by creating some basic infrastructure. Analogous infrastructure already exists in other domains. Users of the R statistical system have the Comprehensive R Archive Network (CRAN) which is used to disseminate R packages (add-on functionality) to anyone around the world. In practice there is no need to interact with the web site with a browser because R itself can fetch the packages from the Archive and install them without the user ever having to change applications. Similar facilities exist for Perl (CPAN) and TeX (CTAN). Of course, we cannot expect such resources to appear out of thin air. Developing a useful archive requires hardware and administrative time.
I have been trying to develop a system for R users that can be used to distribute reproducible research via a central repository. The software is an R package called ‘cacher’ and the associated repository is what I call the Reproducible Research Archive. The basic idea of the ‘cacher’ package is to take code that represents a data analysis and cache the code and associated data in a series of key-value databases. This “data analysis cache” can then be packaged and uploaded to the Archive. Each cache package is given a unique ID (via SHA-1) so that it can be referenced by others in a global fashion. On the other side of things, the ‘cacher’ package can download an available cache package and a user can run the code in the package to reproduce the results.
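The unique-ID mechanism can be illustrated in a few lines of shell. This is not the actual ‘cacher’ implementation—just a sketch of the underlying idea of hashing a packaged analysis with SHA-1 to obtain a globally unique, citable identifier (the file names and R code below are made up):

```shell
# Package up a (toy) analysis and derive a content-based identifier.
echo 'fit <- lm(y ~ x, data = d)' > analysis.R
tar -cf cache-package.tar analysis.R

# The SHA-1 digest of the package serves as its unique, global ID:
# anyone holding the same bytes computes the same 40-character ID.
id=$(sha1sum cache-package.tar | cut -c1-40)
echo "cache package ID: $id"
```

Because the ID is derived from the content itself, no central authority is needed to assign identifiers, and a reader can verify that the package they downloaded is exactly the one the author published.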
Not all of the functionality mentioned above is complete, but many aspects of the ‘cacher’ package are available. There is also a paper in the Journal of Statistical Software that describes the package in greater detail. The advantage of the ‘cacher’ system is that R users have relatively little to learn—just a few functions. Of course, the disadvantage is that it is only available to R users, who are a minority of people conducting data analysis in the world.
There are of course other challenges that I haven’t mentioned that will need to be solved before reproducible research goes mainstream. I think the development of the necessary infrastructure (software and distribution media) is just one important challenge that is critical to its adoption because less technical users need to be able to easily “plug-in” to an existing framework without having to build a piece of it themselves. By learning from experiences in other domains I think we can successfully build this infrastructure and bring reproducible research to a much wider audience.
Roger D. Peng
Department of Biostatistics
Johns Hopkins Bloomberg School of Public Health