Synapse image via Shutterstock

It’s no secret that crowdsourcing has been a successful approach in many industries. Even complex, technical problems can be tackled this way; one great example is Foldit, an online game in which ordinary people compete to fold protein structures as efficiently as possible. The best designs are submitted to a leading protein research laboratory, which tests whether the predicted structures match the real-life structures of specific proteins.
In the biomedical community, though, Foldit is an outlier. The concept of pulling in as many minds and resources as possible to solve a problem, despite working repeatedly elsewhere, has not gained real traction in the life sciences. In a recent presentation at the Genomes, Environments, Traits (GET) Conference, Sage Bionetworks President Stephen Friend pointed to an institutional obstacle to adopting this mindset in the biomedical community: the peer review process.
Peer review is to science what Consumer Reports and Amazon reviews are to washing machines, phones, or cars. Scientists who feel they have made an important discovery write it up in a paper and submit it to a journal. Before publishing it, that journal forwards the paper to a few other scientists in the field for a reality check: Does the science hold up? Do the methods make sense? Have the paper’s authors considered other explanations for this finding, and have they addressed them adequately? The reviewers’ feedback offers the journal educated insight into whether the science being reported is worthy of publication.
Peer review is an essential part of science, and its influence extends far beyond the journals themselves. A scientist’s ability to land a job, get tenure, win grant funding, or lead a laboratory hinges on how many papers he or she has published, and in which journals. Because publication is so tightly woven into career success, the contents of those papers—the starting hypothesis, data generated, even the scientific problem being tackled—are highly valued, competitive assets. Until publication, which might happen months or even years after the experiments have been run, that information is guarded about as closely as a bank account password.
In this kind of environment, telling scientists that answers would come faster if the problem were opened up to thousands or millions of people doesn’t get you very far. Sure, scientists want answers faster; they also want viable careers.
That is where Stephen Friend comes in. He has spent years in the life sciences, directing cancer research at Merck, starting companies, and more. At the GET Conference, he spoke about a promising new initiative he’s working on called Synapse, which is run through a nonprofit organization and aims to be a GitHub for the life sciences. (GitHub, for those not familiar, is an online community where programmers write and develop code together; in software engineering, activity there often serves as a better proxy for someone’s skills and knowledge than a resume or CV.) Synapse is trying to buck the life sciences system by providing a framework where scientists can upload and share data, build on each other’s findings, and tag every piece of information with its history so that the people responsible get credit.
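To make the provenance idea concrete, here is a minimal, purely illustrative sketch in Python of what “tagging information with its history” can look like. The names here (DataRecord, derive, lineage) are invented for illustration; they are not Synapse’s actual data model or API.

```python
# Toy model of provenance-tagged data sharing, in the spirit of a
# "GitHub for life sciences." Hypothetical names; not Synapse's real API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class DataRecord:
    """A shared dataset or result, tagged with who produced it and from what."""
    name: str
    contributor: str                       # who gets credit
    parent: Optional["DataRecord"] = None  # what this work builds on
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def derive(self, name: str, contributor: str) -> "DataRecord":
        """Build on an existing record; provenance is carried along automatically."""
        return DataRecord(name=name, contributor=contributor, parent=self)

    def lineage(self) -> list[str]:
        """Walk the history so every upstream contributor stays visible."""
        chain, node = [], self
        while node is not None:
            chain.append(f"{node.name} ({node.contributor}, {node.created:%Y-%m-%d})")
            node = node.parent
        return chain

# Usage: one lab uploads raw data, others build on it, and credit flows back.
raw = DataRecord("tumor-expression-raw", contributor="lab-a")
normalized = raw.derive("tumor-expression-normalized", contributor="lab-b")
model = normalized.derive("survival-model-v1", contributor="lab-c")
print("\n".join(model.lineage()))
```

The point of the sketch is the design choice, not the code: if every derived result carries an immutable link back to its sources, attribution becomes automatic rather than something scientists must fight for at publication time.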
Synapse, still in its infancy, is unlikely to upend the peer review process on its own. After all, despite the Internet’s knack for sweeping away established business models, journal publishing still operates much as it has since the days of the first scientific journals. But it’s encouraging to see scientists themselves pushing for progress, looking for ways to preserve what the scientific culture needs—credit and attribution among them—while improving how data is accessed, shared, and analyzed. We will need much more innovation before we see real change, but Synapse is promising. If approaches like it take hold, scientific problems will get solved faster.