Member Research & Reports

UAB Uses Crowdsourcing to Evaluate Published Scientific Literature

Systematically evaluating scientific literature is a time-consuming endeavor that requires hours of coding and rating. Dr. Andrew W. Brown, a scientist in the Office of Energetics at the University of Alabama at Birmingham, in collaboration with Dr. David B. Allison, director of UAB’s Nutrition Obesity Research Center (NORC), recently described a method to distribute these tasks across a large group through online crowdsourcing.

[Photo: Dr. Andrew W. Brown]

Using Amazon’s Mechanical Turk, crowdsourced workers (microworkers) completed four groups of tasks to evaluate the question “Do nutrition-obesity studies with conclusions concordant with popular opinion receive more attention in the scientific community than do those that are discordant?” The tasks included evaluating abstracts to determine whether they were about human studies investigating nutrition and obesity, iteratively synthesizing free-text food answers into one coherent term, rating the perceived obesogenicity of foods, and extracting citation counts for each paper through Google Scholar. Microworkers reached consensus on 96 percent of abstracts and 84 percent of food answers. Over 99 percent of responses about the obesogenicity of foods were complete and usable, and the opinions of the microworkers tended to match what the authors suspected the average person believes about certain foods (e.g., microworkers on average thought that sugar-sweetened beverages cause obesity and that fruits and vegetables prevent it).
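
To make the consensus step concrete, here is a minimal sketch, assuming (this is not taken from the paper) that each abstract receives several independent relevance votes and that “consensus” means a chosen share of raters agree; the function, labels, and data are hypothetical.

```python
# Minimal sketch: majority-style consensus over microworker abstract screening.
# The vote data, labels, and 0.8 threshold are illustrative assumptions,
# not the study's actual pipeline.
from collections import Counter

def consensus_rate(votes_by_abstract, threshold=0.8):
    """Fraction of abstracts whose independent raters reach consensus."""
    reached = 0
    for votes in votes_by_abstract.values():
        _, top_count = Counter(votes).most_common(1)[0]
        if top_count / len(votes) >= threshold:
            reached += 1
    return reached / len(votes_by_abstract)

# Example with made-up votes from three raters per abstract:
votes = {
    "pmid:1": ["relevant", "relevant", "relevant"],
    "pmid:2": ["relevant", "not relevant", "relevant"],
    "pmid:3": ["not relevant", "not relevant", "not relevant"],
}
print(consensus_rate(votes))  # 2 of 3 abstracts reach the 0.8 threshold: ~0.67
```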

Using the data, the researchers showed that there was no significant association between popular opinion and the attention papers received, as measured by citation counts or by the SCImago Journal & Country Rank (a measure of journal impact) of the publishing journal.
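
The article does not spell out the authors’ exact statistical test; as one generic illustration, a rank correlation between per-paper opinion scores and citation counts could be checked as below (all variable names and numbers are hypothetical).

```python
# Illustrative only: one generic way to test for an association between
# perceived obesogenicity ("popular opinion") and the attention a paper receives.
from scipy.stats import spearmanr

opinion_scores = [0.9, 0.1, 0.5, 0.8, 0.2]   # hypothetical mean obesogenicity ratings
citation_counts = [12, 30, 7, 15, 22]        # hypothetical Google Scholar citation counts

rho, p_value = spearmanr(opinion_scores, citation_counts)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A p-value well above 0.05 would be consistent with "no significant association."
```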

Their paper also describes important considerations for guaranteeing fair pay for workers. “There have been some concerns that microworkers can be taken advantage of, so we also describe our experiences with calibrating a fair pay for microworkers,” Dr. Brown said. Taking such calibrations into account, they estimate that repeating all of the tasks would cost only $313 in direct payments to microworkers. “We have gotten feedback from some individuals who indicated that they are enjoying reading the abstracts because of an interest in the science. Getting research into more people’s hands is a great indirect benefit of using this method,” he continued.
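
As a rough illustration of what calibrating fair pay can look like, the sketch below converts a target hourly wage and an observed time per task into per-task pay and a total cost; the wage, timing, and task counts are assumptions for illustration, not the figures behind the $313 estimate.

```python
# Hypothetical back-of-the-envelope fair-pay calibration.
TARGET_HOURLY_WAGE = 9.00   # target wage in USD per hour (assumption)
SECONDS_PER_TASK = 45       # observed median time per task (assumption)
TASK_ASSIGNMENTS = 1500     # number of paid task assignments (assumption)

pay_per_task = TARGET_HOURLY_WAGE * SECONDS_PER_TASK / 3600
total_cost = pay_per_task * TASK_ASSIGNMENTS
print(f"Pay per task: ${pay_per_task:.3f}; total direct cost: ${total_cost:.2f}")
# With these made-up numbers: about $0.113 per task and $168.75 in total.
```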

The researchers concluded that crowdsourcing has the potential to evaluate published literature reliably, quickly, and at low cost. “Using Crowdsourcing to Evaluate Published Scientific Literature: Methods and Example” was published in July in PLOS ONE.

Read more: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0100647.