PLoS One. 2019 Dec 16;14(12):e0226394.
doi: 10.1371/journal.pone.0226394. eCollection 2019.

Tapped out or barely tapped? Recommendations for how to harness the vast and largely unused potential of the Mechanical Turk participant pool


Jonathan Robinson et al. PLoS One.

Abstract

Mechanical Turk (MTurk) is a common source of research participants within the academic community. Despite MTurk's utility and benefits over traditional subject pools, some researchers have questioned whether it is sustainable. Specifically, some have asked whether MTurk workers are too familiar with manipulations and measures common in the social sciences, the result of many researchers relying on the same small participant pool. Here, we show that concerns about non-naivete on MTurk are due less to the MTurk platform itself and more to the way researchers use the platform. Specifically, we find that there are at least 250,000 MTurk workers worldwide and that a large majority of US workers are new to the platform each year and therefore relatively inexperienced as research participants. We describe how inexperienced workers are excluded from studies, in part, because of the worker reputation qualifications researchers commonly use. Then, we propose and evaluate an alternative approach to sampling on MTurk that allows researchers to access inexperienced participants without sacrificing data quality. We recommend that in some cases researchers limit the number of highly experienced workers allowed in their study, either by excluding these workers or by stratifying sample recruitment based on worker experience levels. We discuss the trade-offs of different sampling practices on MTurk and describe how the above sampling strategies can help researchers harness the vast and largely untapped potential of the Mechanical Turk participant pool.
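The sampling idea above can be sketched in code. A minimal, hypothetical example of inverting the usual reputation filter is shown below: instead of requiring a high approval rate and a long HIT history (which screens out new workers), the requirements cap worker experience. The qualification type IDs are the system IDs documented in the AWS MTurk API; the 500-HIT cap and the helper function name are illustrative assumptions, not values from the paper, and the dicts would be passed as `QualificationRequirements` to an MTurk `create_hit` call (e.g. via boto3). No AWS call is made here.

```python
# Hypothetical sketch: qualification requirements that admit inexperienced
# workers rather than screening them out. System qualification type IDs are
# from the AWS MTurk documentation; thresholds are illustrative assumptions.

# Well-known MTurk system qualification type IDs.
NUMBER_HITS_APPROVED = "00000000000000000040"
PERCENT_ASSIGNMENTS_APPROVED = "000000000000000000L0"
LOCALE = "00000000000000000071"

def open_sample_requirements(max_hits_approved=500, min_approval_rate=None):
    """Build requirements that exclude *highly experienced* workers.

    Passing min_approval_rate=None drops the common 95%+ reputation
    filter, which is one of the qualifications that shuts out new workers.
    """
    reqs = [
        {
            # Cap, rather than floor, the worker's approved-HIT count.
            "QualificationTypeId": NUMBER_HITS_APPROVED,
            "Comparator": "LessThan",
            "IntegerValues": [max_hits_approved],
            "ActionsGuarded": "DiscoverPreviewAndAccept",
        },
        {
            # Restrict to US workers, matching the paper's US-worker analyses.
            "QualificationTypeId": LOCALE,
            "Comparator": "EqualTo",
            "LocaleValues": [{"Country": "US"}],
            "ActionsGuarded": "DiscoverPreviewAndAccept",
        },
    ]
    if min_approval_rate is not None:
        # Optionally retain a (lower) approval-rate floor for data quality.
        reqs.append({
            "QualificationTypeId": PERCENT_ASSIGNMENTS_APPROVED,
            "Comparator": "GreaterThanOrEqualTo",
            "IntegerValues": [min_approval_rate],
            "ActionsGuarded": "DiscoverPreviewAndAccept",
        })
    return reqs

reqs = open_sample_requirements()
print(len(reqs))  # 2 requirements: experience cap + US locale
```

A stratified variant would instead post parallel HITs whose `NUMBER_HITS_APPROVED` bands partition workers by experience level, filling a fixed quota per band.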


Conflict of interest statement

We have read the journal’s policy and the authors of this manuscript have the following potential competing interests: All of the authors are employed at Prime Research Solutions. This is the company that owns TurkPrime, the platform whose TurkPrime ToolKit was used to source Mechanical Turk participants, and the database from which some data were queried. This does not alter our adherence to PLOS ONE policies on sharing data and materials.

Figures

Fig 1
Fig 1. The number of new and unique US workers taking a HIT posted through TurkPrime across years.
Fig 2
Fig 2. The number of new US workers per month from January 2016 to April 2019.
On average 4,683 new workers joined the pool each month.
Fig 3
Fig 3. Percent of MTurk workers who fall into each experience group and the share of HITs completed by each group.
Fig 4
Fig 4. Verified approval rating for workers in the open sample.
Fig 5
Fig 5. Verified HIT completion history for workers in the open sample.
Fig 6
Fig 6. The anchoring manipulation across all three groups in Study 1.
Fig 7
Fig 7. The anchoring manipulation across groups in Study 2.
Fig 8
Fig 8. The percent of inexperienced workers participating in Study 3 by day.
In total, 48 participants completed the study.
