TurkerView

USAFA WERC

George Mason University
  • Reviews: 99
  • HITs: 50

USAFA WERC Ratings


  • Workers feel this requester pays well
  • Okay Communication
  • Approves Quickly
  • No Rejections
  • No Blocks

USAFA WERC Wage History


Heads up! We'll never hide reviews unless they violate our Terms of Service.

Top Worker Reviews

Morgainne (Proficient Worker)
Reviews: 12,070
Points: 11,186
Ratings: 700
Answer a survey about your perception of robots. - $1.00

Reward: Fair | Communication: Unrated | Approved
$11.46 / hour | 00:05:14 completion time

Pros

Cons

Okay, it's time to put on my rant pants. I have no idea if this was just one giant attention check disguised as a survey or an actual survey. There's a difference between making sure workers pay attention and something like this, which is flat-out paranoid and almost childish. It got to the point where I was more concerned with avoiding ACs than with what I was actually doing, and doesn't that sort of invalidate the whole point of a survey? There were also several of them, one right after the other. Plus one was basically "put this between 0 and 30": so do you want it somewhere between 0 and 30, or exactly in the middle of 0 and 30?
Aside from that, there were about 60 images to rate on a slider scale. Was this really worth doing for a dollar? Not really. I respect that a requester wants to make sure their data is legit and that workers care enough to pay attention and give them good data, but this nonsense was simply a joke that I should have just tossed back.
Feb 14, 2020 | 8 workers found this helpful.

Random Jobber (Fast Reader)
Reviews: 11,834
Points: 14,986
Ratings: 1,867
10 min experiment ($2.00 + bonus up to 90 cents) watch videos, play games, & answer questions - $2.00 +0.90 bonus Confirmed!

Reward: Generous | Communication: Good | Approved
$32.42 / hour | 00:05:22 completion time

Pros

They responded pretty quickly to all the emails and attached a screenshot showing the error they were getting.

Cons

Didn't receive the bonus until 10/22, and it required a few back-and-forth emails. I most likely wouldn't have received a response as quickly (or at all) if I had sent it through MTurk instead of the email listed in the consent form.
Sep 13, 2020

cheyenne (Average Pace)
Reviews: 6,045
Points: 5,697
Ratings: 657
A 10-15 min ($1.00 + up to $0.90 bonus) study on different types of language use - $1.90

Reward: Unrated | Communication: Excellent | Approved
$16.89 / hour | 00:06:45 completion time

Pros

Bonus is built into the survey, don't look for an extra payment.
Contacted the requester, who explained: "There was a mistake when putting the study on the platform. I am sorry for causing the confusion."
Still good pay.

Cons

light writing

Advice to Requester

Thanks for the clarification!
Jan 13, 2022

USAFA WERC
Requester ID: A28GTU87UEJYH6
Top Collaborating Institutions

  • Brown University
  • Colorado School of Mines

Recently Reviewed HITs


  • 10 min experiment ($2.00 + bonus up to 90 cents) watch videos, play games, & answer questions
  • 10-15 min ($1.50 + up to $0.90 bonus) study on different types of language use
  • A 10-15 min ($1.00 + up to $0.90 bonus) study on different types of language use
  • Answer a survey about your perception of descriptive words (~4 minutes)
  • Answer a survey about your perception of robots and other entities. (~6 minutes)

Ratings Legend

Wage Aggregate Tracking

This is fairly straightforward: we take the completion time and the reward amount (where available) and calculate the average hourly rate for the task. We then map that rate onto a simple range based on US minimum-wage standards and color-code it so the numbers are easy to digest.

Color  | Pay Range (Hourly)  | Explanation
RED    | < $7.25 / hr        | Hourly averages below US federal minimum wage
ORANGE | $7.25 - $10.00 / hr | Hourly averages between the federal and highest statewide (CA) minimum wages
GREEN  | > $10.00 / hr       | Hourly averages above all US minimum wage standards
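The calculation above is simple enough to sketch in code. This is an illustrative reimplementation, not TurkerView's actual code; `hourly_rate` and `wage_color` are invented names, and the thresholds are taken from the legend table. The example figures come from the first worker review above ($1.00 reward, 00:05:14 completion time).

```python
def hourly_rate(reward_usd: float, completion_seconds: float) -> float:
    """Average hourly rate: reward divided by completion time in hours."""
    return reward_usd / (completion_seconds / 3600)

def wage_color(rate: float) -> str:
    """Bucket an hourly rate using the minimum-wage thresholds above."""
    if rate < 7.25:        # below US federal minimum wage
        return "RED"
    if rate <= 10.00:      # between federal and highest statewide (CA) minimum
        return "ORANGE"
    return "GREEN"         # above all US minimum-wage standards

# $1.00 reward, 5 min 14 s completion time
rate = hourly_rate(1.00, 5 * 60 + 14)
print(round(rate, 2), wage_color(rate))  # → 11.46 GREEN
```

This matches the $11.46/hr shown on that review's rating block.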

Reward Sentiment

Not all HITs are created equal. Sometimes an hourly wage doesn't convey the full story of a HIT's true worth, so we encourage workers to give their opinion on the overall pay of the task. Was it $8/hr to rate pictures of puppies? A worker could justifiably bump up the rating a bit for something so adorable. Ten hours locked in Inquisit? Even at $10/hr, many workers would appreciate the heads-up on such a task. The Reward Sentiment rating helps connect workers beyond the hard data.

Rating | Suggested Guidelines
Underpaid (1/5)
  • Very low or no pay
  • Frustrating work experience
  • Inadequate instructions
Low (2/5)
  • Below US minimum wage ($7.25/hr)
  • No redeeming qualities to make up for the pay
Fair (3/5)
  • Minimum wage for the task (consider SE taxes!)
  • Work experience offers nothing to tip the scales in a positive or negative direction
Good (4/5)
  • Pay is above minimum wage, or compensates better than average for the level of effort required
  • The overall work experience makes up for borderline wages
Generous (5/5)
  • Pay is exceptional
  • Interesting, engaging work or work environment
  • Concise instructions, well-designed HIT

Communication Ratings

Communication is an underrated aspect of MTurk. Clear, concise directions, a fast response to a clarification question, or the resolution of a workflow suggestion can all be valuable aspects of interaction between Requesters and Workers, and they're worth keeping track of. Plus, everyone enjoys the peace of mind of knowing that if something does go wrong, an actual human will get back to you to solve the issue.

Rating | Suggested Guidelines
Unacceptable (1/5)
  • No response at all
  • Rude response without a resolution
Poor (2/5)
  • Responsive, but unhelpful
  • Required IRB or other extra intervention
Acceptable (3/5)
  • Responded in a reasonable timeframe
  • Resolves issues to a minimum level of satisfaction
Good (4/5)
  • Prompt response
  • Positive resolution
Excellent (5/5)
  • Prompt response time
  • Friendly & professional
  • Helpful / solved issues
  • Interacts within the community

Approval Time Tracking

This rating is strictly for approval times. Let's face it: no one wants approval-time ratings mixed with how fast a Requester rejects a HIT, so we've saved rejection flags for another category. This provides a more straightforward way to know how long your HIT might sit pending before paying out. The default auto-approval for most MTurk tasks is 3 days; the maximum is 30 days. We've tried to base our ratings around those data points.

Rating          | Approval Time
Very Slow (1/5) | Over 2 weeks
Slow (2/5)      | ~1 - 2 weeks
Average (3/5)   | ~3 - 7 days
Fast (4/5)      | ~1 - 3 days
Very Fast (5/5) | ~24 hours or less
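The buckets above reduce to a simple threshold function. As before, this is a sketch under the stated thresholds, not the site's real implementation, and `approval_rating` is an invented name.

```python
def approval_rating(days: float) -> tuple:
    """Map an observed approval time in days to the 1-5 rating buckets."""
    if days <= 1:
        return (5, "Very Fast")   # ~24 hours or less
    if days <= 3:
        return (4, "Fast")        # ~1 - 3 days
    if days <= 7:
        return (3, "Average")     # ~3 - 7 days
    if days <= 14:
        return (2, "Slow")        # ~1 - 2 weeks
    return (1, "Very Slow")       # over 2 weeks

# MTurk's default auto-approval is 3 days; the maximum is 30 days.
print(approval_rating(3))   # → (4, 'Fast')
print(approval_rating(30))  # → (1, 'Very Slow')
```

Note that a requester who simply lets the 3-day default auto-approval fire would land at the top of the "Fast" bucket under this reading of the table.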

© 2025 TurkerView