I’ll admit it: I stumbled into CrowdFlower as a way to earn Facebook Credits, which is a horribly inauspicious way to begin any adventure. Downtime at the office merged dangerously with the pseudo-materialist allure of that damned Pawn Stars game, and suddenly I wanted to unlock extra display spaces in which to sell antique chronometers to imaginary customers in my digital showcase. Unfortunately, this kind of action requires the frustrating eCurrency of Facebook Credits (which run 10 cents apiece), and despite my very palpable need to expand my stock room so I could store more cymbal-wielding, wind-up monkeys, I adamantly refuse to spend one dime on any variety of virtual currency. Fortunately, CrowdFlower offers these Credits for the low, low price of answering questions.
Surely, answering questions can’t be too complicated. As a professional research librarian with 13 years’ worth of experience, I’ve honed answering questions into a fine, Boolean art. I walk that quivering membrane between “too vague” and “too specific” with winged feet, extracting only the most pertinent data with a grace heretofore unseen by mortal eyes. The search engine is my bitch.
The first task I attempted was matching phone numbers with businesses. Someone had performed an automated search of the Internet for these details and was using human brains to perfect the information squeezed out the other end, hence the “crowd” in the name. CrowdFlower’s application of crowdsourcing practices the malleable art of perfecting information by harnessing the infinite processing power of the human brain-cloud, enticed toward the neon glow of subtle rewards, like Facebook Credits.
The first 10 questions laid before me would be a test. Some of the questions would be genuine inquiries, while others would be plants: questions that CF already knew the answer to, serving as a kind of spot-check to make sure that I’m not truly the drooling idiot they assume I am. I did wander in from a Facebook game, so I can’t hold this assumption against them. So, using a few search engines, I hunted down some phone numbers, and before too long, I was told that my accuracy was too low to allow me to continue.
“For question X, your answer was ‘Yes, I have found this business’ phone number.’ The correct answer was ‘No’.”
Because the master brain at CrowdFlower didn’t locate the phone number of a particular business at the time this test was written, I was deemed a moron. Sure, I couldn’t find this phone number through a quick Google search, but as a paid librarian, I have access to search techniques and databases that the average person lacks. Because I put in an iota of extra effort in finding this phone number (in this case, listed on a PDF from a contract the business held with their town for a recent project), I was ejected from CrowdFlower’s human cloud, an impotent wisp.
This was probably a bizarre, poorly researched fluke, so I began another type of test: determining the accuracy of product descriptions. As I sorted through these for a few minutes, the choices were fairly obvious: a toaster oven was not the wife of Franklin Pierce, and a widescreen TV wasn’t a recipe for eggs Benedict. When I came upon a description that characterized an oven as “attractive, expensive, and belongs in the kitchen,” I decided to check the box which indicated, “No, this description is ambiguous.” Honestly, I could say the exact same things of my girlfriend and every letter would still hold true. Once again, CrowdFlower ejected me from their increasingly useless human cloud.
The problem with this accumulation of neuron-powered consciousness is that CrowdFlower is grading on a curve, and the curve is very, very stupid. The idea of using the altruistic brains of the mouse-wielding populace has worked exceptionally well for websites like Wikipedia, which is imperfect but so meticulously monitored that it always maintains a core of truth and accuracy. CrowdFlower’s contributors are not stimulated by the intellectual volunteerism that powers a wiki; instead, they’re driven by the craving for a quick profit, in nickels or Facebook Credits to spend on making their FarmVille beet crops grow a little faster.
This quest for profit is the impurity that makes CrowdFlower’s information useless, thanks to both CrowdFlower’s focus on retail information and the avarice of its contributors. Once a user passes these initial tests, they can easily copy and paste any text into the various response boxes, which is then declared useless by the next user, and so on in an endless cycle.
CrowdFlower’s own goals aren’t altruistic, as every question I encountered had to do with a commercial product, and the answers undoubtedly go toward either building a better retail search engine or selling lists of crafted information to commercial entities.
Essentially, who gives a damn? CrowdFlower’s tasks aren’t designed by the company itself but by external entities seeking a cross-section of valid information for their own devices, so the “good” that emerges isn’t a universal good that attracts altruism. And frankly, it’s a great idea, but one undermined by very stupid tests and a curve that favors the lowest, stickiest common denominator.