Tuesday, July 31, 2012

The Indian blackouts & oDesk

A nationwide blackout in India has left some 600 million people without electricity. Given that a large number of the contractors on oDesk are from India, I assumed that the effects of the blackout would show up readily in the oDesk data. This evening, I wrote a query to get the hours worked each day by Indian contractors during the last month, as well as the number of applications they sent. I divided these counts by the respective totals for that day for all of oDesk. From this time series, we can get a sense of what was supposed to happen today and compare it to what actually happened. The time series for applications (top) and hours worked (bottom) are plotted below [1], with today annotated in red. Each percentage estimate has a 95% confidence interval.

Some observations
  • There is an easily detectable drop-off in the hours worked---my eyeball calculation says Indian contractors should have been responsible for around 22% of the hours worked today, while the actual number is closer to 17.5%. This is far less of a fall-off than we would naively predict from the "1/2 of Indians without power" headline. Presumably many contractors have access to private generators, or perhaps oDesk contractors are over-represented in parts of the country that were less affected by the blackout.
  • There is no corresponding obvious drop-off in the fraction of applications. I don't have a good explanation for this, but perhaps unaffected Indian contractors have made up the difference and exploited the now-thinner market. If I can get some data on which parts of the country are actually being affected by the blackout, I could test this notion, since I do have contractor locations down to the city level.
  • Indian contractors take weekends off, both in terms of working and job finding (or at least more so than their oDesk counterparts from other countries). Remember that this time series is India's fraction for a given day, so there's no mechanical reason for a strong weekend/weekday pattern. See the oDesk Country Explorer for more of this kind of data.
  • Indian contractors are generally over-represented in the application pool, making up ~25% of applications but only ~20% of hours worked, though this could easily reflect differences in the kinds of categories Indian contractors work in---there is a great deal of variance in the average number of applications per opening across the different job categories.
Code for the plots (done in ggplot2):
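Since the embedded script doesn't render here, below is a minimal sketch (not the original code) of how the hours-worked panel could be produced in ggplot2. The input file and its column names (date, india, total) are assumptions, and the confidence interval is a rough normal approximation.

library(ggplot2)

# Hypothetical daily counts: hours billed by India-based contractors and the
# oDesk-wide total (an analogous data frame would exist for applications)
hours <- read.csv("daily_hours_by_country.csv")  # columns: date, india, total
hours$date  <- as.Date(hours$date)
hours$share <- hours$india / hours$total
# Rough normal-approximation 95% interval for the daily share
hours$se <- with(hours, sqrt(share * (1 - share) / total))

ggplot(hours, aes(x = date, y = share)) +
  geom_point(aes(colour = date == max(date))) +   # highlight "today"
  geom_errorbar(aes(ymin = share - 1.96 * se,
                    ymax = share + 1.96 * se), width = 0.2) +
  scale_colour_manual(values = c("black", "red")) +
  labs(x = NULL, y = "India's share of hours worked") +
  theme_bw() +
  theme(legend.position = "none")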





Wednesday, July 25, 2012

Digitization of the supply side of the labor market

Note: This blog post also contains a short review of Google's new Consumer Surveys service. See the end of the blog post for details. 

On most electronic commerce sites, information about the supply side is digitized and publicly available while information about the demand side is generally not: Amazon, Expedia, iTunes, Etsy, etc., all collect and display detailed data about the items for sale, but there is generally little or no information about the consumers with the demands. If we look at the labor market, the reverse is true, in that it is the demand side that's digitized. On online job boards like CareerBuilder, Monster.com, Indeed, SimplyHired, etc., vacancies are described via detailed textual descriptions about the nature of the work, skills required, location and approximate salary, but the job seekers---the sellers---generally do not create profiles that describe themselves to the marketplace.


While we might think that there is some fundamental reason for this difference, I don't think this is the case, for the simple reason that the supply side of the labor market is now being digitized, primarily through LinkedIn (in a big way) and through sites like oDesk (in a comparatively smaller, but more comprehensive way). On these sites, workers create permanent, searchable profiles for employers that contain rich, employment-relevant data about themselves.

With the rise of LinkedIn, we are witnessing an unprecedented, voluntary data collection and digitization of the supply side of the labor market. On LinkedIn, individuals can create public profiles and list their education, professional credentials, associations, skills, current and past work experiences and, critically, their other professional connections (indicated by approved links to other LinkedIn users). As of yesterday (July 24th, 2012), approximately 19% of the US-based, Internet-using population had a LinkedIn profile [* see the note below for the interesting background of this 19% figure]. According to LinkedIn, as of March 12, 2012, over 160 million people have created profiles, and in many industries, a LinkedIn profile is expected of all applicants. I recently talked to oDesk's corporate recruiter, asking her how many candidates had LinkedIn profiles. She responded:

I'd say it is close to 100% (and certainly 100% for viable candidates).   I can't think of an example of someone who I have screened who didn't have a profile on LinkedIn. 

I think this supply digitization is likely to prove consequential, because once the supply side of the labor market is digitized, platforms can begin making data-driven, highly contextualized recommendations to both sides of the market. The recommendations made by a platform have the advantage of being potentially informed by the platform's holistic perspective on the marketplace. In computer-mediated marketplaces, by necessity, essentially every piece of data that goes into or is generated by the marketplace is captured in an electronic database that could conceivably be used to make recommendations.

Of course, job boards do try to make recommendations by suggesting vacancies to workers, but they are limited to conditioning those recommendations on whatever search terms and perhaps geographic and/or salary constraints a job-seeker enters in a relatively brief search session. The platform cannot condition its recommendations on a worker's employment history, educational background, skills, current employment status, professional connections, certifications, personality, test scores and other match-relevant factors, never mind try to balance recommendations to navigate the twin shoals of market thinness and market congestion.

Unfortunately, I think a lot of this work on recommendations will happen within companies in a state of semi-secrecy, but hopefully enough will be made public that others can contribute, a la the Netflix challenge. It's a little sad that, to date, society has expended more machine-learning research effort trying to predict taste in movies than fit for jobs, despite the enormous welfare consequences of the labor market. However, I predict this will change and expect a lot more work on this topic from computer scientists and market designers in the coming years.

[*] The Origin of "19% of the US Population has a LinkedIn profile" Number

In writing this blog post, I wanted to get an accurate number for what fraction of the US population has a LinkedIn profile. This number was proving hard to come by, so I decided to try a relatively new service launched by Google called Google Consumer Surveys. For 10 cents an answer, you can pose questions to a supposedly representative sample of US-based Internet users. You also get some of the respondents' basic demographics, such as inferred age, gender and income. I launched a one-question survey and got 1,511 responses in less than a day. The screenshot below shows the main results, and the service also includes some neat tools for looking at the data in different ways. I made the survey public---check it out here. I'm quite pleased with the service and plan to use it again.


Thursday, July 5, 2012

Shrimponomics, Complements & BPOs


Most relevant image available from a Google Image search for "Shrimp using a computer"

A few years ago, there was a Freakonomics post about how people reason about economic situations and phenomena. The phenomenon in question was shrimp consumption: per capita shrimp consumption in the US tripled between 1982 and 2007. When asked to explain this rise, non-economists mainly give demand reasons (changes in preferences), while economists are more likely to also give supply reasons (improved fishing efficiency, the rise of aquaculture, etc.).

If I had to offer an explanation for this focus on demand explanations, my guess is that demand explanations come more easily because the demand side of the market is the one more familiar to us: most of us have eaten shrimp and bought shrimp---very few of us have worked in commercial fishing. So when asked "why are people consuming more shrimp?" we start with "why might I consume more shrimp?" and although price is certainly a reason (and a path of thought that would help lead to a demand explanation), it's not as salient or even as interesting as things like changing tastes, health trends, exciting new shrimp-based dishes, etc.

So this blog post isn't about shrimp, and it isn't about supply & demand. It's about complements and substitutes. I think there is a similar psychological tendency to focus on goods-as-substitutes rather than goods-as-complements. At the individual level, where we are making choices, we are usually thinking in terms of substitutes: do I want coffee or tea? Should I take a vacation to Las Vegas or Hawaii? Mac or PC? It's a bit more subtle to think about "if I had X, would it make Y more useful to me?", which is at the heart of all complementarity stories.

This is a long-winded introduction to my real topic, which is that in my last blog post, I made the argument that online work could disrupt the BPO industry by serving as a substitute for what BPOs offer. A point I didn't think of---but which in retrospect seems pretty obvious---is that complementarity could just as easily be the dominant effect. After my blog post, oDesk's CTO, Odysseas, emailed me with his thoughts:

The primary benefit of BPOs is not that of labor cost arbitrage. Thats typically the motive/benefit for offshore staff augmentation firms - but BPOs are business process outsourcers. BPO is ADP [Automated Data Processing] that outsources your payroll or a business that outsource your HR process etc... We often tend to think of BPOs as an offshore firm that does a little bit of everything having as sole pivot point its lower cost of labor - thats true, but its an abuse of the term and I would agree there that the particular type of business is going to be affected in the years to come from online labor.
This part is basically my substitutes story---now the complements part:
However, the more interesting effect would be the effect of online labor to the real BPOs..
There BPOs will not be negatively affected - the opposite. The availability of online labor would allow BPOs to become more flexible lower their overall fixed costs force them to become more automated and streamline (their virtual nature will require that), allowing them to lower even the cost per customer, allowing them to focus on smaller projects, smaller customers allowing to address smaller/different market segments.  They will become less relying on an enterprise sales force customer acquisition model which is dramatically affecting their cost structure.
We are seing examples of what the new BPOs will become in companies that outsource the process of testing (uTest) of seo writing (Mediapiston) etc.
He's of course exactly right---and he's a CS PhD, not an economist, so shame on me :). If you think of true BPOs in the sense that Odysseas is talking about, then the complementarity story becomes more important. These true BPOs would be big buyers in the inputs market and would benefit greatly from a liquid, efficient market for labor.


Tuesday, June 26, 2012

Will online labor markets disrupt the traditional BPO firm?

Today I spoke on a panel on something called "impact sourcing" at the BPO World Forum. The idea of impact sourcing, in a nutshell, is that online work is a tool for development, and that for-profit firms outsourcing some part of their business should look beyond traditional BPO firms and consider non-profits like Samasource and Digital Divide Data. It was a good audience for this pitch, as many of the attendees were CIOs from big companies who are accustomed to signing multi-million-dollar IT outsourcing deals with traditional BPO firms like Wipro, Infosys, Tata Consultancy, etc.

After the panel, I was at a reception where I talked to someone fairly high up in a traditional BPO. When I described the elevator-pitch version of oDesk's business---clients post jobs, contractors make bids, clients make a hire, we intermediate the work and take a percentage---he said, literally, "what are you doing here at this conference? You guys are like the Antichrist." What he meant (in a half-joking, half-serious way) is that oDesk and similar companies threaten the BPO model.

My perception is that the traditional BPO model is possible because of two facts: (a) the enormous, purely place-based differences in wages and (b) the difficulty of actually arbitraging those differences without help. BPOs stand ready to help companies reap the benefits of (a) by giving the help necessitated by (b). The world is still very far away from (a) no longer being true, but if oDesk and similar companies can radically lower the barriers to arbitraging wage differences by making it easy to hire, manage and pay workers regardless of geography, then (b) starts to become less true. If we get to the point where the qualitative differences between online remote and in-person work diminish and assessing and hiring workers is simple and easy, it would obviate the need for much of what the BPO firm is selling.

This is not to say that there isn't still a huge space for IT consulting---outsourcing an entire process is hard, and BPOs with lots of experience have something very valuable to offer. Furthermore, besides the pure cost level, one of the motivations for business process outsourcing is the ability to change a firm's cost structure, namely by turning a fixed cost into a variable cost. But these caveats aside, on the margin, the mediation aspect of the BPO role seems likely to get less attractive over time as technology improves and online labor markets mature.

Wednesday, June 6, 2012

Resources for online social science

The Economist recently had an article about the growing use of online labor markets as subject pools in psychology research; ReadWriteWeb wrote a follow-up. If you've been following this topic, there wasn't very much new, but if you're a researcher who would like to use these methods, the articles were pretty light on useful links. This blog post is an attempt to point out some of the resources/papers available. This is my own very biased, probably idiosyncratic view of the resources, so hopefully people will send me corrections/additions and I can update this post.

To start, let's have this medium pay tribute to itself by running through some blogs and their creators. 

Blogs 

  • There is the "Follow the Crowd" blog, which I believe is associated with the HCOMP conference. It's definitely more CS than social science, but I think it's filled with good examples of high-quality research done with MTurk and with other markets.
  • There's Gabriel Paolacci's "Experimental Turk" blog (he is now at Erasmus University), which was mentioned in the article and is probably the best resource for examples of social and psychological science research being done with MTurk.
  • Panos Ipeirotis (at NYU, and now academic-in-residence at oDesk) has a great blog, "Behind Enemy Lines," that covers basically all things relating to online work.
  • The now-defunct "Deneme" blog by Greg Little (who also works at oDesk) and Lydia Chilton (at the University of Washington).

Guides / How-To (Academic Papers)

A number of researchers have written guides to using MTurk for research. I think the first stop for social scientists should be the paper by Jesse Chandler, Gabriel Paolacci and Panos Ipeirotis:

Chandler, J., Paolacci, G., and Ipeirotis, P. Running Experiments on Mechanical Turk. Judgment and Decision Making. (paper) (bibtex)

My own contribution is a paper with Dave Rand (who will be starting as a new assistant professor at Yale) and Richard Zeckhauser (at Harvard). The paper contains a few replication studies, but the real meat---and the part I think is most important---is the discussion of precisely why and how you can do valid causal inference online (I'm stealing this write-up and the links for the paper from Dave's website):

Horton, J.J., Rand, D.G., and Zeckhauser, R.J. (2011) The Online Laboratory: Conducting Experiments in a Real Labor Market. Experimental Economics, 14, 399-425. (PDF) (bibtex)

Press: NPR's Morning Edition, Marketplace [audio], The Atlantic, Berkman Luncheon Series [video], National Affairs, Crowdflower, Marginal Revolution, Experimental Turk, My Heart's in Accra, Joho blog, Veracities blog

Software 

Unfortunately, there hasn't been too much sharing of software for doing online experiments. Since a lot of the experimentation is done by computer scientists who do not feel daunted by making their own one-off, ad hoc applications, there are a lot of one-off, ad hoc applications. Hopefully people know of other tools out there that they can open source or share links to.

"Randomizer"

Basically, it lets you provide subjects one link that will automatically redirect them (at random) to a collection of URLs you've specified. I made the first, really crummy version of this and then got a real developer to redo it so it runs on Google App Engine.



"QuickLime"
This is a tool for quickly setting up LimeSurvey (an open-source alternative to Qualtrics & SurveyMonkey) on a new EC2 machine. It was made courtesy of oDesk Research. I haven't fully tested it yet, so as with all this software, caveat oeconomus.


"oDesk APIs"
There haven't been a lot of experiments done on oDesk by social scientists, but there's no reason they can't be done. While it is currently not as convenient or as low-cost as running experiments on MTurk, I think that in the long run oDesk workers would make a better subject pool: you can more carefully control experiments, it's easier to get everyone online at the same time to participate, there are no spammers, etc. If you're looking for some ideas or pointers, feel free to email me.

"Boto"
This is a python toolkit for working with Amazon Web Services (AWS). It's fantastic and saved me a lot of time when I was doing lots of MTurk experiments.

"Seaweed"
This was Lydia Chilton's master's thesis. The idea was to create tools for conducting economics experiments online. I don't think it ever moved beyond the beta stage, but if you (a) have some grant money and (b) are thinking about porting z-Tree to the web, you should email Lydia and see where the codebase is and whether anyone is working on it.

Here's a little JavaScript snippet I wrote for doing randomization within the page of an MTurk task.

People 

I'm not going to try to do a who's-who of crowdsourcing, but if you're looking for contacts of other people (particularly those in CS) who are doing work in this field, you can check out the list of recent participants at "CrowdCamp," which was a workshop held just before CHI.

History

The first paper I'm aware of that pointed out that experiments (i.e., user studies) were possible on MTurk was by Ed Chi, Niki Kittur and Bongwon Suh. As far as I know, the first social science done on MTurk was Duncan Watts and Winter Mason's paper on financial incentives and the performance of crowds.

Friday, June 1, 2012

The Innovation of StackOverflow

So as I write this, there is an egg timer ticking away next to me, set with 10 minutes of time. What am I waiting for? 10 minutes is how much time I predicted it would take to get my programming question answered on StackOverflow (SO):


http://stackoverflow.com/questions/10860020/output-a-vector-in-r-in-the-same-format-used-for-inputting-it-into-r

The back story was that I was writing some R code and I got to a point where I was stuck: there was something I wanted to do and I remembered that there was a built-in function that could accomplish my goal. Unfortunately, I couldn't remember that function's name. After some fruitless googling, I posted the question on SO.

So, how long did it actually take to get the right answer? About 6 1/2 minutes. As I write this sentence, I'm waiting for some more time to elapse so I can actually approve the answer:  

 
This has been my general experience with SO---amazingly high-quality answers delivered almost immediately. I feel sheepish that I haven't been able to answer as many questions as I've asked, but one of the animating ideas of the community is that asking high-quality, answerable questions is a way of contributing. 

What's interesting to me is that SO is an example of a primarily social---as opposed to technological---innovation. There's nothing really technically innovative about SO: the site is fast, search works well, tagging works well etc., but lots of sites have those things. What's special about SO is that through a carefully designed system of incentives and policies, they have created a community that is literally---and I think profoundly---changing how people program computers.  
 
The reason I point out the social nature of the innovation is that it's become popular to lament the shallowness or perceived frivolity of many start-ups that are built around social rather than technological innovations (e.g., Facebook, Twitter, Instagram, etc.). The idea seems to be that if you aren't making solar panels or cancer-curing drugs, you're not doing something socially useful. I personally don't share that bias, but if we are going to judge companies on the basis of some more "serious" metric like productivity or social surplus, then SO is a great example of how a purely social innovation can succeed spectacularly on those metrics.

Tuesday, May 22, 2012

Data openness by private firms

The New York Times has a story today about social scientists working with company data and being unable or unwilling to make it public. The story begins:
When scientists publish their research, they also make the underlying data available so the results can be verified by other scientists.
I think the first sentence is probably more a description of how we'd like the world to be than how it actually is right now, especially in the social sciences. The main so-what of the story is that private companies are collecting enormous amounts of high-quality data that let you do fascinating social science, but companies are understandably reluctant to make these data public, primarily for privacy reasons (and probably also because they are afraid of giving up some competitive advantage).

I think the options for any organization that does or might do research are:

1) Do research for business purposes. Make neither the findings nor the data public.
2) Do research for business purposes. Make the findings but not the full data public.
3) Do research for business purposes. Make the findings and data public.  
4) Do research for its own sake. Make the findings and data public.


Most companies probably aren't interested in (4), and this is probably academia's biggest comparative advantage. Barring (4), I think that from a social perspective, privacy issues aside, the best outcomes in order are (3) > (2) > (1). I can understand (1) in some cases, but at least in the kinds of companies I'm familiar with, the advantages of keeping everything secret probably aren't that great.

The advantages of (2) or (3) over (1):  

a) If you're a software company and you release a feature that works, it will probably get copied anyway, regardless of whether you publish a paper, so you might as well get the thought-leadership credit for coming up with the idea in the first place. This paper is/was the basis for Google's secret sauce---posting it to the InfoLab servers back in 1999 didn't doom the company and probably did a lot to increase the perception that they were doing something smarter (even though there were antecedents of this idea going back many years---including in economics, by my academic grandfather).

b) If you give outside academics access to your data and let them publish, you can get them to work on your problems for free (the Netflix Prize is an obvious example). You can then recruit those academics to come work for you, or at least get their grad students to come work for you.

c) If you let your internal researchers publish, you can get them to work at reduced cost or get researchers you otherwise wouldn't be able to attract (see Scott Stern's paper on scientists "paying" to do science).

On (2) versus (3), I think there is a real dilemma: openness and privacy concerns are in tension. Furthermore, just releasing more aggregated or somehow obfuscated versions of the data is not risk-free: there's actually an emerging literature in computer science on how to release data in ways that are guaranteed to still have the right privacy properties (UPenn professor Aaron Roth recently taught a course on the topic). The fact that smart people are working on it is exciting, since they might figure out provably risk-free ways to release data publicly, but it's also evidence that this isn't a trivially easy problem---seemingly innocuous data disclosures could let someone unravel the obfuscation.


As a coda, I have a personal anecdote to share about this story. One of the people discussed in the article is Bernardo Huberman: 
The chairman of the conference panel — Bernardo A. Huberman, a physicist who directs the social computing group at HP Labs here — responded angrily. In the future, he said, the conference should not accept papers from authors who did not make their data public. He was greeted by applause from the audience. 
When I was a grad student, I taught a course to Harvard sophomore economics majors called "Online Labor" (syllabus).  I assigned some of Huberman's papers on motivation. I emailed him to ask for the data from one of his papers. He wrote back: 
Dear Dr. Horton:
Thank you for your interest in my work and I certainly feel pleased when I learn that you liked my paper enough to assign it to your class.
As to your request, let me talk with the person who now handles the youtube data (we lately used it to uncover the persistence paradox) and I'll get back to you.
Incidentally if you are interested in the role that attention and status (its marker) play among people I could send you a paper that reports on a experiment (as opposed to observational data) that elucidates it quite cleanly across cultures.
Best,
Bernardo

I got the data within days---I can state that he privately practices what he preaches publicly.

Update: I incorrectly stated that Aaron Roth was a professor at CMU---he did his PhD at CMU. He's a professor at UPenn. Apologies.


Tuesday, March 6, 2012

Location of India-Based Contractors on oDesk

My favorite R package, ggplot2, recently introduced enhanced support for choropleth maps. I'd like to make some of these kinds of maps with oDesk data, but as a first step, I thought I'd just plot the locations of all of our India-based contractors by city. In the plot below, dot sizes are log-scaled by the number of contractors reporting that city. The massive light blue dot near Delhi is the default coordinate used when we're missing the city.

For those of you who know India, anything surprising/interesting here?


Here's the associated R code to make this figure:
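The embedded script doesn't render here, so the following is a minimal sketch of how such a map could be drawn with ggplot2 (plus the maps and mapproj packages); the input file and its columns (city, lon, lat, n_contractors) are assumptions rather than the actual oDesk data.

library(ggplot2)
library(maps)     # polygon data used by map_data()
library(mapproj)  # needed for coord_map()

# Hypothetical input: one row per city with coordinates and contractor counts
contractors <- read.csv("india_contractors_by_city.csv")

india <- subset(map_data("world"), region == "India")

ggplot() +
  geom_polygon(data = india, aes(x = long, y = lat, group = group),
               fill = "grey90", colour = "grey60") +
  geom_point(data = contractors,
             aes(x = lon, y = lat, size = log(n_contractors)),
             colour = "steelblue", alpha = 0.6) +
  coord_map() +
  theme_minimal()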

Tuesday, February 21, 2012

Economics of the Cold Start Problem in Talent Discovery

Supply train steaming into a railhead
Tyler Cowen recently highlighted this paper by Marko Terviö as an explanation for labor shortages in certain areas of IT. The gist of the model is that in hiring novices, firms cannot fully recoup their hiring costs if the novices' true talents will become common knowledge post-hire. It's a great paper, but what people might not know is that the theory it proposes has been tested and found to perform very well. For her job market paper, Mandy Pallais conducted a large experiment on oDesk where she essentially played the role of the talent-revealing firm.

Here's the abstract from her paper:
... I formalize this intuition in a model of the labor market in which positive hiring costs and publicly observable output lead to inefficiently low novice hiring. I test the models relevance in an online labor market by hiring 952 workers at random from an applicant pool of 3,767 for a 10-hour data entry job. In this market, worker performance is publicly observable. Consistent with the models prediction, novice workers hired at random obtain significantly more employment and have higher earnings than the control group, following the initial hiring spell. A second treatment confirms that this causal effect is likely explained by information revelation rather than skills acquisition. Providing the market with more detailed information about the performance of a subset of the randomly-hired workers raised earnings of high productivity workers and decreased earnings of low-productivity workers. 
In a nutshell, as a worker, you can't get hired unless you have feedback, and you can't get feedback unless you've been hired. This "cold start" problem is one of the key challenges of online labor markets, where there are far fewer signals about a worker's ability and less common knowledge about what different signals even mean (quick: what's the MIT of Romania?). I would argue that scalable talent discovery and revelation is the most important applied problem in online labor/crowdsourcing.

Although acute in online labor markets, the problem of talent discovery and revelation is no cakewalk in traditional markets either. Not surprisingly, several new start-ups (e.g., Smarterer and Gild) are focusing on scalable skill assessment, and there is excitement in the tech community about using talent-revealing sites like StackOverflow and GitHub as replacements for traditional resumes. It is not hard to imagine these low-cost tools or their future incarnations being paired with scalable tools for creating human capital, like the automated training programs and courses offered by Udacity, Khan Academy, Codecademy and MITx. Taken together, they could create a kind of substitute for the combined training/signaling role that traditional higher education plays today.



Monday, February 20, 2012

Solvate joins the deadpool

TechCrunch and Betabeat are reporting that Solvate, a platform for remote work, is shutting down. Unlike oDesk, Elance, Freelancer, etc., they were not trying to create a true marketplace: they were trying to offer more of a high-touch, human-in-the-loop matching service.

In the email Solvate sent to their users about the shutdown, they explicitly cited scalability issues, which I'm guessing refers to the unsustainable effort and cost of hand-matching buyers and sellers. I wouldn't say this is definitive proof that the high-touch matching business model doesn't work (my outsider impression is that GLG is killing it), but it is a reminder that the value added from your human-in-the-loop matching has to be sufficiently high that you can recoup your costs: you can't take a hit on every unit sold and make it up on volume.

I think it's too bad they are shutting down---I would have liked to see how their approach to online labor would have evolved. That being said, I personally found their emphasis (at least in their marketing copy) on US-based workers off-putting. Solvate's CEO was quoted extensively in a Gigaom article, in which he claimed that online labor markets were undermining US workers. He also suggested that by relying only on US-based workers, Solvate could promise a higher level of talent and expertise. All online labor markets have to find ways to help workers credibly demonstrate their talents, and using crude geography-based proxies for talent is one approach, but not a particularly admirable one. To me, the whole ethical/moral "so what" of online work is that geography and nationality don't have to matter.

As a coda, here is my response to the original Gigaom article:


Full disclosure: I’m the staff economist at oDesk and these opinions represent my own views.
A couple of thoughts:
  • As in any competitive market, the forces of supply and demand are going to determine prices in these online markets. With the opening up of new countries that have large, reasonably well-educated, internet-savvy populations, supply increases, which will tend to drive down wages. On the other hand, these markets (and the ability to break work up into small, outsourceable bits) also make it possible to outsource more work, increasing demand, and hence prices.
  •  At least within oDesk, we haven’t seen strong trends in wages, though presumably this article is talking about freelancers in general and we obviously don’t have visibility on their wages.
  • As a practical matter, I don't buy the claim that workers in developed countries like the US can't compete in these markets—they actually have a lot of advantages: perfect English, same time zone, familiarity with US business culture/expectations, etc. Further, price matters, but it's not the only thing. For what it's worth, I work with many oDesk contractors and the break-down is 1 x US, 1 x Italy, 1 x Russia, 1 x Pakistan and 2 x Philippines.
  • The efficiency and distributional effects of information and communications technology are complex and the evidence is ambiguous, so I’d be skeptical of anyone offering a definite answer to these kinds of questions. There was an interesting Quora thread on this topic. 
  • I think focusing on what these markets do for relatively well-paid workers in developed countries misses one of the most important moral facts about these markets, which is that they generate new, relatively well-paid, meaningful work opportunities for people in developing countries. It's obviously not a random sample of our workers, but if you spend a few minutes on oDesk's Facebook fan page and look at the comments and stories, it's clear that online work is improving lives in a pretty dramatic way.





Saturday, February 18, 2012

High-wage skills on oDesk (or why you might want to learn Clojure if you're not a lawyer)


Update: Hello, HackerNews readers. One thing that I discussed but probably didn't emphasize enough is that these data show the correlation between listed skills and offered wages---you absolutely cannot infer a causal relationship (my cheeky title notwithstanding). Unless I get to create and run a massive skills-training experiment, it's going to be hard to get at causality. But I can do something about the offered/earned distinction. If you don't want to miss my follow-on blog post, where I explore the relationship between skills and actual earned wages from actual projects, follow me on twitter.

oDesk recently introduced a controlled, centralized vocabulary of about 1,400 skills for buyers and contractors to use when posting jobs and creating profiles. The primary motivation for the change was to make it easier for buyers and sellers to find each other: without a standardized vocabulary, would-be traders can fail to match simply because they use different terms for the same skill.

A side effect of this transition is that high-quality data on the relationships between skills and wages are now available. I recently built a dataset of contractors' hourly wages by skill: for each skill, I identified all contractors listing that skill on their profiles and averaged their offered hourly wages. Although contractors are free to offer any hourly wage they like, in my experience, offered wages map closely to actual earnings. However, to reduce the influence of outliers, I restricted the sample to contractors offering between 50 cents and 100 dollars per hour. I also only included skills for which there were 30 or more observations.
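As a rough illustration of that construction (not the actual query), here is a sketch in base R plus googleVis; the input data frame and its column names (skill, hourly_rate) are assumptions.

library(googleVis)

# Hypothetical input: one row per (contractor, skill) pair with the
# contractor's offered hourly rate
skills <- read.csv("contractor_skill_rates.csv")  # columns: skill, hourly_rate

# Trim outliers: keep offered rates between $0.50 and $100 per hour
skills <- subset(skills, hourly_rate >= 0.5 & hourly_rate <= 100)

# Average offered wage and number of contractors per skill
skill_stats <- aggregate(hourly_rate ~ skill, data = skills,
                         FUN = function(x) c(mean = mean(x), n = length(x)))
skill_stats <- data.frame(skill     = skill_stats$skill,
                          mean_rate = skill_stats$hourly_rate[, "mean"],
                          n         = skill_stats$hourly_rate[, "n"])

# Keep skills with at least 30 observations; take the top 50 by average wage
skill_stats <- subset(skill_stats, n >= 30)
top50 <- head(skill_stats[order(-skill_stats$mean_rate), ], 50)

# Bar chart of average offered wage by skill (opens in the browser)
plot(gvisBarChart(top50, xvar = "skill", yvar = "mean_rate",
                  options = list(height = 1200)))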

In the bar chart below (made using the very cool googleVis package for R), I plotted the top 50 skills, ordered by average hourly wage (here is a "live" version with mouse-over). The top of the list is dominated by high-end consulting areas (e.g., patents and venture capital consulting) and hot newer technologies (e.g., Redis and Amazon RDS). The programming language that commands the highest wage is Clojure, which is a rather esoteric skill: it's a Lisp dialect that runs on the Java Virtual Machine (JVM). Perhaps this is the market reflecting Paul Graham's "Python Paradox":

"if a company chooses to write its software in a comparatively esoteric language, they'll be able to hire better programmers, because they'll attract only those who cared enough to learn it. And for programmers the paradox is even more pronounced: the language to learn, if you want to get a good job, is a language that people don't learn merely to get a job."

At the time Graham wrote this, Python was a far less mainstream language, probably analogous to how Clojure is regarded today. It's an interesting pattern, and although they'd cut up my economist membership card if I made a causal claim that knowing Clojure lets you command higher wages, I'm intrigued by the idea of using online labor markets as a bellwether to help guide human capital choices.

Thursday, February 16, 2012

Why aren't we all freelancers?

Eendenfokkerij / Duck farm

Investors typically hold diverse portfolios of assets, with the goal of reducing risk. While diversification is commonplace in investing, most of us have no diversification in our labor income streams: we work at one job at a time, for a single employer. However, the "returns" to a job vary like returns on investments, especially on non-financial dimensions (e.g., engagement, learning, co-workers, working conditions). As in investing, there is also a significant amount of direct financial risk in holding one job---the firm may impose layoffs or go out of business. Given the similarities between jobs and assets, why isn't there a similar impetus to diversify, i.e., why don't we all hold a portfolio of small jobs at the same time, with many different employers [0]?

Some workers---freelancers and independent consultants---do follow this diversified model, but it's hardly the norm for workers generally. Below, I lay out a laundry list of potential economic explanations for why the portfolio/freelancing approach is not more common. What's interesting to me, both academically and as someone working at oDesk, is that many of these points are not set-in-stone attributes of the productive process but are instead things that smart features or policies might change.

Non-linearity in costs of searching/vetting/bargaining
Hiring a freelancer for a small project is like picking out a fancy restaurant; hiring a full-time employee is more like buying a house. The effort of searching and vetting (and thus the cost) is related to the stakes of the hire. However, there is no guarantee that those costs scale linearly with the stakes. Suppose it takes nearly as much effort to find a small job as it does to find a large job---then a portfolio approach will generate larger search costs per dollar earned in wages [1].

Non-linearity in job size and productivity 
If you can make X widgets or Y schwidgets in 1 hour, it doesn't mean you can make X/2 widgets and Y/2 schwidgets in 1 hour. Every job has some fixed set-up costs---getting out the materials, remembering the key details, etc. The larger the costs, the less attractive the small job. On the other hand, productivity eventually wanes from boredom, physical fatigue, etc. ("I'm really getting bored with this TPS report---time for some Facebook"). The optimal size job (from a productivity standpoint) might be near or above the current 40 hours per week, 50 weeks a year paradigm, in which case going smaller means getting less efficient.

Complementarities with team members that grow over time
One of the advantages of team production is that workers can share knowledge with each other, motivate each other and generally create an environment where everyone is more productive than they would be working alone. There's no reason teams of freelancers working together cannot achieve the same complementarities with each other, but if these complementarities take time to develop, larger jobs become more attractive.

Firm-specific human capital 
If a job requires lots of firm-specific human capital, the per-job learning requirement is high, which tends to encourage larger jobs [2].

Monitoring & policing costs
Once you get a sense of the character and reputation of some trading partner,  you don't need to constantly monitor that person/firm. After some level of trust has been established, these costs would fall. This again pushes for larger jobs.  This is probably clearer in terms of firms monitoring workers, since the big fear is shirking, but it does go both ways: workers need to make sure their checks don't bounce, that their employers aren't skimming from the 401K, using malk for the coffee service instead of milk, etc.

Employer concerns about IP (broadly defined)
I do not think it is likely that workers would find themselves working simultaneously for direct competitors [3]---the interests of most firms are fairly orthogonal to one another.

Existing public policy 
At least in the US, at the present time, certain realities (health insurance, access to financial credit, etc.) favor full-time employees.

[0] Note that this isn't a theory of the firm argument or discussion. I'm assuming that one can be a full employee and reap all the benefits of firm organization / team production even with fractional employment.

[1] One of the reasons mechanical turk is semi-dysfunctional is that when problems arise (about the scope of work, payment terms etc.), all the surplus generated by the relationship is quickly destroyed: one minute thinking, talking and haggling about a task paying pennies is likely to be economically wasteful. This was one motivation for hagglebot.

[2] I think this is why ideal use of online labor is not so much a 1 for 1 replacement of some traditional job, but a decomposition of jobs into easily outsource-able pieces and pieces that require deep firm-specific knowledge.      

[3] McKinsey excepted. 

Tuesday, February 7, 2012

Writing Smell Detector (WSD) - a tool for finding problematic writing

tl;dr version: WSD is a Python tool to help find problems in your writing. Here's the source and here's example output.

In grad school, I wrote a program that used a series of regular expressions to detect "writing smell" (analogous to code smell), i.e., telltale signs of bad writing and mistakes. The rules for smelliness were loosely based on one of my favorite writing how-to's: Style: Toward Clarity and Grace by Joseph Williams.

The program took a text file as input and produced an annotated report with snippets of the offending bits. I used it for all my papers and found it really helpful, but the coding was very, um, academic (i.e., written for use by the person who wrote it) and it was written in Mathematica [1], which was the language I knew best at the time. FWIW, here is my original version.

For a long time, I've wanted to port it to some other language and make it accessible and capable of receiving new rule contributions and explanations. To this end, I recently commissioned an oDesk contractor (utapyngo) to make a more polished, modular version in Python. I think he totally outdid himself. It now has a nice modular design that lets you easily incorporate new rules, and he greatly improved upon my often-flawed regular expressions. Be forewarned---the documentation is non-existent and the rules aren't explained, but I plan to fix this over time, while I'm using it.

It's open source (courtesy of oDesk, who paid the bills) and available here on GitHub (live example output). To use it, just clone it, install the Python package jinja2, and then run:


$ python wsd.py -o output_file.html your_masterpiece.tex


Here's a screenshot of what the HTML output looks like, illustrating the a/an rule (i.e., that it's "an ox" but "a cat"):


Note the statement of the rule, the patterns that it looks for and the snippets. It also has a hyperlink to the full text, which is available at the bottom of the document.

A few thoughts:
  1. If you're interested in contributing (rules or features), let me know. 
  2. It might be nice to turn this into a web-service, though my instinct is that someone interested in algorithmically evaluating their LaTeX/structured text isn't going to find cloning the repository & then running a script to be a big obstacle. And they probably don't want to make their writing public.   
  3. A few weeks ago, I read this usesthis profile of CS professor Matt Might. In the software section of the interview, he said that he had some shell scripts that do something similar. I haven't really investigated, but maybe there are ideas there worth incorporating.
[1] When I told the other members of the oDesk Research / Match Team that I had code for doing this writing smell thing, they were impressed and wanted a copy; when I told them it was written in Mathematica, they thought this was hilarious and mocked me for several minutes. I tried to explain that Mathematica actually has great tools for pattern matching, but this fell on deaf ears.

Monday, February 6, 2012

Minimum Viable Academic Research

Remember Clippy? A non-viable product in minimum form (image courtesy of Flickr).




One of the most talked about ideas in the world of start-ups is the notion of the minimum viable product (MVP). The rationale for MVP is clear: you don't want to build products that customers don’t want, never mind waste time polishing and optimizing those unwanted products. "Minimally viable" doesn't even require the product to exist yet---the viability refers to whether it will give you the feedback you need to see if the project has potential. For example, you might do an A/B test where you buy keywords for some new feature, but then just have a landing page where people can enter their email address, thereby gauging interest. The important thing is that it is market feedback, not just opinions of people near you.


In academia, a big part of the day-to-day work is getting feedback on ideas. Each new paper or project is like a product you're thinking of making. So you float ideas with colleagues, your advisers, your spouse, etc., and you might present some preliminary ideas at a workshop or seminar. The problem is that in most workshops and seminars, where you could potentially get something close to a sample of what the research community will think of the final product, the feedback is usually friendly and limited to implementation (e.g., "How convincing is the answer you are providing to the question you've framed?"), instead of "market" feedback on how much "value" your work is creating.


The academic analogue to market feedback on value will come later, in two forms: (a) journal reviews / editor decisions and (b) citations. By value, I mean something like (importance of question) x (usefulness of your answer). At least in economics, knowing what is important is difficult. There is no Hilbert-style list of big and obvious open questions. A few such questions do exist, but they tend to be so sweeping in nature---e.g., "Why are some countries rich and some countries poor?" and "Why do vacancies and unemployed workers co-exist?"---that no single work can decisively answer them. To do real research, you need to pick some important part of a question and work on that.


A fundamental problem is that the institutional framework in some disciplines (economics being one example, though not all---see this recent NYTimes op-ed on scientific works being too short; see here for an economist's take on the topic) requires you to do lots and lots of polishing before you know (via journal rejection/acceptance) whether even the most polished form of your work is going to score high enough on the importance-of-question measure. At seminars, people are usually too polite to say, "Why are you working on this?" or "Even if I believed your answer, I wouldn't care" or "So what?" But that's the kind of painful feedback that would be most useful at early stages. There are some academics who will give that kind of "Why are you doing this?" critique, and while they are notorious and induce fear in grad students, the world needs more of them. (I once gave a seminar talk where an audience member asked, "How does this study have any external validity?" And I had to admit he was right---it had none. I dropped the project shortly thereafter, after spending the better part of 3 months working on it.)


It's not that people won't be critical in seminars. You'll generally get lots of grief about your modeling assumptions, econometrics, framing, etc. But those are easy critiques (and they let the critics show off a little). It's the more fundamental critiques about importance/significance that are both rare and useful. In academia, you really, really need the importance/significance critique because you can work on basically anything you want, literally for years, without anyone directly questioning your judgment and choices. And while this gives you tons of freedom and flexibility, you might waste significant fractions of your career on marginalia. I also don't think it's the case that if you're good, you'll simply know: I've heard from several super-star academics that their most-cited paper is one they didn't think much of when they wrote it, while their favorite paper has languished in relative obscurity. One interpretation (beyond Summers's law) is that you aren't the best judge of what's important.


How does one get  more importance-of-question feedback?


In economics, there's a tendency (need?) to write papers that are 60-page behemoths, filled with robustness checks, enormous literature reviews, extensive proofs that formalize somewhat obvious things, etc. This long, polished version really is the minimally viable version of the paper, in that you can't safely distribute more preliminary, less polished work (people might think you don't know the difference). I think on the whole, this is probably a good thing. But it's often not the minimally viable version of an idea. Often the "so what" of a paper is summarized by the abstract, a blog post, a single regression, etc.


I'm not sure what the solution is, but one intriguing bit of advice I recently received from a very successful (albeit non-traditional) researcher was to essentially live-blog my research. There's actually very little chance of being "scooped"; if anything, being public about what you're doing is likely to deter others. And, because it's "just" a blog post, you nullify the "they don't know the difference between polished and unpolished work" concern. The flip side is that I think there's a kind of folk wisdom in academia that blogging pre-tenure is a bad idea (I imagine the advice is even stronger for a grad student pre-job-market). But if you were doing it for MVP/feedback reasons, the slight reputation hit you'd take might be offset by the superior "so what" feedback you might get from doing such a thing. Anyway, I'm still thinking about this strategy.*

* Beyond the purely professional strategic concerns, it might actually move science along a little faster and make research a bit more democratic and open.

Saturday, February 4, 2012

Stereotypes about animals (and children) as revealed by Google auto-suggest

I saw this tweet by @m_sendhil, which had a screenshot of Google's auto-suggest for "why are indians so," which contained a collection of (often contradictory) stereotypes (e.g., fat and skinny).  I began doing the same exercise for other nationalities and ethnic groups, products, animals etc.

Here is the screenshot for turtles (which apparently have lots of fans):



It was interesting to me how many of the supposed attributes showed up repeatedly across entities. This gave me an idea: I should turn this procrastination/time-wasting into something more useful, namely learning how to make graph/network plots with the Python package networkx (code below). Here is the result, using the top 4 auto-suggests for cats, children, cows, dogs, frogs, goldfish, hamsters, mice, turtles and pigs. Entities are in blue, attributes in red. Edges are drawn if an attribute was auto-suggested for that entity.

Some observations
I'm guessing the "addicting" and "good" attributes of goldfish refer to the cheesy snack cracker and not the actual fish. People seem to be rather ambivalent about children. I'm kind of surprised that people were not wondering why dogs are smart. Finally, are pigs actually salty (this seems unlikely), or is this just how pork is usually prepared?

The code: 

Wednesday, February 1, 2012

Employer recruiting intensity

I was reading/skimming this paper by Davis et al. and in the abstract, they write:

"This paper is the first to study vacancies, hires, and vacancy yields at the establishment level in the Job Openings and Labor Turnover Survey, a large sample of U.S. employers. ... We show that (a) employers rely heavily on other instruments, in addition to vacancy numbers, as they vary hires, (b) the hiring technology exhibits strong increasing returns to vacancies at the establishment level, or both. We also develop evidence that effective recruiting intensity per vacancy varies over time, accounting for about 35% of movements in aggregate hires."


In a nutshell, they document that recruiting intensity varies across time and that this variation has a big effect on the number of aggregate hires. What's interesting is that the labor literature tends to focus on search intensity by workers, with firm search intensity comparatively understudied, but this paper suggests that ignoring employer efforts is likely to give a (very) incomplete impression. My guess is that this bias in the literature comes from the comparative lack of employer data on matching, though JOLTS  (which this paper uses) is ameliorating the problem.

On oDesk, we've got excellent visibility on employer recruiting. Below is the "so what" plot from a recent experiment where we "recommended" contractors to employers (based on our analysis of what the job consisted of). The recommendations came immediately after the employer posted the job. We also made it easier for that employer to invite those recommended contractors to apply. The y-axis is the fraction of jobs where the employer made at least one invitation; treatment and control are side-by-side. We can see that regardless of category, the treatment was generally effective in increasing the number of invitations.  But I think the striking thing is how much variation there is in "levels" of recruiting by category: in the control admin group, less than 10% of employers recruited, while in sales, it's almost 25%.
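For what it's worth, a plot along these lines could be produced with a few lines of ggplot2; the per-job data frame and its columns (category, group, invited) are assumptions rather than the actual experiment code.

library(ggplot2)

# Hypothetical per-job data: job category, experimental group
# ("treatment"/"control"), and whether the employer sent at least one invitation
jobs <- read.csv("recommendation_experiment.csv")

# Fraction of jobs with at least one invitation, by category and group
invite_rates <- aggregate(invited ~ category + group, data = jobs, FUN = mean)

ggplot(invite_rates, aes(x = category, y = invited, fill = group)) +
  geom_bar(stat = "identity", position = "dodge") +
  labs(x = NULL, y = "Fraction of jobs with at least one invitation") +
  theme_bw()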


Presumably the difference depends on a number of factors: how many applicants the job will get organically, how close substitutes the different applicants are, the value to the firm of filling the vacancy, and so on. It also clearly matters how easy it is to search and recruit, given the effectiveness of our pretty lightweight intervention. From a welfare standpoint, this last point about the role of search/recruiting costs is potentially interesting, as reducing employer search frictions/costs technologically is, at least in online labor markets, a highly scalable proposition.