Monday, February 6, 2012

Minimum Viable Academic Research

[Image: Clippy, a non-viable product in minimum form, courtesy of Flickr]

One of the most talked-about ideas in the world of start-ups is the notion of the minimum viable product (MVP). The rationale for the MVP is clear: you don't want to build products that customers don't want, never mind waste time polishing and optimizing those unwanted products. "Minimally viable" doesn't even require the product to exist yet---the viability refers to whether it will give you the feedback you need to see if the project has potential. For example, you might do an A/B test where you buy keywords for some new feature, but then just have a landing page where all people can do is enter their email address, thereby gauging interest. The important thing is that this is market feedback, not just the opinions of people near you.
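For concreteness, here is a minimal sketch, in Python with Flask, of what such a smoke-test landing page might look like. The feature being advertised, the route names, and the log file are all made up for illustration; the only number that matters is signups per visit.

    # A hypothetical smoke-test landing page: visitors arriving from the
    # purchased keywords see a pitch for a feature that does not exist yet
    # and can leave an email address. Signups per visit is the "market"
    # feedback; nothing needs to be built.
    from datetime import datetime, timezone

    from flask import Flask, request

    app = Flask(__name__)

    PAGE = """
    <h1>New feature (coming soon)</h1>
    <p>Leave your email and we'll tell you when it launches.</p>
    <form method="post" action="/signup">
      <input type="email" name="email" required>
      <button type="submit">Notify me</button>
    </form>
    """

    @app.route("/")
    def landing():
        log("visit", "")  # every pageview counts toward the denominator
        return PAGE

    @app.route("/signup", methods=["POST"])
    def signup():
        log("signup", request.form.get("email", ""))  # the numerator
        return "<p>Thanks! We'll be in touch.</p>"

    def log(event, detail):
        # Append each event to a CSV file; that's all the analytics this needs.
        with open("interest_log.csv", "a") as f:
            f.write(f"{datetime.now(timezone.utc).isoformat()},{event},{detail}\n")

    if __name__ == "__main__":
        app.run(port=8000)

If almost nobody signs up even among people actively searching for the relevant keywords, that's the "so what?" answer before anything real gets built.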


In academia, a big part of the day-to-day work is getting feedback on ideas. Each new paper or project is like a product you're thinking of making. So you float ideas with colleagues, your advisers, your spouse, etc., and you might present some preliminary ideas at a workshop or seminar. The problem is that workshops and seminars, where you could potentially get something close to a sample of what the research community will think of the final product, usually deliver feedback that is friendly and limited to implementation (e.g., "How convincing is the answer you are providing to the question you've framed?") rather than "market" feedback on how much "value" your work is creating.


The academic analogue to market feedback on value comes later, in two forms: (a) journal reviews and editor decisions and (b) citations. By value, I mean something like (importance of question) x (usefulness of your answer). At least in economics, knowing what is important is difficult. There is no Hilbert's list of big, obvious open questions. A few such questions do exist, but they tend to be so sweeping in nature---e.g., "Why are some countries rich and some countries poor?" and "Why do vacancies and unemployed workers coexist?"---that no single work can decisively answer them. To do real research, you need to pick out some important piece of such a question and work on that.


A fundamental problem is that the institutional framework in some disciplines (economics being one example, though not the only one---see this recent NYTimes op-ed on scientific works being too short; see here for an economist's take on the topic) requires you to do lots and lots of polishing before you know (via journal rejection/acceptance) whether even the most polished form of your work is going to score high enough on the importance-of-question measure. At seminars, people are usually too polite to say, "Why are you working on this?" or "Even if I believed your answer, I wouldn't care" or "So what?" But that's the kind of painful feedback that would be most useful at early stages. There are some academics who will give that kind of "Why are you doing this?" critique, and while they are notorious and induce fear in grad students, the world needs more of them. (I once gave a seminar talk where an audience member asked, "How does this study have any external validity?" And I had to admit he was right---it had none. I dropped the project shortly thereafter, after spending the better part of three months working on it.)


It's not that people won't be critical in seminars. You'll generally get lots of grief about your modeling assumptions, econometrics, framing, etc. But those are easy critiques (and they let the critics show off a little). It's the more fundamental critiques about importance and significance that are both rare and useful. In academia, you really, really need the importance/significance critique because you can work on basically anything you want, literally for years, without anyone directly questioning your judgment and choices. And while this gives you tons of freedom and flexibility, you might waste a significant fraction of your career on marginalia. I also don't think it's the case that if you're good, you'll simply know: I've heard from several superstar academics that their most-cited paper is one they didn't think much of when they wrote it, while their favorite paper has languished in relative obscurity. One interpretation (beyond Summers's law) is that you aren't the best judge of what's important.


How does one get more importance-of-question feedback?


In economics, there's a tendency (need?) to write papers that are 60-page behemoths, filled with robustness checks, enormous literature reviews, extensive proofs that formalize somewhat obvious things, etc. This long, polished version really is the minimally viable version of the paper, in that you can't safely distribute more preliminary, less polished work (people might think you don't know the difference). I think that, on the whole, this is probably a good thing. But it's often not the minimally viable version of an idea. Often the "so what" of a paper can be conveyed by the abstract, a blog post, a single regression, etc.


I'm not sure what the solution is, but one intriguing bit of advice I recently received from a very successful (albeit non-traditional) researcher was to essentially live-blog my research. There's actually very little chance of being "scooped"; if anything, being public about what you're doing is likely to deter others. And, because it's "just" a blog post, you nullify the "they don't know the difference between polished and unpolished work" objection. The flip side is that I think there's a kind of folk wisdom in academia that blogging pre-tenure is a bad idea (I imagine the advice is even stronger for a grad student pre-job market). But if you were doing it for MVP/feedback reasons, the slight reputation hit you'd take might be offset by the superior "so what" feedback you'd get. Anyway, still thinking about this strategy.*

* Beyond the purely professional strategic concerns, it might actually move science along a little faster and make research a bit more democratic and open.

3 comments:

  1. I work as a software developer in academic research. I see what could be either perfectionism or procrastination in both research and the related software development. Even having gotten somewhat past the scoop fear once the paper has been published, people don't want to release the software unless it's perfect. MVP fails them as they fall into following criteria that are more appropriate to software used and developed by the masses. Our projects would be wildly successful if a dozen other labs used the software, and they would rarely be worked on by teams larger than a few people. I think there's a fear, rooted in a perceived scarcity of ideas, that makes you not want to see the criticism until you've given it your best shot. This reminds me of posts on news.ycombinator.com that say it's not the idea, it's the execution. To the extent this is true, there should be no fear of sharing your ideas. The one who wins is the one most capable of doing something with them. ...like the Windows 7 commercial full of people claiming to have thought of the great idea.
