Crowdsourcing Legal Research Won’t Work Because It Doesn’t Help the Majority: Solo and Small Law Firms

Last week, two of my longtime blogging buddies, Bob Ambrogi and Scott Greenfield, tackled the question of why websites dedicated to crowdsourcing legal research haven't taken off. Citing entrepreneur Apoorva Mehta, whose own crowdsourcing attempt failed, Bob suggests that the odds are stacked against crowdsourcing in the legal industry from the get-go, since "lawyers don't like technology and don't like to share." Despite this inherent obstacle, Bob remains ever optimistic that crowdsourcing legal research can succeed if implemented in a way that facilitates and rewards lawyers' contributions and makes the content useful to others. (But read the comments too.) By contrast (and not surprisingly), Scott isn't so sanguine about crowdsourcing, because lawyers – at least good ones – don't work for free, and derive no benefit, financial or even "exposure," from annotating or commenting on case law or statutes on someone else's profit-generating crowdsourced platform.

Yet there's a third reason why crowdsourcing legal research won't take off: it doesn't work for – and may even harm – solo and small-firm lawyers, the silent majority of the legal profession. Unless solos and smalls can be engaged, crowdsourced research platforms will have to develop a product that big firms will pay for in order to become financially viable (in fact, that's my guess as to why JD Supra's original "give content, get noticed" mission added a for-fee service to help large firms circulate content more widely).

As Scott emphasizes, lawyers don’t work for free. Contributing quality crowdsourced content takes time that many solo and small firm lawyers don’t have. Moreover, to the extent that solos and smalls have time to spare on marketing, they ought to spend it on targeted marketing initiatives – such as building their own online presence with a high-quality blog, delivering content to existing clients and potential leads through a newsletter or even writing an article for a widely circulated bar rag that might yield referrals – rather than wasting time on junk that goes into a big black hole.

The "crowdsourced" sites that have succeeded – again, Avvo comes to mind with its crowdsourced Q&A – don't ask much of participants (many lawyers can answer the same tired consumer questions in their sleep) and, at least in the beginning, offered significant rewards. Back when Avvo started out in 2007, it provided solos and smalls with a professional web presence and an easy way to check colleagues' disciplinary records – neither of which many solos and smalls had access to at the time.

But solos and smalls receive no comparable benefit from participating in crowdsourced research sites. With the advent of Google Scholar, Fastcase, and even SSRN (for law review articles), solos and smalls have easy access to robust, FREE legal research tools that do almost everything (short of specialized reporters like CCH) that WEXIS does. Whereas once solos and smalls may have been willing to contribute to crowdsourced sites in exchange for access to free legal research, that's no longer a compelling reward. Moreover, when researching a brief or motion at the eleventh hour, what lawyer has the time to scroll through comments and random cases when they can simply pull up a string cite through a search on a real research service?

In addition, contrary to Mehta's comment, it's not that lawyers don't like to share; we're already forced to, since our court filings are public. A site like GitHub offers developers a collaborative environment for sharing code and working on projects together; by contrast, other lawyers and even the public can already access our "work product" simply by combing court websites for sample motions and briefs. As more courts adopt e-filing and access to these resources becomes more widespread, there will be even less need for collaborative sites.

Moreover, lawyers face more risk from sharing than tech developers do. For starters, if I post (as I have) a valuable presentation online, other lawyers can simply take my work product and use it as their own. Over the past few years, a couple of large firms have done just that – poached presentations and blog comments and used them in their CLE and marketing materials without so much as an attribution. Stealing code is more difficult, because the person who takes it still needs sufficient skill to adapt it and make it work. Not so with lawyer work product, which is far more fungible – you see lots of legal documents that are simply cut-and-paste. Because solos and smalls risk losing business by sharing high-quality materials, there's even more of a need to ensure that they're adequately rewarded for contributing.

Likewise, Wikipedia isn't an apt analogy either, because there's not much of a market for research on the invention of the cotton gin or a summary of popular TV shows. Those who contribute do so as a labor of love and aren't forgoing revenue, because it's unlikely they'd be compensated for that work anyway.

Still, I admire the entrepreneurial drive of those lawyers trying to make a go of crowdsourcing, so like Bob, I'll offer some suggestions on how to make it succeed. First and most obvious, crowdsourcing sites need to reward contributors financially. Ideally, the reward should be payment – even $10 to $20 per post would make contributing more attractive for law students and for solos and smalls starting out. But payment could also come in the form of stock options in the company, gift cards, or free meals. Without pay, lawyers aren't going to play. Second, engage solos and smalls who actually practice law and ask what kinds of services would be valuable to them. Lawyers are so poorly regarded that those who seek to enter the "legal space" are often surprised to learn that many solos and smalls already use CRM systems, have answering services to respond to leads, and can Skype or web-chat with clients. Maybe crowdsourcing sites would discover that solos and smalls have absolutely no interest in creating new resources, but might be willing to pool funds to create a curated resource prepared by a top expert.

At the end of the day, a crowdsourced site needs to generate income. Wikipedia is a non-profit. GitHub generates its revenue not from free interaction but by providing a platform that private companies pay to use for internal collaboration (my husband used GitHub at the last three tech companies where he worked before his death). The crowdsourcing sites don't offer quality or exposure worthwhile enough for a solo to use them for free, let alone for a fee. And crowdsourcing legal research won't succeed as a business model if the platforms can't even give it away for free.

2 Comments

  1. Paul Spitz on August 19, 2015 at 2:09 pm

    I would imagine that research questions often are very fact-driven, which means that existing research put together through crowdsourcing may not really address the question that a particular lawyer has at a specific point in time.



  2. Matthew Johnston on August 24, 2015 at 5:14 pm

Simply because we have not addressed the problems doesn't mean a solution doesn't exist.

    I think you have put the key points into the mix, namely compensation. Honestly, I don’t mind helping other attorneys but there is a limit to what I will do without some sort of compensation.

My thoughts, though, travel to how to effectively crowdsource a task that is somewhat linear. Researching a question, even a complex question, is often an exercise in following a trail. Sure, the trail might have twists and turns and even loop-backs, but the vast majority of it is linear. A lawyer starts at Point A – his facts. He then moves to Point B, which may be case law or regulations. He then sees promise and moves to Point C. It is possible to have multiple Point Bs or Point Cs or Point Ns, but often the trail loops back on itself.

Often the results of a line of inquiry, or the lack thereof, will return a lawyer to a previous position. Unless the lawyer takes well-documented notes, another lawyer can't take over without going back to the starting point and running the risk of duplicating work already done.

Crowdsourcing produces the best results when efforts can proceed in parallel without needing reference to other efforts taking place at the same time, and when those efforts are not too duplicative. The power of crowdsourcing is the ability to churn through massive amounts of data in parallel and then allow the ultimate recipient of the effort to assemble the results into a discernible end.

    Finally, crowdsourcing works best when there is excess capacity available to perform the work. Proponents of crowdsourcing will need to find where there is excess labor capacity to crowdsource research. Until you find that labor resource (and can reward that resource adequately), there is not likely to be any success in crowdsourcing research.
