A Fatal eDiscovery Error


eDiscovery and doc review vendors and software solutions dominated the LegalTech trade show this past week in New York City. While there were a couple of standouts in my mind, most notably Logikcull, the majority of solutions boasted similar claims rooted in the same technical concepts. Of course, eDiscovery and document review in the “Age of Big Data” is no simple task, and it requires a level of technicality that I’m sure is difficult to sum up in a simple pamphlet or trade show display. That said, understanding what’s going on under the hood of a vendor’s tech or software is crucial to making an intelligent choice for your firm’s or company’s needs.

Today in Legal Analytics, taught by Professor Katz and Professor Bommarito, we discussed some of the metrics that should be considered when selecting an eDiscovery or document review solution powered by machine learning, including precision, recall, accuracy, and the trade-offs in price that alterations in these metrics can yield.

One concept that I thought was particularly interesting (and worth sharing) is the relationship between errors in responsive and non-responsive documents and the problems those errors can cause for vendors, firms, and, most importantly, the client. In the context of ML-based classification, the task most commonly automated in eDiscovery/doc review, we can evaluate a classifier against external judgments, with the outcomes frequently described as true positives, true negatives, false positives, and false negatives. The terms positive and negative refer to the classifier’s prediction (the expectation), and the terms true and false refer to whether that prediction corresponds to the external determination. This relationship can be visualized with the following table:

                           Actually relevant      Actually irrelevant
    Predicted relevant     true positive          false positive
    Predicted irrelevant   false negative         true negative

Putting these concepts in terms that are more familiar in the eDiscovery context (a short sketch tallying these counts follows the list):

  • a true positive is a relevant document that is classified as relevant;
  • a true negative is an irrelevant document that is classified as irrelevant;
  • a false positive is an irrelevant document that is classified as relevant; and
  • a false negative is a relevant document that is classified as irrelevant.
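
To make the bookkeeping concrete, here is a minimal sketch in Python (the documents and labels are invented for illustration, not drawn from any real review) of how the four counts are tallied from a classifier’s predictions and a reviewer’s external judgments:

```python
# Tally the four confusion-matrix counts from paired labels.
# "relevant" plays the role of the positive class.

predictions = ["relevant", "relevant", "irrelevant", "irrelevant", "relevant"]
actuals     = ["relevant", "irrelevant", "irrelevant", "relevant", "relevant"]

counts = {"TP": 0, "TN": 0, "FP": 0, "FN": 0}
for pred, actual in zip(predictions, actuals):
    if pred == "relevant" and actual == "relevant":
        counts["TP"] += 1   # relevant doc correctly classified as relevant
    elif pred == "irrelevant" and actual == "irrelevant":
        counts["TN"] += 1   # irrelevant doc correctly classified as irrelevant
    elif pred == "relevant" and actual == "irrelevant":
        counts["FP"] += 1   # Type I error: junk swept into the production
    else:
        counts["FN"] += 1   # Type II error: relevant doc cast into the abyss

print(counts)  # {'TP': 2, 'TN': 1, 'FP': 1, 'FN': 1}
```

Every document lands in exactly one of the four buckets, which is why the four counts always sum to the size of the review set.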

This is where the commonly referenced eDiscovery metrics “recall” and “precision” come into play. Recall (the true positive rate, or “sensitivity”) is the fraction of truly relevant documents the system actually finds: TP / (TP + FN). Precision (also referred to as positive predictive value) is the fraction of documents tagged as relevant that really are relevant: TP / (TP + FP). Finally, the true negative rate is also called “specificity”. These more granular metrics are likely better measures of a system’s quality, because a vendor may boast a 95% measure of “accuracy” while the system still commits catastrophic errors of the Type I and II variety.
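
The gap between headline accuracy and these granular metrics is easiest to see with numbers. Below is a hedged sketch (the corpus size and error counts are hypothetical, chosen only to mirror the low prevalence of relevant documents typical in litigation) in which a classifier posts an accuracy figure any vendor would advertise while missing half of the relevant documents:

```python
# Hypothetical review: 1,000,000 documents, only 1,000 truly relevant.
# The classifier finds 500 of them and misfiles the other 500.
tp = 500        # relevant docs correctly produced
fn = 500        # Type II errors: relevant docs missed
fp = 2_000      # Type I errors: irrelevant docs swept in
tn = 997_000    # irrelevant docs correctly withheld

total = tp + fn + fp + tn
accuracy    = (tp + tn) / total    # fraction of all calls that were right
recall      = tp / (tp + fn)       # sensitivity: share of relevant docs found
precision   = tp / (tp + fp)       # share of "relevant" tags that were right
specificity = tn / (tn + fp)       # true negative rate

print(f"accuracy:    {accuracy:.2%}")     # 99.75% -- looks superb
print(f"recall:      {recall:.2%}")       # 50.00% -- half the smoking guns lost
print(f"precision:   {precision:.2%}")    # 20.00% -- most produced docs are junk
print(f"specificity: {specificity:.2%}")  # 99.80%
```

Despite a 99.75% accuracy, recall is only 50%: half the smoking guns are gone. That is exactly the disconnect that makes recall, precision, and specificity better measures of a system’s quality than accuracy alone.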

So, which of these errors is more devastating in the context of litigation? In my opinion, a Type II error is potentially much worse – even fatal. Why? With a Type I error, documents that are irrelevant will be tagged as relevant. This might cause some extra time, work, and money after the TAR or vendor has done its work. Perhaps it could even cause some embarrassment or cost-shifting in court if opposing counsel can show that you produced lots of irrelevant documents. But consider the alternative.

A Type II error could cause a relevant document to be classified as irrelevant. In other words, the one-in-a-million smoking gun email could be cast into the abyss. But that’s what quality control is for, right? Not exactly. Thus, when evaluating an eDiscovery/doc review platform, understanding how the system combats the Type II error is essential. Along with that should come the understanding that no solution is perfect – human or machine – yet.
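
To see why quality control is not a complete answer, consider one common style of QC check: sampling the discard pile to estimate how many relevant documents slipped through. (The post does not name a specific QC protocol, so the approach and numbers here are an assumed illustration.) When relevant documents are rare, even a sizable sample will often contain none of them:

```python
import random

# Hypothetical QC check: sample the "irrelevant" pile and estimate
# how many relevant documents (Type II errors) are hiding in it.
# Pile composition is invented for illustration.
discard_pile = ["relevant"] * 500 + ["irrelevant"] * 996_500

sample = random.sample(discard_pile, 1_500)          # review 1,500 docs by hand
hits = sum(1 for doc in sample if doc == "relevant") # relevant docs in the sample

elusion_rate = hits / len(sample)                    # sampled miss rate
estimated_missed = elusion_rate * len(discard_pile)  # scaled to the whole pile
print(f"estimated relevant docs left behind: {estimated_missed:,.0f}")
```

With roughly one relevant document per 2,000 in the discard pile, a 1,500-document sample will frequently turn up zero hits, and the estimate will read zero even though 500 relevant documents remain buried. Statistical QC bounds the risk; it does not eliminate it.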


5 thoughts on “A Fatal eDiscovery Error”

  1. James Caitlan

    Thanks, Patrick. There is too little discussion of eDiscovery metrics in the industry, and terms like “accuracy” get thrown around without an unambiguous definition of what they mean in terms of the parties’ legal obligations, counsel’s certifications, or how they are calculated.

    I absolutely agree that failing to identify every smoking gun is not a catastrophe. In fact, it likely happens in virtually every medium-to-high-volume eDiscovery effort, regardless of whether it is linear human review or some kind of TAR review method. It’s not because humans can’t recognize them; it’s because the smoking guns often don’t contain any of the “expected” relevance terms and are filtered out before they ever reach the review phase.

    QC is not the answer for higher-volume eDiscovery because there’s not enough litigation budget to conduct repeated checks of every individual coding decision before production. As we used to say in the aerospace industry, you design quality (“accuracy”) in the process, not the product. Parties are obligated to design a reasonable process, including metrics to evaluate when the process is achieving satisfactory levels of quality, and then to verify process results against the metrics. It is the failure to define those process metrics where most eDiscovery projects fall short.

    As you pointed out, the industry needs to move from a poorly understood metric of “accuracy” to ones that measure the components of accuracy, in order to identify risk and demonstrate compliance.

    1. Pat Ellis Post author

      James, thank you for your comment. I’m glad you enjoyed the post. And I’m sorry you had to be approved – I’ve since fixed that.

      Clearly, discussing eDiscovery metrics can be difficult and is something that most attorneys probably do not want to spend their time doing. But I would imagine (though I don’t know for certain, for lack of industry experience) that this kind of thing can make or break a case, which can be everything for the client.

      Thanks again for your comment.

      1. Jim Caitlan

        No problem. I think having a moderator is a good idea, Patrick.

        Thanks for the post. Do I understand that as an econ major you actually do understand statistics?

        Best Regards,
        Jim


      2. Pat Ellis Post author

        Jim,

        I did a lot of stats/econometrics for my Economics degree. I’m continuing this work in law school for two classes, Quantitative Methods for Lawyers and Legal Analytics. I am particularly interested in their application in the eDiscovery context.

