
Digital Strategy for Legal Career Development

The following post is a narrative version of a presentation given to Michigan State College of Law students and faculty in November 2014 and the Detroit Legal Innovation Meetup in January 2015.

Law school was once a safe path. Not only were you guaranteed job security, but you would join a profession and embark on a fulfilling career. Even if you didn’t love that career, greener pastures, commonly encapsulated in the “you can do anything with a JD” mantra, always seemed within reach.

My classmates and I, entering law school in 2011 and graduating in May 2014, did not enjoy this fabled safe path and instead found ourselves at not-quite-the-bottom of the legal unemployment trough, but close enough. Up until about November of my 2L year, I was confident that I would find a firm home for the summer. The formula for success I was led to believe existed was quite simple: perform well on exams the first year, get a summer internship doing something law-related, make law review or moot court, get a summer associate position with a firm, don't do anything crazy, and coast through 3L into your awaiting associate position's arms.

I checked a lot of items off "the list": (1) good grades, (2) member of the law review, (3) internship the previous summer with a federal judge, and (4) I typically interview well. There were, however, a number of things working against me that I utterly failed to consider, the most notable being that I was in a less-than-average and crowded legal market. This wasn't the only reason I didn't secure a position through OCIs, but it was certainly one that was out of my control. In short, I didn't stand out.

Around the same time, I started researching and writing my “note” that I was tasked with writing as a member of the law review. You know, because people love reading forty-page dissertations on esoteric legal issues written by people with one year of legal ed under their belt. Seriously, the requirements of writing a law review article are basically two-fold: It has to be long and on a super-novel topic. To fulfill these demands, I set out to write a comparative piece on the regulatory framework behind Dodd-Frank and Islamic Finance regarding OTC and exchange-traded derivatives. As you can imagine, I didn’t get very far.

About midway through the semester, I switched topics (typically indicative of a meltdown) and started writing about something I enjoyed much more: social media. The thrust of my article was that social media can be leveraged not only as a marketing tool for lawyers, but also as a mechanism for the democratization of legal information. I aimed to prove this by using Twitter as the sole source of my research. Rather than combing Westlaw and dusty treatises, I read Twitter, LinkedIn, and legal industry blogs.

It was through my OCI rejection and note research that I came to an important realization: Although the “traditional” path to a career in law is narrowing faster than ever, there are new and accessible avenues for lawyers and law students to build networks and establish expertise.

Since then, I've worked on building an online presence through Twitter, LinkedIn, and blogs with the goal of building a strong network within the legal community and learning from that community. All of these things came up during interviews and were part of the calculus that eventually led me to the position I'm fortunate to have.

One caveat: I'm in a non-traditional legal role. As such, it's possible that my recommendations may not be a good fit for everyone. With that in mind, here are a few notes on what I've learned from others and tried to implement both in law school and beyond.

Why?

The experience detailed above is a drop in the sea of similar law student stories. I’d bet that most law students in my graduating class, as well as our predecessors and successors, have similar stories of disillusionment and uncertainty. I’d also wager that almost all of them have similar stories of hustling to make it work and finding their path to a fulfilling career in law.

Times have been, and will likely continue to be, difficult for legal job seekers. The industry is changing, and we must continuously adapt and practice flexibility. At the same time, technology, which is partially responsible for the shifting legal landscape, is reallocating social capital and redefining value in our industry. That is, going to a top law school or graduating in the top ten percent of your class are still very valuable, but for the rest of us, there are new ways to convey future value.

A Framework for Digital Strategy

The steps I suggest below are by no means surefire ways to build a strong network or land your dream job. They are, however, iterative tactics that seem to be used by some of the brightest minds in the online legal community and, so far, they’ve worked for me as well.

Listen

When transitioning to or starting a professional presence online, the first instinct may be to get your voice out there. What are you reading, writing, or thinking? But unless you have deep experience in a topic and people are clamoring for your first 140-character declaration, just listen first. For law students, the first step, aside from setting up your profile, is to find people and organizations that interest you. Interested in international human rights law? Go follow the UN, Human Rights Watch, and Amnesty International. Then go follow everyone they follow. Or maybe you're interested in working in Big Law? Go follow the Am Law 100 on Twitter and LinkedIn, or like them on Facebook. This is a learning process, and as you dive deeper into the social graph of a theme or person, you'll discover who the leaders are and what the hot topics of conversation are in those circles.

Then, simply listen. You have direct access to the thoughts, reading lists, blogs, and contacts of some of the most influential people in the world, including some of the best lawyers, judges, professors, and legal organizations. Choosing to ignore all of this powerful information, literally at your fingertips with the Twitter app or an RSS reader, is a missed opportunity.

Engage

As you continue to find your online identity and listen, you can eventually start to engage the people you follow. Social media and blogs are unique because the barrier to communication is low. Moreover, people have an incentive to engage and respond. Say there is a judge you are interested in interning for. Sure, you could email her your resume and a nice cover letter. Or, assuming the judge is on Twitter (which many are), you could simply use Twitter's built-in tools for engagement by "re-tweeting" or "favoriting" some of the judge's tweets. You could even tweet her a question or suggest an article she might like. This might seem strange, but going back to the changing nature of social capital, engagement on an open platform garners social currency. And everyone wants that, from judges to first-year law students. This is not to say you should begin firing off your CV on Facebook and asking for jobs in 140 characters, but I found that I had more success when my first point of contact was not an emailed resume.

Curate

Some of my peers in law school asked me how I had time to use Twitter and LinkedIn and to blog. I'll discuss this in detail below, but producing original content takes time and effort. Fortunately, to develop an effective presence, you really don't need to produce original content. LinkedIn, Twitter, and even some very well-known websites and blogs are based on sharing, not producing, content. You can apply this same concept to building your networks. If you are interested in environmental law, pick a few environmental law blogs or news sources and share those articles across your accounts. You can even offer a quick comment along with the shared content. So, in addition to the time it takes you to read the article, it should take less than thirty seconds to post it to Twitter, LinkedIn, Facebook, or your blog. Over time, your network will begin to associate you with the area of law or issue on which you curate content. This allows you to develop and demonstrate an expertise without the trouble of writing a law review article or taking an exam on that topic.

Create

The final step of the iterative framework is to create original content and add to the conversation. This is by far the most difficult and time-consuming step, but one that, if executed correctly, makes all the difference. I'm in no position to give advice on writing, but I do have a few observations that I've picked up from reading some fantastic legal writers and bloggers. These apply not only to writing blogs, but also to posting original content on LinkedIn, Twitter, and other sites.

Focus on Quality. Quality in this case is two-fold. First, as a law student and future lawyer, you are expected to write well and with attention to detail. That’s a given. Be a borderline perfectionist and ask others to proofread your content. Second, focus on substantive quality. You might not have the deepest insight into an issue or topic, but you should still find a way to offer a unique perspective, even if it is from “a student’s point of view.” In a world where clicks are worth dollars and lists are replacing articles, it can be hard to find meaningful content. By focusing on quality, you will convey investment in a legal issue, even if you don’t fully understand it, and continue to develop expertise. Oh, and by the way, you can do all of this without the thousands of footnotes demanded by academic journals!

Be Yourself (Mostly) and Take Intelligent Risk. This is probably the most difficult point for law students, and one that presented a real question when I started blogging. Law students and lawyers are generally risk averse. But blogging or using other forms of social media can be perilous in the sense that any dumb thing you put on the internet is there forever. How can you write in a compelling and authentic voice without losing the professional vibe expected of a future lawyer? There isn't much to say here other than that sometimes the reward is worth the risk. At the end of the day, unless you write something crazy, it probably won't matter. Most importantly, you have to consider your audience. If you want a job in government, you probably shouldn't publish anarchist manifestos on your blog. That said, if you want to take a side on a legal or policy issue, do it and have the research to back it up. Attorneys advocate. If you always take the middle road, you won't upset anyone, but you won't excite anyone either. Remember your audience!

Have a Strong Sense of Purpose. This one applies not only to developing an online presence, but also to your career in general. Peter Thiel says it best:

A good intermediate lesson in chess is that even a bad plan is better than no plan at all. Having no plan is chaotic. And yet people default to no plan. When I taught at the law school last year, I’d ask law students what they wanted to do with their life. Most had no idea.

For a lot of folks, myself included, going to law school was sort of a "no-plan plan." That is, I was interested in law and practicing law, but beyond that, I didn't know much about it. Fortunately, I had a few failures early on that helped me recognize the necessity of a plan moving forward. Simply being in law school was not enough. A plan, even a bad one, starts and ends with a strong sense of purpose. Forget the what and how of where you are, and ask yourself why. Once you find that purpose, reflect the why into the content you create. Ask yourself, "What is the purpose of this article, this picture, this connection?" If the answer to that question is in furtherance of your goals as a student and as a future lawyer, hit publish!

Of course, none of this alone will help you land your dream job. But the disciplined development of an online presence may help you circumvent those obstacles over which you have no control: impediments you will face both as a law student and as a professional. By the way, it's fun too!

If this is helpful, you have any questions, or you have further recommendations for law students building an online presence, please let me know in the comments or send me an email at pme@honigman.com.

Views expressed are the personal views of the author and do not represent the views of Honigman Miller Schwartz and Cohn LLP, its partners, employees or its clients.

[de]Coding Advocacy: An Introduction to Informatic Analyses of Oral Argument

People want to know under what circumstances and how far they will run the risk of coming against what is so much stronger than themselves, and hence it becomes a business to find out when this danger is to be feared. The object of our study, then, is prediction, the prediction of the incidence of the public force through the instrumentality of the courts.

~ Oliver Wendell Holmes, Jr., The Path of the Law, 10 Harv. L. Rev. 457, 469 (1897).


Despite Holmes’ definition of the object of our study over one hundred years ago, legal professionals’ abilities to forecast matter outcomes remain tenuous, at best, and largely rely on “gut feeling” and “anecdata” rather than quantitative or data-driven analyses. Recent advances in computational and statistical platforms combined with the rapid growth and increasing accessibility of information should, however, spur new efforts to predict what, how, and why courts make the decisions they make and, more importantly, how those decisions affect the clients we represent.

This paper attempts to introduce new approaches to analyzing the Supreme Court's rich tradition of oral argument, and suggests ways these methods can be further developed to forecast Supreme Court case outcomes and, potentially, provide early case assessment tools and prediction in lower courts. Specifically, this paper will use statistical, natural language processing, and visualization techniques to examine oral arguments and suggest ways these methods could be used to uncover latent patterns in the justices' conduct. These applications are certainly not new or even particularly advanced in the world of data science and analytics, but they are rarely applied to a field and profession that is in desperate need of the insight they can provide.

I would greatly appreciate any feedback as this is an ongoing experiment and effort to learn. If nothing else, I hope that some of the following thoughts will assist or inspire the application of new approaches to old problems facing the legal profession and the clients we serve.

Predicting SCOTUS: Tea Leaves vs. Math

On March 24, 2014, the United States Supreme Court heard oral argument in Sebelius v. Hobby Lobby Stores, Inc., a case presenting the issue of whether the Religious Freedom Restoration Act allows a for-profit corporation to deny its employees the health coverage of contraceptives to which the employees are otherwise entitled by federal law, based on the religious objections of the corporation’s owners. For 90 minutes, advocates for two corporations and the government sparred over this complex legal issue all while being peppered with challenging questions from the justices.

Almost immediately after arguments concluded, news organizations across the world summoned their most talented jurisprudential analysts to dissect the arguments and predict how the Court would rule. According to one SCOTUS prophet, CNN’s “Supreme Court Producer,” Bill Mears, “[t]he justices appeared divided along ideological lines in a 90-minute oral argument.” Mears also pointed to Justice Kennedy’s “tough questions” to both sides as evidence of his swing voter status. How insightful. Despite Mears’ inability to cite any specific questions or exchanges during argument on which he based his predicted outcome, he or some editor boldly titled his piece: “Court majority harshly critical of Obamacare contraception mandate.”

So there you have it! CNN's legal oracles have spoken, and it is simply a matter of time before the Court hands down its decision, though it matters not, for we already know the outcome. Right? Not exactly. Quite simply, recall CNN legal analyst Jeffrey Toobin following oral argument in National Federation of Independent Business v. Sebelius:

This was a train wreck for the Obama administration. This law looks like it’s going to be struck down. All of the predictions, including mine, that the justices would not have a problem with this law were wrong.

Toobin was, of course, wrong . . . twice. But we shouldn't judge Toobin (too harshly). Even the collective wisdom of the masses via Intrade shifted dramatically in favor of the ultimately wrong outcome following oral argument on the issue (pictured below). In Nate Silver's opinion, this phenomenon may have been the result of "overconfidence in the value of information." At best, it was an overvaluing of the information conveyed in oral arguments. At worst, an overvaluing of oral arguments in and of themselves.

[Image: Intrade prediction market chart]

But what if it is merely an inability to detect the information? A confusion of noise and signal? Many studies have attempted to address this problem and have conjectured that oral arguments do have some predictive power.

Their accuracy, consistency, and comprehensiveness are, however, debatable. Then, and much worse, there are the pundits and legal experts who point to "the intangibles" during argument, such as Mr. Verrilli's cough, that allegedly factor into the calculus of a case's outcome. I can't help but think of the scene from Moneyball:

Scout Artie: I like Perez. He’s got a classy swing, it’s a real clean stroke.

Scout Barry: He can’t hit the curve ball.

Scout Artie: Yeah, there’s some work to be done, I’ll admit that.

Scout Barry: Yeah, there is.

Scout Artie: But he’s noticeable.

Matt Keough: And an ugly girlfriend.

Scout Barry: What does that mean?

Matt Keough: Ugly girlfriend means no confidence.

Maybe there is something to it. But probably not. In fact, studies have found that "expert" commentators on the Supreme Court barely do better than a coin flip and are consistently beaten by statistical methods, a result of irrational confidence in their ability to read the sibylline leaves. This is not to say that expert opinions are entirely worthless. After all, "anecdata" is a valuable commodity in the space of legal prediction. I fear, however, that continually failing legal predictions that rely heavily on "gut feeling" or some other noise may make the Supreme Court, and potentially our entire justice system, seem unpredictable or, even worse, irrational. After all, if Mr. Toobin, a Supreme Court insider and author of multiple award-winning books on the subject, can't get it right, who can?

A New Approach

In Holmes' speech-turned-essay, The Path of the Law, quoted supra, he explained:

For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.

Today, the addition of computer science to the skills of the man or woman of the future is likely appropriate, as new technologies and tools make a greater bouquet of computational legal analysis techniques increasingly accessible. To convey this concept, the following analyses will be performed with the R Project for Statistical Computing, which has been described as:

The world’s most powerful programming language for statistical computing, machine learning and graphics as well as a thriving global community of users, developers and contributors. R includes virtually every data manipulation, statistical model, and chart that the modern data scientist could ever need. As a thriving open-source project, R is supported by a community of more than 2 million users and thousands of developers worldwide. Whether you’re using R to optimize portfolios, analyze genomic sequences, or to predict component failure times, experts in every domain have made resources, applications and code available for free, online.

Thus, Legal Analytics, which Lex Machina describes as "the discovery and communication of meaningful patterns in [legal] data," can be performed at some level by anyone with a laptop, for free. Here's an example.
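To follow along, the only setup you need is a current R installation and the qdap package, a one-time install from CRAN:

install.packages("qdap")  # one-time install from CRAN
library(qdap)             # load the package for this session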

The Issue

Do Supreme Court oral arguments exhibit any patterns, latent or obvious, that suggest how the justices will vote in a given case?

A Sample of Oral Arguments

Unfortunately, I do not have the time (the Bar Exam is getting in the way) to perform the kind of comprehensive analysis it would take to answer my given question . . . for now. However, I can provide an introduction and starting place with five oral argument transcripts, which I’ve converted to a cleaner, more machine-readable format. This is by far the most tedious process, and any suggestions or resources to automate the process would be greatly appreciated.

For these purposes, I've selected transcripts from five high-profile cases' oral arguments, all of which were decided on a 5-4 basis. These cases also fairly characterize the political ideologies of the justices (with two significant departures: Justice Roberts' vote in the Obamacare cases and Justice Kennedy's vote in Windsor), visualized in the image below from VoteView Blog.

[Image: justice ideology estimates from VoteView Blog]

A link to the cleaned transcripts and the voting outcomes of the cases follow:

Importing and Cleaning the Transcripts

For the following analyses, we will use an R package called "qdap." qdap's developer, Tyler Rinker, describes the package:

The package stands as a bridge between qualitative transcripts of dialogue and statistical analysis and visualization. qdap was born out of a frustration with current discourse analysis programs. Packaged programs are a closed system, meaning the researcher using the method has little, if any, influence on the program applied to her data.

Given our five transcripts, we must import the data into our statistical system, R, and then clean it by removing certain characters, numbers, etc. We do this with the following lines on the Obamacare argument transcript:

library(qdap)
# read a transcript (person/dialogue columns) from the working directory
data <- read.transcript("ENTER TRANSCRIPT FROM WORKING DIRECTORY", col.names=c("person", "dialogue"))
truncdf(data)     # preview a truncated version of the data
left.just(data)   # left-justify the text columns
# qprep is a wrapper for several lower-level qdap functions:
# removes brackets & dashes; replaces numbers, symbols & abbreviations
data$dialogue <- qprep(data$dialogue)

We then break the transcript down into sentences:

# sentSplit splits turns of talk into sentences
data2 <- sentSplit(data, "dialogue", stem.col=FALSE) 

And take a peek at our refined transcript:

htruncdf(data2)   # view a truncated version of the data (see also truncdf)
    person tot   dialogue
1  ROBERTS 1.1 We will he
2  ROBERTS 1.2   Florida.
3     LONG 2.1 Mister Chi
4     LONG 2.2 The Act ap
5     LONG 2.3 There is n
6     LONG 2.4 On the con
7     LONG 2.5 First, Con
8     LONG 2.6 Second, Co
9     LONG 2.7 And third,
10    LONG 2.8 Congress d

Once we’ve accomplished this step, we can start running tests on the data and analyzing it.

Basic Stats + Visualizations

The ABA Journal recently published an article discussing “the genesis of visual law” and its applications. In the article, Daniel Lewis, founder of Ravel, a visual-based legal research platform, explains the benefits of adding visualizations to text based research:

We’re looking at how we can group cases in a way that tells the story. If you’re interested in the rules about abortion, let’s start with Roe v. Wade and then track the elements of that over time. We want to help build visualizations that function like dynamically created infographics to help people see the stories in their search results.

Just as maps of legal precedent tell a story, so can visualizations of oral arguments. Using R and qdap, we can explore these stories in a way that may help us better understand the ebb and flow of argument, and potentially provide insight into how the justices behave. The following function allows us to produce the plots to follow:

with(data2, gantt_plot(dialogue, person, title = "U.S. Department of Health and Human Services v. Florida",  
xlab = "Argument Duration", ylab = "Speaker", x.tick=TRUE, minor.line.freq = NULL, major.line.freq = NULL, 
rm.horiz.lines = FALSE))

[Gantt plots of speaker turns for the five oral arguments: obamacare, holder, windsor, mcc, koontz]

These visualizations may be a bit overwhelming, but they convey a lot of information (speaking patterns, the length of questions and exchanges), and they are just plain fun to look at. Note that Justice Thomas is not listed on any of the graphs, as he has not asked a question in seven years, with one minor exception. At first glance, does anything jump out? Aside from Justice Breyer's long stretches of color, it appears that Justice Kennedy is more active in the Windsor argument. Of course, we know that Justice Kennedy voted with the "liberal" wing of the Court in that case, and my identification of extra activity may simply be a case of apophenia, the experience of seeing patterns or connections in random or meaningless data; in statistical terms, a Type I error. Without further research and a larger sample of cases, it is impossible to tell. But, just for fun, we can "zoom in" on the data and see whether Justice Kennedy did in fact talk more in the Windsor argument using the following function:

print(windsor_data)
       person total.sentences total.words
10     SCALIA              48         700       
8     KENNEDY              48         791       
1       ALITO              51         958       
4    GINSBURG              27         418       
6       KAGAN              22         545        
11  SOTOMAYOR              57         822        
2      BREYER              91        1561        
9     ROBERTS              82        1156        

We can do this for each of our five cases to obtain accurate word counts, and then plot the data:


[Plot: total word counts by justice across the five arguments]

When we isolate Kennedy's Windsor statistics against our other cases, we can see that he was 13.064% more vocal by word count in Windsor than in his next most vocal argument, Shelby.

[Plot: Kennedy's word counts across the five arguments]

Is this a signal or just more noise? It will take more research to find out, but combining statistical analyses with visualizations certainly presents some new and interesting questions that are worth examining beyond one justice and five cases. Let's take a look at a more complicated analysis.

Contextual vs. Formality Analysis

When analyzing language, researchers often examine formality and context. Formality is a measure of how contextualized a person's language is: the more formal the language, the less ambiguous its words are standing on their own, and vice versa. Thus, complex issues, like those litigated in courts, often require a high degree of formality. qdap uses an algorithm developed by Heylighen & Dewaele (2002) to measure formality in speech: take the difference between the counts of formal parts of speech (nouns, adjectives, prepositions, articles) and contextual parts of speech (pronouns, verbs, adverbs, interjections), divide by the sum of all formal and contextual parts of speech plus conjunctions, add one to the quotient, and multiply by 50. This yields a measure between 0 and 100, with scores closer to 100 being more formal and those approaching 0 being more contextual.
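In symbols (my own restatement of the measure described above, with f the formal count, c the contextual count, and N the sum of both plus conjunctions):

F = 50((f - c)/N + 1)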

While this analysis could, theoretically and at the expense of human sanity, be done by hand, it can be performed quickly and efficiently in R with the following:

#parallel about 1:20 on 8 GB ram 8 core i7 machine
v1 <- with(data2, formality(dialogue, person, parallel=TRUE))
plot(v1)
#about 4 minutes on 8GB ram i7 machine
v2 <- with(data2, formality(dialogue, person)) 
plot(v2)
# note you can resupply the output from formality back
# to formality and change arguments.  This avoids the need for
# openNLP, saving time.
v3 <- with(data2, formality(v1, person))
plot(v3, bar.colors=c("Dark2"))

We can then produce formality scores, such as these on the Obamacare argument:

      person word.count formality
1   VERRILLI       3017     66.52
2     KATSAS       2037     64.83
3       LONG       2894     62.61
4  SOTOMAYOR        966     61.80
5      ALITO        661     61.04
6    ROBERTS        493     58.82
7   GINSBURG        797     58.09
8     BREYER       1065     57.98
9      KAGAN        674     56.82
10    SCALIA        317     55.05
11   KENNEDY        290     51.72

Note that the three advocates have the highest level of formality, an indication of less contextualization in their speech. We can also visualize these results individually in R:

[Plot: individual formality scores for the Obamacare argument]

Or plot the arguments against one another and look for areas of interest:

[Plot: formality scores compared across the five arguments]

This information, standing alone, does not have tremendous value. However, when combined with other information or further analyzed, formality scores may provide insight into the attitudes and understandings of the justices. Again, this is merely a starting place, but an excellent example of using R to process and dissect information that would otherwise be extremely difficult to grasp or quantify with armchair theorization alone.

Polarity Analysis

Another language-based analysis that has gained popularity, especially in the realm of social analytics, is sentiment analysis. Though sentiment analysis algorithms are generally applied to written text, qdap offers a function for dialogue-based analysis. This function compares a given text to the word polarity dictionary used by Hu & Liu (2004), which pre-codes words as either positive or negative. The algorithm then accounts for each word's context by examining the words before and after it and weighting those neighbors accordingly.
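Very roughly, and this is my simplified restatement rather than qdap's exact equation, each group's polarity works out to a context-adjusted sum scaled by length:

polarity = (sum of adjusted word polarities) / sqrt(word count)

where each word's dictionary polarity is adjusted up or down for nearby amplifiers, de-amplifiers, and negators.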

Though the Hu & Liu dictionary may not be ideal for polarity analysis on Supreme Court argument, it is worth a try as it is able, again, to perform a task that humans, even experts, cannot do without tedious work and bias. In fact, a potentially great project would be to develop a sentiment dictionary for law, but that’s for a later day. Using the following code, we can further examine the justices’ sentiment toward the advocates:

#Using Obamacare Transcript
poldata <- with(obamacaretrans, polarity(dialogue))
poldata
POLARITY BY GROUP
=================
  all total.sentences total.words ave.polarity
1 all             715       13211        0.025

We can also visualize the group polarity with the plot() function, which produces:

[Plot: overall polarity for the Obamacare argument]

This plot doesn’t convey much information, as the bulk of the words are considered neutral. This may be due to the dictionary used. But we can break the group statistics and visuals down by individual speakers with the following:

poldata2 <- with(obamacaretrans, polarity(dialogue,list(person)))
poldata2
POLARITY BY GROUP
=================
      person total.sentences total.words ave.polarity
10 SOTOMAYOR              60         966       -0.014
1      ALITO              31         661       -0.012
6    KENNEDY              21         290       -0.008
3   GINSBURG              37         797       -0.008
4      KAGAN              40         674       -0.001
5     KATSAS             115        2037        0.006
2     BREYER              75        1065        0.010
9     SCALIA              17         317        0.018
8    ROBERTS              42         493        0.030
11  VERRILLI             151        3017        0.042
7       LONG             126        2894        0.078

And plotted:

[Plot: polarity by speaker]

For another view, we can render a heat plot:

[Heat plot: polarity by speaker]

This gives us much more information and a more granular view of the justices' sentiment. Of course, this information may, again, have no predictive value. Moreover, there are some problems with running a sentiment analysis on an entire argument. For instance, one justice's sentiment could be positive toward one advocate and negative toward another, which would balance out the total average. At an even higher level, is a justice more negative toward the side she disagrees with? Or is she perhaps more challenging to the side she agrees with, to vet the issues and draw the sting? Also a question for another day, but certainly worth exploring.

Just for curiosity’s sake, we can plot the sentiment scores against one another and look for areas of interest:

[Plot: polarity compared across the five arguments]

Again, it would be premature to try to forecast judicial behavior on this graph alone. We need more data and research into this issue. That said, we can take a peek at the differing and wide-ranging polarities of the justices during oral argument. Perhaps, with a more tailored dictionary and a more in-depth analysis, examining polarity would provide some insight or even have predictive power.

Or maybe, it’s just noise.

Beyond SCOTUS

Whether Supreme Court oral arguments have predictive power remains to be seen. Oral advocacy and judicial decision-making are complex business. At the Supreme Court, it becomes even more complicated, which makes prediction all the more difficult. But we have to start somewhere. After all, as early as 1897, legal scholars described prediction as the object of our study. And today, prediction remains a hallmark of the honed legal professional. But while lawyers, hopefully more so than the lay person, are skilled at finding patterns in legal data, that skill suffers from well-documented biases. Even more so, we suffer from an inability to aggregate and calculate data in large enough amounts to provide the kind of insight we need to make decisions free of bias and heuristics.

That is where the techniques shown above come in. The idea is not that a formality or polarity analysis, or visualization will replace lawyers, but they may, with the proper amount of information, computation, and development, supplement our legal predictive powers. Perhaps the techniques, and many more, can be combined into a regression equation or performed on hundreds of arguments. Even more exciting, what if analytical tools and processes are increasingly used, not only for Supreme Court research, but also on arguments or cases in appellate courts, trial courts, or even in motion practice?

Final Thought

As technology and data become increasingly accessible, it is only a matter of time before we can predict legal outcomes with greater speed and precision, not only for the sake of writing law review articles or blog posts, but to provide better counsel to our clients: those who trust us to stand between them and "the whole power of the state."

Coding Advocacy: Measuring Simplicity in Oral Argument

[Image: Justice John Marshall Harlan II]

Code available at GitHub

In 2011, Professor Kurt Lash wrote a short post on PrawfsBlawg posing an interesting question:

How does one measure greatness as a Supreme Court advocate?  Does it involve eloquence?  Most wins?  Best performance on behalf of a worthy cause? Greatest skill in getting the Court to change its mind?

Naturally, an academic but spirited debate ensued in the comments, though no decisively "best" advocate emerged. It's a highly subjective question and one that I'm not experienced enough to argue about with constitutional law professors. However, one trait of great advocacy neglected in the commentary of Professor Lash's post is . . . simplicity.

Kanye West and/or Leonardo Da Vinci once tweeted/wrote: "Simplicity is the ultimate form of sophistication." A great aphorism for lawyers, though it's infrequently applied. Simplicity in communication is, of course, essential in any profession: How can we explain a complex idea, product, or service in a way that our target audience will understand? But for attorneys, effectively conveying a complex idea to a court, mediator, or jury may make all the difference in a case's outcome.

Simplicity in advocacy may not, however, seem as necessary when your audience is the U.S. Supreme Court, a panel of some of the most distinguished and sophisticated legal thinkers of our time. But as Justice John Marshall Harlan II noted in his 1955 address to the Judicial Conference of the Fourth Circuit:

[I]t seems to me that there are four characteristics which will be found in every effective oral argument, and they are these: first, what I would call “selectivity”; second, what I would designate as “simplicity”; third, “candor”; and fourth, what I would term “resiliency.”

***

Simplicity of presentation and expression, you will find, is a characteristic of every effective oral argument. . . . It is sometimes forgotten by a lawyer who is full of his case, that the court comes to it without the background that he has. And it is important to bear this in mind in carrying out the preparation for argument in each of its phases. Otherwise the force of some point which may seem so clear to the lawyer may be lost upon the court.

Even Justice Harlan, one of the most sophisticated and influential jurists of the 20th century, appreciated simplicity in advocacy. But in the context of Supreme Court advocacy, how simple should one be?

In an effort to satisfy this question, I’ve been experimenting a bit with natural language processing and textual analysis tools, courtesy of R. Below is a sample of a larger project I’m working on for Legal Analytics and any feedback will be greatly appreciated. So let’s take a look at an example of how we can measure simplicity in oral advocacy.

[Image: Paul D. Clement]

Attorney Paul Clement, former Solicitor General, has argued more cases before the Supreme Court than anyone else since 2000. Mr. Clement has been described as “LeBron James on a fast break” and former Bush attorney general John Ashcroft once referred to him as “a Michael Jordan-like draft pick.” In other words, Paul Clement is a baller.

Mr. Clement has argued countless important cases before the Supreme Court, including McConnell v. FEC, Tennessee v. Lane, Rumsfeld v. Padilla, United States v. Booker, Hamdi v. Rumsfeld, Rumsfeld v. FAIR, Hamdan v. Rumsfeld, Gonzales v. Raich, Gonzales v. Oregon, Gonzales v. Carhart, and United States v. Windsor. Perhaps most famously, or infamously, Mr. Clement also led the challenge on behalf of 26 states to overturn the Patient Protection and Affordable Care Act in the Supreme Court on March 26-28, 2012. We all know how that turned out, but many of Mr. Clement's arguments were supported by a majority of justices and cannot be fairly characterized simply as a loss. More importantly for my purposes, Mr. Clement, as well as his opponents, did a tremendous job of taking complex arguments and simplifying them . . . at least, I think so. But let's find out.

Using Mr. Clement's March 27, 2012 Individual Mandate argument, we can calculate the "readability" of Mr. Clement's advocacy, which I'm using as a proxy for simplicity. (Note: I used this great blog post, Statistics meets rhetoric: A text analysis of "I Have a Dream" in R by Max Ghenis, to do the heavy lifting here.) After firing up R, we need to take Mr. Clement's argument and convert it to raw text. I do this using a website called textuploader.com, which allows text to be easily stored and converted to more usable forms. It's also worth noting that I included the Justices' questions in the sample. Though this may skew the results, I think it will only have a marginal effect. We can then upload the raw text into R using the following function:


# read the hosted argument text into R as a single string
speech.raw <- paste(scan(url("Insert textuploader file URL"),
  what="character"), collapse=" ")

Now that we have our data in R, we can begin to clean and quantify the text. This can be accomplished by using the qdap package (text analysis) and the data.table package to organize and provide structure to the data:


library(qdap)
library(data.table)

Next, we need to split the data into sentences, clean the sentences, and count them.


# put the raw text in a data frame, split it into sentences, and number them
argument.df <- data.frame(person = "all", speech = speech.raw)
sentences <- data.table(sentSplit(argument.df, "speech"))
sentences[, sentence.num := seq(nrow(sentences))]
sentences[, person := NULL]   # drop bookkeeping columns added by sentSplit
sentences[, tot := NULL]
setcolorder(sentences, c("sentence.num", "speech"))

We then calculate the syllables per sentence and find the total syllables to determine where fluctuations in readability occur within the argument.


sentences[, syllables := syllable.sum(speech)]
sentences[, syllables.cumsum := cumsum(syllables)]
sentences[, pct.complete := syllables.cumsum / sum(sentences$syllables)]
sentences[, pct.complete.100 := pct.complete * 100]

Now, we can use a function to calculate the “readability” of Mr. Clement’s argument based on the Automated Readability Index, a readability test designed to gauge the understandability of a text by producing an approximate representation of the U.S. grade level needed to comprehend the text.
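For reference, the standard ARI formula, which qdap's automated_readability_index function computes for us here, is:

ARI = 4.71(characters/words) + 0.5(words/sentences) - 21.43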


sentences[, readability := automated_readability_index(speech, sentence.num)$Automated_Readability_Index]

We almost have all the pieces in place to visualize the readability of Mr. Clement’s argument. We finally need to load visualization packages and define the parameters of our graph.

library(ggplot2)
library(scales)

my.theme <- theme(
  plot.background = element_blank(),   # Remove background
  panel.grid.major = element_blank(),  # Remove gridlines
  panel.grid.minor = element_blank(),  # Remove more gridlines
  panel.border = element_blank(),      # Remove border
  panel.background = element_blank(),  # Remove more background
  axis.ticks = element_blank(),        # Remove axis ticks
  axis.text = element_text(size=14),   # Enlarge axis text font
  axis.title = element_text(size=16),  # Enlarge axis title font
  plot.title = element_text(size=24, hjust=0))  # Enlarge, left-align title

# Helper that applies the shared styling to a ggplot object
my.plot <- function(gg) {
  return(gg + geom_point(color="grey70") +  # Lighten dots
    stat_smooth(color="red", fill="lightgray", size=1.3) +
    xlab("Percent Complete") +
    scale_x_continuous(labels = percent) + my.theme)
}

And, finally, render our plot.


my.plot(ggplot(sentences, aes(pct.complete, readability)) +
  ylab("Automated Readability Index") +
  ggtitle("Simplicity of Clement's Argument"))

[Plot: readability (ARI) over the course of Clement's argument]

By visualizing Mr. Clement's argument as a measure of readability, we can see that his rhetoric is, for the most part, relatively simple. According to the Automated Readability Index, a U.S.-educated tenth grader should (approximately) be able to comprehend the argument. So either tenth graders are getting a whole lot smarter or I didn't pay enough attention in high school history classes. Of course, this measure is simply an approximation. Regardless, the overall semantic simplicity of Mr. Clement's advocacy is apparent. Though this little experiment by no means satisfies the scientific method, it further suggests that simplicity is an important driver of effective oral advocacy.

That said, what works for the LeBron James of SCOTUS might not work for another advocate. In the words of Justice Harlan II:

The art of advocacy — and it is an art — is a purely personal effort, and as such, any oral argument is an individualistic performance. Each lawyer must proceed according to his own lights, and if he tries to cast himself in the image of another, he is likely to become uneasy, artificial, and unpersuasive.

Reinventing Contractual Readability

[Image: Judge Learned Hand]

The great Judge Learned Hand once said:

There is something monstrous in commands couched in invented and unfamiliar language; an alien master is the worst of all. The language of the law must not be foreign to the ears of those who are to obey it.

The language of law must also not be foreign to the eyes that read it, and perhaps the greatest offender of this principle is the contract. So how can we make the language of contracts simpler? More accessible? The Reinvent Law conference last week highlighted two ways, both of which focus on variations of readability.

For People: Simplifying Contracts

When Abe Geiger, the founder and CEO of Shake, took the stage at Reinvent, he described his company’s work as “tiny law.” By this, he meant that Shake is providing people with tools to leverage the power and protection of contract law without an attorney. While this may seem to undermine attorney work, Shake aims to serve the latent market, one which is arguably too small to be tapped by individuals or law firms, but is better served by simple and intuitive software solutions. Put more eloquently, Shake’s website explains:

Our mission is to make the law accessible, understandable and affordable for consumers and small businesses. We want to empower our users to share ideas, goods, and services without the fear of being stiffed for a freelance gig or putting their business at risk.

One of the key words in this statement of purpose is "understandable." To make contracts more understandable, Shake tries to provide contracts to users in plain English, rather than legalese (hereinafter, hereinabove, hereinbefore, heretofore, thereunder, thereunto, thereabout, whensoever, wheresoever, whereupon, etc.). This not only makes contracts more readable, but also makes them more accessible. That accessibility ultimately serves Shake's purpose of helping the average consumer in the sharing, consumer-to-consumer economy, in the words of Mr. Geiger, "not get screwed." Democratization at its finest.

For Machines: Coding Contracts

Harry Surden, law professor at the University of Colorado-Boulder, had a different take on contractual readability. Rather than simplifying contracts for people, Professor Surden's work focuses on simplifying contracts for computers. Machine-readable or computer-oriented data is data structured so that a computer can process it directly. Of course, computers do not "read" or process data in the same way humans do. This creates several hurdles (abstraction, file type, natural language processing, etc.) that must be overcome for a computer to read even the simplest of documents. Naturally, legal documents, which are often written unnaturally, present particularly steep challenges for computers, i.e., legal language processing.

To overcome these barriers, computable contracts must be re-oriented. For example, rather than expressing an expiration date as “January 1, 2015,” a computable contract might express that date as “<option_expiration_date:01/01/2015>”. By translating semantic terms and provisions to data-oriented heuristics, organizations employing automated contracts may benefit from reduced transaction costs, new properties for contractual analysis, and autonomous computer-to-computer contracting. For more on computable contracts, see Harry Surden, Computable Contracts, 46 U.C. Davis L. Rev. 629 (2012).
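As a toy illustration (my own sketch, not Professor Surden's actual format): once a term is data instead of prose, a program can evaluate it directly.

# A toy, machine-readable representation of one contract term
# (field names are hypothetical, for illustration only)
option_term <- list(
  term_type = "option_expiration_date",
  value     = as.Date("2015-01-01")
)

# A computer can "read" the term and act on it without parsing prose
is_expired <- function(term, today = Sys.Date()) {
  today > term$value
}
is_expired(option_term, today = as.Date("2015-06-01"))  # TRUE: the option has expired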

I don’t know if Judge Hand, in all of his wisdom, could have anticipated a mobile-based contracting application (much less a cell phone) or computable contract. But hopefully these developments will combat the “unfamiliar language” of contract law, if not entirely for people, than at least for the tools that we use.

A Fatal eDiscovery Error


eDiscovery and doc review vendors and software solutions dominated the LegalTech Trade Show this past week in New York City. While there were a couple of standouts in my mind, most notably Logikull, the majority of solutions boasted similar claims rooted in technical concepts. Of course, eDiscovery and document review in the “Age of Big Data” is no simple task and requires a level of technicality that I’m sure is difficult to sum up in a simple pamphlet or trade show display. That said, understanding what’s going on under the hood of vendor tech or software is crucial when making an intelligent choice for your firm or company’s needs.

Today in Legal Analytics, taught by Professor Katz and Professor Bommarito, we discussed some of the metrics that should be considered when selecting an eDiscovery or document review solution powered by machine learning, including precision, recall, accuracy, and the trade-offs in price that alterations in these metrics can yield.

One concept that I thought was particularly interesting (and worth sharing) is the relationship between errors in responsive and non-responsive documents and the problems that these errors can cause for vendors, firms, and most importantly, the client. In the context of ML-based classification, the most commonly used task for eDiscovery/doc reviewers, we can determine the accuracy of a classifier using external judgments, frequently described as true positives, true negatives, false positives, and false negatives.  The terms positive and negative refer to the classifier’s prediction (the expectation), and the terms true and false refer to whether that prediction corresponds to the external determination. This relationship can be visualized with the following chart:

[Image: chart of true/false positives and negatives]

Putting these concepts in terms that are more familiar in the eDiscovery context:

  • a true positive is a relevant document that is classified as relevant;
  • a true negative is an irrelevant document that is classified as irrelevant;
  • a false positive is an irrelevant document that is classified as relevant; and
  • a false negative is a relevant document that is classified as irrelevant.

This is where the commonly referenced eDiscovery metrics "recall" and "precision" come into play. Recall is the true positive rate, or "sensitivity," and precision is also referred to as positive predictive value. Finally, the true negative rate is also called "specificity." These more granular metrics are likely better measures of a system's quality, because while a vendor may boast a 95% measure of "accuracy," the system may still create catastrophic errors of the Type I and II variety.
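A quick sketch in R makes the point (the document counts below are hypothetical):

# Hypothetical machine-classified review of 10,000 documents
tp <- 50    # relevant docs tagged relevant
fn <- 50    # relevant docs tagged irrelevant (Type II errors)
fp <- 100   # irrelevant docs tagged relevant (Type I errors)
tn <- 9800  # irrelevant docs tagged irrelevant

(tp + tn) / (tp + tn + fp + fn)  # accuracy: 0.985
tp / (tp + fp)                   # precision: 0.33
tp / (tp + fn)                   # recall: 0.50

Despite boasting 98.5% accuracy, this hypothetical system misses half of the relevant documents.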

So, which of these errors is more devastating in the context of litigation? In my opinion, a Type II error is potentially much worse, even fatal. Why? With a Type I error, documents that are irrelevant will be tagged as relevant. This might cost some extra time, work, and money after the TAR system or vendor has done its work. Perhaps it could even cause some embarrassment or cost-shifting in court if opposing counsel can show that you produced lots of irrelevant documents. But consider the alternative.

A Type II error could cause a relevant document to be classified as irrelevant. In other words, the one-in-a-million smoking gun email could be cast into the abyss. But that's what quality control is for, right? Not exactly. Thus, when evaluating an eDiscovery/doc review platform, understanding how the system combats Type II errors is essential. Along with that should come the understanding that no solution, human or machine, is perfect . . . yet.

Mootus Interview (A Reflection)

I recently had the opportunity to do an interview with Adam Ziegler, the founder of Mootus, for the Mootus Blog.

At this time last year, most of the things I discuss in the interview were completely foreign to me. Today, I am by no means an expert in any of the topics I discuss in this blog or the interview, but I have learned a lot and continue to learn every day. My point is that if you are passionate about something, don’t underestimate your ability to learn it. It will take effort and time, but it will be worth it.

You never know where it may take you.

An [Attempted] Understanding of Detroit’s Bankruptcy

[Image: the abandoned Michigan Central Station. Detroit, Michigan.]

Understanding Detroit's bankruptcy is hard. In fact, I'm not sure whether being from the area makes it harder or easier to understand. There are a lot of unanswered social, economic, and political questions swirling around the City's insolvency, and they are all difficult.

Then there are the legal questions.

Last Monday, Judge Steven Rhodes ordered that initial oral arguments in the case begin on September 18. The first issue will be whether Detroit is actually eligible for bankruptcy. This argument will center on whether Detroit is insolvent, whether the city negotiated in good faith with its creditors, and whether negotiations are even possible with so many creditors.

The most heated issue, which will be decided after the eligibility determination, will undoubtedly be whether current and former Detroit employees will be treated differently or given priority as creditors of the City.

To make things even more complicated, there are a number of unsettled issues of law, exacerbated by differences at the state and federal level, that govern this area. I want to try to give a 10,000-foot view of the legal arguments on both sides.

Chapter 9

Like most legal issues, we can start with the statute. The US Bankruptcy Code allows for a number of different flavors of bankruptcy (Chapters 7, 11, 12, and 13 are the ones most commonly covered in law school – yes, Chapter 12, farm-ruptcy). A city, however, can only file for bankruptcy under Chapter 9 of the Code if a state allows the city to do so. Only about half the states allow their municipalities to seek Chapter 9 protection and these protections are often limited. In Oregon, for example, only irrigation and drainage districts can file for bankruptcy.

Here in Pure Michigan, statutes authorize the appointment of an emergency manager who can restructure (read: terminate) contracts with city employees in an attempt to fix a municipality's financial issues. The EM essentially displaces the city's government and is charged with trying to fix the city in 45 days. When this fails, as it did in our case, the EM can ask the governor to authorize a bankruptcy.

The Issue

The real legal issues arise when states attempt to adjust pension plans for current and former state employees, i.e. creditors. Even in the extraordinary context of a muni-bankruptcy, the legal authority to alter pensions is precarious.

At the heart of the issue lies one question: How can Federal bankruptcy law, which advocates for the equal treatment of creditors, be reconciled with Michigan’s law, which treats current and former city employees’ pensions as untouchable?

The Federal Argument

The Federalist argument generally rests on two points of authority. First, the U.S. Constitution (Article I, Section 8, Clause 4) states that "[t]he Congress shall have Power . . . To establish an uniform Rule of Naturalization, and uniform Laws on the subject of Bankruptcies throughout the United States."

Now add Chapter 9: “The court, on request of the proponent of the plan, shall confirm the plan notwithstanding the requirements of such paragraph if the plan does not discriminate unfairly, and is fair and equitable, with respect to each class of claims or interests that is impaired under, and has not accepted, the plan.”

Thus, under Chapter 9, Congress has chosen to require the equal treatment of creditors, including retirees. In other words, no special treatment for current and former Detroit employees.

The State Argument

On the other hand, retirees (and States’ Rights proponents) can find constitutional firepower in the 10th Amendment: “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.”

Mix that in with Michigan's Constitution (Article IX, Section 24), and you have some persuasive law to tell Kevyn Orr to keep his hands off your pension: "The accrued financial benefits of each pension plan and retirement system of the state and its political subdivisions shall be a contractual obligation thereof which shall not be diminished or impaired thereby. Financial benefits arising on account of service rendered in each fiscal year shall be funded during that year and such funding shall not be used for financing unfunded accrued liabilities."

This argument, which will likely be advanced on behalf of approximately 23,500 retirees, calls for retirees to be treated as a separate class of creditors who will receive special protections throughout the bankruptcy process.

* * *

I have no idea which side has the legal or moral high-ground in this fight. I do know that there are going to be some winners and some losers and some people who are and will continue to truly suffer from this crisis.

I’d also like to admit that I am really trying to wrap my head around the legal issues revolving around Detroit’s bankruptcy. If I missed something or inaccurately stated something, please feel free to correct me. This is my way of trying to understand the issues that are affecting my home, but I am by no means an expert.

However, I think that it is important that we all take a close, legal look at what is happening in Detroit:

Pay close attention because it may be coming to you soon, Los Angeles, Baltimore, Chicago, Philadelphia. In 2011, Moody’s calculated the unfunded liabilities for Illinois’ three largest state-run pension plans to be $133 billion. (It is expected to be even larger this year.) That’s the size of six Detroit bankruptcies — give or take a few hundred million.

See Charlie LeDuff: Detroit’s bankruptcy and what it means for America.

Regardless of what happens here, or anywhere else, I think there is a sense of opportunity and future in Detroit. And maybe, depending on the legality of it, "a clean balance sheet."

RT Photography (rtphoto.smugmug.com) via http://counterpoint22.wordpress.com