Part I: The Basics – What is AI?

Let me begin this series on AI in Legal Research and Law Practice with a foundational discussion of what artificial intelligence is and why it’s such a hot topic right now. 

John McCarthy, a computer scientist credited as the father of artificial intelligence, defined AI as “the science and engineering of making intelligent machines” – in other words, machines that can perform functions traditionally associated with humans, such as reasoning, recognizing patterns, and drawing inferences.

Though AI research began around the mid-1950s, it attained widespread use only within the past few years, thanks to advances in other technologies and the growing aggregation of big data, which helps train the algorithms that power AI programs.

Today, there are two types of AI – strong and weak. The distinction is best explained in an excellent presentation by Harry Surden on AI and the Law. Surden explains that strong AI involves computers thinking at a level that meets or surpasses human reasoning – and that there is no evidence that any program has reached this point. The second type, “weak” pattern-based AI, involves computers solving problems by detecting patterns; it has been used to automate many processes, from language translation to self-driving vehicles to email sorting.

In addition to strong and weak AI, Surden identifies two predominant AI techniques: (1) logic and rules-based engines and (2) machine learning. A logic or rules-based approach entails having subject matter experts develop rules that a computer can then apply to automate a process. Surden offers TurboTax as one example of a rules-based approach. Another, specific to the legal field, is Neota Logic, a tool I saw in practice years back at the Georgetown Law School Iron Tech Lawyer Competitions. Neota Logic doesn’t require programming knowledge, so lawyers can use it to create tools that provide answers to clients on wide-ranging topics, from eligibility for expungement of criminal records to whether a data breach must be reported.
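To make the rules-based approach concrete, here is a minimal sketch of how such an engine works under the hood. The eligibility rules and fact names below are invented for illustration – they are not the actual Neota Logic API or real expungement law.

```python
# A toy rules-based engine in the spirit of tools like Neota Logic or
# TurboTax: subject matter experts encode rules by hand, and the program
# applies them to a client's facts. All rules here are hypothetical.

def expungement_eligible(facts):
    """Apply hand-written expert rules to a dict of client facts."""
    # Rule 1 (hypothetical): only certain offense types qualify.
    if facts["offense_type"] not in {"misdemeanor", "non-violent felony"}:
        return False
    # Rule 2 (hypothetical): a five-year waiting period must have elapsed.
    if facts["years_since_sentence_completed"] < 5:
        return False
    # Rule 3 (hypothetical): no pending charges.
    if facts["has_pending_charges"]:
        return False
    return True

print(expungement_eligible({
    "offense_type": "misdemeanor",
    "years_since_sentence_completed": 6,
    "has_pending_charges": False,
}))  # True under these made-up rules
```

The key point is that every rule was written by a human expert in advance – the program applies the rules but never learns or modifies them, which is exactly what distinguishes this technique from machine learning.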

A second AI technique is machine learning through pattern recognition, where algorithms discern patterns in data and infer rules on their own. As the machine learns from the data, the tools improve over time. Netflix’s automated recommendations and Google’s spam filtering rely on AI-based pattern recognition. In terms of legal applications, machine learning and pattern recognition power tools for contract review and predictive coding in e-discovery to identify responsive documents.
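The contrast with the rules-based approach is easiest to see in a tiny example. The sketch below is a bare-bones naive Bayes spam filter – one classic pattern-recognition technique, not the algorithm any particular vendor uses – where the word/spam associations are inferred from labeled examples rather than written by an expert. The training messages are invented.

```python
# A toy pattern-based learner: no human writes the rules; the program
# infers word/label associations from labeled training data.
from collections import Counter
import math

train = [
    ("win a free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("draft contract attached for review", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}  # word counts per label
totals = Counter()                              # total words per label
for text, label in train:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1

def classify(text):
    """Score each label by summed log-probabilities (Laplace smoothing)."""
    vocab = len(set(counts["spam"]) | set(counts["ham"]))
    scores = {}
    for label in ("spam", "ham"):
        score = 0.0
        for word in text.split():
            score += math.log((counts[label][word] + 1) / (totals[label] + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("free prize now"))  # "spam" on this toy data
```

With more (and cleaner) training data the inferred patterns sharpen, which is why these tools improve over time – and also why, as discussed below, the quality and representativeness of the data matter so much.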

Data analytics is best described as a close cousin of AI – but with important differences described here and here. In simple terms, data analytics culls through mountains of data to offer observations about what happened in the past – such as how long a judge typically takes to rule on a summary judgment motion. Data analytics cannot predict the future – though assumptions based on past data can inform predictions about future conduct. AI adds another layer by using pattern recognition or machine learning to analyze the data, make assumptions, or identify patterns.
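The descriptive side of that distinction can be sketched in a few lines: summarizing past rulings says what a judge has done, not what she will do. The numbers below are invented for illustration.

```python
# Descriptive analytics only: summarize historical data with no learning
# or prediction. The ruling times (in days) are hypothetical.
from statistics import median

days_to_rule = [41, 55, 38, 120, 62, 47, 90, 58]  # past summary judgment motions

print(f"median: {median(days_to_rule)} days")
print(f"range: {min(days_to_rule)}-{max(days_to_rule)} days")
```

An AI layer would go further – for instance, learning from features of each motion to predict the ruling time for a new one – but the analytics step itself only describes the past.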

Whether an AI system employs a rules-based engine or pattern recognition, it’s fairly easy to imagine the potential for bias or inaccuracy. With a rules-based system, omitting a critical step or inaccurately applying the rules could lead to errors. For example, imagine a rules-based system includes a rule stating that the statute of limitations for filing a personal injury action is two years from the incident date, but fails to qualify the rule with a caveat for cases involving municipalities, where a notice of claim must be filed within six months of the incident or the claim is forfeited. A lawyer or client relying on this rules-based engine to evaluate a case might identify a cause of action but nevertheless miss the notice-of-claim deadline in a case involving a municipality.
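In code, the omission described above is a single missing branch. The sketch below shows the corrected rule set; the two-year and six-month periods come from the example in the text and are illustrative only – actual periods vary by jurisdiction.

```python
# The statute-of-limitations example: a correct rules-based engine must
# include the municipal notice-of-claim caveat, or it silently produces
# the wrong deadline. Time periods are illustrative, not legal advice.
from datetime import date, timedelta

def filing_deadlines(incident: date, defendant_is_municipality: bool):
    """Return the deadlines the engine should surface for a PI claim."""
    deadlines = {"statute_of_limitations": incident + timedelta(days=2 * 365)}
    if defendant_is_municipality:
        # Omitting this branch is exactly the error described above:
        # the claim is forfeited if no notice is filed within six months.
        deadlines["notice_of_claim"] = incident + timedelta(days=6 * 30)
    return deadlines

print(filing_deadlines(date(2020, 1, 1), defendant_is_municipality=True))
```

Because the notice-of-claim deadline falls well before the general limitations period, an engine missing that branch would report a deadline eighteen months too late.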

Rules-based systems may also make assumptions that reflect bias. This has been a widespread problem in criminal sentencing, where courts may rely on risk-forecasting tools to predict the likelihood of a defendant’s recidivism and then use that score to inform sentencing decisions. Not surprisingly, a study by ProPublica found that not only were the results “remarkably unreliable” in forecasting future violent crimes, but the tools also falsely flagged black defendants as future perpetrators at twice the rate of white defendants.

Pattern recognition systems pose their own challenges. For starters, they require a large data set to achieve accuracy – something that isn’t necessarily available to many solo and small law firms. The accuracy of pattern recognition also depends on whether the underlying data is “clean” and whether the training data fully represents the scope of the problem.

The purpose of describing some of the challenges of creating accurate AI systems is not to discourage their use but to highlight for lawyers the importance of understanding what’s going on under the hood. I’ll return to this subject in Part III when discussing the role of law librarians, who are uniquely positioned to identify many of these problems. With this background in place, let’s move on to Part II.
