How Slow -IS- the U.S. District Court for the Northern District of Illinois?

According to one published report, it appears the local Chicago federal court (the U.S. District Court for the Northern District of Illinois, or “Illinois (ND)” for short) has ground to a veritable halt.

There is reason to doubt that conclusion. Figuring out why unwinds a cautionary tale about using the wrong statistical tools for a given data set.

Scheme and Players

Several analogies apply here. You can't run an NFL offense with 2 players, or 11 players at the same position.

You can't produce a richly blended sound with 2 singers, or 50 sopranos.

You can't produce a meaningful average with a tiny sample set, or skewed data.

Like a scheme mismatched to its players, or a choir stacked with sopranos, the wrong statistical tools produce bad outcomes. Here, the good name of the Illinois (ND) may have been impugned in just such a manner.

Apparently Bleak Stats About the Northern District of Illinois

A major data aggregation and analysis company, Docket Navigator, publishes yearly reports analyzing courts across the country. In its latest report, Docket Navigator reached an interesting conclusion about the Illinois (ND). https://brochure.docketnavigator.com/category/special-reports/ (“Report”). The Report suggests that the Illinois (ND) handled “more complex” patent cases—in terms of the parties and patents involved—than any of the other major patent districts (primarily Texas and Delaware):


Report p. 20 (The “accusations” variable generally equals patents in a case multiplied by defendants).

As a result, Docket Navigator reports, the Illinois (ND) is terribly slow in reaching important milestones. For example, it reports an average of almost 9 years (3,201 days) for the court to reach the summary judgment stage and rule on patentee motions for summary judgment.


Report p. 26.

A court with complex, time-consuming cases indeed!

Data Fight

The problem with the analysis above—call it statistics or analytics or what have you—is that it appears internally inconsistent and overlooks confounding variables.

In terms of consistency, one might ask how a court that failed to break into the top 5 in terms of the total share of patent cases was able to garner 25.6% of all patent litigants and 22.6% of all patent accusations—beating out behemoths like Texas (WD) and Delaware. (Docket Navigator has since slightly adjusted the numbers, but the mismatch remains.)

Looking deeper into the data Docket Navigator aggregated, a complicating variable emerges. The Illinois (ND) had a high percentage of anticounterfeiting cases in which design patents were asserted against large groups of defendants. Those large defendant groups threw the count off—they skewed the data. Just a few examples include:

  • Approximately 300 defendants named in Deckers Outdoor Corporation v. 0eshop-2014 et al, N.D. Ill. No. 1-20-cv-02930 (terminated after 5 months)
  • Approximately 150 defendants named in Dynamite Marketing, Inc. v. The Partnerships and Unincorporated Associations Identified on Schedule A, N.D. Ill. No. 1-20-cv-01468 (terminated after 8 months)
  • Approximately 150 defendants named in Fairly Odd Treasures, LLC v. Yiwu Anbai Trading Co., Ltd. et al, N.D. Ill. No. 1-20-cv-01494 (terminated after 5 months).

Such examples demonstrate how single cases can produce many parties (defendants) and accusations. But those numbers do not necessarily make them complex or time-consuming. Quite the opposite: anticounterfeiting cases are pretty straightforward. The defendant groups generally include long lists of pseudonyms under which single ne’er-do-well entities operate. And they tend to be resolved rather quickly—see the 5-8 month time frame in the examples above.
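A toy calculation shows how a handful of mega-defendant cases distorts an average. The per-case defendant counts below are hypothetical, apart from the three anticounterfeiting examples above:

```python
from statistics import mean, median

# Hypothetical docket: most patent cases name a handful of defendants,
# but a few anticounterfeiting ("Schedule A") cases name hundreds of
# pseudonymous online sellers.
typical_cases = [1, 1, 2, 2, 3, 3, 4, 5, 1, 2]   # ordinary patent suits
anticounterfeiting = [300, 150, 150]              # the examples above

defendants_per_case = typical_cases + anticounterfeiting

print(f"mean defendants/case:   {mean(defendants_per_case):.1f}")    # 48.0
print(f"median defendants/case: {median(defendants_per_case):.1f}")  # 3.0
```

Three outlier cases pull the mean to 48 defendants per case while the median sits at 3. The average describes almost none of the actual docket.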

In short, the data—the core commodity of Docket Navigator—are detailed, comprehensive, and helpful. But the conclusions it drew were misleading. Too many cases with artificially high defendant and accusation counts. Too many sopranos in the choir.

Don’t Be Average

But what about those long times to reach summary judgment and claim construction in the Illinois (ND)? Each seems to support the conclusion that the Illinois (ND) was bogged down in near decades-long gridlock. The extended time to meaningful milestones could be explained by the fact that the cases were more complex.

Indeed, Docket Navigator reports, the average time to ruling on patentee summary judgment motions jumped from 3.3 years (1,217 days) in 2019 to 8.8 years (3,201 days) in 2020. 8.8 years! An especially novel feat, considering the average time to trial was half that, at 4.4 years (1,595 days).

Again, the unit-level data correct the record. For summary judgment rulings in 2020, remove (1) data points corresponding to summary judgment rulings that were not the first round entered in the case and (2) cases where the first round of summary judgment occurred before 2020, and the following picture emerges.

[Chart: unit-level 2020 summary judgment timelines]

The data set above cannot be described using an average. It is skewed. It has too few data points. It is affected by an independent variable that has not been tested: whether a stay pending US Patent Office proceedings was entered. Are stays evenly distributed across the country and in terms of length? Do they have similar impacts on case pendency? There is simply not enough information in the average to answer those questions.
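The stay problem is easy to illustrate. In the hypothetical sketch below (the day counts are invented, not taken from the Report), just two cases that sat under lengthy stays pending Patent Office proceedings wreck the average of a five-case sample:

```python
from statistics import mean, median

# Hypothetical days from filing to summary judgment ruling.
unstayed = [400, 550, 700]    # cases resolved on a typical schedule
stayed = [2900, 3400]         # cases stayed for years, then decided

days = unstayed + stayed

print(f"mean:   {mean(days):.0f} days")    # 1590
print(f"median: {median(days):.0f} days")  # 700
```

With five data points and an untested stay variable, the mean more than doubles the median. Neither number is a trustworthy forecast, but the average is the more misleading of the two.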

Consider claim construction timelines. In 2019, the Illinois (ND) had an average time to claim construction of 4.6 years (1,661 days). That dropped to a more palatable average of 2.4 years (874 days) in 2020. The data show, however, that 2019 was skewed by a few outliers, including two cases in which claim construction occurred more than 10 years after filing.

[Chart: unit-level claim construction timelines, 2019 vs. 2020]

Here again, in 2019, is an example of the statistical tool being used (the average) doing a poor job of explaining the central tendency of the data. Comparing the average and median timelines, we see reason to doubt the 2019 average, and hope for the 2020 average:

[Chart: average vs. median time to claim construction, 2019 and 2020]

In 2019, the difference between the average and median was large. At 1.4 years, it constituted almost half the median value. That large difference is a bad omen for the predictive value of the average.

In 2020, the average and median are close; the exact same, in fact, at 2.4 years. That coherence gives some indication that they accurately describe the central tendency of the data. As does the spread; looking at the 2020 plot above, the data appear to be evenly distributed between 1 and 4.5 years.

Indeed, to formulate a meaningful average, one needs numerous data points that are not unfairly skewed. Averages thrive on robust sample sizes, normal distributions, and minimal confounding variables. The problem, however, is that such characteristics rarely describe court statistics. Sometimes a close examination of the data will produce a reliable result. Often it will not.

On average, average is unhelpful.

How to Analyze Illinois (ND)

The Illinois (ND) is a unique court. It has many judges (32), a heavy docket of high-volume-filer patent plaintiffs and anticounterfeiting cases, and a substantial criminal docket. The other top patent districts do not share these characteristics. Thus, aggregated statistics should not be used to compare other district courts against the Illinois (ND) unless they are confirmed by more rigorous tests of the underlying data.

One should always gut-check analytics against filtered, unit level data. Asking local counsel for examples of representative cases that might be comparable to a prospective case is a good place to start.

And understand that the variance will still be high.

How to Analyze Analytics

Analyzing data is difficult and generally the province of experts. But there are some simple hacks for an initial tire-kick of data sets.

  • Drifting Toward the Median | As we did above, compare average and median. If the two values are far apart, there is a good chance neither will be very helpful in determining the central tendency of the data. If they are close, that is an encouraging sign.
  • Go to the Bars | Sophisticated analyses will include error bars—another data aggregator (Lex Machina) does a great job here. Those bars represent uncertainty. A wide error bar is the data’s way of shrugging and saying, “I’m not so sure.”
  • Complex Conundrum | Compare sample size to the complexity of the system being measured. If your district court has 32 judges—like the Illinois (ND)—you will need a large, well-distributed sample set on that basis alone. That is before considering the number of other independent variables that might skew the outcome (e.g., type of case, the type of patents asserted, whether many cases are anticounterfeiting cases, number of defendants, number of patents, etc.).
  • Vast Variance, Very Vulnerable | Look at variability. Going back to the original example of case numbers above: the drastic variance in values when looking at number of cases versus number of parties/accusations indicated that one set may have suffered from a unique, complicating variable. Large jumps (step increases) in data between related variables often result from something other than a well-distributed variable (such as judicial tendencies). One must account for them, or the data will be misleading.
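The first and last checks above can be roughed out in a few lines of code. The thresholds below are illustrative assumptions, not established statistical cutoffs:

```python
from statistics import mean, median, stdev

def tire_kick(values, min_n=30, gap_ratio=0.25, cv_limit=0.75):
    """Flag data sets whose average is likely to mislead.

    Thresholds are illustrative assumptions, not statistical doctrine.
    """
    warnings = []
    # Complex Conundrum: too few points for a complex system.
    if len(values) < min_n:
        warnings.append(f"small sample (n={len(values)})")
    # Drifting Toward the Median: mean far from median suggests skew.
    m, med = mean(values), median(values)
    if med and abs(m - med) / med > gap_ratio:
        warnings.append(f"mean {m:.1f} drifts far from median {med:.1f}")
    # Vast Variance: spread that swamps the mean undermines its value.
    if len(values) > 1 and m and stdev(values) / m > cv_limit:
        warnings.append("variance swamps the mean")
    return warnings

# A tiny, skewed, high-variance sample trips every check:
print(tire_kick([400, 550, 700, 2900, 3400]))
```

None of this replaces a real analysis, but it is a quick way to decide whether an average deserves a second look before relying on it.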

In general, large courts with standard caseloads make for poor statistics. Unit level analyses and first-hand experience are typically better.
