LITIGATION MANAGEMENT: PRINCIPLES & PRACTICALITIES FOR THE INSURANCE DEFENSE INDUSTRY REDUX; POST 4: METRICS
INTRODUCTION
Metrics? What the hell is a metric?
Back in the 1990s, when metrics first started to make their appearance in the insurance claims world, this was a question you often heard in the halls of claims offices. Nowadays, everyone in the insurance defense industry throws around phrases like “metrics driven management” and discusses their favorite metrics and those that they abhor. Yet, even though metrics now live on the tips of everyone’s tongues, I can’t help but get the sense that underlying questions still exist as to what they are and how they should best be used. This article will look to answer those questions and provide a firm foundation for any metrics discussions you may encounter.
PRELIMINARY CONSIDERATIONS
Merriam-Webster defines metrics simply as “a standard of measurement.” More helpful for our purposes, Investopedia defines metrics as “measures of quantitative assessment commonly used for assessing, comparing, and tracking performance production.” (https://investopedia.com). There are a lot of other definitions you can find online, but I like this last one. I’ll go into specific metrics at length later, but a quick example of a litigation management metric would be average case duration, that is, for a given caseload, the average amount of time between when a file opens and when it closes.
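To make that example concrete, here is a minimal sketch of how average case duration might be computed. The dates and field layout are invented for illustration; a real system would pull these datapoints from a claims or matter management database.

```python
from datetime import date

# Hypothetical caseload: (file_opened, file_closed) date pairs
# pulled from a claims or matter management system.
closed_files = [
    (date(2022, 1, 10), date(2022, 11, 5)),
    (date(2022, 3, 2), date(2023, 1, 20)),
    (date(2022, 6, 15), date(2022, 9, 1)),
]

# Duration in days for each file, then the average across the caseload.
durations = [(closed - opened).days for opened, closed in closed_files]
average_duration = sum(durations) / len(durations)

print(f"Average case duration: {average_duration:.0f} days")
```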
Another preliminary consideration is the difference between datapoints and metrics. Datapoints are the raw data captured in a system or business record. Metrics are the measurements derived from those datapoints. For example, a file opened date and a file closed date are datapoints. The difference between the two is the file duration, which is a metric.
This distinction is important because unreliable datapoints result in bad metrics. As a result, a metric that on the face of it sounds reasonable may actually be flawed if it is based upon an unreliable datapoint. Continuing with our duration example, most claims organizations and law firms have datapoints in their systems for when a file opens and closes. When measuring duration, the temptation is to simply use the difference between the two. Unfortunately, a lot of organizations suffer from significant delays in entering closing dates into their systems. People are busy. A file is over and done with and out of mind. And people simply forget to enter the closed date. This can result in inaccuracies of months, or even years. To remedy this, many organizations have taken to using the last billing or payment date, or the last activity date, to measure duration. These entries are automated, in the sense that they are captured when the file is being worked on for transactional purposes. Thus they are likely to be more accurate than the file closed date.
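Here is a sketch of the remedy just described, assuming hypothetical field names: when a last activity date is available, use it as the file’s endpoint instead of a possibly stale closed date.

```python
from datetime import date
from typing import Optional

def effective_close_date(closed: Optional[date],
                         last_activity: Optional[date]) -> Optional[date]:
    """Prefer the last activity date over a possibly stale closed date.

    Closed dates are often entered late (or never), while the last
    billing/payment/activity date is captured automatically when the
    file is worked, so it is usually the more reliable endpoint.
    """
    if last_activity is not None:
        return last_activity
    return closed  # fall back if no activity was ever recorded

# Example: the closed date was keyed in months after the file really ended.
opened = date(2022, 2, 1)
closed = date(2023, 6, 30)          # entered late
last_activity = date(2022, 12, 15)  # last payment on the file

end = effective_close_date(closed, last_activity)
print(f"Duration: {(end - opened).days} days")
```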
Similarly, a metric may be unreliable if it is based upon a datapoint that does not accurately reflect what it purports to reflect. A real-life example would be a claims organization that wishes to measure the accuracy of a law firm’s initial settlement evaluations. It doesn’t have a datapoint for capturing a firm’s initial settlement recommendation. So as a substitute, it uses the initial reserve posted by the adjuster, under the assumption that this figure will reflect the amount recommended by the law firm. Unfortunately, this assumption requires a wild leap of faith. In my experience working within claims organizations, adjusters do not routinely set their reserves to match a law firm’s settlement recommendation. This may be purposeful (such as when they disagree with the recommendation) or just due to administrative delays in updating reserves. In any event, the datapoint does not accurately reflect defense counsel’s recommendation, and thus the metric is flawed.
The key takeaway from this is that whenever you are evaluating metrics, you should examine and question the underlying datapoints. This is equally true whether you are establishing metrics for your own business or being presented with metrics by a business partner that purport to evaluate your performance. Just as a trial attorney knows that when cross-examining an expert witness she should question his underlying assumptions, when presented with metrics, you should examine the underlying datapoints.
This last example leads to another preliminary point. People like to discuss “good” and “bad” metrics. However, and speaking very generally, taken at face value metrics themselves are neither good nor bad. They are just measurements. It is their accuracy in combination with what they purport to prove that makes them good or bad.
As a final preliminary matter, we need to discuss whether metrics should be reported utilizing averages or medians. Because there is no topic so dry that there isn’t someone who feels passionately about it, this is actually a subject of robust discussion in certain circles. As a reminder for those of you who, like me, are mathematically challenged, the average of a set of numbers is the sum of those numbers divided by how many numbers are in the set. By contrast, the median is the value separating the lower half from the higher half of a data set. For example, take the following data set: 1, 2, 2, 3, 11, 11, 12. The average of these numbers is 6. The median is 3. Obviously, there is a significant difference in these numbers.
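Both measures are one-liners in most languages; here is the data set from the text run through Python’s standard statistics module.

```python
import statistics

data = [1, 2, 2, 3, 11, 11, 12]

print(statistics.mean(data))    # 6
print(statistics.median(data))  # 3
```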
The argument in favor of utilizing medians points out that the median is not skewed by a few exceptionally large or small numbers, which are referred to as “outliers.” Thus the median gives a better view of where most numbers fall than the average of all numbers, which can be very skewed by a few outliers. Think of the data set 4, 5, 6, 7, 100. The average of these numbers is 24.4. The median is 6. So the median clearly gives a better view of where most of the numbers fall.
On the other hand, if you are interested in totals, the average is mathematically related to the total of all numbers, while the median is not. Thus, for example, it is possible that year over year the median indemnity payment on a set of closed claims would decline, even though the total would go up. Say in year one you closed five claims files with indemnity payments of $40,000, $50,000, $60,000, $70,000, and $100,000. In year two you closed five claims files with payments of $30,000, $40,000, $50,000, $60,000, and $200,000. The total of year one was $320,000 and the average payment was $64,000. The total for year two increased to $380,000, and the average also increased, to $76,000. But the median payment decreased from $60,000 in year one to $50,000 in year two. At the end of the day, the carrier still had to pay $60,000 more in year two, even though it was due to an outlier. You can’t say to the $200,000 claimant, “Oh, we’re not paying you because you are an outlier.” Thus, the argument for using averages. Another way to think of it: We don’t measure the health of the stock market using the Dow Jones Industrial Median.
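The same worked example in code confirms that the total and average can rise while the median falls.

```python
import statistics

year_one = [40_000, 50_000, 60_000, 70_000, 100_000]
year_two = [30_000, 40_000, 50_000, 60_000, 200_000]

for label, payments in (("Year one", year_one), ("Year two", year_two)):
    print(f"{label}: total={sum(payments):,} "
          f"average={statistics.mean(payments):,.0f} "
          f"median={statistics.median(payments):,.0f}")
# Year one: total=320,000 average=64,000 median=60,000
# Year two: total=380,000 average=76,000 median=50,000
```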
Most business analysts and people with Six Sigma backgrounds whom I have known argue in favor of utilizing medians. And what I have noticed over the years is that the arguments are always framed in terms of using one or the other. But what I ask is: Why not use both? Averages and medians are different measurements. They both provide valuable, albeit different, information. And when one trends one way and the other trends another way, it may be a good indicator that further analysis of the underlying cause of the difference is in order. Ultimately, it is a choice each business has to make on its own. But for the purposes of this article, when discussing individual metrics, I will refer to “average and/or median,” rather than choosing one or the other. That’s right, I am copping out of this argument.
FAVORED METRICS
The metrics discussed in this section are trusted friends. Many of them have been used in the insurance defense industry for ages, and most people I’ve encountered agree that they provide valuable information.
For all of these metrics, their value increases the more they reflect like-kind cases. For example, a large, multi-line carrier would not be well served to lump all of its lines of business into one analytical bucket. It serves little purpose to track average legal payments for simple car accident cases alongside those for officers’ errors and omissions claims. And carriers I know don’t do that; at a minimum they break their metrics down by line of business. But even within a line of business, the more one can break cases down into like-kind groupings, the more valuable the data becomes. For example, within a transportation book of business, if two-car auto crashes resulting only in soft tissue injuries can be tracked separately from those resulting in serious injury or death, then all the better. The problem is, there is often no way to track cases at an ideal level of granularity, due to limitations or inaccuracies in the way data is captured. But comparison of like-kind cases is the holy grail of litigation management metrics, and it is worth pursuing to the extent feasible.
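Here is a minimal sketch of like-kind grouping, using invented category labels: files are bucketed by line of business and injury severity before any metric is computed.

```python
import statistics
from collections import defaultdict

# Hypothetical closed files: (line_of_business, severity, legal_paid).
files = [
    ("auto", "soft tissue", 12_000),
    ("auto", "soft tissue", 15_000),
    ("auto", "serious injury", 95_000),
    ("construction", "serious injury", 140_000),
    ("construction", "soft tissue", 30_000),
]

# Bucket by (line of business, severity) so only like-kind cases are compared.
buckets = defaultdict(list)
for lob, severity, legal_paid in files:
    buckets[(lob, severity)].append(legal_paid)

for (lob, severity), paid in sorted(buckets.items()):
    print(f"{lob} / {severity}: average legal paid = "
          f"{statistics.mean(paid):,.0f} (n={len(paid)})")
```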
So here then are the favored metrics.
1. Average and/or Median Duration (Cycle Time). This metric measures the average or median amount of time that claims or legal files stay open. Generally accepted wisdom holds that the longer a file stays open, the more it costs in both legal expense and indemnity. The longer a file drags on, the longer a carrier has to incur the expense of paying claims professionals to work the file. Likewise, longer duration equals more legal activity and legal bills. But it is not just an expense issue. As cases age, indemnity payments tend to go up also. Plaintiffs’ attorneys spend more time and money on a file and feel they should get bigger settlements. And damages ferment and grow as plaintiffs keep seeking medical treatment and don’t return to work.
2. Average and/or Median Legal Paid. This measures the average or median legal paid per file. Its importance is pretty self-evident.
3. Average and/or Median Indemnity Paid. This one measures the average or median indemnity paid per file. Again, its importance is self-evident.
4. Average and/or Median Total Paid (Legal Plus Indemnity) for Client/Insured. This one seeks to measure the average and/or median total paid on a file, including legal and indemnity. A popular expression in the claims world is that “a dollar is a dollar.” In other words, from a profitability viewpoint, it doesn’t make a difference how the legal versus indemnity payments divvy up; it’s the total that counts. That’s not to say that it’s not important to track legal and indemnity separately. You need to see how you are spending your money in order to identify where costs are going up or down. Rather, it’s to acknowledge that at the end of the day, the goal is to get the total to go down. So if, for example, you spend $50,000 more in defense but it results in paying $100,000 less in indemnity, then your total cost has gone down $50,000 and your strategy was good. Conversely, if through early settlement you spend $100,000 less in legal, but pay $50,000 more in indemnity, you’ve achieved a good result.
5. Average and/or Median Indemnity Paid for Client/Insured v. All Defendants in Case. In multi-defendant cases, it’s valuable to see how much you are paying to settle cases for your client/insured as compared to the total settlement paid to resolve the case. The measurement becomes all the more valuable if it can be broken down by the type of client/insured you represent in a case. By way of illustration, in construction site accident cases where you represent or insure the general contractor, if you are able to consistently get other contractors on the job to pay the majority of the indemnity, it is a good indicator that your strategies are working.
6. Percentages of Resolutions within each Litigation Phase (e.g., pleadings, paper discovery, depositions, etc.). Since early resolution is a key objective of litigation management, it is important to track where in the litigation process your cases are settling. For example, if almost all of your cases are settling post-discovery, you may not be as aggressive as you should be in seeking early resolution. On the other hand, if none of your cases are being resolved during or after trial, you may be in danger of being perceived as a soft touch. In particular, this could be true if, in multi-defendant cases, you are always paying a majority of the settlement (see number five, above). This is a good example of how metrics need to be read in conjunction with one another in order to get a fuller understanding of what is taking place.
7. Percentages of Resolutions by Type of Resolution (e.g., settlement, trial, dispositive motion, etc.). The importance of this metric is similar to number six, above. But here the focus is on the method of resolution rather than the phase. Once again, you are looking to achieve a proper balance. So, for example, if you never resolve cases through dispositive motions or trial, you may need to add these weapons to your arsenal.
8. Percentages of Legal Billing by Phase. This metric looks to track the percentages of legal spend attributable to different phases of the litigation. The easiest, and most common, way to do this is to track the amount billed within each of the five Uniform Task-Based Management System (“UTBMS”) billing phases (Case Assessment, Development and Administration; Pre-Trial Pleadings and Motions; Discovery; Trial Preparation and Trial; and Appeal). As in numbers six and seven, above, you are looking to achieve a good balance.
9. Percentage of Legal Billing by Task and by Timekeeper Category. Proper litigation management requires having the proper person performing the task. As a general proposition, tasks should be performed by the least expensive person qualified to perform them. For example, senior partners probably should not be routinely billing for most of the written discovery work on a file. On the other hand, depending upon the book of business, you might expect to see them conducting most of the depositions. Once again, an easy, common method of tracking this is by UTBMS task codes.
10. Average and/or Median Variance between Initial Settlement Evaluation and Ultimate Settlement Paid. Reasonably accurate early settlement evaluations are essential to good litigation management. As will be discussed in later posts, they facilitate the creation of proper litigation plans and are necessary to achieve sound, early resolutions. Therefore, it is of great value to track how early settlement evaluations compare to the ultimate amounts paid (a sketch of this variance computation appears after this list).
11. Average Variance between Post-Discovery Settlement Evaluation and Ultimate Settlement Paid. This is similar to number ten, above. By the time discovery is complete, you really should have a good idea of how much a case should settle for. And for cases that haven’t yet settled, the close of discovery is a good opportunity to take stock and try to settle a case before incurring the considerable expenses of trial preparation and trial. But this can’t be done to best effect if you don’t have good settlement evaluations. Hence the value in tracking the accuracy of settlement evaluations made at this point in the litigation.
12. Average and/or Median Variance between Initial Legal Budget and Ultimate Legal Paid. Good litigation management also requires reasonably accurate initial budgeting. Budgets should not be exaggerated in order to make sure that they are large enough to cover any possible contingency. Budgets are a planning tool. And they are used to evaluate settlement values, especially in the case of early settlements where savings in anticipated legal expense can be used to justify settlement amounts. Therefore, it is helpful to track the variance between initial legal budgets and the amount actually paid at the end of the day (see the sketch following this list).
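To make numbers ten and twelve concrete, here is a minimal sketch of a variance computation. The figures and field layout are invented; variance is expressed as a signed percentage of the initial figure so that over- and under-estimates on large and small files are comparable.

```python
import statistics

# Hypothetical closed files: (initial_evaluation, initial_budget,
# indemnity_paid, legal_paid), all in dollars.
files = [
    (50_000, 25_000, 65_000, 31_000),
    (100_000, 40_000, 90_000, 38_000),
    (20_000, 15_000, 35_000, 22_000),
]

# Signed percentage variance: positive means the ultimate figure
# came in above the initial estimate, negative means below.
eval_variance = [(paid - est) / est * 100 for est, _, paid, _ in files]
budget_variance = [(paid - bud) / bud * 100 for _, bud, _, paid in files]

print(f"Evaluation variance: average={statistics.mean(eval_variance):.1f}% "
      f"median={statistics.median(eval_variance):.1f}%")
print(f"Budget variance: average={statistics.mean(budget_variance):.1f}% "
      f"median={statistics.median(budget_variance):.1f}%")
```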
PROBLEMATIC METRICS
The following are problematic metrics that I have encountered. The problems they present vary from metric to metric. But the one thing they have in common is that they are used as performance evaluation measures for either groups (such as claims offices or law firms) or individuals (such as lawyers or claims professionals). As such, their inherent problems and resulting unfairness cause great consternation and foster resentment. While they may have legitimate utility for other purposes (as stated above, metrics themselves are just measurements and not inherently good or bad), as generally used they are highly problematic and should be avoided.
1. Ratio between Indemnity Paid and Legal Paid. This vexing metric has been kicking around the industry for decades. It purports to measure the efficacy of the legal defense provided. So, for example, if you paid $200,000 to defend a case that settled for $10,000, that 20-to-1 ratio would be considered a bad result because, the theory goes, you paid 20 times what the case was worth to defend it, when you could have just settled it for $10,000 at the outset. But this reasoning is based upon the false assumption that the amount paid in settlement reflects some kind of absolute value that could be paid at any time to end the case. It does not take into consideration that a strong defense drives down settlement value. Measured by this metric, the worst thing you could do would be to try an excellent case, get a defense verdict, and pay nothing (see the sketch after this list).
2. Law Firm Performance Metrics Based Upon Questionable Claims Organization Datapoints. Problems abound with any metrics that purport to measure a law firm's performance based upon a datapoint maintained by a claims organization that actually reflects the judgment and/or administrative performance of the claims organization itself. For an example, please see the discussion, above, regarding a metric that assumes that the carrier’s initial indemnity reserve equals the law firm’s initial settlement evaluation.
3. Bell Curve Performance Evaluations. This would include any metric that ranks a law firm’s performance in comparison to other law firms based upon a forced bell curve. These rankings tend to distort small differences in performance and saddle firms that are performing well with unfavorable rankings.
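The arithmetic flaw in number one is easy to demonstrate. A minimal sketch, with invented figures:

```python
def legal_to_indemnity_ratio(legal_paid: float, indemnity_paid: float):
    """Legal paid per dollar of indemnity -- the metric criticized above."""
    if indemnity_paid == 0:
        return None  # defense verdict: the ratio is undefined
    return legal_paid / indemnity_paid

# A case defended hard and settled cheap looks "bad" (20-to-1)...
print(legal_to_indemnity_ratio(200_000, 10_000))  # 20.0

# ...while a defense verdict, the best possible outcome, breaks the
# metric entirely: zero indemnity yields no ratio at all.
print(legal_to_indemnity_ratio(150_000, 0))  # None
```

However the zero-indemnity case is handled, the metric punishes exactly the outcomes a strong defense is supposed to produce.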
MAKING THE BEST USE OF YOUR METRICS
OK, so now you’ve identified the metrics you want to track and the best datapoints to use to determine them. And you’ve worked with your IT department to pull the metrics and generate reports. Now, what do you do with them?
Traditionally, metrics are captured in a scorecard and distributed throughout an organization. Or they may be made available for on-demand viewing through a live dashboard. Either method is well and good. But it cannot be left at that. Otherwise, all that good information runs the risk of blending in with all the other white noise that buffets people during their workday. You need a way to get people to pay attention and give your metrics some thought.
Distributing a brief, informative cover email discussing the significance of the metrics can help. But we all know that emails also tend to be ignored.
What is really needed is to incorporate the metrics into your live discussions during meetings and everyday business conversations. Make them the subject of brainstorming sessions. Call up key players and say, “Hey, I’ve noticed something in these figures that I’ve found interesting and I was thinking maybe you could help me make sense of it.” As in other aspects of litigation management, nothing beats a live discussion. It gets people engaged. It gets people thinking. And hopefully that thought and engagement leads to action.
Finally, keep in mind that the metrics themselves are not the end of the story. They may be good indicators of whether or not your litigation management program is meeting its goals. But when those indicators start to trend in the wrong direction, such as when your average legal spend for a book of business starts to spike, the metrics really just serve as a red flag that you need to explore what is going on. There may be nothing wrong with the program. Legal spend may be increasing because the book of business took on riskier insureds. Or there might have been legislation that increased the exposure of an existing book of business. Myriad factors could have caused the spike that have nothing to do with your litigation management program. On the other hand, an underlying problem with the program might have developed. The only way to tell is to roll up your sleeves and take a deep dive. Internal audits and process improvement reviews may be in order. You know, the fun stuff.