Thoughtful case evaluation is a prerequisite to any professional approach to making the critical decisions in a litigation. However, formal case evaluation should not be confused with quantitative case evaluation. The latter concept is becoming fashionable and is being pressed into service by its advocates not only for making many of the cost-benefit decisions in litigation but for deciding, for example, how to price legal services and whether to proceed on an alternative fee or contingent basis. Decision-tree analysis surely is coming of age in assessing litigations. The question remains whether the techniques used in decision-tree analysis are adequate to the tasks they are being asked to perform.
One of the major benefits of a quantitative approach to case evaluation is that it requires client and counsel to engage in a rigorous analysis and explicitly to communicate their assumptions and estimates concerning what they believe is truly driving the value and risks of a case. Doing the work necessary to achieve a quantitative analysis helps to avoid the type of fuzzy thinking that - all too often - occurs when counsel tells the client that it has "a pretty good" chance of winning, yet the analysis stops there.
The use of quantitative decisional analysis in making litigation assessments dates back to the late-1970s and early-1980s. There has been less critical exposition of the limitations of quantitative decision analysis in the litigation context than there has been of the quantitative decision-making techniques of economists and social scientists, where the technique has been widely used, and improved, since the late-1950s and early-1960s.
The Limitations Of Quantitative Analysis In The Litigation Context
Experience has identified certain limitations to an overly quantitative approach to case evaluation. Other limitations have been identified in legal literature, but only infrequently. And although some students of the subject have attempted to develop sophisticated (and ever more complex) techniques to overcome some of the limitations, the fact remains that making a quantitative evaluation of a complex case remains an inherently uncertain enterprise.
In the Second Circuit's decision in North River Ins. Co. v. Ace American Reinsurance Co., the Court observed - perhaps with tongue firmly planted in cheek - that, at one point, the preliminary decision-tree analysis used by an insurance carrier to justify allocating settlements in a way that the reinsurers objected to "set forth 83 different, probability-weighted, damage and coverage scenarios." North River Ins. Co. v. Ace American Reinsurance Co., 361 F.3d 134, 138 (2d Cir. 2004).
The problem here is not merely that, at such a level of necessary detail, the estimation process becomes difficult if not dizzying. There is the related issue of whether the client can be expected to wade through such analyses. The difficulty is also that, without detracting from the brilliance of our fellow practitioners, complex mathematical modeling abilities are not always associated with good litigation skills. See the concurring opinion in Doyle-Vallery v. Aranibar, where the court offered views about the robustness of a decision-tree analysis "without regard to the precise math," explaining in a footnote:
The author has intentionally omitted a precise mathematical example in this concurrence. In part, he fears that such an example might dissuade those who hate math. In larger part, he dreads the personal embarrassment that could flow from the publication of his own faulty mathematical analysis. Doyle-Vallery v. Aranibar, 838 So. 2d 1198, 1199 (Fla. Dist. Ct. App. 2d Dist. 2003).
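For readers undeterred by the Doyle-Vallery caveat, the basic mechanics of a probability-weighted scenario analysis can be illustrated in miniature. The sketch below uses entirely hypothetical probabilities and damages figures - and far fewer branches than the 83 scenarios at issue in North River - to show how such scenarios combine into a single expected value:

```python
# Minimal decision-tree sketch: each scenario is a (probability, payout) pair.
# All probabilities and dollar figures are hypothetical, for illustration only.
scenarios = [
    (0.50, 0),           # defense verdict: no recovery
    (0.30, 2_000_000),   # liability found, low damages
    (0.15, 5_000_000),   # liability found, mid damages
    (0.05, 12_000_000),  # liability found, high damages
]

# The probabilities across mutually exclusive branches must sum to 1.
assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9

expected_value = sum(p * payout for p, payout in scenarios)
print(f"Expected value of the claim: ${expected_value:,.0f}")
```

The arithmetic itself is trivial; the difficulty the cases identify lies in supplying defensible probabilities and payouts for each branch, and in keeping the tree small enough that client and counsel can actually reason about it.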
Basic human nature highlights other limitations to an overly quantitative approach to case evaluation. One is the tendency to overemphasize those factors that can be quantified and downplay those that cannot, a well-known phenomenon in the social sciences. Hence, in a case where there are five material variables to be considered and only two can be quantified, one tends to overemphasize the two that can be quantified and minimize the significance of the other three. And that is even assuming that two of the five elements can be quantified with a fair degree of accuracy.
Likewise, there exists a natural human tendency to discount small probabilities, and this is often seen in case evaluations. Legal practitioners - alas, like the rest of humanity - typically do not fully comprehend what it means to accord a very small probability to an outcome. In any given case, the 5% outcome can actually occur, on average, 5% of the time. Yet do we actually believe that?
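One way to internalize what a 5% estimate really means is a quick simulation. The hypothetical sketch below runs 10,000 independent "cases," each carrying a 5% chance of the long-shot outcome:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Simulate 10,000 independent cases, each with a 5% chance of the
# long-shot outcome occurring.  Hypothetical illustration only.
trials = 10_000
hits = sum(1 for _ in range(trials) if random.random() < 0.05)
print(f"Long-shot outcome occurred in {hits} of {trials} cases "
      f"({hits / trials:.1%})")
```

On the order of one in twenty simulated cases comes up as the long shot - a frequency that a practitioner who assigns "only 5%" to an adverse outcome may not viscerally expect.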
The quantification process essential to a quantitative risk assessment typically cannot, and almost always does not, account for risk appetite, risk tolerance, or risk adversity. Two different people can assess a situation and determine that a 5-10% possibility exists for the same negative outcome. Yet it would not be surprising, in such a situation, that one decision maker would immediately embrace undertaking the risk, while another would immediately reject the very same strategy. Without accounting for personal attitudes towards risk, a quantitative decision analysis is, by definition, incomplete.
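The gap between the two decision makers can be made concrete with a standard expected-utility sketch. Here a risk-neutral actor (linear utility) and a risk-averse actor (logarithmic utility, one conventional modeling choice) face the same 7.5% chance of a $10 million loss; the wealth level, probabilities, and utility curves are all hypothetical:

```python
import math

# Both actors face the same gamble: a 7.5% chance of losing $10,000,000,
# otherwise no loss (the midpoint of a 5-10% estimate).  Hypothetical figures.
p_loss, loss = 0.075, 10_000_000
wealth = 20_000_000

def price_to_avoid(utility, inverse):
    """Amount the actor would pay to eliminate the gamble entirely."""
    eu = p_loss * utility(wealth - loss) + (1 - p_loss) * utility(wealth)
    return wealth - inverse(eu)

# Risk-neutral actor: linear utility -> values the risk at its expected loss.
neutral = price_to_avoid(lambda w: w, lambda u: u)

# Risk-averse actor: log utility -> will pay more to avoid the same risk.
averse = price_to_avoid(math.log, math.exp)

print(f"Risk-neutral price to avoid the gamble: ${neutral:,.0f}")
print(f"Risk-averse price to avoid the gamble:  ${averse:,.0f}")
```

Identical probability estimates, materially different valuations of the same risk - which is why a decision analysis that stops at probabilities is, as the text observes, incomplete.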
The same is the case when attempting to predict how a jury will decide - a strictly mathematical approach is typically not sufficient. There is a tendency in numerical analyses to try to approximate the thinking of a mathematician, statistician, or economist rather than a juror or other trier of fact. If the plaintiff is able to sustain its burden of proof on all issues but one, is it more likely that the jury will be inclined to overlook certain weaknesses on the last of the issues? Is there a meaningful way to capture that particular complexity in a decision tree or other numerical model? Various means have been suggested to overcome these problems. None is entirely satisfactory.
A simple approach to testing the robustness of quantitative case evaluation is to vary the numerical estimates on the decision tree or other portrayal of the probabilities and see how sensitive the results are to those changes. If minor adjustments in the estimates of probabilities create major changes in the results and if, as in most cases, making the estimates in the first place requires judgments about which reasonable people could differ, the client should give correspondingly less weight to the numerical assessment.
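Such a sensitivity check is easy to automate. The sketch below perturbs two hypothetical baseline estimates - the probability of liability and the damages figure - by modest amounts and reports the resulting swing in expected value:

```python
# Sensitivity check on a one-branch tree: P(liability) x damages.
# Baseline estimates are hypothetical; the probability is perturbed by
# +/- 10 points and the damages figure by +/- 20%.
baseline_p, baseline_damages = 0.60, 5_000_000

values = []
for p in (baseline_p - 0.10, baseline_p, baseline_p + 0.10):
    for damages in (baseline_damages * 0.8, baseline_damages,
                    baseline_damages * 1.2):
        values.append(p * damages)

low, high = min(values), max(values)
print(f"Expected value ranges from ${low:,.0f} to ${high:,.0f} "
      f"({high / low:.1f}x spread) under modest changes in the estimates")
```

Even these small perturbations - each well within the range over which reasonable people could differ - more than double the high estimate relative to the low one, which is exactly the signal that the point estimate deserves limited weight.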
Class Action Settlements & Case Evaluation
What can we learn from specific applications of some of the analytics of the case evaluation process in analyzing class action settlements in federal court under Rule 23? A well-known list of factors to be considered in the context of settling class actions is the following:
1. the likelihood of success on the merits weighed against the amount and form of the relief offered in the settlement;
2. the risks, expense, and delay of further litigation;
3. the judgment of experienced counsel who have competently evaluated the strength of their proofs;
4. the amount of discovery completed and the character of the evidence uncovered;
5. whether the settlement is consistent with the public interest;
6. objections raised by class members; and
7. whether the settlement is the product of arm's length negotiations as opposed to collusive bargaining.
E.g., Granada Investments, Inc. v. DWG Corp., 962 F.2d 1203, 1205 (6th Cir. 1992). How have numerical case evaluations or the analytics underlying them been used in this context?
A good example is In re Cardizem CD Antitrust Litig., which involved allegations of per se illegality under the federal antitrust laws arising out of a settlement agreement reached between a brand pharmaceutical company and a would-be generic competitor. Two different class action settlements were reached, one with a class composed of direct purchasers (the "Direct Settlement") and one with classes of indirect purchasers and the attorneys general of all fifty states.
In analyzing both proposed settlements in In re Cardizem, the court engaged in what is a typical analysis by federal courts. It might also be characterized as a non-quantitative review, proceeding through the above listed factors and finding that each weighed in favor of approving the settlements. In evaluating the proposed settlements, the court had to confront the fact that it had already entered a number of important decisions against the defendants - decisions that the plaintiffs had earlier claimed dramatically increased their likelihood of success. Like other courts in this area, the court did not conduct a quantitative evaluation of the opinions of the economists or the relative risks of the parties. Based on Circuit authority, the court's analysis was clearly guided by an inclination to approve the settlement and not by one to quantify estimates of the fairness, reasonableness, or adequacy of the settlements.
In evaluating the Direct Settlement, for example, the court considered a report submitted by the plaintiffs from an "expert economist" who had "estimated that this amount represents more than 200% of the total amount the class was overcharged" during what the plaintiffs alleged, but had not proved, was a possibly relevant damages period, "and more than 95% of the overcharge damages accrued" over a longer and, as best as one can tell from the decision, equally likely damages period. The court accepted this expert report without any quantitative analysis, even though the expected value of the settlement would obviously vary dramatically depending on whether one assessed the damages at the 95% or the 200% number, even assuming that either number were entitled to credit.
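The point can be seen with back-of-the-envelope arithmetic. In the sketch below the settlement figure is hypothetical (not the actual Cardizem amount); what matters is how differently the same fixed settlement looks depending on which damages period is credited:

```python
# The same fixed settlement, measured against two different damages bases.
# The settlement amount is hypothetical, for illustration only.
settlement = 100_000_000

damages_short_period = settlement / 2.00  # settlement = 200% of overcharge
damages_long_period = settlement / 0.95   # settlement = 95% of overcharge

for label, damages in (("short damages period", damages_short_period),
                       ("long damages period", damages_long_period)):
    print(f"{label}: implied overcharge ${damages:,.0f}; "
          f"settlement recovers {settlement / damages:.0%} of it")
```

Under one reading the class recovers double its losses; under the other, it falls short of being made whole - a difference a quantitative review would have to confront rather than accept on an expert's say-so.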
The court then looked to find whether the risks, expenses, and delay of continued litigation favored settlement. The court found that there were numerous "risks" involved in continuing the litigation, notwithstanding the earlier decisions on liability, including that the FTC, having extensively analyzed the matter, had found that the allegedly illegal agreement had not caused any injury to consumers. No effort was made to quantify the risks or probabilities; in keeping with the mainstream of analysis in this area, the analysis was at the level of the general and qualitative. The court found that the various legal hurdles combined with the time and expense of litigating justified the settlement.
The approach of In re Cardizem might be contrasted with that in Reynolds v. Beneficial National Bank. In that case, the United States Court of Appeals for the Seventh Circuit commented that, in reviewing a proposed class action settlement, the district court should at a minimum make an effort "to quantify the net expected value of continued litigation to the class, since a settlement for less than that would not be adequate." The court of appeals was critical of the fact that, in that case, the district court had "made no effort to translate his intuitions about the strength of the plaintiff's case, the range of possible damages, and the likely duration of the litigation if it was not settled now into numbers that would permit a responsible evaluation of the reasonableness of the settlement." Reynolds v. Beneficial National Bank, 288 F.3d 277, 284-285, 52 Fed. R. Serv. 3d 1006 (7th Cir. 2002).
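What Reynolds asks for can be sketched in a few lines: translate estimates of the probability of success, the damages range, litigation costs, and duration into a net expected value against which a settlement offer can be compared. Every input below is hypothetical:

```python
# A back-of-the-envelope version of the Reynolds calculation: the net
# expected value to the class of litigating rather than settling now.
# All inputs are hypothetical estimates, not figures from the case.
p_win = 0.40                  # estimated probability of prevailing on the merits
damages_if_win = 50_000_000   # midpoint of the estimated damages range
litigation_cost = 4_000_000   # fees and expenses through trial and appeal
years_to_judgment = 3         # likely duration of continued litigation
discount_rate = 0.05          # time value of a delayed recovery

gross = p_win * damages_if_win
present_value = gross / (1 + discount_rate) ** years_to_judgment
net_expected_value = present_value - litigation_cost

print(f"Net expected value of continued litigation: ${net_expected_value:,.0f}")
```

On these estimates, a settlement offer below the computed figure would be inadequate by the Reynolds measure - though each input is itself a judgment on which reasonable people could differ, which is precisely the limitation the rest of this article describes.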
In the end, the best use to be made of quantitative approaches is to consider whether a case is simple enough to try to employ them. Quantitative techniques should be used as a means to ensure that all the relevant issues and interrelationships of issues are considered. Frankly, though, a strictly quantitative approach should not be given a status beyond what it merits. A numerical approach to decision analysis should not be used as a substitute for a formal, judgmental approach to case evaluation.
Louis M. Solomon is co-Chair of the Litigation and Dispute Resolution Department at Proskauer Rose LLP. This article is based on and gratefully acknowledges "Business and Commercial Litigation in Federal Courts," a chapter co-authored by Mr. Solomon and Bruce Fader, also co-Chair of Proskauer's Litigation Department, which originally appeared in the American Bar Association's "Section of Litigation" treatise.