To The Readers Of The Metropolitan Corporate Counsel:
Law Firm Rankings - How Useful Are They?
Recently, the American Bar Association adopted a resolution sponsored by the New York State Bar Association directing the ABA to examine efforts to rank law firms and law schools. The resolution was inspired, in part, by the announcement that US News & World Report was expanding its "America's Best" series to include rankings of law firms.
More than 5,000 law firms will be ranked in 125 legal practice areas nationally, by state, and by metropolitan area. In addition to rankings by legal practice area, the national, state, and metropolitan-area rankings for law firms with general corporate practices will be aggregated from rankings in individual practice areas to produce an overall "America's Best Law Firms" ranking.1
The rankings will be based on a combination of "qualitative peer-reviews by leading lawyers" and "more than 50,000 client references."2
Many types of lawyer and law firm ratings have been published. See, for example, Martindale-Hubbell's peer-reviewed ratings.3 There have even been some law firm rankings, such as Vault.com.4 These have not, however, attempted the kind of broad ranking of firms that US News has announced. Vault.com, for example, offers rankings of the "Top 100 Firms" based on their relative prestige, as perceived by associates. It also offers rankings of the "Top 25 Firms" in particular practice areas. In short, it is attempting to identify the crème de la crème from a very large pool. Further, its criteria - perceived prestige in the case of general firm rankings - are easily stated, if not necessarily objectively determinable.
In contrast, US News plans to rank more than 5,000 firms on some overall measure and in 125 practice areas. According to a 2000 ABA survey, there were more than 47,000 firms in the country, but there were only about 5,200 firms with 10 or more lawyers.5 Thus, it appears that US News is attempting to rank all but the smallest firms. This presents serious issues both for law firms and for their clients.
Anyone who has participated in employee reviews that attempt to produce a linear ranking of a large number of employees knows that this is far from an exact science. To start with, there is the problem of comparing apples to oranges. The employee-ranking process, however, is easier than the task of ranking law firms because the rankings are produced by a relatively small number of people who have some personal knowledge of the people they are ranking. With respect to law firms, some, inevitably, will have many more reviews than others. For firms with only a few reviews, the "law of small numbers" comes into play - one glowing review or one pan can skew the results dramatically. With respect to all firms, there is no control over which peers and clients choose to participate in the review process, and how broad their bases for comparison are.
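The small-sample problem above is simple arithmetic, and a back-of-the-envelope sketch makes it concrete. All firm sizes and scores below are invented for illustration; assume reviews are averaged on a 1-to-5 scale:

```python
# Hypothetical illustration: a single negative review barely moves a
# well-reviewed firm with many responses, but swings a firm with few.

def average(scores):
    return sum(scores) / len(scores)

large_sample = [4.0] * 100  # firm with 100 reviews averaging 4.0
small_sample = [4.0] * 4    # firm with only 4 reviews averaging 4.0

one_pan = 1.0  # a single 1-star "pan"

large_after = average(large_sample + [one_pan])
small_after = average(small_sample + [one_pan])

print(round(large_after, 2))  # 3.97 - essentially unchanged
print(round(small_after, 2))  # 3.4  - a dramatic drop
```

The same one-review difference that is rounding error for a heavily reviewed firm moves a lightly reviewed firm across a wide band of the ranking.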
We do not yet know whether the factors taken into consideration in the rankings, and their relative weights, will be well understood. If they are not, the rankings will carry a misleading appearance of objectivity and precision, when in fact the user does not understand what is being measured or how it is measured.
We do have some experience with the US News rankings of law schools. A study commissioned by the Association of American Law Schools in 1998 criticized the methodology and purported precision of the rankings:6
Statistical analyses of the data that were available to us revealed that virtually all of the differences in the overall ranks among schools could be explained by the combination of two of the US News factors. These factors are student selectivity (which is driven by the school's median LSAT score) and academic reputation. The other ten factors are superfluous. However, because the US News ranking system inflates small differences in quality among schools, the addition of other factors (and/or slightly changing their weights) could shift a school from the bottom of one broad category of overall quality to the top of another (such as from the second to the third tier). Unfortunately, because of problems with all the factors in the US News system, these changes could just as easily decrease as increase the validity of the overall rankings.
Similarly, 164 law school deans deplored the effect of assigning arbitrary weights to a relatively small number of metrics:7
The key problem with all law school rating systems is that the final rankings are not based primarily on any hard underlying data. The range of performance among law schools based on most hard variables is actually fairly narrow. Rankings result chiefly from the value judgments that transform a limited range of data into an evaluative scheme. After the ranking "authority" decides which variables are relevant to the ranking, each variable must be given a weight. But the choices of variables and, even more dramatically, the choice of weights to be given those variables can be conspicuously arbitrary.
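The deans' point about arbitrary weights can also be shown with a toy calculation. In the sketch below the schools, their scores, and both weighting schemes are entirely invented; the only claim is the mechanical one that when underlying scores fall in a narrow range, the choice of weights, not the data, determines who comes out on top:

```python
# Hypothetical illustration: two weighting schemes, same data, opposite result.
schools = {
    "School A": {"selectivity": 0.90, "reputation": 0.80},
    "School B": {"selectivity": 0.82, "reputation": 0.88},
}

def overall(metrics, weights):
    # Weighted sum of the chosen variables - the "evaluative scheme".
    return sum(metrics[name] * w for name, w in weights.items())

# Weighting selectivity more heavily puts School A on top...
w1 = {"selectivity": 0.6, "reputation": 0.4}
# ...while weighting reputation more heavily flips the order.
w2 = {"selectivity": 0.4, "reputation": 0.6}

for w in (w1, w2):
    ranked = sorted(schools, key=lambda s: overall(schools[s], w), reverse=True)
    print(ranked[0])  # prints "School A" under w1, "School B" under w2
```

Nothing about the schools changed between the two runs; only the ranker's value judgment did.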
Given that law firms have few statistical attributes comparable to "median LSAT score," and the number of firms being ranked - more than 5,000 - is so much larger than the number of law schools being ranked - under 200 - it is hard to see how the law firm ranking process can escape similar weaknesses.
Ann B. Lesk
6 Stephen P. Klein, Ph.D., and Laura Hamilton, Ph.D., "The Validity of the U.S. News and World Report Ranking of ABA Law Schools," February 18, 1998. http://www.aals.org/reports/validity.html