Benefit Of The But-For Bargain: Assessing Economic Tools For Data Privacy Litigation

By: Mike Kheyfets[*]

I. INTRODUCTION

A theory of harm frequently asserted in data breach class actions is that plaintiffs did not receive the “benefit of the bargain” with defendants. That is, plaintiffs claim that when they transferred sensitive information to defendants, they anticipated that the information would remain safe. When the data were exposed as part of a breach, that “bargain” was not upheld. For example, Anthem plaintiffs alleged that when purchasing health insurance, they suffered “loss of the benefit of the bargain with Defendants to provide adequate and reasonable data security” and instead received health insurance that was “less valuable than described in their contracts.”[1] Similar theories have been alleged in a variety of data privacy class actions.[2] For example, in retail breach cases: (i) Chang’s plaintiffs claimed damages on “the cost of their meals” because they “would not have dined at P.F. Chang’s had they known of its poor data security,”[3] and (ii) Neiman Marcus plaintiffs argued they overpaid because “the store failed to invest in an adequate security system.”[4]

Methods to analyze benefit of the bargain harm in a class certification setting have continued to evolve. For example, while P.F. Chang’s and Neiman Marcus plaintiffs did not propose any specific analytical framework for assessing this theory, Anthem plaintiffs suggested that they would use a statistical technique called “conjoint analysis” to do so.[5]

II. ECONOMIC FRAMEWORK IN DATA BREACH CLASS ACTIONS AND POTENTIAL RELEVANCE OF “CONJOINT ANALYSIS”

“The appropriateness of the class action mechanism for adjudicating a consumer data breach litigation rests crucially on the plaintiffs’ ability to present an analysis capable of determining whether all—or, in some cases, virtually all—class members could have suffered injury from the alleged data breach,” as well as the estimation of damages on a class-wide basis.[6] Moreover, because plaintiffs often allege multiple theories of economic harm,[7] such an analysis should distinguish between the damages associated with the different theories.[8]

With respect to a benefit of the bargain theory, a consumer’s damages may be measurable as the difference between what the consumer actually paid for a product (i.e., in the “actual world”) and what the consumer would have paid (i.e., in the “but-for world”)[9] for a product that did not allegedly misrepresent its level of “adequate and reasonable data security.” This difference is meant to represent the “benefit” a defendant allegedly failed to deliver to its customers. The actual price paid for a product may be observable from invoices, consumer receipts, or point-of-sale records. However, the question relevant to assessing impact and damages is: What price would the consumer have paid if the defendant appropriately described the bargain at the time of the transaction, i.e., that it did not include adequate and reasonable data security?

Conjoint analysis—the technique suggested by Anthem plaintiffs to assess this question—is a “popular marketing research technique that marketers use to determine what features a new product should have and how it should be priced.”[10] In practice, it is implemented by first conducting a survey which asks respondents to choose among a series of hypothetical products with a variety of prices and features.

Exhibit 1 illustrates a survey[12] that breaks down a consumer’s choice of which TV to buy into “attributes” such as screen type, screen size, brand, and price. The consumer is also offered a choice of various combinations of attribute “levels.” By offering respondents different combinations of attributes (e.g., a 36″ Plasma Sony TV for $499 vs. a 46″ LED Philips TV for $899),[11] a well-designed conjoint survey aims to gather information that can be used to study their preferences for individual attributes.

Once choice data from these surveys are collected, the goal of the conjoint analysis is to statistically model the weight (called “utility” or “part-worth”) respondents place on a given feature—relative to the products’ other features—when making their choices.[13] Moreover, the respondents’ collective valuation (or “willingness to pay”) for a feature can be derived through a calculation involving the “utility” of that feature and the “utility” of price.[14]
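
To make these mechanics concrete, the following is a minimal illustrative sketch, in Python, of how part-worths might be estimated from simulated choice data with a conditional logit model and how a conventional willingness-to-pay figure is derived as the ratio of a feature's part-worth to the price coefficient. The attribute names, price points, and coefficient values are assumptions chosen purely for illustration; they are not data or results from any case discussed in this Article.

```python
# Illustrative sketch only: a tiny conditional-logit ("choice-based conjoint")
# estimator. All data, attributes, and coefficients are hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Each simulated respondent chooses one of 3 hypothetical plans per task.
# Attributes: price (in $/month) and a 0/1 "enhanced data security" flag.
n_tasks, n_alts = 500, 3
price = rng.choice([80.0, 95.0, 110.0], size=(n_tasks, n_alts))
security = rng.integers(0, 2, size=(n_tasks, n_alts)).astype(float)

true_beta = np.array([-0.05, 0.40])           # assumed "true" part-worths
util = true_beta[0] * price + true_beta[1] * security
util += rng.gumbel(size=util.shape)           # logit error term
choice = util.argmax(axis=1)                  # simulated survey responses

def neg_loglik(beta):
    v = beta[0] * price + beta[1] * security  # systematic utility
    v -= v.max(axis=1, keepdims=True)         # numerical stability
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(n_tasks), choice]).sum()

beta_hat = minimize(neg_loglik, x0=np.zeros(2), method="BFGS").x
# Conventional WTP: feature part-worth divided by (the negative of) the price coefficient.
wtp_security = -beta_hat[1] / beta_hat[0]
print(f"estimated part-worths: {beta_hat}, implied WTP for security: ${wtp_security:.2f}")
```

In this stylized setup, the implied willingness to pay is simply the estimated part-worth of the feature divided by the negative of the estimated price coefficient; the sections that follow discuss why that figure is not the same thing as a but-for price.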

Courts have accepted this technique in several patent infringement cases involving reasonable royalty damages, with the goal of using it to isolate the value of an allegedly infringing feature by (indirectly) comparing versions of a product with and without that feature.[15] In these cases, experts have argued that such valuations would have been considered by the parties in a hypothetical negotiation for royalties.[16] More recently, conjoint analysis has been offered in consumer product mislabeling class actions. In such cases, plaintiffs allege that a manufacturer of a consumer product made false or misleading claims, and aim to use conjoint analysis to estimate the value of the allegedly misrepresented feature (e.g., the value related to labeling a product as “All Natural,” as compared to one without that label).[17]

Whether courts will accept conjoint analysis to certify classes in data breach cases remains uncertain.[18] This Article discusses several key features of conjoint analysis, as well as challenges for the use of such analysis in the context of class certification issues in data breach litigation. Specifically, conjoint surveys may: (i) struggle to isolate the purported bargain at issue in a data breach case; (ii) measure the customer’s willingness to pay for a feature rather than the price that prevails in the marketplace; and (iii) not yield results that represent all, or nearly all, members of a proposed class.

III. “HOLD THE PICKLES, HOLD THE . . . ADEQUATE AND REASONABLE DATA SECURITY”: CAN CONJOINT ANALYSIS IDENTIFY THE “BARGAIN” ON THE RELEVANT FEATURE?

Conjoint analysis does not study actual transactions where sensitive information is exchanged. Rather, it surveys individuals—who may or may not be party to a proposed class—on their preferences for certain products relative to others. At least some products in the respondent’s “choice set” are hypothetical in that they lack a feature that is actually offered in the real-world marketplace. There are two initial issues relating to hypothetical products that merit consideration. First, hypothetical products necessarily have hypothetical features—or actual features in hypothetical combinations—and prices that are set by the survey designer. Thus, the choices about what combinations of features are offered in the hypothetical products, as well as the price points for those products, necessarily influence the outcome of the survey. Second, and more importantly—perhaps where analysis in data breach cases begins to depart from that in patent infringement and false claims cases—it may be difficult to assess how the notion of adequate and reasonable data security figures into consumers’ choices.

For conjoint analysis to serve its purpose, the attributes among which respondents are choosing must be ones that affect the purchase process. For example, consumers may have a relatively clear perception of how much more they would be willing to pay for a mobile phone with a touchscreen than for one without, or a food product with an “All Natural” label than a similar product without the label. However, consumers may have more difficulty with an abstract concept like adequate and reasonable data security, particularly since that feature is not typically advertised or described by sellers of consumer products and services.

A conjoint analysis seeking to assess a claim like the one in Anthem—i.e., that purchasers of health insurance were deprived of adequate data security—may face the issue in the real world that consumers do not explicitly consider data security. For example, one academic study identified ten “key drivers of consumer choice among health-care coverage alternatives” as: (i) carrier providing health care coverage; (ii) doctor quality; (iii) hospital choice; (iv) monthly premium; (v) physician network; (vi) cost per doctor visit; (vii) prescription coverage; (viii) wellness visits coverage; (ix) dental coverage; and (x) vision coverage.[19] Even this list, which goes beyond the six-attribute “choice sets” generally prescribed by conjoint analysis practitioners,[20] does not leave room to identify the feature at issue in a data breach litigation. It may be difficult to tease out respondents’ valuation of such a feature if, in a real-world setting, they would not consider purchasing the “but-for” version of the product. Moreover, unlike the binary choice between a product either having an “all-natural” label or not, “data security” may be open to the respondent’s interpretation, further compounding the problem.

An issue with applying conjoint analysis to a “tough-to-value” feature arose in Sanchez-Knutson v. Ford Motor Co.[21] In that case, plaintiffs alleged that certain Ford Explorer vehicles were defective because they experienced exhaust odor under certain driving conditions.[22] Plaintiffs’ expert opined that he could design a conjoint analysis that would enable him to “determine the difference in value . . . that customers place on a Ford Explorer with no exhaust leaking into the cabin compared to an otherwise identical Ford Explorer subject to the problems with exhaust.”[23] The court took issue with this approach, stating “I don’t know how you do that analysis when no one’s gonna buy a car if it fills up with carbon monoxide when you drive it,” and indicating that if “you ask a bunch of people, how much would you pay for a Ford Explorer that has carbon monoxide in it . . . they’re all going to say nothing.”[24]

Asking survey respondents what they would be willing to pay for health insurance without adequate and reasonable data security may yield similar results. Plaintiffs’ expert in Anthem recognized that “a critical aspect of the survey will be to specify a set of levels for the data security attribute,” and hypothesized three formulations of the feature at issue[25]:

Example 1:
1. Highest Level: Exceeds industry standards.
2. Intermediate Level: Meets industry standards.
3. Lowest Level: Falls short of industry standards in one or more important areas.

Example 2:
1. Meets or exceeds industry average for 11 of 13 metrics used in standard security audits.
2. Meets or exceeds industry average for 8 of 13 metrics used in standard security audits.
3. Meets or exceeds industry average for 5 of 13 metrics used in standard security audits.

Example 3:
1. All fundamental data security practices are adhered to.
2. One or more fundamental data security practices is (sic) not adhered to.

Because Anthem plaintiffs did not ultimately conduct this survey, it remains unknown which, if any, of these formulations would yield meaningful information about the value of adequate and reasonable data security. However, even taken at face value, these questions would raise concerns about how seriously consumers—who may not be well-versed in evaluating data security when purchasing health insurance—would consider plans whose security “falls short of industry standards,” or does not adhere to “fundamental data security practices.”[26] Thus, if a survey approach cannot offer a “but-for” product option that is plausible in the real world, it may not yield results that offer insight into the relevant question.
IV. “P-R-I-V-A-C-Y IS PRICELESS TO ME”[27]: CAN CONJOINT ANALYSIS IDENTIFY AN ECONOMICALLY OBJECTIVE VALUE OF THE RELEVANT FEATURE?

Even if a conjoint survey is designed to elicit information about a complex and abstract concept like adequate and reasonable data security, a relevant next question is what value exactly that analysis would be estimating. In considering the answer to this question, it is important to keep in mind that the economic damages award should return plaintiffs to the financial positions they would have occupied in the absence of the allegedly unlawful actions. To assess what positions those would have been, it is necessary to estimate the but-for prices of the products at issue. A key feature of conjoint analysis, however, is that it estimates a consumer’s self-reported willingness to pay for something. The consumer’s willingness, however, is just one side of the equation that determines prices. What prices a seller is willing to accept, which conjoint analysis does not address, also plays a role in determining but-for prices.

As an initial matter, surveys used in a conjoint analysis solicit from respondents their subjective valuations of various product features. Perceptions of “value” may differ based on respondents’ individualized preferences, their varying knowledge about the features and products at issue, their budget constraints, and the specific alternatives available to each of them.[28] However, despite different perceptions of “value,” two customers purchasing the same product from the same seller at the same point in time would generally pay the same or similar prices. This means that a consumer’s valuation of a product is not the same as the price of that product.[29] Recognizing the distinction between perceived value and prevailing price is essential in assessing a benefit of the bargain claim in a data breach class action.

Consider the following illustrative example: Based on the features of a particular health insurance product (e.g., monthly premium, hospital choice, adequate and reasonable data security, etc.), Customer A has a subjective “value” of $100 for that product. If Customer A can purchase the product for $95, the difference between value and price—i.e., the “consumer surplus”—is $5. Now suppose that Customer A has a subjective “value” of $2 for the “data security” feature. If Customer A did not, in fact, get the “benefit of the bargain,” then the value he received was $98 and not $100. However, because even the diminished value is above the prevailing price of $95, Customer A would still buy that product in the but-for world.

Now consider another—more security-conscious—Customer B: Customer B has a subjective “value” of $96 for the identical health insurance product, and a $10 value for the “data security” feature. In the actual world, Customer B would buy the product because the value to her ($96) is greater than the prevailing price ($95). The consumer surplus for Customer B in the actual world is $1. However, in the but-for world where the $10 “data security” feature is excluded, Customer B would not pay $95 for $86 of value.

Exhibit 2 summarizes this example, comparing each customer's perceived value of the product, with and without the "data security" feature, against the $95 prevailing price.

This example illustrates several key issues with conjoint analysis. First, while the two customers have different perceptions of “value” (both for “data security” and for the product as a whole), there is only a single prevailing price: $95. Their individual preferences only determine whether they buy the product or not, not the price they pay. Second, while Customer A received less “value” than he would have in the but-for world, he would still have purchased the product absent the “data security” feature (i.e., price of $95 versus $98 in value). That is, Customer A would have still paid $95 for this product even if the “bargain” did not include the “benefit” of data security. However, given Customer B’s preferences, that customer would not have purchased the product in the but-for world. Third, even if each customer’s preferences for “data security” could be measured objectively, an average of $6 (Customer A’s value of $2 and Customer B’s value of $10) would be misleading. This is because it would falsely imply that Customer A would not have purchased this product in the but-for world (i.e., price of $95 versus $94 in value).[30]
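
A short numerical sketch of the same hypothetical, using only the values stated in the text, makes the point mechanical: subtracting each customer's own valuation of the feature correctly classifies the but-for purchase decisions, while subtracting the $6 average misclassifies Customer A.

```python
# Hypothetical values from the Customer A / Customer B example in the text.
PREVAILING_PRICE = 95.0

customers = {
    "A": {"total_value": 100.0, "security_value": 2.0},
    "B": {"total_value": 96.0,  "security_value": 10.0},
}

avg_security_value = sum(c["security_value"] for c in customers.values()) / len(customers)

for name, c in customers.items():
    but_for_value = c["total_value"] - c["security_value"]   # value without "data security"
    buys_but_for = but_for_value >= PREVAILING_PRICE         # buys only if value >= price
    # Misleading alternative: subtract the *average* valuation instead of the customer's own.
    buys_if_avg_used = (c["total_value"] - avg_security_value) >= PREVAILING_PRICE
    print(f"Customer {name}: but-for value ${but_for_value:.0f}, "
          f"buys in but-for world: {buys_but_for}, "
          f"buys if ${avg_security_value:.0f} average is used: {buys_if_avg_used}")
```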

Ultimately, neither customer’s perceived valuation of product features solely dictates the actual price charged by the seller. Thus, as this example shows, using conjoint analysis to estimate consumers’ subjective values of product features is not the same as studying prices that would have prevailed, but for the alleged illegal conduct (i.e., whether the hypothetical insurance product would have been priced at anything other than $95 even absent the “data security” feature).

Determining but-for prices requires an analysis of how, if at all, the product’s “market-clearing” price would have changed in the absence of the allegedly illegal conduct. However, prices are determined not solely by what consumers are willing to pay but also by what sellers are willing to accept. If properly designed and implemented, a conjoint survey may provide an estimate of consumers’ willingness to pay for a product relative to their willingness to pay for a similar product that has slightly different features. At best, this addresses the “demand” side of the equation. It cannot, however, offer insight into how, if at all, the seller of the product (or its competitors) would change its prices.

Consider again the example of the $95 health insurance product. While it may be that consumers would reduce their willingness to pay for it if certain features were removed, that finding offers no insight into what price the seller would charge. For example, if supply-side competition is vigorous because many other sellers offer many similar products at similar prices, the removal of a valued feature may lead to a reduction in price. If competition is not as vigorous or products are sufficiently differentiated, it may be that the seller does not reduce the price it charges even if the feature is removed.[31] Moreover, if the seller is able to set pricing at different levels for different groups of customers based on characteristics of their demand for this product, it may be that the price charged to some (but not all) customers would change as a result of removing a feature. Nonetheless, simply assuming that a reduction in consumers’ “value” would necessarily correspond to an identical reduction in price ignores the supply-side factors that determine prices.
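
The following illustrative sketch assumes a single seller facing logit demand, with made-up parameters (demand intercept, price sensitivity, marginal cost, and a $5 willingness to pay for the feature), and shows one way the supply side can matter: the profit-maximizing price may fall by less than the full amount of the consumers' valuation when the feature is removed. It is a stylized illustration of the general point, not a model drawn from any case discussed here.

```python
# Illustrative sketch only: a single seller facing logit demand sets the
# profit-maximizing price with and without a valued feature. All numbers
# (demand intercept, price sensitivity, cost, $5 WTP) are assumptions.
import numpy as np

def optimal_price(a, b, cost=60.0):
    prices = np.linspace(cost, 140.0, 8001)
    share = np.exp(a - b * prices) / (1.0 + np.exp(a - b * prices))  # logit demand
    profit = (prices - cost) * share
    return prices[profit.argmax()]

b = 0.1
wtp_for_feature = 5.0                      # assumed consumer WTP for the feature
a_with = 10.0                              # demand intercept with the feature
a_without = a_with - b * wtp_for_feature   # removing the feature lowers utility by b * WTP

p_with, p_without = optimal_price(a_with, b), optimal_price(a_without, b)
print(f"price with feature: {p_with:.2f}, without: {p_without:.2f}, "
      f"change: {p_with - p_without:.2f} (vs. WTP of {wtp_for_feature:.2f})")
```

Under other assumptions, such as the focal-point pricing strategies discussed below, the posted price might not change at all.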

Academic literature on survey-based methods, including conjoint analysis, indicates that these methods may produce estimates of “willingness to pay” that are higher than the prices that would prevail in a real-world setting. As one paper on implementation of conjoint analysis notes[32]:

In the context of conjoint studies, feature valuation is achieved by using various measures that relate only to the demand for the products and features and not to the supply. In particular, it is common to produce estimates of what some call Willingness To Pay and Willingness To Buy. Both WTP and WTB depend only on the parameters of the demand system. As such, the WTP and WTB measure cannot be measures of the market value of a product feature as they do not directly relate to what incremental profits a firm can earn on the basis of the product feature.

The same paper states that measures of willingness to pay derived from conjoint surveys[33]:

[D]o not take into account equilibrium adjustments in the market as one of the products is enhanced by addition of a feature. For this reason, we cannot view either pseudo-WTP nor WTP as what a firm can charge for a feature-enhanced product nor can we view WTB as the market share that can be gained by feature enhancement. Computation of changes in the market equilibrium due to feature enhancement of one product will be required to develop a measure of the economic value of the feature. WTP will overstate the price premium afforded by feature enhancement and WTB will also overstate the impact of feature enhancement on market share.

Absent such a “computation of changes in market equilibrium,” a conjoint analysis cannot answer the question relevant for the determination of impact and damages: What prices would plaintiffs have paid for the “bargain” they received from defendants? Rather, this approach considers only one side of the price-setting equation and necessarily overstates the impact (if any) of the foregone “benefit” on prices. Conjoint analysis does not study actual transactions between plaintiffs and defendants, and, because it considers only part of the equation, it cannot on its own account for an important part of the real-world price-setting process.[34]

This feature of conjoint analysis proved relevant in a number of false claims class actions. For example, in NJOY, the court did not certify the proposed class of e-cigarette purchasers because the plaintiffs’ expert’s conjoint analysis did not satisfy Comcast[35]: “His conjoint methodology could quantify the relative value a class of consumers ascribed to the safety message, but it does not permit the court to turn the relative valuation into an absolute valuation to be awarded as damages.” Similarly, the Saavedra court declined to certify the proposed class of consumers because the proposed conjoint analysis—which was neither designed nor executed at the time of the class certification decision—“focuse[d] only on the demand side of the equation” and “suffer[ed] from serious methodological flaws.”[36]

To address the limitation of conjoint analysis as a “demand-side” tool, some practitioners have suggested a variation on the basic approach. Specifically, “if the researcher seeks qualitative information about how much consumers value . . . the attribute at issue, he can develop a conjoint survey that provides that average or median consumer WTP.”[37] In contrast, “if the researcher wants to assess the price premium associated with the [attribute at issue], then he will need to develop a conjoint survey that assesses the WTP of the marginal consumer—i.e., the consumer who is indifferent between buying and not buying the . . . product.”[38]

Using the “marginal” willingness to pay to assess a “price premium” for the feature at issue is based on the notion that the marginal consumer’s willingness to pay is equal to the market-clearing price for a product. That is, if the price were any higher, it would be above that consumer’s willingness to pay. As a result, the idea is that taking the difference between the actual price of a product and the ostensibly market-clearing price for the product without the feature at issue can be used to determine the value of the feature.
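
As an illustration of that logic, the sketch below assumes a hypothetical distribution of valuations, holds the quantity sold fixed, and takes the but-for price to be the willingness to pay of the marginal buyer of the featureless product; the "premium" is then the difference from the actual price. The distributional assumptions are invented for illustration, and the fixed-quantity assumption itself is the subject of the critique that follows.

```python
# Illustrative sketch of the "marginal consumer" logic described above:
# the but-for price is taken as the WTP of the marginal buyer when the same
# number of units must be sold without the feature. All values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
wtp_with = rng.normal(100.0, 10.0, n)            # WTP for the product with the feature
feature_value = rng.uniform(0.0, 6.0, n)         # heterogeneous value of the feature
wtp_without = wtp_with - feature_value           # WTP for the but-for product

actual_price = 95.0
quantity_sold = (wtp_with >= actual_price).sum() # units sold in the actual world

# Price at which the *same* quantity clears without the feature:
# the WTP of the marginal (quantity_sold-th highest) consumer.
but_for_price = np.sort(wtp_without)[::-1][quantity_sold - 1]
premium = actual_price - but_for_price
print(f"units sold: {quantity_sold}, fixed-quantity but-for price: {but_for_price:.2f}, "
      f"implied premium: {premium:.2f}")
```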

This distinction between average and marginal WTP played a role in the Dial case, where the court certified a proposed class of soap purchasers and indicated that[39]:

[W]hile no doubt imperfect in some respects, weak in others, and subject to challenges on cross-examination, [Plaintiffs’ expert’s] proffered means of calculating class wide damages is sufficient to demonstrate that a price premium for the allegedly falsely-claimed feature(s) exists, and that it can be reliably calculated, using means and methods generally understood and accepted in the fields of economics and statistics.

Specifically, the court noted that by determining the marginal consumer’s willingness to pay for the product without the feature at issue, plaintiffs’ expert’s model purportedly also determined the maximum price [at which Dial] could “have sold the equivalent number of products without the false claim(s).”[40]

This model, however, only appears to have addressed the supply-side issue by assuming it away.[41] In estimating the marginal consumer’s willingness to pay for the but-for product, plaintiffs’ expert in Dial “held constant” the quantity, i.e., “the number of products with the offending claims actually sold.”[42] This assumed that Dial’s goal was to sell a fixed number of soap bars, and in the absence of the feature at issue, it would have had to lower its price in order to sell that number.[43] This is a strong assumption, however. As discussed above, the but-for price depends on the behavior of suppliers, and it may be that even in the absence of the feature at issue, the same “market-clearing” price would prevail. A “fixed quantity” cannot simply be assumed; rather, any assumptions about but-for quantities should be supported through sound economic analysis.

Notably, the assumption that if a feature were removed from a product, sellers would simply reduce the price of that product by the value of that feature (or by any amount) may be inconsistent with how price-setting works in the real world. For example, as an alternative to the but-for world offered by the Dial plaintiffs’ expert, a seller could choose to keep prices unchanged, allowing for fewer consumers to purchase the allegedly lower-quality product.[44] Depending on the industry at issue, sellers may also use a variety of pricing strategies that do not rely on valuation of features at all. For example, some retailers may use “line pricing,” a strategy that assigns a uniform list price to a group of similar products, even if the exact features of those products vary.[45] In other instances, retailers may use “focal point pricing,” whereby products are priced at dollar levels ending in “9” or cent levels ending at “99.”[46] Under these kinds of pricing strategies, among others, the prices consumers pay may not change even if the features of a product do—a reality inconsistent with the foundational assumption of conjoint analysis.[47]
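
As a stylized illustration of how focal-point pricing can sever the link between a change in valuation and a change in posted price, the sketch below assumes a seller that posts whichever price from a menu of ".99" price points is closest to its internal target; a modest drop in the target leaves the posted price unchanged. The menu and target values are assumptions chosen for illustration.

```python
# Illustrative sketch: under assumed "focal point" pricing, the seller posts
# the closest price point from a fixed menu, so a modest drop in consumers'
# valuation may leave the posted price unchanged. All numbers are hypothetical.
def focal_point_price(target, menu=(89.99, 94.99, 99.99, 104.99)):
    # Seller posts whichever focal price point is closest to its internal target.
    return min(menu, key=lambda p: abs(p - target))

price_with_feature = focal_point_price(96.40)     # internal target with the feature
price_without_feature = focal_point_price(93.80)  # target falls by $2.60 without it
print(price_with_feature, price_without_feature)  # both 94.99: posted price unchanged
```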

Plaintiffs’ expert in Anthem also recognized this shortcoming of the willingness-to-pay analysis, emphasizing that “market price is determined not only by consumer demand and willingness to pay for a product feature but also by competition from other manufacturers” and that “a market price premium therefore differs from willingness to pay because it is what a firm can charge for a product with a particular feature rather than just the consumers’ valuation of that product feature.”[48] However, he did not actually conduct an empirical analysis to address this issue. Rather, he indicated that “with some analysis on the supply side, it is possible to compute Nash equilibrium prices for health insurance products associated with a range of data security levels.”[49] Additionally, Anthem plaintiffs’ expert cited to an academic article he had written,[50] which he suggested provided “sufficient detail” on the “mathematical details of [his] proposed methodology.”[51] Nonetheless, no market price premia were actually derived in Anthem, as neither a conjoint analysis nor a Nash equilibrium analysis was conducted. Thus, whether this type of analysis can yield meaningful results in a real-world data breach litigation remains an open question.[52]
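
For readers interested in what such a supply-side computation might involve, the sketch below iterates standard first-order conditions for two sellers facing logit demand to a Nash equilibrium in prices, with and without a feature valued at an assumed $5. All parameters are invented for illustration; this is only a rough sketch of the general idea, not the methodology proposed in Anthem, which was never executed.

```python
# Illustrative sketch only: Nash equilibrium prices for two sellers facing
# logit demand, computed by iterating each firm's first-order condition
# p_j = cost + 1 / (b * (1 - s_j)). Intercepts, price sensitivity, and costs
# are assumptions chosen for illustration.
import numpy as np

def nash_prices(a, b=0.1, cost=60.0, iters=500):
    p = np.full(len(a), 100.0)                     # starting guess
    for _ in range(iters):
        expu = np.exp(a - b * p)
        shares = expu / (1.0 + expu.sum())         # logit shares with an outside option
        p = cost + 1.0 / (b * (1.0 - shares))      # best-response update for each firm
    return p

p_with = nash_prices(np.array([10.0, 10.0]))                 # both firms offer the feature
p_without = nash_prices(np.array([10.0 - 0.1 * 5.0, 10.0]))  # firm 1 drops a $5-WTP feature
print("equilibrium prices with feature:", p_with)
print("equilibrium prices without feature (firm 1):", p_without)
```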

V. “ALL FOR ONE AND ONE FOR ALL”: CAN CONJOINT ANALYSIS BE USED TO SHOW A BREACH’S IMPACT ON ALL (OR NEARLY ALL) PROPOSED CLASS MEMBERS?

Cohen et al. discussed several key elements of constructing an appropriate “but-for world” in data breach class actions, including testing (and “falsifiability”) of assumptions, as well as rigorous assessment to determine whether injury can be established using evidence common to the proposed class.[53] Moreover, there are potential problems with using a sample intended to represent the “average” or “typical” experience of the proposed class, specifically[54]: “[g]iven consumers’ idiosyncratic reactions to a data breach, extrapolating from a small sample of consumers to thousands (or millions) of other purported class members whose data was (or may have been) compromised risks reaching the wrong conclusions.”[55]

Relying on conjoint analysis in the context of assessing a benefit of the bargain claim may face this exact issue. In the context of the “willingness-to-pay” approach, the issue of conjoint analysis as “common proof” relates to the factual question of whether some class members place a high value on this feature, while others give it little or no value. This is not simply an issue of imprecisely estimating damages for a given class member (i.e., one class member valuing adequate and reasonable data security at $2 and another at $10, and therefore the average of $6 not precisely compensating either one). Rather, this approach runs the risk of improperly estimating damages for unharmed customers or, potentially, failing to find damages for class members who were harmed. In fact, although conjoint analysis would yield a single aggregate valuation for adequate and reasonable data security, responses for sub-groups of respondents may indicate substantial variation,[56] including some respondents’ choices indicating that they do not value this feature at all.[57] Importantly, groups (or individuals) who indicate that they do not value data security would not be harmed under a benefit of the bargain theory. That is, the “bargain” those consumers got would have allegedly lacked a “benefit” they did not value, meaning that their willingness to pay for a product which explicitly excluded that feature would have been unchanged.

This issue may be partially, though not entirely, mitigated by a “market price premium” approach like that proposed by Anthem plaintiffs’ expert. That is, if it can be determined that the alleged conduct inflated the prevailing price of a product by some amount, it would not matter to the determination of impact and damages whether that amount is equal to a given consumer’s valuation of the feature at issue. Consider again the hypothetical situation illustrated in Exhibit 2. If it can be shown, for example, that the market price premium for data security was $1—and the prevailing but-for price would therefore have been $94—that amount would apply to all consumers that would have bought that product in the but-for world, including Customer A (despite that customer personally valuing data security at $2).[58]

The issue that persists even with the market price premium approach is that in the real world, there may not be a single product or a single price premium that is relevant to the assessment of harm for the entire proposed class. For example, while the plaintiffs’ expert in Anthem provided an extended discussion of how healthcare pricing varies substantially across geographies, product offerings, and customer segments—and, indeed, of how “prices” consumers pay can be a complex combination of premiums, deductibles, copayments, and coinsurance[59] —he nonetheless concluded that a “market price” can be used to show that “all class members have suffered the same loss commensurate or proportional to the price paid by them.”[60] Moreover, he indicated that he would “undertake surveys of different markets” and that these surveys would be “analyzed independently to determine market price premia in each of these distinct markets.”[61]

However, even this approach—to the extent proof in the form of many distinct market-specific analyses may be considered “common” to the proposed class—would assume that there was a single data security premium within a given “market.” That is, even a “market-specific” survey, by construction, would imply only two possible outcomes: either every consumer in that market was injured—and necessarily in the same amount—or no consumer was injured. However, to the extent price premia for data security vary across geographies, product offerings, and customer segments within markets (as defined by the survey designer), such surveys would (potentially inappropriately) assume that price premia are identical across these parameters. Testing of such an assumption would be necessary to determine whether it is appropriate given the facts of the case at hand.

Footnotes

* J.D. Stanford Law School and Associate Professor, The David Nazarian College of Business and Economics, California State University, Northridge.

1. Fourth Consolidated Amended Class Action Complaint at 120, 135, In re Anthem Data Breach Litig., No. 15-MD-02617-LHK (N.D. Cal. Feb. 24, 2017), ECF No. 714-3 [hereinafter Anthem Complaint] (emphasis added).

2. See, e.g., Resnick v. AvMed, Inc., 693 F.3d 1317 (11th Cir. 2012) (alleging that portions of plaintiffs’ insurance premiums were consideration for an insurer’s promises to provide data security).

3. Lewert v. P.F. Chang’s China Bistro, Inc., 819 F.3d 963, 968 (7th Cir. 2016).

4. Remijas v. Neiman Marcus Group, 794 F.3d 688, 694 (7th Cir. 2015). The way the specific “bargain” between plaintiffs and defendants is described varies from case to case. However, for consistency, this article refers to the feature at issue using Anthem plaintiffs’ terminology: that they understood their purchases to include a feature called “adequate and reasonable data security.” Anthem Complaint, supra note 1, at 120.

5. Notably, Anthem plaintiffs indicated that the conjoint analysis “could not be completed until after class certification” because “the parameters of the conjoint surveys would depend on the classes ultimately certified by the Court.” Plaintiffs’ Memorandum in Support of Preliminary Approval of Class Action Settlement at 4, 16, In re Anthem Data Breach Litig., No. 15-MD-02617-LHK (N.D. Cal. June 23, 2017), ECF No. 869-5.

6. David Cohen, Michael Kheyfets, Michelle Visser, & Adam Winship, A Rigorous Analysis of Class Certification Issues in Consumer Data Breach Litigation, 16 PRIVACY & SECURITY L. REP. 104, 107 (2017).

7. Id.

8. For example, in instances where plaintiffs have alleged they were harmed due to (i) fraudulent misuse of the stolen information, as well as (ii) not receiving the benefit of the bargain, their class certification and damages frameworks should be able to distinguish between the two.

9. As the Anthem plaintiffs described it, they suffered: [L]oss of the benefit of the bargain with Defendants to provide adequate and reasonable data security—i.e. the difference in value between what Plaintiffs should have received from Defendants when they enrolled in and/or purchased insurance from Defendants that Defendants represented, contractually and otherwise, would be protected by reasonable data security, and Defendants’ partial, defective, and deficient performance by failing to provide reasonable and adequate data security and failing to protect Plaintiffs’ Personal Information from theft. Anthem Complaint, supra note 1, at 120–21 (emphasis added); see, e.g., Federal Judicial Center, Reference Manual on Scientific Evidence 432 (3d ed. 2011). Note that what the plaintiffs would have paid in the but-for world is not necessarily the same as what they would have been willing to pay. As I discuss in more detail below, consumer willingness to pay is just one part of how prices are set in the real world.

10. Joseph Curry, Data Use: Understanding Conjoint Analysis in 15 Minutes, QUIRK’S MARKETING RES. REV. (1996), https://www.sawtoothsoftware.com/download/techpap/undca15.pdf [hereinafter Curry, Understanding Conjoint Analysis].

11. In some conjoint surveys, the respondent may be asked to rank the choices from most- to least-preferred. In others, the respondent may be asked to make a single selection from the available choices.

12. Conjoint Analysis, DOBNEY, http://www.dobney.com/Conjoint/Conjoint_analysis.htm (last visited Sept. 25, 2018).

13. Curry, Understanding Conjoint Analysis, supra note 10.

14. To use terminology from Anthem, the survey would seek to identify respondents’ perceived valuation of—or willingness to pay for—adequate and reasonable data security. Anthem Complaint, supra note 1, at 120.

15. See generally Apple, Inc. v. Samsung Elecs. Co., No. 11-CV-01846-LHK, 2013 U.S. Dist. LEXIS 149741 (N.D. Cal. Oct. 15, 2013); Microsoft Corp. v. Motorola, Inc., No. C10-1823-JLR, 2011 U.S. Dist. LEXIS 73827 (W.D. Wash. May 31, 2011).

16. See cases cited supra note 15.

17. See, e.g., Briseno v. ConAgra Foods, Inc., 844 F.3d 1121, 1123 (9th Cir. 2016) (arguing that the “100% Natural” label on the product was false or misleading because Wesson oils are made from bioengineered ingredients that plaintiffs contended were “not natural”); In re Dial Complete Mktg. & Sales Practices Litig., 312 F.R.D. 36, 47 (D.N.H. 2015) (alleging that a variety of statements appearing on Dial Complete’s product labels, including claims that it “Kills 99.99% of Germs,” is “#1 Doctor Recommended,” and “Kills more germs than any other liquid hand soap” were inaccurate and misleading); In re NJOY, Inc. Consumer Class Action Litig., No. CV 14-00428 MMM (RZx), 2014 U.S. Dist. LEXIS 199368, at *6 (C.D. Cal. Oct. 20, 2014) (alleging that NJOY’s failure to include certain harmful ingredients on the label was misleading because consumers would want to know that the product contained these ingredients before purchasing e-cigarettes and that NJOY failed to warn of the harmful effects of inhaling such ingredients).

18. For example, plaintiffs in Anthem indicated that “the Benefit of the Bargain theory depended upon the results of a conjoint study that could not be completed until after class certification, and there was no guarantee that Plaintiffs would ultimately have found this type of damage at all.” Plaintiffs’ Memorandum in Support of Preliminary Approval of Class Action Settlement at 21, In re Anthem Data Breach Litig., No. 15-MD-02617-LHK (N.D. Cal. Feb. 24, 2017), ECF No. 869-5 (emphasis added). Plaintiffs also indicated that “it is possible that both the Benefit of the Bargain theory and the Loss of Value of PII theory could yield large numbers that would be unpalatable to a jury.” Id.

19. Roger Gates et al., Modeling Consumer Health Plan Choice Behavior to Improve Customer Value and Health Plan Market Share, 48 J. BUS. RES. 247, 250 tbl.1 (2000).

20. Paul E. Green & V. Srinivasan, Conjoint Analysis in Marketing: New Developments with Implications for Research and Practice, 54 J. MARKETING 3, 8–9 (1990).

21. Sanchez-Knutson v. Ford Motor Co., 52 F. Supp. 3d 1223 (S.D. Fla. 2014).

22. Id. at 1225.

23. Defendant Ford Motor Company’s Motion In Limine to Exclude the Testimony of Steven Gaskin at 2, Sanchez-Knutson v. Ford Motor Co., No. 0:14-CV-61344-WPD (S.D. Fla. 2017), ECF No. 182.

24. Id. at 17. Notably, the Court in this case certified part of the proposed class, despite Plaintiffs having not actually executed the conjoint analysis at the time of the decision (“[T]he Court disagrees with Defendant that [plaintiffs’ expert], must have already performed his proposed conjoint analysis for the Court to consider the proffered methodology.”). Order Granting in Part and Denying in Part Plaintiff’s Renewed Motion for Class Certification at 14, Sanchez-Knutson v. Ford Motor Co., No. 0:14-CV-61344-WPD (S.D. Fla. 2017), ECF No. 148.

25. Expert Report of Peter E. Rossi, In re Anthem Data Breach Litig., No. 15-MD-02617- LHK (N.D. Cal. Feb. 24, 2017), ECF No. 720-30 [hereinafter Rossi Report].

26. Greg M. Allenby, Jeff D. Brazell, John R. Howell, & Peter E. Rossi, Economic Valuation of Product Features, 12 QUANTITATIVE MARKETING AND ECON. 421, 433 (2014) [hereinafter Allenby et al., Economic Valuation] (“[T]he conjoint exercise makes the consumers (survey respondents) aware of the new product features and assumes that all choice alternatives are, hypothetically at least, available for purchase.”).

27. PEARL JAM, VITALOGY (Epic Records 1994). See also Al Weisel, Vitalogy, ROLLING STONE (Dec. 15, 1994, 5:00 AM), https://www.rollingstone.com/music/albumreviews/vitalogy-1994121 (“‘Pry, To’ is a one-minute doodle that consists of [Eddie] Vedder spelling out the word privacy over and over until we get the point already.”).

28. For example, a higher-income consumer may be willing to pay more for “data security” as part of a health insurance product than a lower-income consumer. This does not mean, however, that if the two customers purchased the same product, the higher-income customer necessarily paid a higher price. See, e.g., Paul G. Patterson & Richard A. Spreng, Modelling the Relationship Between Perceived Value, Satisfaction and Repurchase Intentions in a Business-to-Business, Services Context: An Empirical Examination, 8 INT’L J. SERV. INDUSTRY MGMT. 414, 416 (1997).

29. As a matter of economics, for each purchaser, as well as for all purchasers collectively, the “value” of a product necessarily equals or exceeds the prevailing price, since no potential consumer who gets less “value” than the amount of the price would purchase it. The difference between consumers’ “willingness to pay” (or perceived “value”) and the prevailing price is called “consumer surplus” and is a basic concept in economics. See, e.g., N. GREGORY MANKIW, PRINCIPLES OF ECONOMICS 139 (7th ed. 2015).

30. The example can be further complicated by adding a third customer—risk-loving Customer C—who values “data security” at $0. Applying the average perception of “value” to Customer C would falsely impute any decline in received value from the removal of this feature.

31. In this instance, the survey respondent’s hypothesized valuation of the relevant feature is irrelevant to the but-for world. That is, if the product is priced the same whether it has the feature at issue or not, the but-for price is the same, even if the consumer perceives receiving less “value.” This outcome may occur in a market for differentiated products, often characterized by substantial investments by sellers in branding and advertising. See, e.g., B.C. Giri et al., Multi-Manufacturer Pricing and Quality Management Strategies in the Presence of Brand Differentiation and Return Policy, 105 COMPUTERS & INDUS. ENGINEERING 146 (2017).

32. Greg M. Allenby et al., Using Conjoint Analysis to Determine the Market Value of Product Features, in PROCEEDINGS OF THE SAWTOOTH SOFTWARE CONFERENCE ON PERCEPTUAL MAPPING, CONJOINT ANALYSIS AND COMPUTER INTERVIEWING 343 (2013) (emphasis added).

33. Id. at 346–47 (emphasis added).

34. See Greg M. Allenby et al., Computing Damages in Product Mislabeling Cases: Plaintiff’s Mistaken Approach in Briseno v. ConAgra, 45 PROD. SAFETY & LIAB. REP. 208 (2017) (“[I]t is important to remember that consumer valuations of the misrepresented feature are not the same as the market price premium associated with the alleged misrepresentation . . . If the analysis employed does not also account for costs and other market forces such as competition among suppliers, the resulting damages estimates may be significantly overstated.”).

35. Order Denying Plaintiffs’ Amended Motion for Class Certification at 8, In re NJOY, Inc. Consumer Class Action Litigation, 120 F. Supp. 3d 1050 (C.D. Cal. Aug. 14, 2015), ECF No. 325.

36. Saavedra v. Eli Lilly & Co., No. 2:12-cv-9366-SVW, 2014 U.S. Dist. LEXIS 179088, at *11, *18, *33 (C.D. Cal. Dec. 18, 2014) (involving alleged misrepresentations regarding risk

37. Lisa Cameron et al., The Role of Conjoint Surveys in Reasonable Royalty Cases, LAW360 (Oct. 16, 2013), https://www.law360.com/articles/475390/the-role-of-conjoint-surveys-in-reasonable-royalty-cases.

38. Id. (“It is the WTP of the marginal consumer that is equivalent to the price premium associated with the infringing level of the attribute; this marginal consumer can be identified by offering respondents a ‘no buy’ option.”).

39. In re Dial Complete Mktg. & Sales Practices Litig., 320 F.R.D. 326, 337 (D.N.H. 2017).

40. Id. at 336–37.

41. Notably, whether a conjoint analysis relies on the average, median, or marginal consumer does not address the issue described above. That is, it appears to be ill-suited for valuation of abstract product features such as “data security.”

42. In re Dial, 320 F.R.D. at 336.

43. Put differently, Dial chooses a price that will yield sales of X soap bars. In the presence of the false label, Dial can sell X soap bars at the price of $Y. However, once the false label is removed, Dial can no longer sell X soap bars—because some customers are no longer willing to pay $Y—and must therefore reduce the price to sell the target number of units. This price reduction would represent harm from the false claim.

44. In this scenario, damages for some consumers (i.e., those who would continue to purchase the allegedly lower-quality product at the same price) would be zero. Consumers who would choose not to buy the product in this but-for world would be injured, but the amount of damages would depend on a given consumer’s second-best available option.

45. See, e.g., Weiner v. Snapple Beverage Corp., No. 07 Civ. 8742(DLC), 2010 U.S. Dist. LEXIS 79647, at *3 (S.D.N.Y. Aug. 3, 2010).

46. See, e.g., Eric T. Anderson & Duncan I. Simester, Effects of $9 Price Endings on Retail Sales: Evidence from Field Experiments, 1 QUANTITATIVE MARKETING AND ECON. 93 (2003); Robert M. Schindler & Patrick N. Kirby, Patterns of Rightmost Digits Used in Advertised Prices: Implications for Nine-Ending Effects, 24 J. CONSUMER RES. 192 (1997); Mark Stiving & Russell S. Winer, An Empirical Analysis of Price Endings with Scanner Data, 24 J. CONSUMER RES. 57 (1997).

47. See, e.g., Allenby et al., Economic Valuation, supra note 26, at 429 n.6 (“In a conjoint setting, we abstract from the problem of omitted characteristics as the products we use in our market simulators are defined only in terms of known and observable characteristics. Thus, the standard interpretation of the market wide shock is not applicable here. Another interpretation is that the market wide shock represents some sort of marketing action by the firms (e.g. advertising). Here, we are directly solving the firm pricing problem holding fixed any other marketing actions.”).

48. Rossi Report, supra note 25, at 27–28.

49. Rossi Report, supra note 25, at 46.

50. Reply Expert Report of Peter E. Rossi at 9 n.9, In re Anthem, Inc. Data Breach Litig., No. 15-MD-02617-LHK (N.D. Cal. May 5, 2017) (citing Allenby et al., Economic Valuation, supra note 26). Notably, this article outlines a series of assumptions upon which its theory is based. Determining whether these assumptions hold for a particular product or industry at issue in a litigation would require an inquiry into the facts of the specific case. Additionally, as the authors point out, “there is no guarantee that a Nash equilibrium exists for heterogeneous logit demand.” Allenby et al., Economic Valuation, supra note 26; see also Greg M. Allenby et al., Valuation of Patented Product Features, 57 J. L. & ECON. 629 (2014).

51. Reply Expert Report of Peter E. Rossi at 9, In re Anthem, Inc. Data Breach Litig., No. 15-MD-02617-LHK (N.D. Cal. May 5, 2017).

52. Allenby et al., Economic Valuation, supra note 26, at 440 (“[T]he quality standards for design and analysis of conjoint data have to be much higher when used for economic valuation than for many of the typical uses for conjoint.”).

53. Cohen et al., supra note 6, at 3.

54. Notably, to yield meaningful information from which survey results can be extrapolated to the population at issue, the survey should be properly designed, and the population properly sampled. See, e.g., Allenby et al., Valuation of Patented Product Features, supra note 50, at 641 (“Considerations of sample representativeness are critical to the reliability and generalizability of any survey, conjoint or otherwise. No survey evidence should be considered admissible or relevant unless evidence of representativeness is provided.”).

55. Cohen et al., supra note 6, at 4.

56. For example, attitudes toward, and preferences for, data security may vary across consumers depending on age, educational attainment, income, or other factors. Id. at 6.

57. See id. In fact, an improperly designed conjoint analysis may indicate that respondents are “irrational” and place a negative value on data security. Improperly designed conjoint analyses may also indicate an unreasonable range in the valuation of the feature at issue, including some respondents valuing the feature above the total price of the product. However, if the aggregation of all results—even unreasonable ones—yields a positive valuation, the conclusion would be that the positive valuation was “common” to the class.

58. Notably, the security-conscious Customer B would not have purchased the but-for product for $94, meaning the improperly defined “bargain” induced that consumer to purchase a product she otherwise would not have.

59. Rossi Report, supra note 25, at Section III.

60. Id. at 26.

61. Id. at 23.
