You may have seen these ads on TV: Real people stand in a warehouse and are asked which cars are "most reliable." A Honda driver might predict, "Hondas are the most reliable." But then the cover is whisked off with a flourish: The most reliable car is actually the Chevy! In one recent ad (viewable here), the narrator reports:
"Based on a recent nationwide survey, Chevy is more reliable than Toyota, Honda, and Ford."
Should statements like these convince you? The website Jalopnik publishes journalism about cars. In this column, a journalist walks readers through how they might think critically about reliability claims in the Chevy commercials. For example, the journalist writes that the reliability claim was based on a survey.
...the survey was sent in 2018 to owners of 2015 model year cars, and the reporting of the “repairs in the past 12 months” relates to those 2015 cars in their third year of service. Chevy reiterated that “Independent statisticians reviewed the materials and concluded with 95% confidence that Chevy’s percentage of no parts repaired or replaced is better than Toyota, Honda, Ford or 23 other brands.”
The journalist adds:
Anyone who has studied research and statistics will tell you that it’s not difficult to form the conclusions you want to based on your use of what is called an “operational definition.” For this particular study, Ipsos is operationally defining reliability as the repairs a car has had within a 12 month period once the car has reached three years old.
a) How did Chevy operationalize "reliability" in their study?
b) What are some alternative ways to operationalize a car brand's "reliability?"
c) Can you think of downsides to the operationalization that Chevy used?
The journalist pointed out that, beyond the operationalization of reliability, the survey's number of respondents might also matter. To wit:
Setting aside the fact that self-reported surveys can be flawed, fewer than 49,000 respondents completed them out of almost 840,000. What Chevrolet did not provide is a breakdown of those respondents by brand, because if we were looking at the percentage of repairs made by various brand owners this could make a difference.
For example, if only 1,000 of those 49,000 respondents were Honda owners and 100 of them reported repairs within that 12 month period, that means that Honda would have a 10 percent repair rate. But if 15,000 of the respondents were Chevrolet owners and 1,200 of them reported repairs, that is an 8 percent repair rate. Yet this distribution was not provided so it’s difficult to ascertain exactly how those percentages compare within each sample size of brand owners.
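To make the journalist's arithmetic concrete, here is a short Python sketch using the hypothetical counts from the quote above. (These are the journalist's illustrative numbers, not Chevy's actual brand-by-brand data, which was never released.)

```python
# Hypothetical figures from the journalist's example (illustrative only).
honda_owners, honda_repairs = 1_000, 100
chevy_owners, chevy_repairs = 15_000, 1_200

honda_rate = honda_repairs / honda_owners   # repairs per Honda respondent
chevy_rate = chevy_repairs / chevy_owners   # repairs per Chevy respondent

# Overall response rate mentioned in the column (approximate figures).
response_rate = 49_000 / 840_000

print(f"Honda repair rate: {honda_rate:.0%}")      # 10%
print(f"Chevy repair rate: {chevy_rate:.0%}")      # 8%
print(f"Survey response rate: {response_rate:.1%}")
```

The point of the sketch: without knowing how many respondents owned each brand, the headline percentages can't be meaningfully compared, because each brand's rate rests on a different (and unknown) denominator.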
d) What do you think of the journalist's reasoning, above?
In another argument, the journalist compares the self-report survey used by Chevy with two other sources of reliability data, Consumer Reports and J.D. Power:
...in a 2018 ranking of brand reliability, Consumer Reports ranked both Honda and Toyota well above Chevrolet, [which] occupied the lower part of the list along with several other domestic brands.
But like Chevrolet’s survey, this data is also self-reported based on the ownership experiences of Consumer Reports subscribers.
I also reached out to J.D. Power, as they are a respected reference when it comes to these kinds of “quality” rankings and asked them to comment on the methodology behind Chevrolet’s study. They declined. (Recall that Chevy has touted J.D. Power awards in this very same series of ads.)
e) Why might the journalist extol Consumer Reports or J.D. Power over Chevy's Ipsos survey? What might these organizations be doing differently--if anything--in their measurement of reliability?
f) Reflect on how the availability heuristic (the pop-up principle), cherry picking, bias blind spots, or the present-present bias (all from Chapter 2) might lead individuals to make faulty conclusions about which car brands are most reliable.
Thanks to Marianne Lloyd for sharing this idea!