Here's an interesting set of frequency claims about what percentage of U.S. adults feel awkward. The data were presented at the content farm StudyFinds, under the headline, "That's awkward! 68% still often feel as insecure as they did as teens."
Here's a snippet from the journalist's summary:
A survey of 2,000 U.S. respondents ages 25 to 45 found that as teens, people were most self-conscious about their body shape (65%), hairstyle (61%), and smile (61%). Many continue to feel this way, despite the average person feeling at their most awkward at age 17.
As adults, over half are still uncomfortable about their body shape (55%), hairstyle (53%), and smile (52%). More men than women are currently self-conscious about their glasses/contact lenses (51% vs. 39%), height (54% vs. 42%), hairstyle (57% vs. 48%), and smile (55% vs. 49%).
You might be wondering why a researcher might collect such specific data about what people feel awkward about. Turns out this study was "Conducted by OnePoll on behalf of Smile Express, a doctor-monitored at-home aligner treatment for adults"--in other words, the poll was commissioned by a private company that sells ways people can improve their appearance.
Questions
a) How might the commercial nature of this survey affect how the study was conducted? For example, what results might Smile Express be hoping to find? How might that affect the question writing strategy and sampling strategy?
b) Which of the four big validities should we prioritize when reading about a frequency claim such as "over half are still uncomfortable about their... smile (52%)"?
c) To help you consider external validity, here's some information about the study's sample. Based on this quote, do you know anything about the representativeness of the study's sample?
This random double-opt-in survey of 2,000 U.S. adults ages 25–45 was commissioned by Smile Express between Aug. 7 and Aug. 10, 2023. It was conducted by market research company OnePoll, whose team members are members of the Market Research Society and have corporate membership to the American Association for Public Opinion Research (AAPOR) and the European Society for Opinion and Marketing Research (ESOMAR).
Discussion of question c:
To understand the paragraph above, you need to know what a "double-opt-in" survey is. As the name implies, an "opt-in" survey is one people volunteer for. For example, people might volunteer to participate in a study in exchange for cash, gift cards, or frequent flyer miles. A double-opt-in study is one in which people opt in twice: first they volunteer to be contacted by the commercial polling company, and then they opt in to a particular survey or poll.
To test the accuracy of opt-in polls, Pew Research once compared three commercial opt-in polls to three Internet panel samples that had used random sampling of addresses. It found that the opt-in polls produced larger errors relative to known benchmarks. For example, people in the opt-in polls were more likely to be receiving Social Security benefits than the general population and less likely to have served in the military. In other words, opt-in surveys are not particularly representative, especially compared to Internet panels, which use probability samples.
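Why do opt-in polls drift away from the truth? Because the people who volunteer for a survey may differ systematically from the people who don't. Here's a minimal simulation of that self-selection effect (the population size, true rate, and volunteering probabilities are all made-up numbers for illustration, not estimates from this study):

```python
import random

random.seed(1)

# Hypothetical population: suppose 30% are truly self-conscious
# about their smile (an invented figure for illustration).
TRUE_RATE = 0.30
population = [random.random() < TRUE_RATE for _ in range(100_000)]

# Probability sample: every person has an equal chance of selection.
prob_sample = random.sample(population, 2_000)

# Opt-in sample: assume self-conscious people are three times as
# likely to volunteer for an appearance-related survey.
def volunteers(is_self_conscious):
    return random.random() < (0.15 if is_self_conscious else 0.05)

opt_in_pool = [p for p in population if volunteers(p)]
opt_in_sample = random.sample(opt_in_pool, 2_000)

def rate(sample):
    return sum(sample) / len(sample)

print(f"True rate:        {TRUE_RATE:.0%}")
print(f"Probability poll: {rate(prob_sample):.1%}")
print(f"Opt-in poll:      {rate(opt_in_sample):.1%}")
```

Under these invented assumptions, the probability sample lands close to the true 30%, while the opt-in sample reports a figure well over 50%, even with an impressive-sounding n of 2,000. A big sample does not cure a biased selection process.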
Given that opt-in polls are not particularly representative, why, then, might the StudyFinds journalist have described this sample as "random"? Perhaps she misinterpreted the study's sampling strategy.
Advanced students will enjoy critically reading the website of the market research company OnePoll and identifying the ways it describes its research (its FAQ includes the question, "How do you ensure that data is statistically significant?").
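Notice that "statistically significant" is the wrong concept here anyway: precision is not the same as representativeness. A quick sketch of the nominal 95% margin of error for one of the survey's frequency claims, under the (unwarranted) assumption that the poll behaved like a simple random sample:

```python
import math

# Nominal 95% margin of error for a proportion from a simple random
# sample -- an assumption a double-opt-in poll does not satisfy.
n = 2_000
p = 0.52  # e.g., "52% are still uncomfortable about their smile"
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"Nominal margin of error: +/-{moe:.1%}")
```

The calculation yields roughly plus or minus 2 percentage points, which sounds precise. But that figure describes only random sampling error; it says nothing about the systematic bias introduced by self-selection, which no sample size can fix.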