8. Ethics

8.2. The Belmont Principles: Respect for Persons, Beneficence, and Justice

Learning Objectives

  1. Identify the key principles of ethical research and how they affect the work that scientists do.
  2. Explain why sociologists might want to give their research participants protections such as anonymity and confidentiality.
Title page of an original archived copy of the Belmont Report.
In 1979, the National Commission for the Protection of Human Subjects in Biomedical and Behavioral Research released the Belmont Report. Named after the conference center where the commission drafted the report, the report laid out three key principles for ethical research: respect for persons, beneficence, and justice. UNT Digital Library

In the United States, the foundational document guiding government regulations and professional norms regarding research ethics is the Belmont Report, which grew out of the federal government’s efforts to establish a regulatory framework for research in the 1970s. As part of the 1974 National Research Act, Congress required all institutions receiving federal support to establish institutional review boards (IRBs) to oversee research projects and protect the rights of human subjects. The legislation also launched the National Commission for the Protection of Human Subjects in Biomedical and Behavioral Research, which in 1979 produced the Belmont Report—named after the Maryland manor where the commission met. Inspired by the Nuremberg Code, the Belmont Report put forward three overarching ethical principles for research on human subjects: respect for persons, beneficence, and justice (National Commission for the Protection of Human Subjects in Biomedical and Behavioral Research 1979). Below we summarize the implications of these three principles (you can also read the full text of the report online):

1. Respect for persons: Individuals should be treated as autonomous agents able to make their own decisions. This principle implies informed consent—honoring your research participants’ autonomy means that you should tell them about what you are studying, how you are studying it, and what risks and benefits their participation entails. Showing respect for participants also means you should safeguard their privacy, a topic we discuss below with regard to protection of identities. Finally, the principle of respect for persons applies to groups with diminished autonomy—such as children and those with disabilities—whom scientists must take special care to protect when involving them in research.

2. Beneficence: While the word “beneficence” is typically associated with unconditional acts of kindness and charity, in a research context it is a moral imperative, the Belmont Report argues. Beneficence in this sense involves two principles: (a) “do not harm” (in line with the no harm to participants principle we have previously discussed) and (b) “maximize possible benefits and minimize possible harms.” When researchers act with beneficence, they respect the decisions of their human subjects and secure the well-being of these individuals.

3. Justice: This principle focuses on who should receive the benefits of research and bear its burdens. An injustice occurs when some benefit to which a person is entitled is denied without good reason, or when some burden is imposed unduly or arbitrarily. The tenet of justice reinforces the need for informed consent: participants must receive clear information not only about the purposes of the study, but also its risks and rewards.

We’ve already discussed the principle of no harm to participants that is explicit in the Belmont Report’s beneficence tenet and was egregiously lacking in many of the biomedical and social scientific studies we described at length above. In this section, we will discuss in greater detail the key concepts of informed consent and protection of identities that follow from the Belmont Report’s overarching principles.

Informed Consent

Cambridge Analytica logo.
Facebook was embroiled in controversy after journalists uncovered evidence that the political consulting firm Cambridge Analytica had used the social media platform to gather personal details about Facebook users without their consent, information that the company then used to help the 2016 Trump presidential campaign target its political advertising. Wikimedia Commons

What does it mean to treat our human subjects as autonomous agents—individuals capable of making decisions for themselves? It means that we seek out their voluntary participation based on informed consent. In any legitimate scientific study—certainly any study an institutional review board is willing to approve—we must have formal procedures in place to make sure that all human subjects involved in the research freely choose to participate in it. Their consent should also be “informed,” meaning that they should be given understandable and accurate information about the study, not some vague or misleading statements about it.

These formal disclosures are usually included in a consent form (see the sidebar) that researchers provide to subjects prior to the start of the study. In some cases, subjects are asked to sign the consent form indicating that they have read it and fully understand its contents. In other cases, subjects are simply provided a copy of the consent form, and researchers are responsible for making sure that subjects have read and comprehend its terms before starting any data collection. In either case, researchers should go over the form with their subjects at length—highlighting important sections of it, giving them ample time to read and review it, patiently addressing any questions or concerns they have about it, and providing them with a copy of the form to take home with them.

What sort of information should be laid out in a consent form and discussed during the consent process? First, researchers should describe what the general aims and goals of the study are, who is conducting and sponsoring the research, and how the research will be conducted—including what participants will do, and for how long. They should clearly communicate the potential risks associated with it, which obviously would include any psychological or safety risks but also any dangers to privacy or reputation. The informed consent process should also cover the possible benefits of participation—anything from monetary compensation to the opportunity for participants to share their personal views on an important issue. Finally, researchers must describe how they will protect the identities of their participants (covered in the next section) and whom they should contact for additional information about the study or their rights as study participants. This informed consent process should happen before data collection begins so that the knowledge can help participants decide whether or not they wish to be involved.

There are important practical implications of this general approach of informed consent:

  • We cannot force anyone to participate in our research.
  • We cannot include a person in our study without their knowledge or consent.
  • We need to provide clear and accessible information to potential participants about the study they are being asked to be involved in.
  • We cannot ask potential participants, as a condition of their participation, to forgo legal recourse should the study somehow end up harming them.

These last two points bear further discussion. In providing information about the study to potential participants, we need to describe the study in layperson’s terms—not with dense scientific jargon—and we need to make their rights and any potential risks clear and concrete to them, rather than concealing them in ambiguous language or legalese. We also need to make sure that every potential participant fully understands what is being asked of them—an especially important consideration for more vulnerable populations like minors or immigrants, who may have particular trouble comprehending the parameters of the research study they are agreeing to.

On the final point, it is essential not to confuse informed consent with any strategy to avoid liability. Human subjects should not be forced or expected to release a researcher or other parties from legal consequences should something go wrong while the research is being conducted. And in giving their informed consent, subjects should not be asked to waive any of their legal rights.

Informed consent is not always necessary when conducting research. Remember how the U.S. federal government’s definition of research on “human subjects” quoted at the beginning of this chapter stated that it involved “intervention or interaction with the individual” or the use or analysis of “private information.” When we study public speeches given by other human beings, or published books written by them, or publicly released videos or audio recordings created by them, we are not technically conducting research on “human subjects”; we are studying the publicly shared information that those individuals produced. In these cases, we do not need to ask those individuals for permission to study their public communications, which required no intervention or interaction with them for us to obtain. Likewise, when we observe public spaces—street corners, for instance—there is normally an assumption that no consent is needed to write about or otherwise record what one observes (though there are caveats to this general view, as the uproar over Laud Humphreys’s study of “public” restrooms suggests).

At the same time, it is not always clear what is “private” or “public” information. An everyday example of why researchers should insist on informed consent is the widespread and arguably intrusive analysis of user data for research conducted by companies and advocacy groups. Whenever you sign up for a new app or online service, you typically have to wade through thousands of words of legal disclosures—the dreaded end-user license agreement, or EULA—and click on the “Agree” button before you can upload your cat pictures. Embedded in these disclosures is often an acknowledgement that the provider of the service is using your data for research purposes. They want to know the demographics and habits of their users so that they can improve their service or hawk advertising to marketers who want to reach specific audiences. They often sell their data—including names, mailing addresses, email addresses, and other identifying information—to third parties who could use it for anything from sending you unsolicited emails to developing strategies to encourage (or discourage) you from voting.

In 2018, Facebook faced a major scandal when news reports revealed that a British consultancy called Cambridge Analytica had harvested the data of 87 million Facebook users and—without their knowledge or consent—had used their data to develop “psychographic” profiles that helped political organizations target more effective advertisements at supporters and swing voters across media platforms. The company’s clients included the 2016 presidential campaign of Donald Trump, a Super PAC supporting the Trump campaign, and the 2016 British campaign in favor of a referendum vote to leave the European Union. Individuals close to Trump helped run and fund the company. The data harvesting first occurred through a popular app, “This Is Your Digital Life,” that succeeded in getting hundreds of thousands of users to complete a paid survey said to be part of an academic study. These users agreed to language asking for their consent before taking the survey, yet Facebook allowed Cambridge Analytica to use their personal data for purposes other than what they actually consented to. The company was also able to pull data about the users’ friends—thereby expanding the company’s sample to encompass millions of people on Facebook. As a result of the scandal, Cambridge Analytica folded, and Facebook wound up paying billions of dollars in fines levied by U.S. and UK regulators.

While the deception uncovered in the Facebook/Cambridge Analytica data scandal was shamelessly brazen, it is clear that the “consent” process that companies typically use—an online agreement spelling out the data collection and its uses in fine print—is still a pale imitation of what an informed consent procedure looks like (or at least should look like) in academic research. An ethical researcher needs to be utterly clear about the risks and benefits of the research, convey that information in an accessible and straightforward way, and fully address any questions or concerns without coercion of any kind—which is often the opposite of what the EULA’s impenetrable legalese and “click to proceed” pressure tactics entail.

That said, there are exceptional cases when ethical researchers can justify not fully informing their subjects of the purpose or specific procedures of a study at its outset. In these situations, researchers may be allowed to speak vaguely about their research or even mislead their participants outright. Institutional review boards may approve such strategies when there are serious concerns that participants who learn too much about the study beforehand will subsequently alter their responses or behavior, hampering efforts to get an accurate picture of the phenomenon being examined. For instance, suppose the purpose of a study is to examine to what extent subjects abandon their own views to conform with “groupthink.” To measure this phenomenon in the lab, researchers convene a group of participants who are encouraged to listen to each other’s opinions on a topic before voicing their own. In this research context, disclosing the study’s purpose beforehand might sensitize subjects to the idea of groupthink and—consciously or not—make them act differently than they would have without that information.

Not being fully upfront about the nature of a study is often unavoidable if the researcher wants to conduct a valid analysis of a highly stigmatized phenomenon. For that reason, ethnographers sometimes conduct covert observations—much like undercover officers do when conducting drug busts—so that the people they observe act naturally (and perhaps tolerate the researcher’s presence to begin with, as we discuss further in the next chapter). The social stigma associated with certain behaviors or beliefs makes informed consent particularly tricky for in-depth interviews. Here, the researcher usually knows the identities of the participants and must rely on their self-reporting regarding how they think or feel. Given that respondents can choose how they express themselves in these situations, they may tailor their answers to show themselves in a more favorable light. Specifically, they may hide less desirable behaviors or attitudes while exaggerating more desirable ones. In these cases, so-called social desirability bias will distort the researcher’s understanding of what’s actually going on. For instance, if a sociologist is conducting in-depth interviews with subjects to suss out their underlying racial biases, telling them the actual point of the study will prompt a good number of respondents to provide answers that paint themselves more positively than they actually are. Very few people will comfortably acknowledge that they are unrepentant bigots; at the very least, they will use euphemisms or code words and otherwise dial back the attitudes they might state bluntly to friends and family.

Weighing the pros and cons of not providing informed consent is a tricky matter, and sociologists and institutional review boards frequently have divergent opinions about what sorts of details can be justifiably concealed. Yet even if researchers have a strong case for keeping certain information from participants, they can and should conduct a debriefing session immediately following the data collection process. This ensures that the human subjects are eventually informed, even if it cannot occur before they participate in the study. In a debriefing, the researchers explain the true purpose of the study and why deception was necessary. They also provide a full accounting of the potential risks or harm that the participant might have experienced during the experiment or might still suffer in the future. In all cases, the researchers need to provide ways to contact them or others who can offer assistance while the participants process their experiences afterward.

When obtaining informed consent from our potential participants, we should also keep in mind that some people in our pool of recruits may not be as able as others to give their genuine consent to participate in research. Here, we should revisit the Belmont Report’s principle of “respect for persons.” Recall that this principle not only aims to treat individuals as autonomous agents, but also to protect persons with diminished autonomy. Such human subjects are considered members of vulnerable populations—people who may be at risk of experiencing undue influence or coercion.

Mugshots of Whitey Bulger from 1956 and 2011.
A mugshot of Whitey Bulger from 1956. When Bulger was serving a sentence in a federal prison, the future mob boss and fugitive volunteered for a Project MKUltra experiment on LSD, which researchers said was intended to find a cure for schizophrenia, but which was secretly related to the CIA’s studies of mind control. USP Atlanta, via Wikimedia Commons
Mugshot of Whitey Bulger from 2011.
A mugshot of Bulger from 2011, when he was finally captured. U.S. Marshals Service, via Wikimedia Commons

Minors and people who are in jail or prison are among these vulnerable groups. For instance, the incarcerated may feel compelled to participate in research in the belief that they will receive a reduction in their sentences. Another reason that institutional review boards look closely at research involving prisoners is the grim historical record of how scientists have callously exploited this population for their own purposes. As part of the Project MKUltra experiments discussed earlier, CIA scientists drew many of their human subjects from American prisons and detention centers in Japan, Germany, and the Philippines. Even when individuals volunteered for MKUltra’s experiments, they were often deceived about the true nature of the research. The mob boss Whitey Bulger—one of the FBI’s most wanted fugitives before his capture in 2011 after 17 years in hiding—was in federal prison on bank robbery charges when he volunteered to be part of a study that scientists said was intended to find a cure for schizophrenia. In reality, the CIA was examining the long-term effects of LSD on the brain as part of its interest in mind control (Gross 2019). Bulger and other inmates signed up for the study in exchange for lighter sentences, but over the course of taking LSD every day for more than a year, Bulger deeply regretted that decision. He began questioning his sanity and contemplated suicide, Bulger later recounted, and he felt particular rage toward the doctor running the experiment, whom he called “a modern-day Dr. Mengele”—a reference to the sadistic Nazi doctor who experimented on inmates at the Auschwitz concentration camp (Curran 2011).

For vulnerable populations, the rules for seeking consent are understandably more stringent. Minors must have the explicit consent of a parent or legal guardian in order to participate in research, and institutional review boards typically require the children themselves to understand and agree to (as much as possible at their given age) the purpose and procedures of the study, as well as its potential risks and benefits. Researchers may be required to ask younger children to provide verbal assent in place of a signature. They may also need to structure the consent process in such a way as to minimize any pressures that parents or guardians themselves may place on children to cooperate. As for research on prisoners, institutional review boards generally look dimly on such studies, given the inherent inequality in power between those incarcerated and those who might study them.

Protection of Identities: Anonymity and Confidentiality

Ronald Kessler and Mark Felt sitting outside and talking.
Journalist Ronald Kessler speaks with one of the most famous confidential informants of all time, Mark Felt (1913–2008), who in 2005 revealed that he was the “Deep Throat” source who helped break the Watergate story and bring about the resignation of U.S. president Richard Nixon in 1974. Ronald Kessler, via Wikimedia Commons

Another important principle of ethical research is protecting the identities of those participating in the study. At the most basic level, participants have a right to know what information about them is or is not shared with others, and these particulars should be spelled out to them during the informed consent process described earlier. Failing to sufficiently conceal the identities of research subjects can also endanger their privacy and public reputations. For instance, if reading the study allows someone to piece together who a specific participant is—which can be very likely when sociologists study a group of people who already know each other—that participant might feel discomfort and embarrassment and could even be in danger of being manipulated through the use of that information. Furthermore, participants who talk about stigmatized behaviors, such as mental health problems, substance abuse, or criminal activity, could be singled out by employers (who routinely do web searches to vet job applicants as well as current employees) or even law enforcement (who sometimes decide to prosecute individuals who admit to crimes in writing or other media). And even beyond the ethical principle of treating participants with respect and protecting them from harm, researchers have very practical reasons for promising to safeguard identities. Without such assurances, interview respondents may not feel comfortable speaking honestly about their views or experiences, for fear that others will criticize, shun, or retaliate against them.

To address these potential problems, researchers often promise to maintain either the anonymity or the confidentiality of their research subjects. Anonymity means that not even the researcher conducting the study should be able to connect specific pieces of information provided by participants with their actual identities. An example of anonymity in scientific research is an internet survey in which no identification numbers are used to track who is responding to the survey and who is not.

While anonymity is the most effective way to avoid any breaches of privacy in research, it can be impractical to guarantee. For instance, sociologists who use ethnographic observation and face-to-face interviewing come to know their participants personally through their interactions. It can be difficult or even impossible in these situations to keep participants’ identities hidden from the researchers themselves.

As a result, sociologists often promise confidentiality rather than anonymity. Under this arrangement, participants allow the researcher to have access to identifying details in the data being collected, but only the researcher should be able to link specific participants to their stored data. In other words, the researcher can identify the specific person who said a particular quote or did a particular thing described in the research report, but they promise not to divulge that person’s identity in any published work or public forum.

A confidentiality agreement implies that a participant’s personal information is secure, but it is important to understand that these guarantees are not ironclad—as much as a well-intentioned researcher might wish them to be so. For one thing, depending on applicable laws, researchers who learn that someone (including the participant) is at “high or imminent risk of harm” may need to contact a health professional or the authorities; the same is true if they learn about the possible abuse of a child (Kodama Muscente 2022). These exceptions need to be mentioned in consent forms so that participants do not have a false sense of security.

Generally speaking, authorities can use court orders to compel researchers to release or discuss the data they collect, which means they may be unable to protect the identities of any participants suspected of involvement in activities under legal scrutiny. Conversations between social science researchers and their human subjects do not automatically enjoy the “privileged communication” status that those between health professionals and patients or between lawyers and clients do. Nor are researchers protected by so-called shield laws, legislation enacted in many states that allows journalists to keep their sources and unpublished notes confidential. In 2003, the U.S. Department of Health and Human Services began allowing researchers to apply for certificates of confidentiality, which prevent them from having to divulge the identities of their human subjects in any legal proceedings, but very few research projects qualify for this level of protection (OHRP 2010). Nevertheless, the American Sociological Association’s code of ethics (discussed further below) is clear that sociologists who promise confidentiality should not cave in to that pressure even in the face of legal action: “Confidential information provided by research participants should be treated as such by sociologists even if there is no legal protection or privilege to do so” (American Sociological Association 2018:10).

Researchers have been imprisoned for failing to disclose the identities of their participants. In 1993, sociologist Rik Scarce refused to testify before a federal grand jury investigating an act of vandalism. Then a graduate student at Washington State University, Scarce had been studying radical animal rights activism. Authorities were trying to identify individuals who had broken into research facilities at his university and freed animals used in experiments. They believed Scarce had met the perpetrators and demanded that he discuss his conversations with specific members of the Animal Liberation Front (Scarce n.d.). Citing his moral obligations as a sociologist, Scarce told the court that he would not testify about his conversations with activists, and only agreed—under duress—to disclose non-confidential details not related to his research. He spent 159 days in jail under a contempt order before a judge finally released him (American Sociological Association n.d.).

Activist speaking into a megaphone in front of a crowd of protesters.
Sociologist Rik Scarce spent 159 days in jail after refusing to testify about his interactions with Animal Liberation Front activists (pictured here at a protest in Israel). Roee Shpernik, via Wikimedia Commons

Compared to anonymity, confidentiality is a weaker form of protection of identities, and even these agreements are not always possible or desirable for particular kinds of research. For instance, focus groups that involve interviews with multiple people at the same time (discussed in Chapter 10: In-Depth Interviewing) cannot ensure confidentiality because the individuals involved will inevitably know at least some identifying details about their fellow respondents. Furthermore, some sociologists have argued that not using the real names of their participants in their published work can have negative repercussions that may outweigh the benefits of protecting privacy (Duneier 1999; Chen and Goldstein 2022). As journalists have long maintained, having individuals provide their quotes on the record with their names attached creates accountability, making it less likely that sources will be able to lie or manipulate without consequences. Some participants may also prefer to see their names in the finished work, perhaps because they want the public recognition or feel their experiences are silenced or trivialized by the use of a pseudonym. Acknowledging these concerns, some institutional review boards will allow researchers to follow this more journalistic practice of identifying respondents if the risks of the research are minimal and the persons involved explicitly request the use of their real names—sometimes, by signing a “release of information” form distinct from the consent form.

The consent form itself should provide all the key details about how the participant’s identity will be protected, including whether the researchers promise anonymity or confidentiality and how they will go about ensuring those conditions. For instance, when participant data is confidential, researchers will typically assign IDs or pseudonyms to their interview respondents and then keep a password-protected and encrypted document that has the real names associated with those IDs or pseudonyms. The consent form should describe what security measures are in place and who has access to the secure files. If the participants are being taped or video-recorded, the consent form should note that fact and clearly state what will happen afterward to those recordings, which by their very nature include a wealth of identifying details. For instance, recordings may be transcribed afterward and then destroyed; the transcripts may be stripped of any identifiers, such as mentions of the names of the respondents or their family and friends, adding a further layer of protection.

Whether the identities of a study’s participants are anonymous or confidential, it may still be possible for someone who personally knows a participant to recognize them in the published paper—perhaps based on the configuration of details provided about their age, gender, race, education, and so on. Qualitative researchers may try not to provide too much detail about a particular individual in order to minimize this risk. When quantitative researchers release their datasets for public consumption, they will sometimes provide the data in an aggregate form that conceals any microdata—the responses of specific individuals—so that people’s privacy is protected. For similar reasons, government agencies and polling organizations routinely restrict access to data collected from small geographical units, such as zip codes or small cities, given the risk that—within such a narrow pool of respondents—someone could conceivably connect a person’s survey answers to an actual identity.

Sometimes, researchers go so far as to change details—usually minor ones—in order to make individuals unrecognizable in published work even to people who know them. Again, the field of journalism has a very different moral take on this practice. (Victor, a former journalist, writes about these divergent sets of professional ethics in the sidebar The Journalist and the Ethnographer below.) Most journalists would probably be aghast at even slight tweaks to facts within a write-up. In their view, it would cross the ethical line that separates omission (justifiable in some contexts, such as when reporters omit the names of whistleblowers who fear retaliation) from falsification (inherently wrong).

It’s also worth noting that the ethical standards for protecting privacy in the digital age are evolving and the subject of much confusion among researchers. After all, there is a vast amount of data now available about almost every individual across the online spaces they frequent. Sociologists wrestle with how much of the data—including freely shared social media posts by people who are not public figures—can be used for research purposes without violating reasonable expectations of privacy. For their part, institutional review boards are not always in agreement about what procedures should be in place to protect the creators of publicly shared digital content, much less data retrieved from less freely accessible spaces online.

The Journalist and the Ethnographer, by Victor Tan Chen

Meme template for Axios reporter Jonathan Swan’s interview with President Donald Trump.
Journalists are known for their adversarial interviews, which seek to hold powerful people to account by posing tough questions and pushing back on false claims. Sociologists tend not to pursue this approach, instead stressing empathy and nonjudgmental listening in their in-depth interviews. Axios, via Know Your Meme

I’m a sociologist now at Virginia Commonwealth University, but I used to be a newspaper reporter (at New York Newsday), and as a labor of love I still edit a magazine called In The Fray, a publication devoted to personal stories on global issues.

When I had aspirations to be the next Bob Woodward back in college, I remember stumbling upon The Journalist and the Murderer (1990), a book by New Yorker writer Janet Malcolm. It begins with an incendiary paragraph:

Every journalist who is not too stupid or too full of himself to notice what is going on knows that what he does is morally indefensible. He is a kind of confidence man, preying on people’s vanity, ignorance or loneliness, gaining their trust and betraying them without remorse. Like the credulous widow who wakes up one day to find the charming young man and all her savings gone, so the consenting subject of a piece of nonfiction learns—when the article or book appears—his hard lesson. Journalists justify their treachery in various ways according to their temperaments. The more pompous talk about freedom of speech and “the public’s right to know”; the least talented talk about Art; the seemliest murmur about earning a living.

The Journalist and the Murderer is an account of the relationship between bestselling journalist Joe McGinniss and the subject of one of his true-crime books, physician Jeffrey R. MacDonald. During the course of McGinniss’s research for the book, MacDonald was tried and convicted of the murders of his wife and two children. The Journalist and the Murderer excoriated McGinniss for allegedly “conning” his subject—first befriending him, and then betraying that confidence.

The unethical behaviors that Malcolm describes in her book are extreme, but they speak to an aspect of journalism that many people find troubling: the way that it uses and manipulates its subjects and then casts them aside, all in pursuit of a sensationalistic headline. This sort of behavior may account in part for why journalists rank abysmally low in Gallup polling on honesty and ethics across various professions. It’s part of the reason I decided to go to grad school myself: I love journalism and believe it plays a vital role in our democracy, but I got tired of the ambulance chasing and other less-than-savory things you sometimes have to do.

Institutional review boards and the profession’s code of ethics help sociologists avoid these sorts of problems by setting up protections for the people we interview and observe. This often includes the promise of confidentiality, which can shield our respondents from the public humiliation or retribution at times endured by the subjects of news articles after publication.

Before we pat ourselves on the back, however, we should admit that sociologists still run into problems at times in terms of how we present our research to respondents and how they ultimately respond to our work. As someone who teaches research methods, I particularly like Jonathan Rieder’s Canarsie and Annette Lareau’s Unequal Childhoods as examples of how sociologists have dealt with this difficult ethical terrain—particularly the appendix to Lareau’s book, where she describes candidly and thoughtfully the hostile reactions some of her respondents had to their portrayal in her book, in spite of the fact that she hid their identities.

Like journalists, can we also be confidence men and women—gaining trust and betraying it? Furthermore, do we have to do that—in order to gain access to begin with, and in order to be truthful to the reality we describe? That’s the age-old question in research ethics, of course.

Interestingly, journalists would say we are guilty of the exact opposite professional sin: being “overprotective” of our respondents. The fact that their identities—and sometimes those of the cities, companies, etc., we research, too—are hidden leads to a number of complications. First, it’s hard to prove to people—particularly skeptical journalists—that what we’ve written is true. What’s to stop us from fabricating our data whole cloth? One obvious safeguard would be the peer review process—and yet it’s not hard to imagine how a determined fabulist could get around even that hurdle.

An ethnographer needs to be very cautious in making claims because of the inability in many cases to prove that what you wrote is, without a doubt, true. Of course, in part that’s not even up to you, because of the ethical/IRB need or norm of protecting respondent identities. However, I do think one of the strengths of ethnography is its ability to stumble across unexpected situations or outcomes, which in turn can help refine or challenge our theories (with all the caveats that the sample is almost always small and unrepresentative, etc.). But those findings will naturally lead to skepticism because they don’t fit with people’s preconceptions—and, if they’re unflattering to certain people or groups, they may also lead to vicious pushback, however unwarranted it is.

Fact-checking helps journalists to avoid this problem. I’ve worked as a fact-checker before: what typically happens is the reporter gives you the contact info for their sources, and you call them up and verify each quote and fact. It’s harder to make up stuff when someone is looking over your shoulder in this way. Even when there’s no outright fabrication involved, we as sociologists can alienate readers with our methods. We care about protecting our subjects to the point that in our published work we change (hopefully inconsequential) details, create composite characters, and otherwise alter the reality that we actually observed. For some readers, this is a no-no. Consider, for instance, the outcry over the revelation that James Frey changed or fabricated details in his memoir A Million Little Pieces (and this was a memoir—a genre of literature that has long had a tradition of embellishing the past).

Of course, disgraced journalists like Jayson Blair and Stephen Glass remind us that journalism has failed to catch many acts of dishonesty—and with today’s news budgets so strapped, publications no longer have as many resources to verify the information in each article. And to be honest, daily journalism operates routinely with a pretty low standard of verifiability. Yes, sources are increasingly recorded on tape or even video, providing documentary evidence, but much of the time reporters are just writing things down in their spiral notebooks. They simply don’t have the time to do much else, given deadline constraints. Also, recording an interview changes the dynamic—encouraging the source to use her bland “on the record” voice—and journalists don’t necessarily want that. But not taping a source means that they might quote people who then go on to say they were misquoted, and it becomes a he-said-she-said situation. (That happened to me once: a low-level government official made an off-the-cuff comment that he later regretted, and afterward started telling people I made up the quote. I called him and chewed him out for doing that, but there was no way for me to “prove” to other people he had lied because I hadn’t recorded him.) This is something that happens more often than you’d think, and that’s because journalists (like ethnographers) are dealing with messy real-world constraints.

As someone who has experience interacting with journalists, I know they also look with great skepticism at “anonymous” sources. As they see it, stories based on information collected in this way are by their very nature untrustworthy. Journalistic norms (and sometimes a publication’s own policies) emphasize that there has to be a powerfully compelling reason to grant someone anonymity in an article. Now, it’s also very true that many people need a promise of confidentiality in order to feel comfortable telling their story completely and truthfully. But sources—even nonelites—will sometimes exploit the fact that their real names are being used in order to profit from the attention in some way. For example, when I was working as a reporter, at times I had the hunch that someone was telling me a sob story in order to garner sympathy and get donations from the newspaper’s readers.

Where sociologists would say interviewees are more willing to be candid about their personal lives and personally held beliefs when they have the protection of a pseudonym, journalists would stress that hiding their identities can also encourage them to lie: no one can come after them for making up a damaging story about someone else, for instance. And respondents themselves sometimes don’t want confidentiality. They may be disappointed to learn their real names won’t be published. When I was working as a journalist, I found that people would divulge sensitive details to me or other reporters—for example, about some trauma they’d experienced—and afterward they would tell us they were happy to see their name in print. It gave them a sense of validation to see their story out there and have other people know they actually experienced this. Sometimes, they were contacted afterward by people who related to their story or wanted to help them, and they said they were grateful for that opportunity.

In short, if journalism opens itself up to pernicious forms of exploitation—what I think Janet Malcolm was getting at—in terms of using people and not considering more carefully the consequences of quoting them in a story, the use of anonymity by sociologists can also make it easy to twist our respondents’ words and otherwise exploit them. So it seems both fields have their own Achilles heels, and perhaps we just need to accept they go about things in different ways that are morally justified on their own terms.

It’s important to recognize the various ethical and practical tradeoffs of all these approaches—not just the distinct practices of journalism and ethnography, but also the different ones used within each tradition. After all, you will find many different approaches to the protection of identities among the most ethical of ethnographers—all of whom, let’s stipulate, are trying to do right by both their respondents and their research. Some people just use pseudonyms, some people change details (but only a little), and some people go all out and create composite characters. I can see the ethical rationale for all these approaches. (And in any case, I can’t imagine a room full of ethnographers could be forced to pick any one strategy as the professional best practice, even under pain of death.)

To the extent that sociology wants to be part of public debates on important issues, though, I do feel skepticism about our data is a key reason for lay readers to dismiss our work. Partly, this is because readers just don’t understand the reasons that we believe practices like confidentiality are so important—they’re used to how journalists do their job. But I can imagine they’d have problems even if they understood our reasoning. Why should they trust us? Especially on controversial topics, how do they know we’re not lying, or at least fudging the facts? It’s not just the question of honesty; it’s also a question of style. Using pseudonyms comes across as a bit hokey—especially for place names, which I imagine sound like the egghead equivalent of “Gotham” or “Metropolis” to non-sociologists.

I’m not sure how to deal with these problems. I do think it’d be helpful if sociologists read more journalism (and journalists more sociology) and learned from some of the best practices of the other approach. For sociologists, reading classic works of journalism—from Let Us Now Praise Famous Men to Friday Night Lights—can be incredibly illuminating. It can allow us to draw from the literary beauty, perceptiveness, and heft of these writers in ways that serve our ideas. It can inspire us to write without jargon, make our theories more intelligible to lay readers, and not be afraid to reveal to readers the emotional power of our narratives. Those are the best ways, I think, that we can ensure sociology gets read by the people who could best benefit from its messages.

This sidebar was adapted from two posts that originally appeared on the sociology blog orgtheory.net.

Key Takeaways

  1. The Belmont Report, a key government document guiding research ethics regulations and professional norms in the United States, specified three principles for the ethical conduct of research: respect for persons, beneficence, and justice.
  2. One way that sociologists can avoid harming research participants is by protecting their identities via practices of confidentiality and anonymity.
  3. Sociologists should refrain from misleading participants. They should provide adequate information about their study through procedures that ensure informed consent.

Exercises

  1. Let’s say a college professor has asked her students to participate in a research study she is conducting. In the syllabus of the course, the professor clearly informs them that they are required to take part in the study. Has the professor obtained her students’ informed consent in an ethical manner?
  2. Let’s say another professor informs his students that they are not required to take part in a research study he is conducting, but those who agree to participate will be given extra credit. Has the professor obtained his students’ informed consent in an ethical manner?
  3. A group of researchers are conducting a study in a lower-income neighborhood. They tell the potential participants that the study is on “social issues” and that they will be given gift cards if they agree to take part. Has the research team obtained the potential participants’ informed consent in an ethical manner?

License

The Craft of Sociological Research by Victor Tan Chen; Gabriela León-Pérez; Julie Honnold; and Volkan Aytar is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.