By Dvora Yanow and Peregrine Schwartz-Shea
Editor’s note: This is a condensed version of an essay appearing in Qualitative & Multi-Method Research [Newsletter of the APSA Organized Section for Qualitative and Multi-Method Research] Vol. 12, No. 2 (Fall 2014). The shorter version is also due to appear in the newsletters of APSA’s Immigration & Citizenship and Law & Courts sections. The original, which is more than twice the length and develops many of these ideas more fully, is available from the authors (Dvora.Yanow@wur.nl, psshea@poli-sci.utah.edu).
This post is contributed by Dvora Yanow (Wageningen University) and Peregrine Schwartz-Shea (University of Utah).
Pre-script. After we finished preparing this essay, a field experiment concerning voting for judges in California, Montana, and New Hampshire made it even more relevant. Three political scientists—one at Dartmouth, two at Stanford—mailed potential voters about 300,000 flyers marked with the states’ seals and containing information about the judges’ ideologies. Aside from questions of research design, whether the research passed IRB review is not entirely clear (reports say it did not at Stanford but was at least submitted to the Dartmouth IRB; for those who missed the coverage, see this link and political scientist Melissa Michelson’s blog, both accessed November 3, 2014). Two bits of information offer plausible explanations for key points in the public discussion:
- Stanford may have had a reliance agreement with Dartmouth, meaning that it would accept Dartmouth’s IRB’s review in lieu of its own separate review;
- Stanford and Dartmouth may have “unchecked the box” (see below), relevant here because the experiments were not federally funded, meaning that IRB review is not mandated and that universities may devise their own review criteria.
Still, neither explains what appear to be lapses in ethical judgment in designing the research (among other things, using the state seals without permission and thereby creating the appearance of an official document). We find this a stellar example of a point we raise in the essay: the discipline’s lack of attention to research ethics, possibly due to reliance on IRBs and the compliance ethics that IRB practices have inculcated.
* * *
Our continuing research on US Institutional Review Board (IRB) policies and practices (Schwartz-Shea and Yanow 2014, Yanow and Schwartz-Shea 2008) shows that many political scientists lack crucial information about these matters. To facilitate political scientists’ more effective interactions with IRB staff and Boards, we would like to share some insights gained from this research.
University IRBs implement federal policy, monitored by the Department of Health and Human Services’ Office of Human Research Protections (OHRP). The Boards themselves are composed of faculty colleagues (sometimes social scientists) plus a community member. IRB administrators are often not scientists (of any sort), and their training is oriented toward the language and evaluative criteria of the federal code. Indeed, administering an IRB has become a professional occupation with its own training and certification. IRBs review proposals to conduct research involving “human subjects” and examine whether potential risks to them have been minimized, assessing those risks against the research’s expected benefits to participants and to society. They also assess researchers’ plans to provide informed consent, protect participants’ privacy, and keep the collected data confidential.
The federal policy was created to rest on local Board decision-making and implementation, leading to significant variations across campuses in its interpretation. Differences in practices often hinge on whether a university has a single IRB evaluating all forms of research or different ones for, e.g., medical and social science research. Therefore, researchers need to know their own institutions’ IRBs. In addition, familiarity with key IRB policy provisions and terminologies will help. We explain some of this “IRB-speak” and then turn to some procedural matters, including those relevant to field researchers conducting interviews, participant-observation/ethnography, surveys, and/or field experiments, whether domestically or overseas.
IRB-speak: A primer
Part of what makes the IRB review process potentially challenging is its specialized language. Regulatory and discipline-based understandings of various terms do not always match. Key vocabulary includes the following.
- “Research.” IRB regulations tie this term’s meaning to the philosophically contested idea of “generalizable knowledge” (CFR §46.102(d)). This excludes information-gathering for other purposes and, on some campuses, other scholarly endeavors (e.g., oral history) and course-related exercises.
- “Human subject.” This is a living individual with whom the researcher interacts to obtain data. “Interaction” is defined as “communication or interpersonal contact between investigator and subject” (CFR §46.102(f)). But “identifiable private information” obtained without interaction, such as through the use of existing records, also counts.
- “Minimal risk.” Research poses no more than minimal risk when “the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests” (CFR §46.102(i)). But everyday risks vary across subgroups in American society, not to mention worldwide, and IRB reviewers have been criticized for their lack of expertise in risk assessment, leading them to misconstrue the risks associated with, e.g., comparative research (Schrag 2010, Stark 2012).
- “Vulnerable populations.” Six categories of research participants “vulnerable to coercion or undue influence” are subject to additional safeguards: “children, prisoners, pregnant women, mentally disabled persons, or economically or educationally disadvantaged persons” (CFR §46.111(b)). Federal policy enables universities also to designate other populations as “vulnerable,” e.g., Native Americans.
- Levels of review. Usually, IRB staff decide a proposed project’s level of required review: “exempt,” “expedited,” or “convened” full Board review. “Exempt” does not mean that research proposals are not reviewed. Rather, it means exemption from full Board review, a status that can be determined only via some IRB assessment. Only research entailing no greater than minimal risk is eligible for “exempt” or “expedited” review. The latter means assessment by either the IRB chairperson or his/her designee from among Board members. This reviewer may not disapprove the proposal, but may require changes to its design. Projects that entail greater than minimal risk require “convened” (i.e., full) Board review.
- Exempt category: Methods. Survey and interview research and observation of public behavior are exempt from full review if the data so obtained do not identify individuals and would not place them at risk of “criminal or civil liability or be damaging to the subjects’ financial standing, employability, or reputation” if their responses were to be revealed “outside of the research” (CFR §46.101(b)(2)(ii)). Observing public behaviors as political events take place (think: “Occupy”) is central to political science research. Because normal IRB review may delay the start of such research, some IRBs have an “Agreement for Public Ethnographic Studies” that allows observation to begin almost immediately, possibly subject to certain stipulations.
- Exempt category: Public officials. IRB policy explicitly exempts surveys, interviews, and public observation involving “elected or appointed public officials or candidates for public office” (45 CFR §46.101(b)(3))—although who, precisely, is an “appointed public official” is not clear. This exemption means that researchers studying public officials using any of these three methods might—in complete compliance with the federal code—put them at risk for “criminal or civil liability” or damage their “financial standing, employability, or reputation” (CFR §46.101(b)(2)). The policy is consistent with legal understandings that public figures bear different burdens than private citizens.
- Exempt category: Existing data. Federal policy exempts from full review “[r]esearch involving the collection or study of existing data, documents, [or] records, … if these sources are publicly available or if the information is recorded by the investigator in such a manner that subjects cannot be identified, directly or through identifiers linked to the subjects” (CFR §46.101(b)(4)). However, university IRBs vary considerably in how they treat existing quantitative datasets, such as the Inter-University Consortium for Political and Social Research collection (see icpsr.umich.edu/icpsrweb/ICPSR/irb/). Some universities require researchers to obtain IRB approval to use any datasets not on a preapproved list, even if those datasets incorporate a responsible use statement.
- “Unchecking the box.” The “box” in question appears in the Federal-wide Assurance form that universities file with OHRP; checking it registers the institution’s intention to apply IRB regulations to all human subjects research conducted by its employees and students, regardless of funding source. Unchecking it indicates that the institution does not extend that commitment to research funded by sources other than HHS (thereby limiting OHRP jurisdiction over such studies). IRB administrators may still, however, require proposals for unfunded research to be reviewed.
Procedural matters: Non-experimental field research
The experimental research design model informed IRB policy creation and remains the design most familiar to policy-makers, Board members, and staff. As a result, field researchers face particular challenges in IRB review.
Because the forms and online application sites developed for campus IRB use reflect this policy history, some of their language is irrelevant to non-experimental field research designs (e.g., the number of participants to be “enrolled” in a study, or “inclusion” and “exclusion” criteria, features of laboratory experiments or medical randomized controlled clinical trials). Those templates can be frustrating for researchers trying to fit field designs to them. Although conforming to the templates’ language might seem expeditious, doing so when that language does not fit the methodology of the proposed research can lead field researchers to distort the character of their research.
IRB policy generally requires researchers to inform potential participants—to “consent” them—about the scope of both the research and its potential harms, whether physical, mental, financial, or reputational. Potential subjects also need to be consented about possible identity revelations that could render them subject to criminal or civil prosecution (e.g., the unintentional public revelation of undocumented workers’ identities). Central to the consent process is the concern that potential participants not be coerced into participating and that they understand they may stop their involvement at any time. Not always well known is that the federal code allows more flexibility than some local Boards acknowledge. For minimal risk research, it allows: (a) removal of some of the standard consent elements; (b) oral consent without signed forms; and (c) waiver of the consent process altogether if the “research could not practicably be carried out without the waiver or alteration” (CFR §46.116(c)(2)).
Procedural matters: General
IRB review backlogs can significantly delay the start of a research project. Adding to the potential delay is many universities’ requirement that researchers complete some form of training before they submit their study for review. Such delay has implications for field researchers negotiating site “access” to begin research and for all empirical researchers receiving grants, which are usually not released until IRB approval is granted. Researchers should find out their campus IRB’s turnaround time as soon as they begin to prepare their proposals.
Collaborating with colleagues at other universities can also delay the start of a research project. Federal code explicitly allows a university to “rely upon the review of another qualified IRB…[to avoid] duplication of effort” (CFR §46.114), and some IRBs are content to have only the lead researcher proceed through her own campus review. Other Boards insist that all participating investigators clear their own campus IRBs. With respect to overseas research, solo or with foreign collaborators, although federal policy recognizes and makes allowances for international variability in ethics regulation (CFR §46.101(h)), some US IRBs require review by a foreign government, the research setting, or the foreign colleague’s university’s IRB, overlooking that not all universities or states worldwide have IRBs. Multiple review processes can make coordinated review for a jointly written proposal difficult. Add to that different Boards’ interpretations of what the code requires, and one has a classic instance of organizational coordination gone awry.
In sum
On many campuses, political (and other social) scientists doing field research are faced with educating IRB members and administrative staff about the ways in which their methods differ from the experimental studies performed in hospitals and laboratories. Understanding the federal regulations can put researchers on more solid footing in pointing to permitted research practices that their local Boards may not recognize. And knowing IRB-speak can enable clearer communication between researchers and Board members and staff. Though challenging, educating staff as well as Board members potentially benefits all field researchers, graduate students in particular, some of whom have given up on field research due to IRB delays, which are often greater for research that does not fit the experimental model (van den Hoonaard 2011).
IRB review is no guarantee that the ethical issues relevant to a particular research project will be raised. Indeed, one of our concerns is the extent to which IRB administrative processes are replacing research ethics conversations that might otherwise (and, in our view, should) be part of departmental curricula, research colloquia, and discussions with supervisors and colleagues. Moreover, significant ethical matters of particular concern to political science research are simply beyond the bounds of US IRB policy, including recognition of the ways in which current policy makes “studying up” (i.e., studying societal elites and other power holders) more difficult.
Change may still be possible. In July 2011, OHRP issued an Advance Notice of Proposed Rulemaking, calling for comments on its proposed regulatory revisions. As of this writing, the Office has not yet announced an actual policy change (which would require its own comment period). OHRP has proposed revising several of the requirements discussed in this essay, including allowing researchers themselves to determine whether their research is “excused” (its suggested replacement for “exempt”). Because of IRB policies’ impact, we call on political scientists to monitor this matter. Although much attention has rightly been focused on Congressional efforts to curtail National Science Foundation funding, IRB policy affects all research engaging human participants and deserves as much disciplinary attention.
References
Schrag, Zachary M. 2010. Ethical imperialism: Institutional review boards and the social sciences, 1965–2009. Baltimore, MD: Johns Hopkins University Press.
Schwartz-Shea, Peregrine and Yanow, Dvora. 2014. Field research and US institutional review board policy. Betty Glad Memorial Symposium, University of Utah (March 20-21). http://poli-sci.utah.edu/2014-research-symposium.php
Stark, Laura. 2012. Behind closed doors: IRBs and the making of ethical research. Chicago: University of Chicago Press.
US Code of Federal Regulations. 2009. Title 45, Public Welfare, Department of Health and Human Services, Part 46, Protection of human subjects. www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.html.
van den Hoonaard, Will C. 2011. The seduction of ethics: Transforming the social sciences. Toronto: University of Toronto Press.
Yanow, Dvora and Schwartz-Shea, Peregrine. 2008. Reforming institutional review board policy. PS: Political Science & Politics 41/3, 484-94.