Imagine you are at a party of friends and family, and a group of strangers arrives with one of your friends. They tell you that they are here to enliven the party, to make it better. They distribute two kinds of chits — red and blue — seemingly at random.
They ask all the blues to congregate in one room where they have a group conversation; with the reds they have individual conversations.
In each of these conversations, they offer varying amounts of money if the individuals concerned are willing to spit into the soup being served at the party. Unknown to you, they are actually conducting an experiment to find out whether people behave differently in private and in public, especially when it comes to accepting offers of money for deviant behaviour.
You are told they are doing this because it is apparently a significant question in Economics and your friend gave them access to a place where this experiment could be conducted at low cost to them.
If you do not find this idea abhorrent, then you may be in the good and very accomplished company of people involved with the body of work that has just been recognised with the Nobel prize in Economics. Not necessarily the high-quality work that the awardees have done — and the awardees, Abhijit V Banerjee, Esther Duflo and Michael Kremer, have also received much praise for their affability and humility — but surely work that appears to have been inspired and privileged by theirs, like the study described below. Work that often seems acutely indifferent to the sensibilities and sensitivities of the social world around it, even as it tries to shape that world in its own imagination.
The 2019 Nobel recognises the influence the idea of field-based Randomised Controlled Trials (RCTs) has come to exert on the discipline. Given that the focus of much mainstream Economics is marginal analysis — how does one outcome of interest change when one potentially influencing variable is changed, keeping everything else constant (ceteris paribus) — RCTs provide an appropriate tool for economists to try to tease out causal relationships empirically from data.
The advent of field RCTs has made it more acceptable for mainstream applied economists in research institutions, predominantly from the global West, to be engaged with the data collection process. It has also encouraged the use of an additional tool to empirically assess claims of causality. None of this may necessarily be new to the discipline; the “novelty” may lie more in the departure from mainstream Economics at the time the laureates built up the larger enterprise around RCTs — an enterprise that has not only come to dominate Development Economics but has also crept into other disciplines.
In the interest of full disclosure, I must state here that much of my training has been in the use of these methods. I have worked professionally with an organisation that was perhaps one of the first to conduct a large field-based experiment in the United States. I also continue to teach a doctoral course using these methods and to use them — where appropriate and feasible — in my research. I am not an outsider to this enterprise. I speak very much as part of the “system”.
The indiscriminate use of RCTs has not been without controversy, with other Nobel laureates like Angus Deaton and Joseph Stiglitz outspoken in their criticism. Yet, like the proverbial hammer looking for the next nail, Economics and the disciplines it increasingly colonises look for the next thing they can randomise (and potentially earn tenure and other laurels).
In keeping with the RCT frenzy is a recent National Bureau of Economic Research (NBER) paper titled “Scabs: The Social Suppression of Labor Supply” (Breza, Kaur, and Krishnaswamy 2019), which examines how “social norms have the potential to alter the functioning of economic markets.”
The researchers test if daily wage workers in Odisha will accept wages lower than what is prevailing in the market, if these offers are made privately as opposed to being made publicly. What is “randomised” is whether the offer is made privately or publicly, much like the party example above.
An NBER paper is usually the first platform for a study that the discipline deems worthy of attention and our paying attention to this study is surely not out of line with what the discipline does.
Information filed with the American Economic Association’s Registry for Randomised Controlled Trials states that the research’s “primary goal is to provide explicit evidence for worker collusion” (Breza, Krishnaswamy, and Kaur 2016).
The term “collusion” is typically used pejoratively to refer to activities by which prices are kept artificially high (leading to a “price floor”) by formal or informal agreements among providers of a good or service. For instance, the Merriam-Webster dictionary defines it as a “secret agreement or cooperation especially for an illegal or deceitful purpose”.
To understand “the source of the wage floor inside the village” the researchers “partner” with existing employers in Odisha. In their NBER paper, the authors repeatedly describe the poverty, the high unemployment rates, the fact that “minimum wage laws are ignored” (Breza et al 2019, 7) and the situation of helplessness that workers in Odisha (one of the poorest states in India) find themselves in.
As part of the “partnership” that they enter into, they explicitly state that “through an agreement with the employer, we pay part (approximately 75%) of the labor cost for the task in exchange for being able to control to whom we offer the job and under what wage and circumstances” (Breza et al 2016).
The desire to take control of who is offered a job and at what wage is driven by the researchers’ need to make these choices independent of other factors that might influence the decision to accept below-market wages.
The protocol for the interaction with the worker, after the potential employer has introduced the study enumerator, is described as follows:
The employer describes the job task, location and timings for the work on the specified date. He then introduces the enumerators as individuals “from a research institute” in the state capital who are studying agriculture and who would like to do a brief survey with the worker. The employer then tells the worker to let him know his work decision after speaking with the enumerator. This creates a natural handoff from the employer to the enumerator (Breza et al 2019, 9).
Therefore, the protocol explicitly ensures that it is not the potential employer but a member of the research team who offers a particular wage, in this case either the prevailing wage or a wage that is 10% less than the prevailing market wage in a context where the authors are aware that minimum wage laws are ignored.
A natural question that should arise is: why should such an activity not be deemed illegal? And if it is, what legal and ethical frameworks allow researchers trained and working in the best universities of the world, to break the law with such impunity? Beyond issues of legal impropriety, the authors admit to actively inducing people to violate what they know and describe as a social norm. They write that, “About 80% of workers state that it is 'unacceptable’ or 'very unacceptable’ for an unemployed worker to offer to work below the prevailing wage” (Breza et al 2019, 8).
Yet, the researchers go around trying to induce unemployed people to accept a hidden offer to work below the prevailing wage. I wonder how the researchers would react if they themselves were the subjects of an experiment in which they were privately made offers to partake in activities that they unambiguously considered unacceptable.
This is not to deny that questions about factors influencing market wages and possible “collusion” might indeed be salient in certain contexts. In highlighting the generalisability and value of their study, the researchers themselves point to the presence of “cartel-like behaviour” among the National Association of Securities Dealers Automated Quotations (NASDAQ) traders and real estate agents in the US.
Keeping aside the incongruity of daily-wage agricultural workers in Odisha being considered similar to NASDAQ traders, a question the Economics discipline needs to ask is: are developing economies being turned into merely convenient sites for answering questions of disciplinary interest? Given that NASDAQ traders are located much closer to the researchers, why are studies seeking to identify “scabs” and “collusion” not being conducted there?
The ethics of researchers using field experiments to probe “worker collusion” in a context where market wages are already lower than minimum wages needs to be questioned. The study’s stated objective is that “understanding the source of the wage floor inside the village therefore has potential bearing on understanding determinants of the wage in the labor market as a whole” (Breza et al 2016).
One might have imagined that if resources were to be brought to bear in such a context, the meaningful question to ask would instead have been: why are wages as low as they are? Are the researchers hoping that their study identifies mechanisms to bring wages down even lower, by identifying the “scabs”? Or is this just the more convenient question to ask with the methods at hand? Whose interests are really being served here?
As individuals who intervened in a context characterised by extreme deprivation, the researchers should also be held accountable for the implications of their study. Did the researchers consider what might have transpired in the villages where they privately induced the poorest of workers to indulge in deviant behaviour? What happened to the so-called “scabs” they helped create, after they were finished with their surveys? And the researchers should also tell us why people living at the margins of existence deserve a label like “scabs” for accepting offers most likely essential to their survival.
Are researchers, predominantly those sitting in the global West, exploiting exploitative conditions for their own convenience?
I admit I might have been unfair in singling out one study; there could have been others. For instance, one in which over 1,200 people were shown videos of ethnic violence — a study in International Relations and Political Science, outside the discipline of Economics. Randomising exposure to “videos of actual ethnic violence and police repression of protests”, the researchers were interested in understanding “how violence exposure affects the way Kashmiris feel about the Indian state and how they respond to Pakistani irredentism” (Nair and Sambanis 2019).
The appeal of RCTs is that they potentially allow researchers to compare groups that are alike in every respect except their “exposure” to the factor whose influence on the outcome they would like to test. By using random assignment (for example, via a coin toss) to allocate individuals to the exposure (or “treatment”), they hope the “control” group remains “uncontaminated”. Yet while great care is taken to keep the “control” group away from the influence of the intervening factor, the “contamination” of the real world by the researchers themselves seems increasingly to be a neglected issue.
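The comparison logic described above can be made concrete with a minimal sketch. This is an illustrative toy, not any study's actual protocol; the function names and the simple difference-in-means estimator are my own choices for exposition.

```python
import random


def assign_treatment(participants, seed=42):
    """Randomly split participants into 'treatment' and 'control' arms.

    With enough participants, randomisation tends to balance both observed
    and unobserved characteristics across the arms, which is what lets
    researchers attribute outcome differences to the treatment.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    assignment = {}
    for person in participants:
        # A virtual coin toss decides which arm each participant joins.
        assignment[person] = "treatment" if rng.random() < 0.5 else "control"
    return assignment


def average_treatment_effect(outcomes, assignment):
    """Estimate the effect as the difference in mean outcomes between arms."""
    treated = [outcomes[p] for p, arm in assignment.items() if arm == "treatment"]
    control = [outcomes[p] for p, arm in assignment.items() if arm == "control"]
    return sum(treated) / len(treated) - sum(control) / len(control)
```

The sketch also makes the article's point visible: the code only models keeping the control arm "uncontaminated"; nothing in the estimator accounts for what the intervention itself does to the community in which it is run.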
After all, how else can one explain the active and conscious display of such videos, as in the violence study, to susceptible individuals in a highly volatile context? That this is occurring in disciplines which have historically made us far more aware of, and sensitive to, how the social and political world operates is even more alarming.
The examples mentioned here are not isolated cases and they raise issues that go beyond these particular studies. The Kashmir experiment was approved by the Yale Institutional Review Board, while the Odisha study was approved by that of Columbia University as well as the Institute for Financial Management and Research (IFMR), the latter located in India.
IRBs are charged with examining questions of ethics and potential harm to study participants. It would be useful to know whether the concerned IRBs grappled with the issues raised here and how they justified approving the study protocols nevertheless. Going forward, it would also be instructive to have IRBs make public their explanations for allowing studies of this nature to proceed.
In a recent interview with The New York Times, the 2018 Nobel laureate Paul Romer expressed a desire to atone for the discipline’s overzealous support of markets over government, stating:
I’m afraid economists have really been serious contributors to this problem. This whole ideology of ‘government is bad, government is the problem’ has I think provided cover for rich people and rich firms to take advantage of things for their selfish benefit (Badger 2019).
In doing so, Romer takes a critical look at his own work and how it shaped the discipline.
At a time when RCTs have received the highest recognition possible in the field of Economics, it would be wonderful to have Romer’s successors as Nobel laureates reflect on and state their positions on the issues raised here and by many others. This would be useful not only in the interest of the discipline but also of their own legacies.
Further, I would like to urge Banerjee, Duflo and Kremer, the 2019 laureates, to call for a moratorium on experiments on the most vulnerable populations, at least until safeguards no less stringent than those regulating medical clinical trials are put in place for field experiments. The nature of field experiments demands that not just the participants but also the communities and societies where the experiments are conducted be safeguarded.
Given the failure of IRBs, the safeguards should be overseen by institutions accountable to those being subjected to these experiments, rather than by entities much closer to the researchers, as is currently the case.
I hope we do not need further experiments to generate evidence on the need for such accountable oversight.